| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
1819013 | https://en.wikipedia.org/wiki/Ravenala | Ravenala | Ravenala is a genus of monocotyledonous flowering plants. Classically, the genus was considered to include a single species, Ravenala madagascariensis from Madagascar.
Taxonomy
Species of the genus Ravenala are not true palms (family Arecaceae) but members of the family Strelitziaceae. The genus is closely related to the southern African genus Strelitzia and the South American genus Phenakospermum. Some older classifications include these genera in the banana family (Musaceae).
Etymology
The scientific name Ravenala comes from Malagasy ravinala or ravina ala meaning "forest leaves".
Species
Although formerly considered to be monotypic, four different forms have been distinguished. Five new species were described in 2021, all from Madagascar. The following species are currently recognised in the genus Ravenala:
Ravenala agatheae
Ravenala blancii
Ravenala grandis
Ravenala hladikorum
Ravenala madagascariensis
Ravenala menahirana
| Biology and health sciences | Zingiberales | Plants |
1819866 | https://en.wikipedia.org/wiki/Mangabey | Mangabey | Mangabeys are West African Old World monkeys, with species in three of the six genera of tribe Papionini.
The more typical representatives of Cercocebus, also known as the white-eyelid mangabeys, are characterized by their bare upper eyelids, which are lighter than their facial skin colouring, and the uniformly coloured hairs of their fur. Members of Lophocebus, the crested mangabeys, tend to have dark skin, eyelids that match their facial skin, and crests of hair on their heads.
A new species, the highland mangabey, was discovered in 2003 and was initially placed in Lophocebus. The genus Rungwecebus was later created for this species.
Lophocebus and Cercocebus were once thought to be very closely related, so much so that all the species were in one genus, but the species within genus Lophocebus are now thought to be more closely related to the baboons in genus Papio, while the species within genus Cercocebus are more closely related to the mandrill.
Genera
The three genera of mangabeys are:
Lophocebus, the crested mangabeys
Rungwecebus, the highland mangabey (kipunji)
Cercocebus, the white-eyelid mangabeys
| Biology and health sciences | Old World monkeys | Animals |
1820047 | https://en.wikipedia.org/wiki/Dysgenics | Dysgenics | Dysgenics refers to any decrease in the prevalence of traits deemed to be either socially desirable or generally adaptive to their environment due to selective pressure disfavouring their reproduction.
In 1915 the term was used by David Starr Jordan to describe the supposed deleterious effects of modern warfare on group-level genetic fitness because of its tendency to kill physically healthy men while preserving the disabled at home. Similar concerns had been raised by early eugenicists and social Darwinists during the 19th century, and continued to play a role in scientific and public policy debates throughout the 20th century.
More recent concerns about supposed dysgenic effects in human populations were advanced by the controversial psychologist and self-described "scientific racist" Richard Lynn, notably in his 1996 book Dysgenics: Genetic Deterioration in Modern Populations, which argued that changes in selection pressures and decreased infant mortality since the Industrial Revolution have resulted in an increased propagation of deleterious traits and genetic disorders.
Despite these concerns, genetic studies have shown no evidence for dysgenic effects in human populations. Reviewing Lynn's book, the scholar John R. Wilmoth notes: "Overall, the most puzzling aspect of Lynn's alarmist position is that the deterioration of average intelligence predicted by the eugenicists has not occurred."
| Biology and health sciences | Genetics | Biology |
2515655 | https://en.wikipedia.org/wiki/DisplayPort | DisplayPort | DisplayPort (DP) is a proprietary digital display interface developed by a consortium of PC and chip manufacturers and standardized by the Video Electronics Standards Association (VESA). It is primarily used to connect a video source to a display device such as a computer monitor. It can also carry audio, USB, and other forms of data.
DisplayPort was designed to replace VGA, FPD-Link, and Digital Visual Interface (DVI). It is backward compatible with other interfaces, such as DVI and High-Definition Multimedia Interface (HDMI), through the use of either active or passive adapters.
It is the first display interface to rely on packetized data transmission, a form of digital communication found in technologies such as Ethernet, USB, and PCI Express. It permits the use of internal and external display connections. Unlike legacy standards that transmit a clock signal with each output, its protocol is based on small data packets known as micro packets, which can embed the clock signal in the data stream, allowing higher resolution using fewer pins. The use of data packets also makes it extensible, meaning more features can be added over time without significant changes to the physical interface.
DisplayPort is able to transmit audio and video simultaneously, although each can be transmitted without the other. The video signal path can range from six to sixteen bits per color channel, and the audio path can have up to eight channels of 24-bit, 192kHz uncompressed PCM audio. A bidirectional, half-duplex auxiliary channel carries device management and device control data for the Main Link, such as VESA EDID, MCCS, and DPMS standards. The interface is also capable of carrying bidirectional USB signals.
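For a sense of scale, the maximum audio configuration described above works out to roughly 8 channels × 24 bits × 192,000 samples per second ≈ 36.9Mbit/s, a small fraction of the bandwidth available for video on the Main Link.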
The interface uses a differential signal that is not compatible with DVI or HDMI. However, dual-mode DisplayPort ports are designed to transmit a single-link DVI or HDMI protocol (TMDS) across the interface through the use of an external passive adapter, enabling compatibility mode and converting the signal from 3.3 to 5 volts. For analog VGA/YPbPr and dual-link DVI, a powered active adapter is required for compatibility and does not rely on dual mode. Active VGA adapters are powered directly by the DisplayPort connector, while active dual-link DVI adapters typically rely on an external power source such as USB.
Versions
1.0 to 1.1
The first version, 1.0, was approved by VESA on 3 May 2006. Version 1.1 was ratified on 2 April 2007, and version 1.1a on 11 January 2008.
DisplayPort 1.0–1.1a allow a maximum bandwidth of 10.8Gbit/s (8.64Gbit/s data rate) over a standard 4-lane main link. DisplayPort cables up to 2 meters in length are required to support the full 10.8Gbit/s bandwidth. DisplayPort 1.1 allows devices to implement alternative link layers such as fiber optic, allowing a much longer reach between source and display without signal degradation, although alternative implementations are not standardized. It also includes HDCP in addition to DisplayPort Content Protection (DPCP). The DisplayPort1.1a standard can be downloaded free of charge from the VESA website.
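These figures follow from the link design described under Main link below: four lanes at 2.70Gbit/s each give 4 × 2.70 = 10.8Gbit/s of raw bandwidth, and with 8b/10b encoding only 80% of that carries data, giving the quoted 8.64Gbit/s data rate.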
1.2
DisplayPort version 1.2 was introduced on 7 January 2010. The most significant improvement of this version is the doubling of the data rate to 17.28Gbit/s in High Bit Rate 2 (HBR2) mode, which allows increased resolutions, higher refresh rates, and greater color depth, such as 3840 × 2160 at 60Hz with 10bpc RGB. Other improvements include multiple independent video streams (daisy-chain connection with multiple monitors) called Multi-Stream Transport (MST), facilities for stereoscopic 3D, increased AUX channel bandwidth (from 1Mbit/s to 720Mbit/s), more color spaces including xvYCC, scRGB, and Adobe RGB 1998, and Global Time Code (GTC) for sub 1μs audio/video synchronisation. Apple Inc.'s Mini DisplayPort connector, which is much smaller and designed for laptop computers and other small devices, is also compatible with the new standard.
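As a rough worked example (using reduced-blanking CVT timing, so exact figures vary slightly): 3840 × 2160 at 60Hz corresponds to roughly 533 million pixels per second including blanking, and at 30bit/px this requires about 16Gbit/s, which fits within the 17.28Gbit/s HBR2 data rate but not within the 8.64Gbit/s of HBR.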
1.2a
DisplayPort version 1.2a was released in January 2013 and may optionally include VESA's Adaptive Sync. AMD's FreeSync uses the DisplayPort Adaptive-Sync feature for operation. FreeSync was first demonstrated at CES 2014 on a Toshiba Satellite laptop by making use of the Panel-Self-Refresh (PSR) feature from the Embedded DisplayPort standard, and after a proposal from AMD, VESA later adapted the Panel-Self-Refresh feature for use in standalone displays and added it as an optional feature of the main DisplayPort standard under the name "Adaptive-Sync" in version 1.2a. As it is an optional feature, support for Adaptive-Sync is not required for a display to be DisplayPort 1.2a-compliant.
1.3
DisplayPort version 1.3 was approved on 15 September 2014. This standard increases overall transmission bandwidth to 32.4Gbit/s with the new HBR3 mode featuring 8.1Gbit/s per lane (up from 5.4Gbit/s with HBR2 in version 1.2), for a total data throughput of 25.92Gbit/s after factoring in 8b/10b encoding overhead. This bandwidth is enough for a 4K UHD display (3840 × 2160) at 120Hz with 24bit/px RGB color, a 5K display (5120 × 2880) at 60Hz with 30bit/px RGB color, or an 8K UHD display (7680 × 4320) at 30Hz with 24bit/px RGB color. Using Multi-Stream Transport (MST), a DisplayPort port can drive two 4K UHD (3840 × 2160) displays at 60Hz, or up to four WQXGA (2560 × 1600) displays at 60Hz with 24bit/px RGB color. The new standard includes mandatory Dual-mode for DVI and HDMI adapters, implementing the HDMI2.0 standard and HDCP2.2 content protection. The Thunderbolt 3 connection standard was originally to include DisplayPort1.3 capability, but the final release ended up with only version 1.2 for Intel® 6000 Series Thunderbolt™ 3 Controllers. Later Intel® 7000 Series Thunderbolt™ 3 Controllers would come to support DisplayPort1.4 capability including HDR. VESA's Adaptive Sync feature in DisplayPort version 1.3 remains an optional part of the specification.
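For example, a 5120 × 2880 image at 60Hz with CVT-R2 timing corresponds to roughly 0.92 billion pixels per second; at 24bit/px this is about 22.2Gbit/s, which fits within HBR3's 25.92Gbit/s data rate but exceeds the 17.28Gbit/s available with HBR2.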
1.4
DisplayPort version 1.4 was published 1 March 2016. No new transmission modes are defined, so HBR3 (32.4Gbit/s) as introduced in version 1.3 still remains as the highest available mode. DisplayPort1.4 adds support for Display Stream Compression 1.2 (DSC), Forward Error Correction, HDR10 metadata defined in CTA-861.3, including static and dynamic metadata and the Rec. 2020 color space, for HDMI interoperability, and extends the maximum number of inline audio channels to 32.
1.4a
DisplayPort version 1.4a was published in April 2018. VESA made no official press release for this version. It updated DisplayPort's Display Stream Compression implementation from DSC 1.2 to 1.2a.
2.0
On 26 June 2019, VESA formally released the DisplayPort 2.0 standard.
VESA stated that version 2.0 is the first major update to the DisplayPort standard since March 2016, and provides up to a ≈3× improvement in data rate (from 25.92 to 77.37Gbit/s) compared to the previous version of DisplayPort (1.4a), as well as new capabilities to address the future performance requirements of traditional displays. These include beyond 8K resolutions, higher refresh rates and high dynamic range (HDR) support at higher resolutions, improved support for multiple display configurations, as well as improved user experience with augmented/virtual reality (AR/VR) displays, including support for 4K-and-beyond VR resolutions.
According to a roadmap published by VESA in September 2016, a new version of DisplayPort was intended to be launched in "early 2017". It would have improved the link rate from 8.1 to 10.0Gbit/s, a 23% increase. This would have increased the total bandwidth from 32.4Gbit/s to 40.0Gbit/s. However, no new version was released in 2017, likely delayed to make further improvements after the HDMI Forum announced in January 2017 that their next standard (HDMI2.1) would offer up to 48Gbit/s of bandwidth. According to a press release on 3 January 2018, "VESA is also currently engaged with its members in the development of the next DisplayPort standard generation, with plans to increase the data rate enabled by DisplayPort by two-fold and beyond. VESA plans to publish this update within the next 18 months." At CES 2019, VESA announced that the new version would support 8K @ 60Hz without compression and was expected to be released in the first half of 2019.
DP 2.0 configuration examples
With the increased bandwidth enabled by DisplayPort 2.0, VESA offers a high degree of versatility and configurations for higher display resolutions and refresh rates. In addition to the above-mentioned 8K resolution at 60Hz with HDR support, DP 2.0 (UHBR20) through USB-C as DisplayPort Alt Mode enables a variety of high-performance configurations:
Single display resolutions
One 16K (15360 × 8640) display @ 60Hz with 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
One 10K (10240 × 4320) display @ 60Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
Dual display resolutions
Two 8K (7680 × 4320) displays @ 120Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Two 4K (3840 × 2160) displays @ 144Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
Triple display resolutions
Three 10K (10240 × 4320) displays @ 60Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Three 4K (3840 × 2160) displays @ 90Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (uncompressed)
When using only two lanes on the USB-C connector via DP Alt Mode to allow for simultaneous SuperSpeed USB data and video, DP 2.0 can enable such configurations as:
Three 4K (3840 × 2160) displays @ 144Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Two 4K × 4K (4096 × 4096) displays (for AR/VR headsets) @ 120Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Three QHD (2560 × 1440) displays @ 120Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
One 8K (7680 × 4320) display @ 30Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (uncompressed)
2.1
VESA announced version 2.1 of the DisplayPort standard on 17 October 2022. This version incorporates the new DP40 and DP80 cable certifications, which test DisplayPort cables for proper operation at the UHBR10 (40Gbit/s) and UHBR20 (80Gbit/s) speeds introduced in version 2.0. Additionally, it revises some of the electrical requirements for DisplayPort devices in order to improve integration with USB4. In VESA's words:
DisplayPort 2.1 has tightened its alignment with the USB Type-C specification as well as the USB4 PHY specification to facilitate a common PHY servicing both DisplayPort and USB4. In addition, DisplayPort 2.1 has added a new DisplayPort bandwidth management feature to enable DisplayPort tunnelling to coexist with other I/O data traffic more efficiently over the USB4 link.
2.1a
VESA announced version 2.1a of the DisplayPort standard on 8 January 2024. This version replaces the DP40 cable certification with the new DP54 certification, which tests DisplayPort cables for proper operation at the UHBR13.5 (54Gbit/s) speed introduced in version 2.0.
2.1b
VESA announced version 2.1b of the DisplayPort standard on 6 January 2025. It will be released in Spring 2025.
Specifications
Main
Main link
The DisplayPort main link is used for transmission of video and audio. The main link consists of a number of unidirectional serial data channels which operate concurrently, called lanes. A standard DisplayPort connection has 4 lanes, though some applications of DisplayPort implement more, such as the Thunderbolt 3 interface which implements up to 8 lanes of DisplayPort.
In a standard DisplayPort connection, each lane has a dedicated set of twisted-pair wires, and transmits data across it using differential signaling. This is a self-clocking system, so no dedicated clock signal channel is necessary. Unlike DVI and HDMI, which vary their transmission speed to the exact rate required for the specific video format, DisplayPort only operates at a few specific speeds; any excess bits in the transmission are filled with "stuffing symbols".
In DisplayPort versions 1.0–1.4a, the data is encoded using ANSI 8b/10b encoding prior to transmission. With this scheme, only 8 out of every 10 transmitted bits represent data; the extra bits are used for DC balancing (ensuring a roughly equal number of 1s and 0s). As a result, the rate at which data can be transmitted is only 80% of the physical bitrate. The transmission speeds are also sometimes expressed in terms of the "Link Symbol Rate", which is the rate at which these 8b/10b-encoded symbols are transmitted (i.e. the rate at which groups of 10 bits are transmitted, 8 of which represent data). The following transmission modes are defined in versions 1.0–1.4a:
RBR (Reduced Bit Rate): 1.62Gbit/s bandwidth per lane (162MHz link symbol rate)
HBR (High Bit Rate): 2.70Gbit/s bandwidth per lane (270MHz link symbol rate)
HBR2 (High Bit Rate 2): 5.40Gbit/s bandwidth per lane (540MHz link symbol rate), introduced in DP1.2
HBR3 (High Bit Rate 3): 8.10Gbit/s bandwidth per lane (810MHz link symbol rate), introduced in DP1.3
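For example, in HBR mode the link symbol rate is 270MHz, so each lane carries 270 million × 10 = 2.7Gbit/s on the wire, of which 270 million × 8 = 2.16Gbit/s is usable data.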
DisplayPort 2.0 uses 128b/132b encoding; each group of 132 transmitted bits represents 128 bits of data. This scheme has an efficiency of approximately 96.97%. In addition, a small amount of overhead is added for the link layer control packet and other miscellaneous operations, resulting in an overall efficiency of ≈96.7%. The following transmission modes are added in DP 2.0:
UHBR 10 (Ultra High Bit Rate 10): 10.0Gbit/s bandwidth per lane
UHBR 13.5 (Ultra High Bit Rate 13.5): 13.5Gbit/s bandwidth per lane
UHBR 20 (Ultra High Bit Rate 20): 20.0Gbit/s bandwidth per lane
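As a worked check of these efficiency figures: 128 ÷ 132 ≈ 96.97%, and once the additional link-layer overhead is included (≈96.7% overall), a 4-lane UHBR 20 link delivers about 80Gbit/s × 0.967 ≈ 77.4Gbit/s of data, matching the DisplayPort 2.0 figure quoted earlier.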
The total bandwidth of the main link in a standard 4-lane connection is the aggregate of all lanes:
RBR: 4 × 1.62Gbit/s = 6.48Gbit/s bandwidth (data rate of 5.184Gbit/s or 648MB/s with 8b/10b encoding)
HBR: 4 × 2.70Gbit/s = 10.80Gbit/s bandwidth (data rate of 8.64Gbit/s or 1.08GB/s)
HBR2: 4 × 5.40Gbit/s = 21.60Gbit/s bandwidth (data rate of 17.28Gbit/s or 2.16GB/s)
HBR3: 4 × 8.10Gbit/s = 32.40Gbit/s bandwidth (data rate of 25.92Gbit/s or 3.24GB/s)
UHBR 10: 4 × 10.0Gbit/s = 40.00Gbit/s bandwidth (data rate of 38.69Gbit/s or 4.84GB/s with 128b/132b encoding and FEC)
UHBR 13.5: 4 × 13.5Gbit/s = 54.00Gbit/s bandwidth (data rate of 52.22Gbit/s or 6.52GB/s)
UHBR 20: 4 × 20.0Gbit/s = 80.00Gbit/s bandwidth (data rate of 77.37Gbit/s or 9.69GB/s)
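The per-mode totals above can be reproduced with a short calculation. The following Python sketch is only an illustration of the arithmetic (the 128b/132b efficiency is approximated as 96.7%, as described above), not an official VESA tool:

```python
# Reproduce the aggregate bandwidth and data-rate figures for a 4-lane link.
MODES = {
    # mode: (per-lane rate in Gbit/s, encoding efficiency)
    "RBR":       (1.62, 0.80),    # 8b/10b encoding
    "HBR":       (2.70, 0.80),
    "HBR2":      (5.40, 0.80),
    "HBR3":      (8.10, 0.80),
    "UHBR 10":   (10.0, 0.967),   # 128b/132b incl. FEC and link overhead (approx.)
    "UHBR 13.5": (13.5, 0.967),
    "UHBR 20":   (20.0, 0.967),
}
LANES = 4

for name, (lane_rate, efficiency) in MODES.items():
    raw = lane_rate * LANES                  # total bandwidth, Gbit/s
    data = raw * efficiency                  # usable data rate, Gbit/s
    print(f"{name:9s} raw {raw:5.2f} Gbit/s  data = {data:5.2f} Gbit/s ({data / 8:.2f} GB/s)")
```

Running this yields, for example, 25.92Gbit/s (3.24GB/s) for HBR3 and roughly 77.4Gbit/s (9.7GB/s) for UHBR 20, in line with the list above.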
The transmission mode used by the DisplayPort main link is negotiated by the source and sink device when a connection is made, through a process called Link Training. This process determines the maximum possible speed of the connection. If the quality of the DisplayPort cable is insufficient to reliably handle HBR2 speeds for example, the DisplayPort devices will detect this and switch down to a lower mode to maintain a stable connection. The link can be re-negotiated at any time if a loss of synchronization is detected.
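Conceptually, the fallback behaviour resembles the sketch below. This is a hypothetical Python illustration, not the actual link-training procedure, which the DisplayPort standard defines in terms of training patterns sent over the Main Link and status read back through the AUX channel:

```python
# Hypothetical illustration of speed fallback during link training.
RATES = {"RBR": 1.62, "HBR": 2.70, "HBR2": 5.40, "HBR3": 8.10}  # Gbit/s per lane

def negotiate_link(source_modes, sink_modes, training_passes):
    """Return the fastest mode supported by both devices that trains successfully."""
    common = set(source_modes) & set(sink_modes)
    for mode in sorted(common, key=RATES.get, reverse=True):
        if training_passes(mode):        # e.g. the cable carries this rate reliably
            return mode                  # the link then runs at this speed
    return None                          # no stable link could be established

# Example: both ends support HBR2, but a marginal cable only trains up to HBR.
chosen = negotiate_link(["RBR", "HBR", "HBR2"],
                        ["RBR", "HBR", "HBR2", "HBR3"],
                        lambda mode: RATES[mode] <= 2.70)
print(chosen)  # -> HBR
```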
Audio data is transmitted across the main link during the video blanking intervals (short pauses between each line and frame of video data).
Auxiliary channel
The DisplayPort AUX channel is a half-duplex (bidirectional) data channel used for miscellaneous additional data beyond video and audio, such as EDID (I2C) or CEC commands. This bidirectional data channel is required, since the video lane signals are unidirectional from source to display. AUX signals are transmitted across a dedicated set of twisted-pair wires. DisplayPort1.0 specified Manchester encoding with a 2MBd signal rate (1Mbit/s data rate). Version 1.2 of the DisplayPort standard introduced a second transmission mode called FAUX (Fast AUX), which operated at 720Mbit/s with 8b/10b encoding (576Mbit/s data rate), but it was deprecated in version 1.3.
Cables and connectors
Cables
Compatibility and feature support
All DisplayPort cables are compatible with all DisplayPort devices, regardless of the version of each device or the cable certification level.
All features of DisplayPort will function across any DisplayPort cable. DisplayPort does not have multiple cable designs; all DP cables have the same basic layout and wiring, and will support any feature including audio, daisy-chaining, G-Sync/FreeSync, HDR, and DSC.
DisplayPort cables differ in their transmission speed support. DisplayPort specifies seven different transmission modes (RBR, HBR, HBR2, HBR3, UHBR10, UHBR13.5, and UHBR20) which support progressively higher bandwidths. Not all DisplayPort cables are capable of all seven transmission modes. VESA offers certifications for various levels of bandwidth. These certifications are optional, and not all DisplayPort cables are certified by VESA.
Cables with limited transmission speed are still compatible with all DisplayPort devices, but may place limits on the maximum resolution or refresh rate available.
DisplayPort cables are not classified by "version". Although cables are commonly labeled with version numbers, with HBR2 cables advertised as "DisplayPort1.2 cables" for example, this notation is not permitted by VESA. The use of version numbers with cables can falsely imply that a DisplayPort1.4 display requires a "DisplayPort1.4 cable", or that features introduced in version 1.4 such as HDR or DSC will not function with older "DP1.2 cables". DisplayPort cables are classified only by their bandwidth certification level (RBR, HBR, HBR2, HBR3, etc.), if they have been certified at all.
Cable bandwidth and certifications
Not all DisplayPort cables are capable of functioning at the highest levels of bandwidth. Cables may be submitted to VESA for an optional certification at various bandwidth levels. VESA offers five levels of cable certification: Standard, DP8K, DP40, DP54, and DP80. These certify DisplayPort cables for proper operation at the following speeds:
In April 2013, VESA published an article stating that the DisplayPort cable certification did not have distinct tiers for HBR and HBR2 bandwidth, and that any certified standard DisplayPort cable—including those certified under DisplayPort1.1—would be able to handle the 21.6Gbit/s bandwidth of HBR2 that was introduced with the DisplayPort 1.2 standard. The DisplayPort1.2 standard defines only a single specification for High Bit Rate cable assemblies, which is used for both HBR and HBR2 speeds, although the DP cable certification process is governed by the DisplayPort PHY Compliance Test Standard (CTS) and not the DisplayPort standard itself.
The DP8K certification was announced by VESA in January 2018, and certifies cables for proper operation at HBR3 speeds (8.1Gbit/s per lane, 32.4Gbit/s total).
In June 2019, with the release of version 2.0 of the DisplayPort Standard, VESA announced that the DP8K certification was also sufficient for the new UHBR10 transmission mode. No new certifications were announced for the UHBR13.5 and UHBR20 modes. VESA is encouraging displays to use tethered cables for these speeds, rather than releasing standalone cables onto the market.
The use of Display Stream Compression (DSC), introduced in DisplayPort1.4, greatly reduces the bandwidth requirements for the cable. Formats which would normally be beyond the limits of DisplayPort1.4, such as 4K (3840 × 2160) at 144Hz 8bpc RGB/ 4:4:4 (31.4Gbit/s data rate when uncompressed), can only be implemented by using DSC. This reduces the physical bandwidth requirements by 2–3×, placing such formats well within the capabilities of an HBR2-rated cable.
This exemplifies why DisplayPort cables are not classified by "version"; although DSC was introduced in version 1.4, this does not mean it needs a so-called "DP1.4 cable" (an HBR3-rated cable) to function. HBR3 cables are only required for applications which exceed HBR2-level bandwidth, not simply any application involving DisplayPort1.4. If DSC is used to reduce the bandwidth requirements to HBR2 levels, then an HBR2-rated cable will be sufficient.
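Using the example above: a 31.4Gbit/s uncompressed stream reduced by a factor of 2 to 3 through DSC needs roughly 10.5–15.7Gbit/s, comfortably below the 17.28Gbit/s data rate that an HBR2-rated cable can carry.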
In version 2.1, VESA introduced the DP40 and DP80 cable certification tiers, which validate cables for UHBR10 and UHBR20 speeds respectively. DisplayPort 2.1a introduced DP54 cable certification for UHBR13.5 speed.
Cable length
The DisplayPort standard does not specify any maximum length for cables, though the DisplayPort 1.2 standard does set a minimum requirement that all cables up to 2 meters in length must support HBR2 speeds (21.6Gbit/s), and all cables of any length must support RBR speeds (6.48Gbit/s). Cables longer than 2 meters may or may not support HBR/HBR2 speeds, and cables of any length may or may not support HBR3 speeds or above.
Connectors and pin configuration
DisplayPort cables and ports may have either a "full-size" connector or a "mini" connector. These connectors differ only in physical shape—the capabilities of DisplayPort are the same regardless of which connector is used. Using a Mini DisplayPort connector does not affect performance or feature support of the connection.
Full-size DisplayPort connector
The standard DisplayPort connector (now referred to as a "full-size" connector to distinguish it from the mini connector) was the sole connector type introduced in DisplayPort1.0. It is a 20-pin single-orientation connector with a friction lock and an optional mechanical latch. The standard DisplayPort receptacle has dimensions of 16.10mm (width) × 4.76mm (height) × 8.88mm (depth).
The standard DisplayPort connector pin allocation is as follows:
12 pins for the main link – the main link consists of four shielded twisted pairs. Each pair requires 3 pins; one for each of the two wires, and a third for the shield. (pins 1–12)
2 additional ground pins – (pins 13 and 14)
3 pins for the auxiliary channel – the auxiliary channel uses another 3-pin shielded twisted pair (pins 15–17)
1 pin for HPD – hot-plug detection (pin 18)
2 pins for power – 3.3V power and return line (pins 19 and 20)
Mini DisplayPort connector
The Mini DisplayPort connector was developed by Apple for use in their computer products. It was first announced in October 2008 for use in the new MacBooks and Cinema Display. In 2009, VESA adopted it as an official standard, and in 2010 the specification was merged into the main DisplayPort standard with the release of DisplayPort1.2. Apple freely licenses the specification to VESA.
The Mini DisplayPort (mDP) connector is a 20-pin single-orientation connector with a friction lock. Unlike the full-size connector, it does not have an option for a mechanical latch. The mDP receptacle has dimensions of 7.50mm (width) × 4.60mm (height) × 4.99mm (depth). The mDP pin assignments are the same as the full-size DisplayPort connector.
DP_PWR (pin 20)
Pin 20 on the DisplayPort connector, called DP_PWR, provides 3.3V (±10%) DC power at up to 500mA (minimum power delivery of 1.5W). This power is available from all DisplayPort receptacles, on both source and display devices. DP_PWR is intended to provide power for adapters, amplified cables, and similar devices, so that a separate power cable is not necessary.
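The quoted minimum follows from the voltage tolerance: 3.3V × 500mA = 1.65W nominally, and at the −10% limit roughly 2.97V × 500mA ≈ 1.5W.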
Standard DisplayPort cable connections do not use the DP_PWR pin. Connecting the DP_PWR pins of two devices directly together through a cable can create a short circuit which can potentially damage devices, since the DP_PWR pins on two devices are unlikely to have exactly the same voltage (especially with a ±10% tolerance). For this reason, the DisplayPort1.1 and later standards specify that passive DisplayPort-to-DisplayPort cables must leave pin 20 unconnected.
However, in 2013 VESA announced that after investigating reports of malfunctioning DisplayPort devices, it had discovered that a large number of non-certified vendors were manufacturing their DisplayPort cables with the DP_PWR pin connected:
The stipulation that the DP_PWR wire be omitted from standard DisplayPort cables was not present in the DisplayPort1.0 standard. However, DisplayPort products (and cables) did not begin to appear on the market until 2008, long after version 1.0 had been replaced by version 1.1. The DisplayPort1.0 standard was never implemented in commercial products.
Resolution and refresh frequency limits
The tables below describe the refresh frequencies that can be achieved with each transmission mode. In general, maximum refresh frequency is determined by the transmission mode (RBR, HBR, HBR2, HBR3, UHBR10, UHBR13.5, or UHBR20). These transmission modes were introduced to the DisplayPort standard as follows:
RBR and HBR were defined in the initial release of the DisplayPort standard, version 1.0
HBR2 was introduced in version 1.2
HBR3 was introduced in version 1.3
UHBR10, UHBR13.5, and UHBR20 were introduced in version 2.0
However, transmission mode support is not necessarily dictated by a device's claimed "DisplayPort version number". For example, older versions of the DisplayPort Marketing Guidelines allowed a device to be labeled as "DisplayPort 1.2" if it supported the MST feature, even if it didn't support the HBR2 transmission mode. Newer versions of the guidelines have removed this clause, and currently (as of the June 2018 revision) there are no guidelines on the usage of DisplayPort version numbers in products. DisplayPort "version numbers" are therefore not a reliable indication of what transmission speeds a device can support.
In addition, individual devices may have their own arbitrary limitations beyond transmission speed. For example, NVIDIA Kepler GK104 GPUs (such as the GeForce GTX 680 and 770) support "DisplayPort 1.2" with the HBR2 transmission mode, but are limited to 540Mpx/s, only three-quarters of the maximum possible with HBR2. Consequently, certain devices may have limitations that differ from those listed in the following tables.
To support a particular format, the source and display devices must both support the required transmission mode, and the DisplayPort cable must also be capable of handling the required bandwidth of that transmission mode. (See: Cables and connectors)
Refresh frequency limits for common resolutions
The maximum limits for the RBR and HBR modes are calculated using standard data rate calculations. For UHBR modes, the limits are based on the data efficiency calculations provided by the DisplayPort standard. All calculations assume uncompressed RGB video with CVT-RB v2 timing. Maximum limits may differ if compression (i.e. DSC) or 4:2:2 or 4:2:0 chroma subsampling are used.
Display manufacturers may also use non-standard blanking intervals rather than CVT-RB v2 to achieve even higher frequencies when bandwidth is a constraint. The refresh frequencies in the below table do not represent the absolute maximum limit of each interface, but rather an estimate based on a modern standardized timing formula. The minimum blanking intervals (and therefore the exact maximum frequency that can be achieved) will depend on the display and how many secondary data packets it requires, and therefore will differ from model to model.
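The kind of calculation behind these tables can be approximated with a short script. The sketch below uses a simplified reduced-blanking model (a fixed 80-pixel horizontal blank and 35-line vertical blank, which is an assumption rather than the exact CVT-RB v2 formula), so its results are close to, but not identical to, the official figures:

```python
# Estimate the data rate needed for an uncompressed RGB video mode and check
# which transmission modes can carry it on a standard 4-lane link.
DATA_RATES = {"RBR": 5.18, "HBR": 8.64, "HBR2": 17.28, "HBR3": 25.92,
              "UHBR 10": 38.69, "UHBR 13.5": 52.22, "UHBR 20": 77.37}  # Gbit/s

def required_gbps(h_active, v_active, refresh_hz, bits_per_component,
                  h_blank=80, v_blank=35):
    pixel_clock = (h_active + h_blank) * (v_active + v_blank) * refresh_hz
    return pixel_clock * bits_per_component * 3 / 1e9   # 3 components (RGB)

need = required_gbps(3840, 2160, 120, 8)   # 4K UHD, 120 Hz, 8 bpc
ok = [mode for mode, capacity in DATA_RATES.items() if capacity >= need]
print(f"needs about {need:.1f} Gbit/s; usable modes: {ok}")
# -> needs about 24.8 Gbit/s; usable modes are HBR3 and the UHBR modes
```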
Refresh frequency limits for standard video
Color depth of 8bpc (24bit/px or 16.7 million colors) is assumed for all formats in these tables. This is the standard color depth used on most computer displays. Note that some operating systems refer to this as "32-bit" color depth—this is the same as 24-bit color depth. The 8 extra bits are for alpha channel information, which is only present in software. At the transmission stage, this information has already been incorporated into the primary color channels, so the actual video data transmitted across the cable only contains 24 bits per pixel.
Refresh frequency limits for HDR video
Color depth of 10bpc (30bit/px or 1.07 billion colors) is assumed for all formats in these tables. This color depth is a requirement for various common HDR standards, such as HDR10. It requires 25% more bandwidth than standard 8bpc video.
HDR extensions were defined in version 1.4 of the DisplayPort standard. Some displays support these HDR extensions, but may only implement HBR2 transmission mode if the extra bandwidth of HBR3 is unnecessary (for example, on 4K 60Hz HDR displays). Since there is no definition of what constitutes a "DisplayPort 1.4" device, some manufacturers may choose to label these as "DP 1.2" devices despite their support for DP 1.4 HDR extensions. As a result, DisplayPort "version numbers" should not be used as an indicator of HDR support.
Features
DisplayPort Dual-Mode (DP++)
DisplayPort Dual-Mode (DP++), also called Dual-Mode DisplayPort, is a standard which allows DisplayPort sources to use simple passive adapters to connect to HDMI or DVI displays, and allows DisplayPort displays to use simple passive adapters to connect HDMI or DVI sources. Dual-mode is an optional feature, so not all DisplayPort sources necessarily support DVI/HDMI passive adapters, though in practice nearly all devices do. Officially, the "DP++" logo should be used to indicate a DP port that supports dual-mode, but most modern devices do not use the logo.
Devices which implement dual-mode will detect that a DVI or HDMI adapter is attached, and send DVI/HDMI TMDS signals instead of DisplayPort signals. The original DisplayPort Dual-Mode standard (version 1.0), used in DisplayPort1.1 devices, only supported TMDS clock speeds of up to 165MHz (4.95Gbit/s bandwidth). This is equivalent to HDMI1.2, and is sufficient for up to 1920 × 1200 at 60Hz.
In 2013, VESA released the Dual-Mode 1.1 standard, which added support for up to a 300MHz TMDS clock (9.00Gbit/s bandwidth), and is used in newer DisplayPort1.2 devices. This is slightly less than the 340MHz maximum of HDMI1.4, and is sufficient for up to 1920 × 1080 at 120Hz, 2560 × 1600 at 60Hz, or 3840 × 2160 at 30Hz. Older adapters, which were only capable of the 165MHz speed, were retroactively termed "Type1" adapters, with the new 300MHz adapters being called "Type2".
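These limits follow from the pixel clocks involved: 1920 × 1200 at 60Hz with reduced blanking uses roughly a 154MHz pixel clock, just under the 165MHz Type 1 limit, while 3840 × 2160 at 30Hz with CTA-861 timing uses a 297MHz pixel clock, just under the 300MHz Type 2 limit.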
Dual-mode limitations
Limited adapter speed: Although the pinout and digital signal values transmitted by the DP port are identical to a native DVI/HDMI TMDS source, the transmission lines on a DisplayPort source are AC-coupled (a series capacitor isolates the line from passing DC voltages) while DVI and HDMI TMDS are DC-coupled. As a result, dual-mode adapters must contain a level-shifting circuit which couples the signal lines to a DC source. The presence of this circuit places a limit on how quickly the adapter can operate, and therefore newer adapters are required for each higher speed added to the standard.
Unidirectional: Although the dual-mode standard specifies a method for DisplayPort sources to output DVI/HDMI signals using simple passive adapters, there is no counterpart standard to give DisplayPort displays the ability to receive DVI/HDMI input signals through passive adapters. As a result, DisplayPort displays can only receive native DisplayPort signals; any DVI or HDMI input signals must be converted to the DisplayPort format with an active conversion device. DVI and HDMI sources cannot be connected to DisplayPort displays using passive adapters.
Single-link DVI only: Since DisplayPort dual-mode operates by using the pins of the DisplayPort connector to send DVI/HDMI signals, the 20-pin DisplayPort connector can only produce a single-link DVI signal (which uses 19 pins). A dual-link DVI signal uses 25 pins, and is therefore impossible to transmit natively from a DisplayPort connector through a passive adapter. Dual-link DVI signals can only be produced by converting from native DisplayPort output signals with an active conversion device.
Unavailable on USB-C: The DisplayPort Alternate Mode specification for sending DisplayPort signals over a USB-C cable does not include support for the dual-mode protocol. As a result, DP-to-DVI and DP-to-HDMI passive adapters do not function when chained from a USB-C to DP adapter.
Multi-Stream Transport (MST)
Multi-Stream Transport is a feature first introduced in the DisplayPort1.2 standard. It allows multiple independent displays to be driven from a single DP port on the source device by multiplexing several video streams into a single stream and sending it to a branch device, which demultiplexes the signal into the original streams. Branch devices are commonly found in the form of an MST hub, which plugs into a single DP input port and provides multiple outputs, but it can also be implemented on a display internally to provide a DP output port for daisy-chaining, effectively embedding a 2-port MST hub inside the display. Theoretically, up to 63 displays can be supported, but the combined data rate requirements of all the displays cannot exceed the limits of a single DP port (17.28Gbit/s for a DP1.2 port, or 25.92Gbit/s for a DP 1.3/1.4 port). In addition, the maximum number of links between the source and any device (i.e. the maximum length of a daisy-chain) is 7, and the maximum number of physical output ports on each branch device (such as a hub) is 7. With the release of MST, standard single-display operation has been retroactively named "SST" mode (Single-Stream Transport).
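As an illustration of this shared-bandwidth constraint, the following sketch (with approximate, reduced-blanking stream rates chosen for the example) checks whether a set of daisy-chained displays fits within a single DP1.2 link:

```python
# Check whether several MST streams fit within one DisplayPort 1.2 link.
DP12_DATA_RATE = 17.28   # Gbit/s of payload available with HBR2

streams_gbps = {          # approximate uncompressed data rates (assumed values)
    "2560x1440 @ 60 Hz, 8 bpc": 5.6,
    "1920x1080 @ 60 Hz, 8 bpc (display A)": 3.2,
    "1920x1080 @ 60 Hz, 8 bpc (display B)": 3.2,
}

total = sum(streams_gbps.values())
verdict = "fits within" if total <= DP12_DATA_RATE else "exceeds"
print(f"combined load {total:.1f} Gbit/s {verdict} a DP 1.2 (HBR2) link")
```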
Daisy-chaining is a feature that must be specifically supported by each intermediary display; not all DisplayPort1.2 devices support it. Daisy-chaining requires a dedicated DisplayPort output port on the display. Standard DisplayPort input ports found on most displays cannot be used as a daisy-chain output. Only the last display in the daisy-chain does not need to support the feature specifically or have a DP output port. DisplayPort1.1 displays can also be connected to MST hubs, and can be part of a DisplayPort daisy-chain if they are the last display in the chain.
The host system's software also needs to support MST for hubs or daisy-chains to work. While Microsoft Windows environments have full support for it, Apple operating systems currently do not support MST hubs or DisplayPort daisy-chaining as of macOS 10.15 ("Catalina").
DisplayPort-to-DVI and DisplayPort-to-HDMI adapters/cables may or may not function from an MST output port; support for this depends on the specific device.
MST is supported by USB Type-C DisplayPort Alternate Mode, so standard DisplayPort daisy-chains and MST hubs do function from Type-C sources with a simple Type-C to DisplayPort adapter.
High dynamic range (HDR)
Support for HDR video was introduced in DisplayPort1.4. It implements the CTA 861.3 standard for transport of static HDR metadata in EDID.
Content protection
DisplayPort1.0 includes optional DPCP (DisplayPort Content Protection) from Philips, which uses 128-bit AES encryption. It also features full authentication and session key establishment. Each encryption session is independent, and it has an independent revocation system. This portion of the standard is licensed separately. It also adds the ability to verify the proximity of the receiver and transmitter, a technique intended to ensure users are not bypassing the content protection system to send data out to distant, unauthorized users.
DisplayPort1.1 added optional implementation of industry-standard 56-bit HDCP (High-bandwidth Digital Content Protection) revision 1.3, which requires separate licensing from the Digital Content Protection LLC.
DisplayPort1.3 added support for HDCP2.2, which is also used by HDMI2.0.
Cost
VESA, the creators of the DisplayPort standard, state that the standard is royalty-free to implement. However, in March 2015, MPEG LA issued a press release stating that a royalty rate of $0.20 per unit applies to DisplayPort products manufactured or sold in countries that are covered by one or more of the patents in the MPEG LA license pool, which includes patents from Hitachi Maxell, Philips, Lattice Semiconductor, Rambus, and Sony. In response, VESA updated their DisplayPort FAQ page with the following statement:
As of August 2019, VESA's official FAQ no longer contains a statement mentioning the MPEG LA royalty fees.
While VESA does not charge any per-device royalty fees, VESA requires membership for access to said standards. The minimum cost is presently $5,000 (or $10,000 depending on Annual Corporate Sales Revenue) annually.
Advantages over DVI, VGA and FPD-Link
In December 2010, several computer vendors and display makers including Intel, AMD, Dell, Lenovo, Samsung and LG announced they would begin phasing out FPD-Link, VGA, and DVI-I over the next few years, replacing them with DisplayPort and HDMI.
DisplayPort has several advantages over VGA, DVI, and FPD-Link.
Standard available to all VESA members with an extensible standard to help broad adoption
Fewer lanes with embedded self-clock, reduced EMI with data scrambling and spread spectrum mode
Based on a micro-packet protocol
Allows easy expansion of the standard with multiple data types
Flexible allocation of available bandwidth between audio and video
Multiple video streams over single physical connection (version 1.2)
Long-distance transmission over alternative physical media such as optical fiber (version 1.1a)
High-resolution displays and multiple displays with a single connection, via a hub or daisy-chaining
HBR2 mode with 17.28Gbit/s of effective video bandwidth allows four simultaneous 1080p60 displays (CEA-861 timings), two 2560 × 1600 × 30 bit @ 120Hz (CVT-R timings), or 4K UHD @ 60Hz
HBR3 mode with 25.92Gbit/s of effective video bandwidth, using CVT-R2 timings, allows eight simultaneous 1080p displays (1920 × 1080) @ 60Hz, stereoscopic 4K UHD (3840 × 2160) @ 120Hz, or 5120 × 2880 @ 60Hz each using 24 bit RGB, and up to 8K UHD (7680 × 4320) @ 60Hz using 4:2:0 subsampling
Designed to work for internal chip-to-chip communication
Aimed at replacing internal FPD-Link links to display panels with a unified link interface
Compatible with low-voltage signaling used with sub-micron CMOS fabrication
Can drive display panels directly, eliminating scaling and control circuits and allowing for cheaper and slimmer displays
Link training with adjustable amplitude and preemphasis adapts to differing cable lengths and signal quality
Reduced bandwidth transmission over a 15-meter cable, at least 1920 × 1080p @ 60Hz at 24 bits per pixel
Full bandwidth transmission over a 3-meter cable
High-speed auxiliary channel for DDC, EDID, MCCS, DPMS, HDCP, adapter identification etc. traffic
Can be used for transmitting bi-directional USB, touch-panel data, CEC, etc.
Self-latching connector
Comparison with HDMI
Although DisplayPort has much of the same functionality as HDMI, it is a complementary connection used in different scenarios. A dual-mode DisplayPort port can emit an HDMI signal via a passive adapter.
As of 2008, HDMI Licensing, LLC charged an annual fee of US$10,000 to each high-volume manufacturer and a per-unit royalty rate of US$0.04 to US$0.15. DisplayPort is royalty-free, but implementers thereof are not prevented from charging (royalty or otherwise) for that implementation.
DisplayPort 1.2 has more bandwidth at 21.6Gbit/s (17.28Gbit/s with overhead removed) as opposed to HDMI 2.0's 18Gbit/s (14.4Gbit/s with overhead removed).
DisplayPort 1.3 raises that to 32.4Gbit/s (25.92Gbit/s with overhead removed), and HDMI 2.1 raises that up to 48Gbit/s (42.67Gbit/s with overhead removed), adding an additional TMDS link in place of the clock lane. DisplayPort also has the ability to share this bandwidth with multiple streams of audio and video to separate devices.
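These payload figures follow from the respective line codings: DisplayPort 1.2 and 1.3 use 8b/10b (80% efficiency, e.g. 32.4 × 0.8 = 25.92Gbit/s), HDMI 2.0's TMDS likewise uses 8b/10b (18 × 0.8 = 14.4Gbit/s), and HDMI 2.1's FRL signalling uses 16b/18b coding (48 × 16/18 ≈ 42.67Gbit/s).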
DisplayPort has historically had higher bandwidth than the HDMI standard available at the same time. The only exception is HDMI 2.1 (2017), which has a higher transmission bandwidth (48Gbit/s) than DisplayPort 1.3 (2014) at 32.4Gbit/s. DisplayPort 2.0 (2019) retook transmission bandwidth superiority at 80.0Gbit/s.
DisplayPort in native mode lacks some HDMI features such as Consumer Electronics Control (CEC) commands. The CEC bus allows linking multiple sources to a single display and controlling any of these devices from any remote. DisplayPort 1.3 added the possibility of transmitting CEC commands over the AUX channel. HDMI, by contrast, has featured CEC from its very first version, supporting the connection of multiple sources to a single display, as is typical for a TV screen. Conversely, DisplayPort's Multi-Stream Transport allows connecting multiple displays to a single computer source. This reflects the fact that HDMI originated from consumer electronics companies, whereas DisplayPort is owned by VESA, which started as an organization for computer standards.
HDMI uses a unique Vendor-Specific Data Block structure, which allows for features such as additional color spaces. However, such features can also be defined through CEA EDID extensions.
Both HDMI and DisplayPort have published specifications for transmitting their signals over the USB-C connector; for more details, see the USB-C section under Companion standards below.
Market share
Figures from IDC show that 5.1% of commercial desktops and 2.1% of commercial notebooks released in 2009 featured DisplayPort. The main factor behind this was the phase-out of VGA, and that both Intel and AMD planned to stop building products with FPD-Link by 2013. Nearly 70% of LCD monitors sold in August 2014 in the US, UK, Germany, Japan, and China were equipped with HDMI/DisplayPort technology, up 7.5% on the year, according to Digitimes Research. IHS Markit, an analytics firm, forecast that DisplayPort would surpass HDMI in 2019.
Companion standards
Mini DisplayPort
Mini DisplayPort (mDP) is a standard announced by Apple in the fourth quarter of 2008. Shortly after announcing Mini DisplayPort, Apple announced that it would license the connector technology with no fee. The following year, in early 2009, VESA announced that Mini DisplayPort would be included in the upcoming DisplayPort 1.2 specification.
On 24 February 2011, Apple and Intel announced Thunderbolt, a successor to Mini DisplayPort which adds support for PCI Express data connections while maintaining backwards compatibility with Mini DisplayPort based peripherals.
Micro DisplayPort
Micro DisplayPort would have targeted systems that need ultra-compact connectors, such as phones, tablets and ultra-portable notebook computers. This standard would have been physically smaller than the currently available Mini DisplayPort connectors. The standard was expected to be released by Q2 2014.
DDM
Direct Drive Monitor (DDM) 1.0 standard was approved in December 2008. It allows for controller-less monitors where the display panel is directly driven by the DisplayPort signal, although the available resolutions and color depth are limited to two-lane operation.
Display Stream Compression
Display Stream Compression (DSC) is a VESA-developed video compression algorithm designed to enable increased display resolutions and frame rates over existing physical interfaces, and make devices smaller and lighter, with longer battery life.
eDP
Embedded DisplayPort (eDP) is a display panel interface standard for portable and embedded devices. It defines the signaling interface between graphics cards and integrated displays. The various revisions of eDP are based on existing DisplayPort standards. However, version numbers between the two standards are not interchangeable. For instance, eDP version 1.4 is based on DisplayPort 1.2, while eDP version 1.4a is based on DisplayPort 1.3. Embedded DisplayPort has displaced LVDS as the predominant panel interface in modern laptops and modern smartphones.
eDP 1.0 was adopted in December 2008. It included advanced power-saving features such as seamless refresh rate switching.
Version 1.1 was approved in October 2009 followed by version 1.1a in November 2009.
Version 1.2 was approved in May 2010 and includes DisplayPort 1.2 HBR2 data rates, 120Hz sequential color monitors, and a new display panel control protocol that works through the AUX channel.
Version 1.3 was published in February 2011; it includes a new optional Panel Self-Refresh (PSR) feature developed to save system power and further extend battery life in portable PC systems. PSR mode allows the GPU to enter a power saving state in between frame updates by including framebuffer memory in the display panel controller.
Version 1.4 was released in February 2013; it reduces power consumption through partial-frame updates in PSR mode, regional backlight control, lower interface voltages, and additional link rates; the auxiliary channel supports multi-touch panel data to accommodate different form factors. Version 1.4a was published in February 2015; the underlying DisplayPort version was updated to 1.3 in order to support HBR3 data rates, Display Stream Compression 1.1, Segmented Panel Displays, and partial updates for Panel Self-Refresh. Version 1.4b was published in October 2015; its protocol refinements and clarifications are intended to enable adoption of eDP 1.4b in devices by mid-2016. Version 1.5 was published in October 2021; it adds new features and protocols, including enhanced support for Adaptive-Sync, that provide additional power savings and improved gaming and media playback performance.
iDP
Internal DisplayPort (iDP) is a standard that defines an internal link between a digital TV system-on-a-chip controller and the display panel's timing controller. Version 1.0 was approved in April 2010. It aims to replace currently used internal FPD-Link lanes with a DisplayPort connection. iDP features a unique physical interface and protocols, which are not directly compatible with DisplayPort and are not applicable to external connections; however, they enable very high resolution and refresh rates while providing simplicity and extensibility. iDP features a non-variable 2.7GHz clock and is nominally rated at 3.24Gbit/s per lane, with up to sixteen lanes in a bank, resulting in a six-fold decrease in wiring requirements over FPD-Link for a 1080p24 signal; other data rates are also possible. iDP was built with simplicity in mind, so it does not have an AUX channel, content protection, or multiple streams; it does, however, have frame sequential and line interleaved stereo 3D.
PDMI
Portable Digital Media Interface (PDMI) is an interconnection between docking stations/display devices and portable media players, which includes 2-lane DisplayPort v1.1a connection. It has been ratified in February 2010 as ANSI/CEA-2017-A.
wDP
Wireless DisplayPort (wDP) enables the bandwidth and feature set of DisplayPort 1.2 for cable-free applications operating in the 60GHz radio band. It was announced in November 2010 by WiGig Alliance and VESA as a cooperative effort.
SlimPort
SlimPort, a brand of Analogix products, complies with Mobility DisplayPort, also known as MyDP, which is an industry standard for a mobile audio/video Interface, providing connectivity from mobile devices to external displays and HDTVs. SlimPort implements the transmission of video up to 4K-UltraHD and up to eight channels of audio over the micro-USB connector to an external converter accessory or display device. SlimPort products support seamless connectivity to DisplayPort, HDMI and VGA displays. The MyDP standard was released in June 2012, and the first product to use SlimPort was Google's Nexus 4 smartphone. Some LG smartphones in LG G series also adopted SlimPort.
SlimPort is an alternative to Mobile High-Definition Link (MHL).
DisplayID
DisplayID is designed to replace the E-EDID standard. DisplayID features variable-length structures which encompass all existing EDID extensions as well as new extensions for 3D displays and embedded displays.
The latest version 1.3 (announced on 23 September 2013) adds enhanced support for tiled display topologies; it allows better identification of multiple video streams, and reports bezel size and locations. As of December 2013, many current 4K displays use a tiled topology, but lack a standard way to report to the video source which tile is left and which is right. These early 4K displays, for manufacturing reasons, typically use two 1920×2160 panels laminated together and are currently generally treated as multiple-monitor setups. DisplayID 1.3 also allows 8K display discovery, and has applications in stereo 3D, where multiple video streams are used.
DockPort
DockPort, formerly known as Lightning Bolt, is an extension to DisplayPort to include USB 3.0 data as well as power for charging portable devices from attached external displays. Originally developed by AMD and Texas Instruments, it has been announced as a VESA specification in 2014.
USB-C
On 22 September 2014, VESA published the DisplayPort Alternate Mode on USB Type-C Connector Standard, a specification on how to send DisplayPort signals over the newly released USB-C connector. One, two or all four of the differential pairs that USB uses for the SuperSpeed bus can be configured dynamically to be used for DisplayPort lanes. In the first two cases, the connector still can carry a full SuperSpeed signal; in the latter case, at least a non-SuperSpeed signal is available. The DisplayPort AUX channel is also supported over the two sideband signals over the same connection; furthermore, USB Power Delivery according to the newly expanded USB-PD 2.0 specification is possible at the same time. This makes the Type-C connector a strict superset of the use cases envisioned for DockPort, SlimPort, and Mini and Micro DisplayPort.
VirtualLink
VirtualLink was a proposal to allow the power, video, and data required to drive virtual reality headsets to be delivered over a single USB-C cable. The proposal was abandoned in September 2020.
Products
Since DisplayPort's introduction in 2006, it has gained popularity within the computer industry and is featured on many graphics cards, displays, and notebook computers. Dell was the first company to introduce a consumer product with a DisplayPort connector, the Dell UltraSharp 3008WFP, which was released in January 2008. Soon after, AMD and Nvidia released products to support the technology. AMD included support in the Radeon HD 3000 series of graphics cards, and Nvidia first introduced support in the GeForce 9 series starting with the GeForce 9600 GT.
Later in 2008, Apple introduced several products featuring a Mini DisplayPort. The new connector, proprietary at the time, eventually became part of the DisplayPort standard; however, Apple reserves the right to void the license should the licensee "commence an action for patent infringement against Apple". In 2009, AMD followed suit with their Radeon HD 5000 series of graphics cards, which featured the Mini DisplayPort on the Eyefinity versions in the series.
Nvidia launched a graphics card with 8 Mini DisplayPort outputs on 4 November 2015, called the NVS 810, which was intended for digital signage.
Nvidia revealed the GeForce GTX 1080, the world's first graphics card with DisplayPort 1.4 support, on 6 May 2016. AMD followed with the Radeon RX 480 to support DisplayPort 1.3/1.4 on 29 June 2016. The Radeon RX 400 series supports DisplayPort 1.3 HBR3 and HDR10, dropping the DVI connector(s) in the reference board design.
In February 2017, VESA and Qualcomm announced that DisplayPort Alt Mode video transport will be integrated into the Snapdragon 835 mobile chipset, which powers smartphones, VR/AR head-mounted displays, IP cameras, tablets and mobile PCs.
Support for DisplayPort Alternate Mode over USB-C
Currently, DisplayPort is the most widely implemented alternate mode, and is used to provide video output on devices that do not have standard-size DisplayPort or HDMI ports, such as smartphones, tablets, and laptops. A USB-C multiport adapter converts the device's native video stream to DisplayPort/HDMI/VGA, allowing it to be displayed on an external display, such as a television set or computer monitor.
Examples of devices that support DisplayPort Alternate Mode over USB-C include: MacBook, Chromebook Pixel, Surface Book 2, Samsung Galaxy Tab S4, iPad Pro (3rd generation), iPhone 15/15 Pro, HTC 10/U Ultra/U11/U12+, Huawei Mate 10/20/30, LG V20/V30/V40*/V50, OnePlus 7 and newer, ROG Phone, Samsung Galaxy S8 and newer, Nintendo Switch, Sony Xperia 1/5 etc.
Participating companies
The following companies have participated in preparing the drafts of DisplayPort, eDP, iDP, DDM or DSC standards:
Agilent
Altera
AMD Graphics Product Group
Analogix
Apple
Astrodesign
BenQ
Broadcom Corporation
Chi Mei Optoelectronics
Chrontel
Dell
Display Labs
Foxconn Electronics
FuturePlus Systems
Genesis Microchip
Gigabyte Technology
Hardent
Hewlett-Packard
Hosiden
Hirose Electric Group
Intel
intoPIX
I-PEX
Integrated Device Technology
JAE Electronics
Kawasaki Microelectronics (K-Micro)
Keysight Technologies
Lenovo
LG Display
Luxtera
Molex
NEC
NVIDIA
NXP Semiconductors
Xi3 Corporation
Parade Technologies
Realtek Semiconductor
Samsung
SMK
STMicroelectronics
Synaptics Inc.
SyntheSys Research Inc.
Teledyne LeCroy (QuantumData)
Tektronix
Texas Instruments
TLi
Tyco Electronics
ViewSonic
VTM
The following companies have additionally announced their intention to implement DisplayPort, eDP or iDP:
Acer
ASRock
Biostar
Chroma
BlackBerry
Circuit Assembly
DataPro
Eizo
Fujitsu
Hall Research Technologies
ITE Tech.
Matrox Graphics
Micro-Star International
MStar Semiconductor
Novatek Microelectronics Corp.
Palit Microsystems Ltd.
Pioneer Corporation
S3 Graphics
Toshiba
Philips
Quantum Data
Sparkle Computer
Unigraf
Xitrix
| Technology | User interface | null |
2517809 | https://en.wikipedia.org/wiki/Arsenide%20mineral | Arsenide mineral | An arsenide mineral is a mineral that contains arsenide as its main anion. Arsenides are grouped with the sulfides in both the Dana and Strunz mineral classification systems.
Examples
algodonite
domeykite
löllingite
nickeline
rammelsbergite
safflorite
skutterudite
sperrylite
| Physical sciences | Minerals | Earth science |
2518423 | https://en.wikipedia.org/wiki/Eusthenopteron | Eusthenopteron | Eusthenopteron (from Greek eu 'good', sthenos 'strength', and pteron 'wing' or 'fin') is a genus of prehistoric sarcopterygian (often called "lobe-finned") fish known from several species that lived during the Late Devonian period, about 385 million years ago. It has attained an iconic status from its close relationship to tetrapods. Early depictions of animals of this genus show them emerging onto land, but paleontologists now think that Eusthenopteron species were strictly aquatic animals, though this is not known with certainty.
The genus was first described by J. F. Whiteaves in 1881, as part of a large collection of fishes from Miguasha, Quebec, Canada. Some 2,000 Eusthenopteron specimens have been collected from Miguasha, one of which was the object of intensely detailed study and several papers by paleoichthyologist Erik Jarvik between the 1940s and the 1990s.
Description
Eusthenopteron is a medium- to large-sized tristichopterid. The species E. foordi is estimated to have exceeded in length, while the species E. jenkinsi probably reached . Eusthenopteron may have weighed around 50 kilograms.
The earliest known fossilized evidence of bone marrow has been found in Eusthenopteron, which may be the origin of bone marrow in tetrapods. Eusthenopteron shares many features that are unique among fishes but held in common with the earliest-known tetrapods. It shares a similar pattern of skull roofing bones with stem-tetrapod forms such as Ichthyostega and Acanthostega. Eusthenopteron, like other tetrapodomorph fishes, had internal nostrils (or a choana), one of the defining traits of tetrapodomorphs, including tetrapods. It also had labyrinthodont teeth, characterized by infolded enamel, which characterizes all of the earliest known tetrapods as well.
Unlike the early tetrapods, Eusthenopteron did not have larval gills.
Classification
Like other fish-like sarcopterygians, Eusthenopteron possessed a two-part cranium, which hinged at mid-length along an intracranial joint. Eusthenopteron's notoriety comes from the pattern of its fin endoskeleton, which bears a distinct humerus, ulna, and radius in the fore-fin and femur, tibia, and fibula in the pelvic fin. These appendicular long bones had epiphyseal growth plates that allowed substantial longitudinal growth through endochondral ossification, as in tetrapod long bones. These six appendicular bones also occur in tetrapods and are a synapomorphy of a large clade of sarcopterygians, possibly Tetrapodomorpha (the humerus and femur are present in all sarcopterygians). Similarly, its elasmoid scales lack superficial odontodes composed of dentine and enamel; this loss appears to be a synapomorphy with more crownward tetrapodomorphs.
Eusthenopteron differs significantly from some later Carboniferous tetrapods in the apparent absence of a recognized larval stage and a definitive metamorphosis. In even the smallest known specimen of Eusthenopteron foordi, with a length of , the lepidotrichia cover all of the fins, which does not happen until after metamorphosis in genera like Polyodon (the American paddlefish). This might indicate that Eusthenopteron developed directly, with the hatchling already attaining the adult's general body form (Cote et al., 2002).
| Biology and health sciences | Prehistoric osteichthyans | Animals |
2520434 | https://en.wikipedia.org/wiki/Beneficial%20organism | Beneficial organism | In agriculture and gardening, a beneficial organism is any organism that benefits the growing process, including insects, arachnids, other animals, plants, bacteria, fungi, viruses, and nematodes. Benefits include pest control, pollination, and maintenance of soil health. The opposite of beneficial organisms are pests, which are organisms deemed detrimental to the growing process.
Beneficial or pest
The distinction between beneficial and pest is arbitrary, subjectively determined by examining the effect of a particular organism in a specific growing situation. There are many different types of beneficial organisms as well as beneficial microorganisms. Beneficial organisms include, but are not limited to, birds, bears, nematodes, insects, arachnids, and fungi. Birds and bears are considered beneficial mainly because they consume seeds from plants and spread them through their feces. Birds also prey on certain insects that eat plants and hinder their growth; such insects are considered non-beneficial organisms. Nematodes are considered beneficial because they help compost organic matter and provide nutrients to the soil in which plants grow. Insects and arachnids help the growing process by preying on non-beneficial organisms that consume plants. Fungi help the growing process through long threads of mycelium, which can extend far from a tree or plant and bring water and nutrients back to its roots.
The flip side of these helpful organisms is the non-beneficial organisms, which hinder or stop the growing process or prey on beneficial organisms. Examples include aphids, assassin bugs, and Japanese beetles. Aphids are drawn to plants by their pollen and then feed on the plants they find, to the plants' detriment. Assassin bugs are considered non-beneficial here because they feed on many beneficial insects, repeatedly stabbing them with the horn on their head and living up to the name "assassin bug". Japanese beetles are a particular pest for gardeners because their larvae feed on stems and roots while full-grown beetles feed on leaves and flowers, killing the plant.
Insects
Beneficial insects can include predators (such as ladybugs) of pest insects, and pollinators (such as bees, which are an integral part of the growth cycle of many crops). Increasingly, certain species of insects are managed and used to intervene where natural pollination or biological control is insufficient, usually due to human disturbance of the balance of established ecosystems.
Nematodes
Certain microscopic nematodes (worms) are beneficial in destroying and controlling populations of larvae that are damaging or deadly to crops and other plants. They are commonly used in organic gardening for their ability to kill various kinds of harmful larvae (fungus gnats, flea larvae, spidermites, weevils, grubs, wireworms, cutworms, and armyworms).
Animals
Birds and other animals may, by their actions, improve conditions in various growing situations, and in such cases are also beneficials. Birds assist in the spread of seeds by ingesting the fruits and berries of plants, then depositing the seeds in their droppings. Other animals, such as raccoons, bears, etc. provide similar benefits.
Plant
Plants that perform positive functions can also be considered beneficials (companion planting is one technique based on the principle of beneficial plants).
Issues
In agriculture, controversy surrounds the concept of beneficial insects. Much of this has to do with the effect of agrichemicals, like insecticides, herbicides and large quantities of synthetic fertilizers, on what are considered beneficials. Citing the reduction or elimination of various organisms as a side effect of agrichemical-based farming, some argue that critical damage is being done to various ecosystems, to the point where conventional agriculture is unsustainable in long-term societal planning. For example, if bee populations continue to be reduced by insecticides aimed at other pests, pollination will be further inhibited and crops will fail to fruit. If soil microorganisms are killed off, natural soil regeneration is inhibited, and reliance on mechanical and chemical inputs to keep the soil viable is increased, along with the fuel required to power these machines. The longer-term impact of these conditions has not yet been determined. Commercial ventures currently exist to provide pollinators and biological pest control, such as beekeepers bringing their hives cross-country to any number of farms in spring to pollinate their crops, or ladybirds being sold in small containers at garden centers.
| Technology | Soil and soil management | null |
20617690 | https://en.wikipedia.org/wiki/Super%20grid | Super grid | A super grid or supergrid is a wide-area transmission network, generally trans-continental or multinational, that is intended to make possible the trade of high volumes of electricity across great distances. It is sometimes also referred to as a "mega grid". Super grids typically are proposed to use high-voltage direct current (HVDC) to transmit electricity long distances. The latest generation of HVDC power lines can transmit energy with losses of only 1.6% per 1,000 km (621.4 miles).
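A minimal worked example of what the quoted loss figure implies over a long route (the 3,000 km length and 4 GW sending-end power below are assumed for illustration only, not figures from the article):

```python
# Illustrative calculation of the "1.6% loss per 1,000 km" figure quoted above.
# The route length and sending-end power are hypothetical example values.
loss_per_1000_km = 0.016      # fraction lost per 1,000 km segment (quoted figure)
length_km = 3000              # assumed route length
sending_power_gw = 4.0        # assumed power injected at the sending end

# treat each 1,000 km segment as losing 1.6% of the power entering it
fraction_delivered = (1 - loss_per_1000_km) ** (length_km / 1000)
delivered_gw = sending_power_gw * fraction_delivered

print(f"Fraction delivered over {length_km} km: {fraction_delivered:.3f}")
print(f"Delivered: {delivered_gw:.2f} GW; lost: {sending_power_gw - delivered_gw:.2f} GW")
```

Under these assumptions roughly 95% of the sent power arrives after 3,000 km, which is the kind of margin that makes very long HVDC links attractive for trading renewable electricity.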
Super grids could support a global energy transition by smoothing local fluctuations of wind energy and solar energy. In this context they are considered as a key technology to mitigate global warming.
History
The idea of creating long-distance transmission lines in order to take advantage of renewable sources distantly located is not new. In the US in the 1950s, a proposal was made to ship hydroelectric power from dams being constructed in the Pacific Northwest to consumers in Southern California, but it was opposed and scrapped. In 1961, U.S. president John F. Kennedy authorized a large public works project using new high-voltage, direct current technology from Sweden. The project was undertaken as a close collaboration between General Electric of the U.S. and ASEA of Sweden, and the system was commissioned in 1970. With several upgrades of the converter stations in the intervening decades, the system now has a capacity of 3,100 MW and is known as the Pacific DC Intertie.
The concept of a "super grid" dates back to the 1960s and was used to describe the emerging unification of the Great Britain grid.
In the code that governs the British Grid, the Grid Code, the Supergrid is currently defined – and has been since this code was first written, in 1990 – as referring to those parts of the British electricity transmission system that are connected at voltages in excess of 200 kV (200,000 volts). British power system planners and operational staff therefore invariably speak of the Supergrid in this context; in practice the definition used captures all of the equipment owned by the National Grid company in England and Wales, and no other equipment.
What has changed during the past 40 years is the scale of energy and distances that are imagined possible in a super grid. Europe began unifying its grids in the 1950s and its largest unified grid is the synchronous grid of Continental Europe serving 24 countries. Serious work is being conducted on unification of this synchronous European grid (previously known as the UCTE grid), with the neighboring synchronous transmission grid of some CIS countries, the IPS/UPS grid. If completed, the resulting massive grid would span 13 time zones stretching from the Atlantic to the Pacific.
While such grids cover great distances, the capacity to transmit large volumes of electricity remains limited due to congestion and control issues. The SuperSmart Grid (Europe) and the Unified Smart Grid (US) specify major technological upgrades that proponents claim are necessary to assure the practical operation and promised benefits of such transcontinental mega grids.
Concept
In current usage, "super grid" has two senses – one of being a superstructure layer overlaid or super-imposed upon existing regional transmission grid or grids, and the second of having some set of superior abilities exceeding those of even the most advanced grids.
Mega grid
In the "overlay", or "superstructure" meaning, a super grid is a very long-distance equivalent of a wide area synchronous network capable of large-scale transmission of renewable electricity. In some conceptions, a transmission grid of HVDC transmission lines forms a layer that is distinctly separate in the way that a superhighway system is separate from the system of city streets and regional highways. In more conventional conceptions such as the proposed unification of the synchronous European grid UCTE and the IPS/UPS system of the CIS, such a mega grid is no different from typical wide area synchronous transmission systems where electricity takes an ad hoc transit route directly through local utility transmission lines or HVDC lines as required.
Studies for such continental sized systems report there are scaling problems as a result of network complexity, transmission congestion, and the need for rapid diagnostic, coordination and control systems. Such studies observe that transmission capacity would need to be significantly higher than current transmission systems in order to promote unimpeded energy trading across distances unbounded by state, regional or national, or even continental borders.
As a practical matter, it has become necessary to incorporate smart grid features such as wide-area measurement systems (WAMS) into even modest-sized regional grids in order to avert major power outages such as the Northeast Blackout of 2003. Dynamic interactions between power generation groups are increasingly complex, and transient disturbances that cascade across neighboring utilities can be sudden, large and violent, accompanied by abrupt changes in the network topology as operators attempt to manually stabilize the network.
Superior grid
In the second sense of an advanced grid, the super grid is superior not only because it is a wide area mega grid, but also because it is highly coordinated from a macro level spanning nations and continents, all the way down to the micro-level scheduling low priority loads like water heaters and refrigeration. In the European SuperSmart Grid proposal and the US Unified Smart Grid concept, such super grids have intelligence features in the wide-area transmission layer which integrate the local smart grids into a single wide-area super grid. This is similar to how the Internet bound together multiple small networks into a single ubiquitous network.
Wide area transmission can be viewed as a horizontal extension of the smart grid. In a paradigm shift, the distinction between transmission and distribution blurs with the integration as energy flow becomes bidirectional. For example, distribution grids in rural areas might generate more energy than they use, turning the local smart grid into a virtual power plant, or a city's fleet of one million electric vehicles could be used to trim peaks in transmission supply by integrating them to the smart grid using vehicle to grid technology.
One advantage of such a geographically dispersed and dynamically balanced system is that the need for baseload generation is significantly reduced since intermittency of some sources such as ocean, solar, and wind can be smoothed.
A series of detailed modeling studies by Dr. Gregor Czisch, which looked at Europe-wide adoption of renewable energy and the interlinking of power grids using HVDC cables, indicates that Europe's entire power usage could come from renewables, with 70% of total energy from wind, at a cost the same as or lower than at present.
To some critics, such a wide area transmission layer is not novel; they point out that the technology has little difference from that used for regional and national power transmission networks. Proponents respond that beyond the qualitative smart grid features that allow instantaneous coordination and balancing of intermittent power sources across international boundaries, the quantitative comprehensiveness has a quality all its own. The claim is made that super grids open up markets.
In the same way that freeways revolutionized interstate transport and the Internet revolutionized online commerce when comprehensive high-capacity networks were built, it is argued that a high capacity super grid must be built in order to provide a distribution network so comprehensive and with such available capacity that energy trading is only limited by how much electricity entrepreneurs can bring to market.
Technology
Wide area super grids plans typically call for bulk transmission using high voltage direct current lines. Europe's SuperSmart Grid proposal relies on HVDC, and in the US, key decision makers such as Steven Chu favor a national long distance DC grid system.
There are industry advocates of high-voltage alternating current (HVAC). Although flexible alternating current transmission systems (FACTS) have drawbacks for long distances, American Electric Power has championed a 765 kV super grid it calls I-765, which would provide the 400 GW of extra transmission capacity required for producing 20% of US energy from wind farms based in the Midwest. Advocates of HVAC systems point out that HVDC systems are oriented toward point-to-point bulk transmission, and that multiple connections to them would require expensive, complex communication and control equipment, as opposed to the simple step-up transformers needed if AC lines were used. Currently, there is only one multipoint long-distance HVDC transmission system.
In the more distant future, the voltage loss of current methods could be avoided using experimental superconducting "SuperGrid" technology where the transmission cable is cooled by a liquid hydrogen pipeline which is also used to move energy nationwide. The energy losses for creating, containing, and re-cooling liquid hydrogen need to be accounted for.
Coordination and control of the network would use smart grid technologies such as phasor measurement units to rapidly detect imbalances in the network caused by fluctuating renewable energy sources and potentially respond instantaneously with programmed automatic protection schemes to reroute, reduce load, or reduce generation in response to network disturbances.
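A highly simplified, hypothetical sketch of the kind of automatic protection rule described above; the 50 Hz nominal frequency, 0.2 Hz deadband, and 5% step sizes are illustrative assumptions, not parameters of any real scheme:

```python
# Hypothetical protection rule: shed low-priority load when frequency sags,
# curtail generation when it rises. All thresholds are illustrative assumptions.
NOMINAL_HZ = 50.0
DEADBAND_HZ = 0.2

def protection_action(measured_hz: float) -> str:
    """Decide one corrective action from a single frequency measurement."""
    deviation = measured_hz - NOMINAL_HZ
    if deviation < -DEADBAND_HZ:      # under-frequency: demand exceeds supply
        return "shed 5% of low-priority load (e.g. water heaters, refrigeration)"
    if deviation > DEADBAND_HZ:       # over-frequency: supply exceeds demand
        return "curtail 5% of dispatchable generation"
    return "no action"

for f in (49.7, 49.95, 50.3):
    print(f"{f:.2f} Hz -> {protection_action(f)}")
```

Real schemes act on synchronized phasor measurements from many locations rather than a single frequency reading, but the decision structure is of this threshold-and-respond kind.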
Government policy
China supports the idea of a global, intercontinental super grid. For a super grid in the US, currently in the planning stage, one study estimated an 80% reduction in greenhouse gas emissions when combined with the installation of renewable energy.
Significant scale
One study for a European super grid estimates that as much as 750 GW of extra transmission capacity would be required – capacity that would be accommodated in increments of 5 GW with HVDC lines.
A 2008 proposal by Transcanada priced a 1,600-km, 3 GW HVDC line at US$3 billion; it would require a corridor 60 meters wide.
In India, an August 2007 6 GW, 1,825-km proposal was priced at $790 million and would require a 69 meter wide right of way.
With 750 GW of new HVDC transmission capacity required for a European super grid, the land and money needed for new transmission lines would be considerable.
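A rough comparison of the two proposals quoted above, with a naive extrapolation toward the 750 GW European figure; linear scaling of cost with capacity and length is an illustrative simplification, not an estimate from any cited study:

```python
# Cost per GW-km implied by the two quoted proposals, plus a naive scale-up.
proposals = {
    "Transcanada 2008": {"cost_usd": 3.0e9,  "capacity_gw": 3, "length_km": 1600},
    "India 2007":       {"cost_usd": 0.79e9, "capacity_gw": 6, "length_km": 1825},
}

for name, p in proposals.items():
    per_gw_km = p["cost_usd"] / (p["capacity_gw"] * p["length_km"])
    print(f"{name}: about ${per_gw_km:,.0f} per GW-km of capacity")

# 750 GW of new capacity added in the 5 GW increments mentioned above
print(f"750 GW in 5 GW increments -> {750 // 5} separate 5 GW links")
```

The two quoted projects differ by nearly an order of magnitude in cost per GW-km, which illustrates why estimates for a continental build-out vary so widely.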
Energy independence
In Europe, the energy security implication of a super grid has been discussed as a way in part to prevent Russian energy hegemony.
In the US, advocates such as T. Boone Pickens have promoted the idea of a national transmission grid in order to promote United States energy independence. Al Gore advocates the Unified Smart Grid which has comprehensive super grid capabilities. Gore and other advocates such as James E. Hansen believe super grids are essential for the eventual complete replacement of the greenhouse gas producing fossil fuel use that feeds global warming.
Permits for corridors
Large amounts of land would be required for the electricity transmission corridors used by the new transmission lines of a super grid. There can be significant opposition to the siting of power lines out of concerns about visual impact, anxiety over perceived health issues, and environmental concerns. The US has a process of designating National Interest Electric Transmission Corridors, and it is likely that this process would be used to specify the pathways for a super grid in that country. In the EU, obtaining permits for new overhead lines can easily take 10 years.
In some cases, this has made underground cable more expedient. Since the land required can be one fifth of that for overhead lines and the permit process can be significantly faster, underground cable can be more attractive despite its weaknesses: it is more expensive, lower in capacity, shorter-lived, and suffers significantly longer downtimes.
Business interests
Siting
Just as superhighways change valuations of land due to their proximity to the means of transporting valuable commodities, businesses are strongly motivated to influence the siting of a super grid to their benefit. The cost of alternative power is the delivered price of electricity, and if the production of electricity from North Dakota wind or Arizona solar is to be competitive, the distance of the connection from the wind farm to the interstate transmission grid must not be great. This is because the feeder line from the generator to the transmission lines is usually paid for by the owner of the generation. Some localities will help pay for the cost of these lines, at the cost of local regulation such as that of a public utilities commission. T. Boone Pickens' project has chosen to pay for the feeder lines privately. Some localities, such as Texas, give such projects the power of eminent domain, which allows companies to seize land in the path of the planned construction.
Technology preferences
Energy producers are interested in whether the super grid employs HVDC technology or uses AC, because the cost of connecting to an HVDC line is generally greater than the cost of connecting to an AC line. The Pickens plan favors 765 kV AC transmission, which is considered to be less efficient for long-distance transmission.
Competition
In the 1960s, private California power companies opposed the Pacific Intertie project with a set of technical objections that were overruled. When the project was completed, consumers in Los Angeles saved approximately U.S. $600,000 per day by use of electric power from projects on the Columbia River rather than local power companies burning more expensive fossil fuel.
Proposals
Asian Super Grid
DESERTEC
Electrical interconnector
Europe:
European super grid
SuperSmart Grid
Global Energy Interconnection
One Sun, One World, One Grid
High voltage direct current (HVDC)
Hydrogen economy
List of energy storage projects
North Sea Offshore Grid
Pickens plan
Smart grid
SuperGrid
Unified Smart Grid
| Technology | Electricity transmission and distribution | null |
22121709 | https://en.wikipedia.org/wiki/Monotreme | Monotreme | Monotremes () are mammals of the order Monotremata. They are the only group of living mammals that lay eggs, rather than bearing live young. The extant monotreme species are the platypus and the four species of echidnas. Monotremes are typified by structural differences in their brains, jaws, digestive tract, reproductive tract, and other body parts, compared to the more common mammalian types. Although they are different from almost all mammals in that they lay eggs, like all mammals, the female monotremes nurse their young with milk.
Monotremes have been considered by some authors to be members of Australosphenida, a clade that contains extinct mammals from the Jurassic and Cretaceous of Madagascar, South America, and Australia, but this categorization is disputed and their taxonomy is under debate.
All extant species of monotremes are indigenous to Australia and New Guinea, although they were also present during the Late Cretaceous and Paleocene epochs in southern South America, implying that they were also present in Antarctica, though remains have not yet been found there.
The name monotreme derives from the Greek words monos ('single') and trema ('hole'), referring to the cloaca.
General characteristics
Like other mammals, monotremes are endothermic with a high metabolic rate (though not as high as other mammals; see below); have hair on their bodies; produce milk through mammary glands to feed their young; have a single bone in their lower jaw; and have three middle ear bones.
In common with marsupials, monotremes lack the connective structure (corpus callosum) which in placentals is the primary communication route between the right and left brain hemispheres. The anterior commissure does provide an alternate communication route between the two hemispheres, though, and in monotremes and marsupials it carries all the commissural fibers arising from the neocortex, whereas in placental mammals the anterior commissure carries only some of these fibers.
Extant monotremes lack teeth as adults. Fossil forms and modern platypus young have a "tribosphenic" form of molars (with the occlusal surface formed by three cusps arranged in a triangle), which is one of the hallmarks of extant mammals. Some recent work suggests that monotremes acquired this form of molar independently of placentals and marsupials, although this hypothesis remains disputed. Tooth loss in modern monotremes might be related to their development of electrolocation.
Monotreme jaws are constructed somewhat differently from those of other mammals, and the jaw opening muscle is different. As in all true mammals, the tiny bones that conduct sound to the inner ear are fully incorporated into the skull, rather than lying in the jaw as in non-mammalian cynodonts and other pre-mammalian synapsids; this feature, too, is now claimed to have evolved independently in monotremes and therians, although, as with the analogous evolution of the tribosphenic molar, this hypothesis is disputed. Nonetheless, findings on the extinct species Teinolophos confirm that suspended ear bones evolved independently among monotremes and therians. The external opening of the ear still lies at the base of the jaw.
The sequencing of the platypus genome has also provided insight into the evolution of a number of monotreme traits, such as venom and electroreception, as well as showing some new unique features, such as monotremes possessing five pairs of sex chromosomes and that one of the X chromosomes resembles the Z chromosome of birds, suggesting that the two sex chromosomes of marsupial and placentals evolved after the split from the monotreme lineage. Additional reconstruction through shared genes in sex chromosomes supports this hypothesis of independent evolution. This feature, along with some other genetic similarities with birds, such as shared genes related to egg-laying, is thought to provide some insight into the most recent common ancestor of the synapsid lineage leading to mammals and the sauropsid lineage leading to birds and modern reptiles, which are believed to have split about 315 million years ago during the Carboniferous. The presence of vitellogenin genes (a protein necessary for egg yolk formation) is shared with birds; the presence of this symplesiomorphy suggests that the common ancestor of monotremes, marsupials, and placentals was oviparous, and that this trait was retained in monotremes but lost in all other extant mammal groups. DNA analyses suggest that although this trait is shared and is synapomorphic with birds, platypuses are still mammals and that the common ancestor of extant mammals lactated.
The monotremes also have extra bones in the shoulder girdle, including an interclavicle and coracoid, which are not found in other mammals. Monotremes retain a reptile-like gait, with legs on the sides of, rather than underneath, their bodies. The monotreme leg bears a spur in the ankle region; the spur is not functional in echidnas, but contains a powerful venom in the male platypus. This venom is derived from β-defensins, proteins that are present in mammals that create holes in viral and bacterial pathogens. Some reptile venom is also composed of different types of β-defensins, another trait shared with reptiles. It is thought to be an ancient mammalian characteristic, as many non-monotreme archaic mammal groups also possess venomous spurs.
Reproductive system
The key anatomical difference between monotremes and other mammals gives them their name; monotreme means "single opening" in Greek, referring to the single duct (the cloaca) for their urinary, defecatory, and reproductive systems. Like birds and reptiles, monotremes have a single cloaca. Marsupials have a separate genital tract, whereas most placental females have separate openings for reproduction (the vagina), urination (the urethra), and defecation (the anus). In monotremes, only semen passes through the penis while urine is excreted through the male's cloaca. The monotreme penis is similar to that of turtles and is covered by a preputial sac. Male monotremes do not have a prostate or seminal vesicles.
Monotreme eggs are retained for some time within the mother and receive nutrients directly from her, generally hatching within ten days after being laid – much shorter than the incubation period of sauropsid eggs. Much like newborn marsupials (and perhaps all non-placentals), newborn monotremes, called "puggles", are larval- and fetus-like and have relatively well-developed forelimbs that enable them to crawl around. Monotremes lack teats, so puggles crawl about more frequently than marsupial joeys in search of milk. This difference raises questions about the supposed developmental restrictions on marsupial forelimbs.
Rather than through teats, monotremes lactate from their mammary glands via openings in their skin. All five extant species show prolonged parental care of their young, with low rates of reproduction and relatively long life-spans.
Monotremes are also noteworthy in their zygotic development: most mammalian zygotes go through holoblastic cleavage, where the ovum splits into multiple, divisible daughter cells. In contrast, monotreme zygotes, like those of birds and reptiles, undergo meroblastic (partial) division. This means that the cells at the yolk's edge have cytoplasm continuous with that of the egg, allowing the yolk and embryo to exchange waste and nutrients with the surrounding cytoplasm.
Physiology
Monotremes' metabolic rate is remarkably low by mammalian standards. The platypus has an average body temperature of about rather than the averages of for marsupials and for placentals. Research suggests this has been a gradual adaptation to the harsh, marginal environmental niches in which the few extant monotreme species have managed to survive, rather than a general characteristic of extinct monotremes.
Monotremes may have less developed thermoregulation than other mammals, but recent research shows that they easily maintain a constant body temperature in a variety of circumstances, such as the platypus in icy mountain streams. Early researchers were misled by two factors: firstly, monotremes maintain a lower average temperature than most mammals; secondly, the short-beaked echidna, much easier to study than the reclusive platypus, maintains normal temperature only when active; during cold weather, it conserves energy by "switching off" its temperature regulation. Understanding of this mechanism came when reduced thermal regulation was observed in the hyraxes, which are placentals.
The echidna was originally thought to experience no rapid eye movement sleep (REM). However, a more recent study showed that REM sleep accounted for about 15% of sleep time observed on subjects at an environmental temperature of 25 °C (77 °F). Surveying a range of environmental temperatures, the study observed very little REM at reduced temperatures of 15 °C (59 °F) and 20 °C (68 °F), and also a substantial reduction at the elevated temperature of 28 °C (82 °F).
Monotreme milk contains a highly expressed antibacterial protein not found in other mammals, perhaps to compensate for the more septic manner of milk intake associated with the absence of teats.
During the course of evolution, the monotremes have lost the gastric glands normally found in mammalian stomachs as an adaptation to their diet. As such, by some definitions, they do not have stomachs as an organ, although the term is widely used in studies of monotreme anatomy. Monotremes synthesize L-ascorbic acid only in the kidneys.
Both the platypus and echidna species have spurs on their hind limbs. The echidna spurs are vestigial and have no known function, while the platypus spurs contain venom. Molecular data show that the main component of platypus venom emerged before the divergence of platypus and echidnas, suggesting that the most recent common ancestor of these taxa was also possibly a venomous monotreme.
Taxonomy
The traditional "Theria hypothesis" states that the divergence of the monotreme lineage from the Metatheria (marsupial) and Eutheria (placental) lineages happened prior to the divergence between marsupials and placentals, and this explains why monotremes retain a number of primitive traits presumed to have been present in the synapsid ancestors of later mammals, such as egg-laying. Most morphological evidence supports the Theria hypothesis, but one possible exception is a similar pattern of tooth replacement seen in monotremes and marsupials, which originally provided the basis for the competing "Marsupionta" hypothesis in which the divergence between monotremes and marsupials happened later than the divergence between these lineages and the placentals. Van Rheede (2005) concluded that the genetic evidence favors the Theria hypothesis, and this hypothesis continues to be the more widely accepted one.
Monotremes are conventionally treated as comprising a single order Monotremata. The entire grouping is also traditionally placed into a subclass Prototheria, which was extended to include several fossil orders, but these are no longer seen as constituting a group allied to monotreme ancestry. A controversial hypothesis now relates the monotremes to a different assemblage of fossil mammals in a clade termed Australosphenida, a group of mammals from the Jurassic and Cretaceous of Madagascar, South America and Australia, that share tribosphenic molars. However, in a 2022 review of monotreme evolution, it was noted that Teinolophos, the oldest (Barremian ~ 125 million years ago) and the most primitive monotreme differed substantially from non-monotreme australosphenidans in having five molars as opposed to the three present in non-monotreme australosphenidians. Aptian and Cenomanian monotremes of the family Kollikodontidae (113–96.6 ma) have four molars. This suggests that the monotremes are likely to be unrelated to the australosphenidan tribosphenids.
The time when the monotreme line diverged from other mammalian lines is uncertain, but one survey of genetic studies gives an estimate of about 220 million years ago, while others have posited younger estimates of 163 to 186 million years ago (though the already eutherian Juramaia is dated to 161–160 million years ago). Teinolophos like modern monotremes displays adaptations to elongation and increased sensory perception in the jaws, related to mechanoreception or electroreception.
Molecular clock and fossil dating give a wide range of dates for the split between echidnas and platypuses, with one survey putting the split at 19–48 million years ago, but another putting it at 17–89 million years ago. It has been suggested that both the short-beaked and long-beaked echidna species are derived from a platypus-like ancestor.
The precise relationships among extinct groups of mammals and modern groups such as monotremes are uncertain, but cladistic analyses usually put the last common ancestor (LCA) of placentals and monotremes close to the LCA of placentals and multituberculates, whereas some suggest that the LCA of placentals and multituberculates was more recent than the LCA of placentals and monotremes.
ORDER MONOTREMATA
Superfamily Ornithorhynchoidea
Family Ornithorhynchidae: platypus
Genus Ornithorhynchus
Platypus, O. anatinus
Family Tachyglossidae: echidnas
Genus Tachyglossus
Short-beaked echidna, T. aculeatus
T. a. aculeatus (Common short-beaked echidna)
T. a. acanthion (Northern short-beaked echidna)
T. a. lawesii (New Guinea short-beaked echidna)
T. a. multiaculeatus (Kangaroo Island short-beaked echidna)
T. a. setosus (Tasmanian short-beaked echidna)
Genus Zaglossus
Sir David's long-beaked echidna, Z. attenboroughi
Eastern long-beaked echidna, Z. bartoni
Z. b. bartoni
Z. b. clunius
Z. b. diamondi
Z. b. smeenki
Western long-beaked echidna, Z. bruijni
Fossil monotremes
The first Mesozoic monotreme to be discovered was the Cenomanian (100–96.6 Ma) Steropodon galmani from Lightning Ridge, New South Wales. Biochemical and anatomical evidence suggests that the monotremes diverged from the mammalian lineage before the marsupials and placentals arose. The only Mesozoic monotremes are Teinolophos (Barremian, 126 Ma), Sundrius and Kryoryctes (Albian, 113–108 Ma), and Dharragarra, Kollikodon, Opalios, Parvopalus, Steropodon, and Stirtodon (all Cenomanian, 100.2–96.6 Ma) from Australian deposits, and Patagorhynchus (Maastrichtian) from Patagonian deposits in the Cretaceous, indicating that monotremes were diversifying by the early Late Cretaceous. Monotremes have been found in the latest Cretaceous and Paleocene of southern South America, so one hypothesis is that monotremes arose in Australia in the Late Jurassic or Early Cretaceous, and that some migrated across Antarctica to South America, both of which were still united with Australia at that time. This direction of migration is the opposite of that hypothesized for Australia's other dominant mammal group, the marsupials, which likely migrated across Antarctica to Australia from South America.
In 2024, a prominent assemblage of early monotremes was described from the Cenomanian deposits (100–96.6 Ma) of the Griman Creek Formation in Lightning Ridge, New South Wales. One of these, the fossil jaw fragment of Dharragarra, is the oldest known platypus-like fossil. The durophagous Kollikodon, the pseudotribosphenic Steropodon, and Stirtodon, Dharragarra, Opalios, and Parvopalus occur in the same Cenomanian deposits. Oligo-Miocene fossils of the toothed platypus Obdurodon have also been recovered from Australia, and fossils of a 63-million-year-old platypus relative occur in southern Argentina (Monotrematum); see fossil monotremes below. The extant platypus genus Ornithorhynchus is also known from Pliocene deposits, and the oldest fossil tachyglossids are Pleistocene (1.7 Ma) in age.
Fossil species
Excepting Ornithorhynchus anatinus, all the animals listed in this section are known only from fossils. Some family designations are hesitant, given the fragmentary nature of the specimens.
Family Kollikodontidae
Genus Kollikodon
Species Kollikodon ritchiei
Genus Kryoryctes
Species Kryoryctes cadburyi
Genus Sundrius
Species Sundrius ziegleri
Family Steropodontidae
Genus Parvopalus
Species Parvopalus clytiei
Genus Steropodon
Species Steropodon galmani
Family Teinolophidae
Genus Stirtodon
Species Stirtodon elizabethae
Genus Teinolophos
Species Teinolophos trusleri – 123 Ma, oldest monotreme specimen
Superfamily Ornithorhynchoidea
Family Opalionidae
Genus Opalios
Species Opalios splendens
Family Ornithorhynchidae
Genus Dharragarra
Species Dharragarra aurora
Genus Monotrematum
Species Monotrematum sudamericanum – 61 Ma, southern South America
Genus Ornithorhynchus – oldest Ornithorhynchus specimen 9 Ma
Species Ornithorhynchus anatinus (platypus) – oldest specimen 10,000 years old
Genus Obdurodon – includes a number of Miocene (24–5 Ma) Riversleigh platypuses
Species Obdurodon dicksoni
Species Obdurodon insignis
Species Obdurodon tharalkooschild – Middle Miocene and Upper Miocene (15–5 Ma)
Genus Patagorhynchus
Species Patagorhynchus pascuali - Maastrichtian, earliest known South American monotreme
Family Tachyglossidae
Genus Zaglossus – Upper Pleistocene (1.8–0.1 Ma)
Species Zaglossus robustus
Genus Murrayglossus
Species Murrayglossus hacketti
Genus Megalibgwilia
Species Megalibgwilia ramsayi – Late Pleistocene
Species Megalibgwilia robusta – Miocene
| Biology and health sciences | Monotremes | null |
10398668 | https://en.wikipedia.org/wiki/Acacia%20decurrens | Acacia decurrens | Acacia decurrens, commonly known as black wattle or early green wattle, is a perennial tree or shrub native to eastern New South Wales, including Sydney, the Greater Blue Mountains Area, the Hunter Region, and southwest to the Australian Capital Territory. It grows to a height of 2–15 m (7–50 ft) and it flowers from July to September.
Cultivated throughout Australia and in many other countries, Acacia decurrens has naturalised in most Australian states and in Africa, the Americas, Europe, New Zealand and the Pacific, the Indian Ocean area, and Japan.
Description
Acacia decurrens is a fast-growing tree, reaching anywhere from 2 to 15 m (7–50 ft) high. The bark is brown to dark grey in colour and ranges from smooth to deeply longitudinally fissured, with conspicuous internodal flange marks. The branchlets have longitudinal ridges running along them that are unique to the species. Young foliage tips are yellow.
Alternately arranged leaves are dark green on both sides. Stipules are either small or absent. The base of the petiole is swollen to form the pulvinus. The leaf blade is bipinnate and the rachis is 20–120 mm long, angular and hairless. 15–45 pairs of widely spaced small leaflets (pinnules) are connected with each other and are 5–15 mm long by 0.4–1 mm wide. They are straight and parallel-sided, with a pointed tip and tapering base, shiny, and hairless or rarely sparsely hairy.
The small yellow or golden-yellow flowers are very cottony in appearance and are densely attached to the stems, with each head 5–7 mm long, forming a 60–110 mm long axillary raceme or terminal panicle. They are bisexual and fragrant. The flowers have five petals and sepals with numerous conspicuous stamens. The ovary is superior and has only one carpel with numerous ovules.
Flowering is followed by the formation of seed pods, which ripen from November to January.
Dark brown or reddish brown to black in colour, the seeds are located inside a parallel-sided, flattish, smooth pod. The pods are 20–105 mm long by 4–8.5 mm wide with edges, and open by two valves. Pods are initially hairy but become hairless as they grow.
Taxonomy
German botanist Johann Christoph Wendland first described this species as Mimosa decurrens in 1798, before his countryman Carl Ludwig Willdenow redescribed it in the genus Acacia in 1806. In his description, Willdenow did not cite Wendland but instead a 1796 description by James Donn. However, as Donn's description was a nomen nudum, the proper citation is Acacia decurrens Willd. with neither older work cited.
George Bentham classified A. decurrens in the series Botrycephalae in his 1864 Flora Australiensis.
Queensland botanist Les Pedley reclassified the species as Racosperma decurrens in 2003, when he proposed placing almost all Australian members of the genus into the new genus Racosperma. However, this name is treated as a synonym of its original name.
Common names include coast green wattle, black wattle, early black wattle, Sydney green wattle, queen wattle, and in the local Dharawal language, Boo'kerrikin. Maiden noted that it was called Wat-tah by the indigenous people of Cumberland (Parramatta) and Camden districts. Sydney wattle was a name coined by von Mueller and early settlers around Penrith called it green wattle. Feathery wattle was another early name. It is also known as early green wattle in the Sydney basin, as it flowers in winter—earlier than similar species, such as Parramatta wattle (Acacia parramattensis), blueskin (A. irrorata) and late black wattle (A. mearnsii). It has attracted the vernacular name 'green cancer' in South Africa, where it has become weedy.
Other names include acacia bark, wattle bark, tan wattle, golden teak, and Brazilian teak.
Along with other bipinnate wattles, it is classified in the section Botrycephalae within the subgenus Phyllodineae in the genus Acacia. An analysis of genomic and chloroplast DNA along with morphological characters found that the section is polyphyletic, though the close relationships of A. decurrens and many other species were unable to be resolved.
Distribution and habitat
Acacia decurrens is native to tablelands of New South Wales and Victoria. It is found in temperate coastal to cool inland areas, but not dry or hot areas of inland NSW. It prefers high rainfall areas with per year, and is otherwise tolerant of a wide range of conditions. In woodlands and dry sclerophyll forests in New South Wales, it grows with trees such as grey gum (Eucalyptus punctata) and narrow-leaved ironbark (E. crebra). In areas where it has become naturalised, Acacia decurrens is generally found on roadsides, along creeklines and in waste areas. It also grows in disturbed sites nearby bushlands and open woodlands.
It was extensively planted in New South Wales, and it is difficult to tell whether it is native or naturalised in areas near its native range. The species became naturalised in other states including Queensland, Victoria and Tasmania. It grows on shale and sandstone soils with medium nutrients and good drainage.
Despite its invasive nature, it has not been declared a noxious weed by any state or Australian government body.
Ecology
The dark brown or black seeds are the main means of reproduction. They can be spread by ants or birds, and form a seedbank in the soil. Seedlings generally grow rapidly after bushfire, and the species can colonise disturbed areas. Trees can live for 15 to 50 years.
Sulphur-crested cockatoos eat the unripe seed.
The foliage serves as food for the caterpillars of the double-spotted line blue (Nacaduba biocellata), moonlight jewel (Hypochrysops delicia), imperial hairstreak (Jalmenus evagoras), ictinus blue (Jalmenus ictinus), amethyst hairstreak (Jalmenus icilius) and silky hairstreak (Pseudalmenus chlorinda).
The wood serves as food for larvae of the jewel beetle species Agrilus australasiae, Cisseis cupripennis and C. scabrosula.
Uses
Uses of Acacia decurrens include chemical products, environmental management, and wood. The flowers are edible and are used in fritters. An edible gum oozing from the tree's trunk can be used as a lesser-quality substitute for gum arabic, for example in the production of fruit jelly. The bark contains about 37–40% tannin. The flowers are used to produce yellow dye, and the seed pods are used to produce green dye. An organic chemical compound called kaempferol gives the flowers of A. decurrens their color. It has been grown for firewood, or as a fast-growing windbreak or shelter tree.
Cultural significance
In the Dharawal story of the Boo'kerrikin Sisters, one of the kindly sisters was turned into Acacia decurrens. The other two sisters were turned into A. parvipinnula and A. parramattensis. The flowering of A. decurrens was used as a seasonal indicator of the ceasing of cold winds and the beginning of a period of gentle rain.
Cultivation
Acacia decurrens adapts easily to cultivation and grows very quickly. It can be used as a shelter or specimen tree in large gardens and parks. The tree can look imposing when in flower. Cultivation of A. decurrens can be started by soaking the seeds in warm water and sowing them outdoors. The seeds keep their ability to germinate for many years.
Fieldwork conducted in the Southern Highlands found that the presence of bipinnate wattles (either as understory or tree) was related to reduced numbers of noisy miners, an aggressive species of bird that drives off small birds from gardens and bushland, and hence recommended the use of these plants in establishing green corridors and revegetation projects.
| Biology and health sciences | Fabales | Plants |
12699214 | https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene%20boundary | Cretaceous–Paleogene boundary | The Cretaceous–Paleogene (K–Pg) boundary, formerly known as the Cretaceous–Tertiary (K–T) boundary, is a geological signature, usually a thin band of rock containing much more iridium than other bands. The K–Pg boundary marks the end of the Cretaceous Period, the last period of the Mesozoic Era, and marks the beginning of the Paleogene Period, the first period of the Cenozoic Era. Its age is usually estimated at 66 million years, with radiometric dating yielding a more precise age of 66.043 ± 0.043 Ma.
The K–Pg boundary is associated with the Cretaceous–Paleogene extinction event, a mass extinction which destroyed a majority of the world's Mesozoic species, including all dinosaurs except for some birds.
Strong evidence exists that the extinction coincided with a large meteorite impact at the Chicxulub crater and the generally accepted scientific theory is that this impact triggered the extinction event.
The word "Cretaceous" is derived from the Latin "creta" (chalk). It is abbreviated K (as in "K–Pg boundary") for its German translation "Kreide" (chalk).
Proposed causes
Chicxulub crater
In 1980, a team of researchers led by Nobel prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Vaughn Michel discovered that sedimentary layers found all over the world at the Cretaceous–Paleogene boundary contain a concentration of iridium hundreds of times greater than normal. They suggested that this layer was evidence of an impact event that triggered worldwide climate disruption and caused the Cretaceous–Paleogene extinction event, a mass extinction in which 75% of plant and animal species on Earth suddenly became extinct, including all non-avian dinosaurs.
When it was originally proposed, one issue with the "Alvarez hypothesis" (as it came to be known) was that no documented crater matched the event. This was not a lethal blow to the theory; while the crater resulting from the impact would have been larger than in diameter, Earth's geological processes hide or destroy craters over time.
The Chicxulub crater is an impact crater buried underneath the Yucatán Peninsula in Mexico. Its center is located near the town of Chicxulub, after which the crater is named. It was formed by a large asteroid or comet about in diameter, the Chicxulub impactor, striking the Earth. The date of the impact coincides precisely with the Cretaceous–Paleogene boundary (K–Pg boundary), slightly more than 66 million years ago.
The crater is estimated to be over in diameter and in depth, well into the continental crust of the region of about depth. This makes the feature the second largest of the confirmed impact structures on Earth, and the only one whose peak ring is intact and directly accessible for scientific research.
The crater was discovered by Antonio Camargo and Glen Penfield, geophysicists who had been looking for petroleum in the Yucatán during the late 1970s. Penfield was initially unable to obtain evidence that the geological feature was a crater and gave up his search. Later, through contact with Alan Hildebrand in 1990, Penfield obtained samples that suggested it was an impact feature. Evidence for the impact origin of the crater includes shocked quartz, a gravity anomaly, and tektites in surrounding areas.
In 2016, a scientific drilling project drilled deep into the peak ring of the impact crater, hundreds of meters below the current sea floor, to obtain rock core samples from the impact itself. The discoveries were widely seen as confirming current theories related to both the crater impact and its effects.
The shape and location of the crater indicate further causes of devastation in addition to the dust cloud. The asteroid landed right on the coast and would have caused gigantic tsunamis, for which evidence has been found all around the coast of the Caribbean and eastern United States—marine sand in locations which were then inland, and vegetation debris and terrestrial rocks in marine sediments dated to the time of the impact.
The asteroid landed in a bed of anhydrite (CaSO4) or gypsum (CaSO4·2H2O), which would have ejected large quantities of sulfur trioxide that combined with water to produce a sulfuric acid aerosol. This would have further reduced the sunlight reaching the Earth's surface and then, over several days, precipitated planet-wide as acid rain, killing vegetation, plankton and organisms which build shells from calcium carbonate (coccolithophorids and molluscs).
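A simplified sketch of the reaction chain implied by this passage, assuming impact heating decomposes the sulfate target rock (one possible pathway) before the released sulfur trioxide combines with water:

```latex
% Simplified pathway: sulfate decomposition on impact heating,
% then hydration of sulfur trioxide to sulfuric acid aerosol.
\begin{align}
  \mathrm{CaSO_4} &\longrightarrow \mathrm{CaO} + \mathrm{SO_3} \\
  \mathrm{SO_3} + \mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4}
\end{align}
```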
Deccan Traps
Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 Ma and lasted for over 2 million years. However, there is evidence that two thirds of the Deccan Traps were created within 1 million years about 65.5 Ma, so these eruptions would have caused a fairly rapid extinction, possibly a period of thousands of years, but still a longer period than what would be expected from a single impact event.
The Deccan Traps could have caused extinction through several mechanisms, including the release of dust and sulfuric aerosols into the air which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere.
In the years when the Deccan Traps theory was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions.
Multiple impact event
Several other craters also appear to have been formed about the time of the K–Pg boundary. This suggests the possibility of nearly simultaneous multiple impacts, perhaps from a fragmented asteroidal object, similar to the Shoemaker–Levy 9 cometary impact with Jupiter. Among these are the Boltysh crater, a diameter impact crater in Ukraine, and the Silverpit crater, a diameter proposed impact crater in the North Sea. Any other craters that might have formed in the Tethys Ocean would have been obscured by erosion and tectonic events such as the relentless northward drift of Africa and India.
A very large structure in the sea floor off the west coast of India was interpreted in 2006 as a crater by three researchers. The potential Shiva crater, in diameter, would substantially exceed Chicxulub in size and has been estimated to be about 66 mya, an age consistent with the K–Pg boundary. An impact at this site could have been the triggering event for the nearby Deccan Traps. However, this feature has not yet been accepted by the geologic community as an impact crater and may just be a sinkhole depression caused by salt withdrawal.
Maastrichtian marine regression
Clear evidence exists that sea levels fell in the final stage of the Cretaceous by more than at any other time in the Mesozoic era. In some Maastrichtian stage rock layers from various parts of the world, the later ones are terrestrial; earlier ones represent shorelines and the earliest represent seabeds. These layers do not show the tilting and distortion associated with mountain building; therefore, the likeliest explanation is a regression, that is, a buildout of sediment, but not necessarily a drop in sea level. No direct evidence exists for the cause of the regression, but the explanation which is currently accepted as the most likely is that the mid-ocean ridges became less active and therefore sank under their own weight as sediment from uplifted orogenic belts filled in structural basins.
A severe regression would have greatly reduced the continental shelf area, which is the most species-rich part of the sea, and therefore could have been enough to cause a marine mass extinction. However, research concludes that this change would have been insufficient to cause the observed level of ammonite extinction. The regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the Earth's albedo and therefore increasing global temperatures.
Marine regression also resulted in the reduction in area of epeiric seas, such as the Western Interior Seaway of North America. The reduction of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the Dinosaur Park Formation. Another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. While this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered.
Supernova hypothesis
Another discredited cause for the K–Pg extinction event is cosmic radiation from a nearby supernova explosion. An iridium anomaly at the boundary is consistent with this hypothesis. However, analysis of the boundary layer sediments failed to find plutonium-244, a supernova byproduct which is the longest-lived plutonium isotope, with a half-life of 81 million years.
Verneshot
An attempt to link volcanism – like the Deccan Traps – and impact events causally in the other direction compared to the proposed Shiva crater is the so-called Verneshot hypothesis (named for Jules Verne), which proposes that volcanism might have become so intense as to "shoot up" material on a ballistic trajectory into space, which then fell back as an impactor. Due to the spectacular nature of this proposed mechanism, the scientific community has largely reacted with skepticism to this hypothesis.
Multiple causes
It is possible that more than one of these hypotheses may be a partial solution to the mystery, and that more than one of these events may have occurred. Both the Deccan Traps and the Chicxulub impact may have been important contributors. For example, the most recent dating of the Deccan Traps supports the idea that rapid eruption rates in the Deccan Traps may have been triggered by large seismic waves radiated by the impact.
| Physical sciences | Geologic features | Earth science |
1211986 | https://en.wikipedia.org/wiki/Virtual%20work | Virtual work | In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work.
Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies.
History
The principle of virtual work has been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of the lever". The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies and fluids. Bernoulli's version of the virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of Nouvelle mécanique ou Statique in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered the prototype of contemporary virtual work principles. In 1743 D'Alembert published his Traité de Dynamique, in which he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His idea was to convert a dynamical problem into a static problem by introducing inertial forces. In 1768, Lagrange presented the virtual work principle in a more efficient form by introducing generalized coordinates and presented it as an alternative principle of mechanics by which all problems of equilibrium could be solved. A systematic exposition of Lagrange's program of applying this approach to all of mechanics, both static and dynamic, essentially D'Alembert's principle, was given in his Mécanique Analytique of 1788. Although Lagrange had presented his version of the least action principle prior to this work, he recognized the virtual work principle to be more fundamental, mainly because it could be assumed alone as the foundation for all mechanics, unlike the modern understanding that least action does not account for non-conservative forces.
Overview
If a force acts on a particle as it moves from point A to point B, then, for each possible trajectory that the particle may take, it is possible to compute the total work done by the force along the path. The principle of virtual work, which is the form of the principle of least action applied to these systems, states that the path actually followed by the particle is the one for which the difference between the work along this path and other nearby paths is zero (to the first order). The formal procedure for computing the difference of functions evaluated on nearby paths is a generalization of the derivative known from differential calculus, and is termed the calculus of variations.
Consider a point particle that moves along a path described by a function r(t), from point A, where t = t0, to point B, where t = t1. It is possible that the particle moves from A to B along a nearby path described by r(t) + δr(t), where δr(t) is called the variation of r(t). The variation satisfies the requirement δr(t0) = δr(t1) = 0. The scalar components of the variation, δr1(t), δr2(t) and δr3(t), are called virtual displacements. This can be generalized to an arbitrary mechanical system defined by the generalized coordinates qi(t), i = 1, ..., n. In that case, the variation of the trajectory qi(t) is defined by the virtual displacements δqi, i = 1, ..., n.
Virtual work is the total work done by the applied forces and the inertial forces of a mechanical system as it moves through a set of virtual displacements. When considering forces applied to a body in static equilibrium, the principle of least action requires the virtual work of these forces to be zero.
Mathematical treatment
Consider a particle P that moves from a point A to a point B along a trajectory r(t), while a force F is applied to it. The work done by the force is given by the integral

W = ∫ F · dr = ∫ F · v dt (integrated along the trajectory from t0 to t1),

where dr is the differential element along the curve that is the trajectory of P, and v = dr/dt is its velocity. It is important to notice that the value of the work depends on the trajectory r(t).
Now consider particle P that moves from point A to point B again, but this time it moves along the nearby trajectory that differs from r(t) by the variation δr(t) = εh(t), where ε is a scaling constant that can be made as small as desired and h(t) is an arbitrary function that satisfies h(t0) = h(t1) = 0. Suppose the force F(r(t) + εh(t)) is the same as F(r(t)). The work done by the force is given by the integral

W(ε) = ∫ F · (v + ε dh/dt) dt (integrated from t0 to t1).
The variation of the work associated with this nearby path, known as the virtual work, can be computed to be
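Under the assumption above that the force is unchanged to first order, a standard way of writing this quantity, with δv = ε dh/dt denoting the corresponding variation of the velocity, is

δW = W(ε) − W = ε ∫ F · (dh/dt) dt = ∫ F · δv dt (integrated from t0 to t1).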
If there are no constraints on the motion of P, then 3 parameters are needed to completely describe P's position at any time t. If there are k (k ≤ 3) constraint forces, then n = 3 − k parameters are needed. Hence, we can define n generalized coordinates qi (i = 1, ..., n), and express r(t) and δr in terms of the generalized coordinates. That is,

r(t) = r(q1, q2, ..., qn; t),   δr = Σi (∂r/∂qi) δqi (summing over i = 1, ..., n).
Then, the derivative of the variation is given by
then we have
The requirement that the virtual work be zero for an arbitrary variation is equivalent to the set of requirements
The terms Qi are called the generalized forces associated with the virtual displacements δqi.
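For a single applied force F acting at the point r, the generalized force associated with the coordinate qi is conventionally defined as

Qi = F · ∂r/∂qi,   i = 1, ..., n,

so that the virtual work takes the form δW = Q1 δq1 + ... + Qn δqn; the requirement that δW vanish for arbitrary virtual displacements δqi is then equivalent to Qi = 0 for every i.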
Static equilibrium
Static equilibrium is a state in which the net force and net torque acting on the system are zero. In other words, both the linear momentum and the angular momentum of the system are conserved. The principle of virtual work states that the virtual work of the applied forces is zero for all virtual movements of the system from static equilibrium. This principle can be generalized such that three-dimensional rotations are included: the virtual work of the applied forces and applied moments is zero for all virtual movements of the system from static equilibrium. That is
where Fi , i = 1, 2, ..., m and Mj , j = 1, 2, ..., n are the applied forces and applied moments, respectively, and δri , i = 1, 2, ..., m and δφj, j = 1, 2, ..., n are the virtual displacements and virtual rotations, respectively.
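With these definitions, the statement of the principle takes the conventional form

δW = F1 · δr1 + ... + Fm · δrm + M1 · δφ1 + ... + Mn · δφn = 0

for every set of virtual displacements and virtual rotations consistent with the constraints.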
Suppose the system consists of N particles, and it has f (f ≤ 6N) degrees of freedom. It is sufficient to use only f coordinates to give a complete description of the motion of the system, so f generalized coordinates qk , k = 1, 2, ..., f are defined such that the virtual movements can be expressed in terms of these generalized coordinates. That is,
The virtual work can then be reparametrized by the generalized coordinates:
where the generalized forces Qk are defined as
Kane shows that these generalized forces can also be formulated in terms of the ratio of time derivatives. That is,
The principle of virtual work requires that the virtual work done on a system by the forces Fi and moments Mj vanishes if it is in equilibrium. Therefore, the generalized forces Qk are zero, that is
Constraint forces
An important benefit of the principle of virtual work is that only forces that do work as the system moves through a virtual displacement are needed to determine the mechanics of the system. There are many forces in a mechanical system that do no work during a virtual displacement, which means that they need not be considered in this analysis. The two important examples are (i) the internal forces in a rigid body, and (ii) the constraint forces at an ideal joint.
Lanczos presents this as the postulate: "The virtual work of the forces of reaction is always zero for any virtual displacement which is in harmony with the given kinematic constraints." The argument is as follows. The principle of virtual work states that in equilibrium the virtual work of the forces applied to a system is zero. Newton's laws state that at equilibrium the applied forces are equal and opposite to the reaction, or constraint forces. This means the virtual work of the constraint forces must be zero as well.
Law of the lever
A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force FA at a point A located by the coordinate vector rA on the bar. The lever then exerts an output force FB at the point B located by rB. The rotation of the lever about the fulcrum P is defined by the rotation angle θ.
Let the coordinate vector of the point P that defines the fulcrum be rP, and introduce the lengths
which are the distances from the fulcrum to the input point A and to the output point B, respectively.
Now introduce the unit vectors eA and eB from the fulcrum to the point A and B, so
This notation allows us to define the velocity of the points A and B as
where eA⊥ and eB⊥ are unit vectors perpendicular to eA and eB, respectively.
The angle θ is the generalized coordinate that defines the configuration of the lever, therefore using the formula above for forces applied to a one degree-of-freedom mechanism, the generalized force is given by
Now, denote as FA and FB the components of the forces that are perpendicular to the radial segments PA and PB. These forces are given by
This notation and the principle of virtual work yield the formula for the generalized force as
The ratio of the output force FB to the input force FA is the mechanical advantage of the lever, and is obtained from the principle of virtual work as
This equation shows that if the distance a from the fulcrum to the point A where the input force is applied is greater than the distance b from fulcrum to the point B where the output force is applied, then the lever amplifies the input force. If the opposite is true that the distance from the fulcrum to the input point A is less than from the fulcrum to the output point B, then the lever reduces the magnitude of the input force.
This is the law of the lever, which was proven by Archimedes using geometric reasoning.
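The relation FB/FA = a/b lends itself to a quick numerical check. The short Python sketch below (the function name and sample values are purely illustrative) computes the output force of an ideal lever in static equilibrium:

def lever_output_force(input_force, a, b):
    # Ideal lever in static equilibrium: F_A * a = F_B * b,
    # so the output force is the input force scaled by a / b.
    return input_force * (a / b)

# Example: a 10 N input applied 0.6 m from the fulcrum, with the
# output point 0.2 m from the fulcrum, gives a 30 N output force.
print(lever_output_force(10.0, a=0.6, b=0.2))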
Gear train
A gear train is formed by mounting gears on a frame so that the teeth of the gears engage. Gear teeth are designed to ensure the pitch circles of engaging gears roll on each other without slipping; this provides a smooth transmission of rotation from one gear to the next. For this analysis, we consider a gear train that has one degree of freedom, which means the angular rotation of all the gears in the gear train is defined by the angle of the input gear.
The size of the gears and the sequence in which they engage define the ratio of the angular velocity ωA of the input gear to the angular velocity ωB of the output gear, known as the speed ratio, or gear ratio, of the gear train. Let R be the speed ratio, then
The input torque TA acting on the input gear GA is transformed by the gear train into the output torque TB exerted by the output gear GB. If we assume that the gears are rigid and that there are no losses in the engagement of the gear teeth, then the principle of virtual work can be used to analyze the static equilibrium of the gear train.
Let the angle θ of the input gear be the generalized coordinate of the gear train, then the speed ratio R of the gear train defines the angular velocity of the output gear in terms of the input gear, that is
The formula above for the principle of virtual work with applied torques yields the generalized force
The mechanical advantage of the gear train is the ratio of the output torque TB to the input torque TA, and the above equation yields
Thus, the speed ratio of a gear train also defines its mechanical advantage. This shows that if the input gear rotates faster than the output gear, then the gear train amplifies the input torque; if the input gear rotates more slowly than the output gear, then the gear train reduces the input torque.
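As with the lever, the torque relation can be illustrated numerically. In the Python sketch below (names and values are illustrative only), the speed ratio R = ωA/ωB converts an input torque into the ideal output torque of a rigid, lossless gear train:

def gear_output_torque(input_torque, speed_ratio):
    # For an ideal gear train, the principle of virtual work gives
    # T_B = R * T_A, where R = omega_A / omega_B is the speed ratio.
    return speed_ratio * input_torque

# Example: a 5 N·m input torque through a train whose input gear
# turns 4 times for each revolution of the output gear (R = 4).
print(gear_output_torque(5.0, speed_ratio=4.0))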
Dynamic equilibrium for rigid bodies
If the principle of virtual work for applied forces is used on individual particles of a rigid body, the principle can be generalized for a rigid body: When a rigid body that is in equilibrium is subject to virtual compatible displacements, the total virtual work of all external forces is zero; and conversely, if the total virtual work of all external forces acting on a rigid body is zero then the body is in equilibrium.
If a system is not in static equilibrium, D'Alembert showed that by introducing the acceleration terms of Newton's laws as inertia forces, this approach is generalized to define dynamic equilibrium. The result is D'Alembert's form of the principle of virtual work, which is used to derive the equations of motion for a mechanical system of rigid bodies.
The expression compatible displacements means that the particles remain in contact and displace together so that the work done by pairs of action/reaction inter-particle forces cancel out. Various forms of this principle have been credited to Johann (Jean) Bernoulli (1667–1748) and Daniel Bernoulli (1700–1782).
Generalized inertia forces
Let a mechanical system be constructed from n rigid bodies, Bi, i=1,...,n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i = 1,...,n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i=1,...,n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by
This inertia force can be computed from the kinetic energy of the rigid body,
by using the formula
A system of n rigid bodies with m generalized coordinates has the kinetic energy
which can be used to calculate the m generalized inertia forces
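In the usual notation, writing q̇j for the generalized velocity dqj/dt and T for the kinetic energy, the generalized inertia forces take the standard form

Q*j = −( d/dt ( ∂T/∂q̇j ) − ∂T/∂qj ),   j = 1, ..., m.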
D'Alembert's form of the principle of virtual work
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that
for any set of virtual displacements δqj. This condition yields m equations,
which can also be written as
The result is a set of m equations of motion that define the dynamics of the rigid body system, known as Lagrange's equations or the generalized equations of motion.
If the generalized forces Qj are derivable from a potential energy V(q1,...,qm), then these equations of motion take the form
In this case, introduce the Lagrangian, L = T − V, so these equations of motion become
These are known as the Euler-Lagrange equations for a system with m degrees of freedom, or Lagrange's equations of the second kind.
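Written out with the Lagrangian L = T − V, these equations read

d/dt ( ∂L/∂q̇j ) − ∂L/∂qj = 0,   j = 1, ..., m,

where, as above, q̇j denotes the generalized velocity dqj/dt.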
Virtual work principle for a deformable body
Consider now the free body diagram of a deformable body, which is composed of an infinite number of differential cubes. Let's define two unrelated states for the body:
The -State : This shows external surface forces T, body forces f, and internal stresses in equilibrium.
The -State : This shows continuous displacements and consistent strains .
The superscript * emphasizes that the two states are unrelated. Other than the above stated conditions, there is no need to specify if any of the states are real or virtual.
Imagine now that the forces and stresses in the -State undergo the displacements and deformations in the -State: We can compute the total virtual (imaginary) work done by all forces acting on the faces of all cubes in two different ways:
First, by summing the work done by forces such as which act on individual common faces (Fig.c): Since the material experiences compatible displacements, such work cancels out, leaving only the virtual work done by the surface forces T (which are equal to stresses on the cubes' faces, by equilibrium).
Second, by computing the net work done by stresses or forces such as , which act on an individual cube, e.g. for the one-dimensional case in Fig.(c): where the equilibrium relation has been used and the second order term has been neglected. Integrating over the whole body gives: – Work done by the body forces f.
Equating the two results leads to the principle of virtual work for a deformable body:
where the total external virtual work is done by T and f. Thus,
The right-hand side of this relation is often called the internal virtual work. The principle of virtual work then states: External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains. It includes the principle of virtual work for rigid bodies as a special case where the internal virtual work is zero.
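In a common notation, with T the surface tractions and f the body forces of the equilibrated state, u* and ε* the displacements and consistent strains of the second state (the superscript * marking quantities of that state, as above), and σ written here for the equilibrated internal stresses, the statement is often summarized as

∫S T · u* dS + ∫V f · u* dV = ∫V σ : ε* dV,

where the left-hand side is the external virtual work and the right-hand side the internal virtual work.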
Proof of equivalence between the principle of virtual work and the equilibrium equation
We start by looking at the total work done by surface traction on the body going through the specified deformation:
Applying divergence theorem to the right hand side yields:
Now switch to indicial notation for the ease of derivation.
To continue our derivation, we substitute in the equilibrium equation . Then
The first term on the right hand side needs to be broken into a symmetric part and a skew part as follows:
where the strain is that which is consistent with the specified displacement field. The second-to-last equality comes from the fact that the stress matrix is symmetric and that the full contraction of a symmetric matrix with a skew matrix is zero.
Now recap. We have shown through the above derivation that
Move the 2nd term on the right hand side of the equation to the left:
The physical interpretation of the above equation is that the external virtual work is equal to the internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains.
For practical applications:
In order to impose equilibrium on real stresses and forces, we use consistent virtual displacements and strains in the virtual work equation.
In order to impose consistent displacements and strains, we use equilibrated virtual stresses and forces in the virtual work equation.
These two general scenarios give rise to two often stated variational principles. They are valid irrespective of material behaviour.
Principle of virtual displacements
Depending on the purpose, we may specialize the virtual work equation. For example, to derive the principle of virtual displacements in variational notations for supported bodies, we specify:
Virtual displacements and strains as variations of the real displacements and strains using variational notation such as and
Virtual displacements are taken to be zero on the part of the surface that has prescribed displacements, so that the work done by the reactions is zero; only the external surface forces on the remaining part of the surface then do work.
The virtual work equation then becomes the principle of virtual displacements:
This relation is equivalent to the set of equilibrium equations written for a differential element in the deformable body as well as the stress boundary conditions on the part of the surface where tractions are prescribed. Conversely, it can be reached, albeit in a non-trivial manner, by starting with the differential equilibrium equations and the stress boundary conditions, and proceeding in a manner similar to the derivation above.
Since virtual displacements are automatically compatible when they are expressed in terms of continuous, single-valued functions, we often mention only the need for consistency between strains and displacements. The virtual work principle is also valid for large real displacements; however, the equation would then be written using more complex measures of stresses and strains.
Principle of virtual forces
Here, we specify:
Virtual forces and stresses as variations of the real forces and stresses.
Virtual forces are taken to be zero on the part of the surface that has prescribed forces, and thus only the surface (reaction) forces on the part of the surface where displacements are prescribed would do work.
The virtual work equation becomes the principle of virtual forces:
This relation is equivalent to the set of strain-compatibility equations as well as the displacement boundary conditions on the part of the surface where displacements are prescribed. It has another name: the principle of complementary virtual work.
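With δσ denoting the virtual (self-equilibrated) stress variations, δT the associated surface tractions, ū the prescribed displacements, ε the real (compatible) strains, and Su written here for the displacement-prescribed part of the surface, the principle is commonly summarized as

∫Su ū · δT dS = ∫V ε : δσ dV.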
Alternative forms
A specialization of the principle of virtual forces is the unit dummy force method, which is very useful for computing displacements in structural systems. According to D'Alembert's principle, inclusion of inertial forces as additional body forces will give the virtual work equation applicable to dynamical systems. More generalized principles can be derived by:
allowing variations of all quantities.
using Lagrange multipliers to impose boundary conditions and/or to relax the conditions specified in the two states.
These are described in some of the references.
Among the many energy principles in structural mechanics, the virtual work principle deserves a special place due to its generality that leads to powerful applications in structural analysis, solid mechanics, and finite element method in structural mechanics.
| Physical sciences | Classical mechanics | Physics |
1212681 | https://en.wikipedia.org/wiki/Gang-gang%20cockatoo | Gang-gang cockatoo | The gang-gang cockatoo (Callocephalon fimbriatum) is a parrot found in the cooler and wetter forests and woodlands of Australia, particularly alpine bushland. It is the only species placed in the genus Callocephalon. Mostly mild grey in colour with some lighter scalloping (more pronounced and buffy in females), the male has a red head and crest, while the female has a small fluffy grey crest. It ranges throughout south-eastern Australia. The gang-gang cockatoo is the faunal emblem of the Australian Capital Territory. It is easily identified by its distinctive call, which is described as resembling a creaky gate, or the sound of a cork being pulled from a wine bottle.
The name gang-gang comes from a New South Wales Aboriginal language, probably from one of the coastal languages, although possibly from Wiradjuri. It is probably an onomatopoeic name.
Taxonomy
In 1803 the British Royal Navy officer James Grant included an illustration of the gang-gang cockatoo in his book describing a voyage to the colony of New South Wales in Australia. Grant coined the binomial name Psittacus fimbriatus. The gang-gang cockatoo is now the only species placed in the genus Callocephalon that was introduced in 1837 by the French naturalist René Lesson. The type locality is the Bass River in the state of Victoria. The specific epithet is from Latin fimbriata meaning "fringed". The genus name combines the Ancient Greek kallos meaning "beauty" and kephalē meaning "head". The species is monotypic: no subspecies are recognised.
The classification of the gang-gang cockatoo has always been controversial due to the unusual appearance and coloration of the bird, especially its sexual dichromatism. The gang-gang cockatoo was thought to be a distinctive early offshoot of the Calyptorhynchinae (black) cockatoos. However, more recent molecular phylogenetic analysis places it in the Cacatuinae clade, not the Calyptorhynchinae, having diverged from the palm cockatoo (Probosciger aterrimus).
Description
The gang-gang cockatoo is in length with a wingspan, and weighs 230–334 grams. They are grey birds with wispy crests. The head and crest is bright red in males, but dark grey in females. The feathers of the underparts are edged with yellow or pink. The edges of feathers on the upperparts are slightly paler grey than the rest of the feather, which makes the bird look somewhat barred. Juvenile males can be distinguished by their brighter crowns and shorter crests, but otherwise look similar to the adult female. The birds are not easily mistaken for other cockatoos, but while in flight may resemble the Galah. Gang-gangs are very social birds, but not overly noisy.
Distribution and habitat
Gang-gangs are endemic to coastal regions of south-eastern Australia. They formerly inhabited King Island off Tasmania, but have become extinct there. They are an introduced species on Kangaroo Island. Gang-gangs prefer forests and woodlands in the mountains, with dense shrub understories. They migrate short distances during winter into more open habitats, but must migrate back to denser forests to breed, because they need tall trees in order to build nests.
Behaviour and ecology
Breeding habits
Unlike most other cockatoos, gang-gangs nest in young, solid trees. They often nest near water. The females use their strong beaks to excavate nesting cavities. Gang-gangs are monogamous. The breeding season lasts from spring to summer. The birds lay 2–3 white eggs; the incubation period is four weeks, and both sexes take care of the young.
Diet
They forage in the canopy, feeding mostly on the flowers and buds of eucalypts.
Status
Loss of older, hollow trees and loss of feeding habitat across south-eastern Australia through land clearing has led to a significant reduction in the numbers of this cockatoo in recent years. As a result, the gang-gang is now listed as vulnerable in New South Wales. It is protected as a vulnerable species under the Biodiversity Conservation Act 2016 (NSW). This protection status as a threatened species makes it a Tier 1 criminal offence for a person or corporation to knowingly damage the bird's habitat. Damage is defined to include "damage caused by removing any part of the habitat". Habitat is defined to include "an area periodically or occasionally occupied by a species".
In July 2021, an Australian Department of the Environment and Energy spokesperson stated that the population had declined by approximately 69% in the last three generations, or 21 years, and that in addition to this decline, the species had suffered direct mortality and habitat loss during the 2019–20 Australian bushfire season. Between 28 and 36 per cent of the species' distribution was impacted by the fires. As a result, it is set to be listed as endangered among the threatened fauna of Australia.
| Biology and health sciences | Psittaciformes | Animals |
1213846 | https://en.wikipedia.org/wiki/Sodium%20borohydride | Sodium borohydride | Sodium borohydride, also known as sodium tetrahydridoborate and sodium tetrahydroborate, is an inorganic compound with the formula NaBH4 (sometimes written as Na[BH4]). It is a white crystalline solid, usually encountered as an aqueous basic solution. Sodium borohydride is a reducing agent that finds application in papermaking and dye industries. It is also used as a reagent in organic synthesis.
The compound was discovered in the 1940s by H. I. Schlesinger, who led a team seeking volatile uranium compounds. Results of this wartime research were declassified and published in 1953.
Properties
The compound is soluble in alcohols, certain ethers, and water, although it slowly hydrolyzes.
Sodium borohydride is an odorless white to gray-white microcrystalline powder that often forms lumps. It can be purified by recrystallization from warm (50 °C) diglyme. Sodium borohydride is soluble in protic solvents such as water and lower alcohols. It also reacts with these protic solvents, liberating hydrogen; however, these reactions are fairly slow. Complete decomposition of a methanol solution requires nearly 90 min at 20 °C. It decomposes in neutral or acidic aqueous solutions, but is stable at pH 14.
Structure
NaBH4 is a salt, consisting of Na+ cations and tetrahedral BH4− anions. The solid is known to exist as three polymorphs: α, β and γ. The stable phase at room temperature and pressure is α-NaBH4, which is cubic and adopts an NaCl-type structure, in the Fm3m space group. At a pressure of 6.3 GPa, the structure changes to the tetragonal β-NaBH4 (space group P421c) and at 8.9 GPa, the orthorhombic γ-NaBH4 (space group Pnma) becomes the most stable.
Synthesis and handling
For commercial production, the Brown-Schlesinger process and the Bayer process are the most popular methods. In the Brown-Schlesinger process, sodium borohydride is industrially prepared from sodium hydride (produced by reacting Na and H2) and trimethyl borate at 250–270 °C:
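In conventional notation, the overall transformation is usually written as

B(OCH3)3 + 4 NaH → NaBH4 + 3 NaOCH3.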
Millions of kilograms are produced annually, far exceeding the production levels of any other hydride reducing agent. In the Bayer process, it is produced from inorganic borates, including borosilicate glass and borax:
Magnesium is a less expensive reductant, and could in principle be used instead:
Reactivity
Organic synthesis
NaBH4 reduces many organic carbonyls, depending on the conditions. Most typically, it is used in the laboratory for converting ketones and aldehydes to alcohols. These reductions proceed in two stages, formation of the alkoxide followed by hydrolysis:
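A schematic way of writing the two stages for a generic ketone R2C=O (R denoting generic organic groups) is

NaBH4 + 4 R2C=O → Na[B(OCHR2)4]
Na[B(OCHR2)4] + 4 H2O → 4 R2CH-OH + NaB(OH)4

in which all four hydrides are delivered to carbonyl carbons before the resulting alkoxyborate is hydrolyzed; the exact borate by-product depends on the work-up conditions.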
It also efficiently reduces acyl chlorides, anhydrides, α-hydroxylactones, thioesters, and imines at room temperature or below. It reduces esters slowly and inefficiently with excess reagent and/or elevated temperatures, while carboxylic acids and amides are not reduced at all.
Nevertheless, an alcohol, often methanol or ethanol, is generally the solvent of choice for sodium borohydride reductions of ketones and aldehydes. The mechanism of ketone and aldehyde reduction has been scrutinized by kinetic studies, and contrary to popular depictions in textbooks, the mechanism does not involve a 4-membered transition state like alkene hydroboration, or a six-membered transition state involving a molecule of the alcohol solvent. Hydrogen-bonding activation is required, as no reduction occurs in an aprotic solvent like diglyme. However, the rate order in alcohol is 1.5, while carbonyl compound and borohydride are both first order, suggesting a mechanism more complex than one involving a six-membered transition state that includes only a single alcohol molecule. It was suggested that the simultaneous activation of the carbonyl compound and borohydride occurs, via interaction with the alcohol and alkoxide ion, respectively, and that the reaction proceeds through an open transition state.
α,β-Unsaturated ketones tend to be reduced by NaBH4 in a 1,4-sense, although mixtures are often formed. Addition of cerium chloride improves the selectivity for 1,2-reduction of unsaturated ketones (Luche reduction). α,β-Unsaturated esters also undergo 1,4-reduction in the presence of .
The NaBH4-MeOH system, formed by the addition of methanol to sodium borohydride in refluxing THF, reduces esters to the corresponding alcohols. Mixing water or an alcohol with the borohydride converts some of it into an unstable hydride ester, which is more efficient at reduction, but the reductant eventually decomposes spontaneously to produce hydrogen gas and borates. The same reaction can also occur intramolecularly: an α-ketoester converts into a diol, since the alcohol produced attacks the borohydride to produce an ester of the borohydride, which then reduces the neighboring ester.
The reactivity of NaBH4 can be enhanced or augmented by a variety of compounds, and many additives for modifying its reactivity have been developed.
Oxidation
Oxidation with iodine in tetrahydrofuran gives borane–tetrahydrofuran, which can reduce carboxylic acids to alcohols.
Partial oxidation of borohydride with iodine gives octahydrotriborate:
Coordination chemistry
Borohydride is a ligand for metal ions. Such borohydride complexes are often prepared by the action of NaBH4 on the corresponding metal halide. One example is the titanocene derivative:
Protonolysis and hydrolysis
NaBH4 reacts with water and alcohols, with evolution of hydrogen gas and formation of the corresponding borate, the reaction being especially fast at low pH. Exploiting this reactivity, sodium borohydride has been studied as a prototype fuel for the direct borohydride fuel cell.
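The idealized hydrolysis is usually written as

NaBH4 + 2 H2O → NaBO2 + 4 H2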
(ΔH < 0)
Applications
Paper manufacture
The dominant application of sodium borohydride is the production of sodium dithionite from sulfur dioxide. Sodium dithionite is used as a bleaching agent for wood pulp and in the dyeing industry.
It has been tested as pretreatment for pulping of wood, but is too costly to be commercialized.
Chemical synthesis
Sodium borohydride reduces aldehydes and ketones to give the related alcohols. This reaction is used in the production of various antibiotics including chloramphenicol, dihydrostreptomycin, and thiophenicol. Various steroids and vitamin A are prepared using sodium borohydride in at least one step.
Niche or abandoned applications
Sodium borohydride has been considered as a way to store hydrogen for hydrogen-fueled vehicles, as it is safer (being stable in dry air) and more efficient on a weight basis than most other alternatives. The hydrogen can be released by simple hydrolysis of the borohydride. However, such a usage would need a cheap, relatively simple, and energy-efficient process to recycle the hydrolysis product, sodium metaborate, back to the borohydride. No such process was available as of 2007.
Although practical temperatures and pressures for hydrogen storage have not been achieved, in 2012 a core–shell nanostructure of sodium borohydride was used to store, release and reabsorb hydrogen under moderate conditions.
Skilled professional conservator/restorers have used sodium borohydride to minimize or reverse foxing in old books and documents.
Education
A common laboratory demonstration "uncooks" eggs with sodium borohydride, as hydride reagents reduce disulfides to thiols. To uncook an egg, breaking the hydrogen and hydrophobic bonds is not enough. As sodium borohydride is toxic, the egg white uncooked after three hours is not edible, but Vitamin C can be used instead.
| Physical sciences | Borohydride salts | Chemistry |
1214539 | https://en.wikipedia.org/wiki/Feathered%20dinosaur | Feathered dinosaur | A feathered dinosaur is any species of dinosaur possessing feathers. That includes all species of birds, and in recent decades evidence has accumulated that many non-avian dinosaur species also possessed feathers in some shape or form. The extent to which feathers or feather-like structures were present in dinosaurs as a whole is a subject of ongoing debate and research.
It has been suggested that feathers had originally functioned as thermal insulation, as it remains their function in the down feathers of infant birds prior to their eventual modification in birds into structures that support flight.
Since scientific research began on dinosaurs in the early 1800s, they were generally believed to be closely related to modern reptiles such as lizards. The word dinosaur itself, coined in 1842 by paleontologist Richard Owen, comes from the Greek for 'terrible lizard'. That view began to shift during the so-called dinosaur renaissance in scientific research in the late 1960s; by the mid-1990s, significant evidence had emerged that dinosaurs were much more closely related to birds, which descended directly from the theropod group of dinosaurs.
Knowledge of the origin of feathers developed as new fossils were discovered throughout the 2000s and the 2010s, and technology enabled scientists to study fossils more closely. Among non-avian dinosaurs, feathers or feather-like integument have been discovered in dozens of genera via direct and indirect fossil evidence. Although the vast majority of feather discoveries have been in coelurosaurian theropods, feather-like integument has also been discovered in at least three ornithischians, suggesting that feathers may have been present on the last common ancestor of the Ornithoscelida, a dinosaur group including both theropods and ornithischians. It is possible that feathers first developed in even earlier archosaurs, in light of the discovery of vaned feathers in pterosaurs. Fossil feathers from the dinosaur Sinosauropteryx contain traces of beta-proteins (formerly called beta-keratins), confirming that early feathers had a composition similar to that of feathers in modern birds. Crocodilians also possess beta keratin similar to those of birds, which suggests that they evolved from common ancestral genes.
History of research
Early
Shortly after the 1859 publication of Charles Darwin's On the Origin of Species, the British biologist Thomas Henry Huxley proposed that birds were descendants of dinosaurs. He compared the skeletal structure of Compsognathus, a small theropod dinosaur, and the "first bird" Archaeopteryx lithographica (both of which were found in the Upper Jurassic Bavarian limestone of Solnhofen). He showed that, apart from its hands and feathers, Archaeopteryx was quite similar to Compsognathus. Thus Archaeopteryx represents a transitional fossil. In 1868, he published On the Animals which are most nearly intermediate between Birds and Reptiles, which made that case.
The first restoration of a feathered dinosaur was Huxley's depiction in 1876 of a feathered Compsognathus, made to accompany a bird evolution lecture he delivered in New York, in which he speculated that the aforementioned dinosaur might have had feathers.
Dinosaur renaissance
A century later, during the dinosaur renaissance, paleoartists began to create modern restorations of highly active dinosaurs. In 1969, Robert T. Bakker drew a running Deinonychus. His student Gregory S. Paul depicted non-avian maniraptoran dinosaurs with feathers and protofeathers, starting in the late 1970s.
Fossil discoveries
The first known specimen of Archaeopteryx, on the basis of which the genus was named, was an isolated feather, although whether or not it belongs to Archaeopteryx has been controversial. One of the earliest discoveries of possible feather impressions by non-avian dinosaurs is a trace fossil (Fulicopus lyellii) of the 195–199 million year old Portland Formation in the northeastern United States. Gierlinski (1996, 1997, 1998) and Kundrát (2004) have interpreted traces between two footprints in this fossil as feather impressions from the belly of a squatting dilophosaurid. Although some reviewers have raised questions about the naming and interpretation of this fossil, if correct, that early Jurassic fossil is the oldest known evidence of feathers, almost 30 million years older than the next-oldest-known evidence.
The most important discoveries at Liaoning have been a host of feathered dinosaur fossils, with a steady stream of new finds filling in the picture of the dinosaur–bird connection and adding more to theories of the evolutionary development of feathers and flight. Turner et al. (2007) reported quill knobs from an ulna of Velociraptor mongoliensis, and these are strongly correlated with large and well-developed secondary feathers.
Behavioural evidence, in the form of an oviraptorosaur on its nest, showed another link with birds. Its forearms were folded, like those of a bird. Although no feathers were preserved, it is likely that these would have been present to insulate eggs and juveniles.
Not all of the Chinese fossil discoveries proved valid however. In 1999, a supposed fossil of an apparently feathered dinosaur named Archaeoraptor liaoningensis, also found in Liaoning, turned out to be a forgery. Comparing the photograph of the specimen with another find, Chinese paleontologist Xu Xing came to the conclusion that it was composed of two portions of different fossil animals. His claim made National Geographic review their research and they too came to the same conclusion.
In 2011, samples of amber were discovered to contain preserved feathers from 75 to 80 million years ago during the Cretaceous Period, with evidence that they were from both dinosaurs and birds. Initial analysis suggests that some of the feathers were used for insulation, and not flight. More complex feathers were revealed to have variations in coloration similar to modern birds, while simpler protofeathers were predominantly dark. Only 11 specimens are currently known. The specimens are too rare to be broken open to study their melanosomes (pigment-bearing organelles), but there are plans for using non-destructive high-resolution X-ray imaging. Melanosomes produce colouration in feathers; as differently-shaped melanosomes produce different colours, subsequent research on melanosomes preserved in feathered dinosaur specimens has led to reconstructions of the life appearance of several dinosaur species. These include Anchiornis, Sinosauropteryx, Microraptor, and Archaeopteryx.
In 2016, the discovery was announced of a feathered dinosaur tail preserved in amber that is estimated to be 99 million years old. Lida Xing, a researcher from the China University of Geosciences in Beijing, found the specimen at an amber market in Myanmar. It is the first definitive discovery of dinosaur material in amber.
Current knowledge
Non-avian dinosaur species preserved with evidence of feathers
Several non-avian dinosaurs are now known to have been feathered. Direct evidence of feathers exists for several species. In all examples, the evidence described consists of feather impressions, except those genera inferred to have had feathers based on skeletal or chemical evidence, such as the presence of quill knobs (the anchor points for wing feathers on the forelimb) or a pygostyle (the fused vertebrae at the tail tip which often supports large feathers).
Primitive feather types
Integumentary structures that gave rise to the feathers of birds are seen in the dorsal spines of reptiles and fish. A similar stage in their evolution to the complex coats of birds and mammals can be observed in living reptiles such as iguanas and Gonocephalus agamids. Feather structures are thought to have proceeded from simple hollow filaments through several stages of increasing complexity, ending with the large, deeply rooted feathers with strong pens (rachis), barbs and barbules that birds display today.
According to Prum's (1999) proposed model, at stage I, the follicle originates with a cylindrical epidermal depression around the base of the feather papilla. The first feather resulted when undifferentiated tubular follicle collar developed out of the old keratinocytes being pushed out. At stage II, the inner, basilar layer of the follicle collar differentiated into longitudinal barb ridges with unbranched keratin filaments, while the thin peripheral layer of the collar became the deciduous sheath, forming a tuft of unbranched barbs with a basal calamus. Stage III consists of two developmental novelties, IIIa and IIIb, as either could have occurred first. Stage IIIa involves helical displacement of barb ridges arising within the collar. The barb ridges on the anterior midline of the follicle fuse together, forming the rachis. The creation of a posterior barb locus follows, giving an indeterminate number of barbs. This resulted in a feather with a symmetrical, primarily branched structure with a rachis and unbranched barbs. In stage IIIb, barbules paired within the peripheral barbule plates of the barb ridges, create branched barbs with rami and barbules. This resulting feather is one with a tuft of branched barbs without a rachis. At stage IV, differentiated distal and proximal barbules produce a closed, pennaceous vane (a contour feather). A closed vane develops when pennulae on the distal barbules form a hooked shape to attach to the simpler proximal barbules of the adjacent barb. Stage V developmental novelties gave rise to additional structural diversity in the closed pennaceous feather. Here, asymmetrical flight feathers, bipinnate plumulaceous feathers, filoplumes, powder down, and bristles evolved.
Some evidence suggests that the original function of simple feathers was insulation. In particular, preserved patches of skin in large, derived, tyrannosauroids show scutes, while those in smaller, more primitive, forms show feathers. This may indicate that the larger forms had complex skins, with both scutes and filaments, or that tyrannosauroids may be like rhinos and elephants, having filaments at birth and then losing them as they developed to maturity. An adult Tyrannosaurus rex weighed about as much as an African elephant. If large tyrannosauroids were endotherms, they would have needed to radiate heat efficiently. This is due to the different structural properties of feathers compared to fur.
Some evidence also suggests that more derived feather types may have served as insulation. For instance, a study of oviraptorid pennaceous wing feathers and nesting posture suggests that elongated wing feathers evidently may have served to fill gaps between brooding individuals' insulatory body chamber and the outside environment. This "wall" of wing feathers could have shielded eggs from temperature extremes.
There is an increasing body of evidence that supports the display hypothesis, which states that early feathers were colored and increased reproductive success. Coloration could have provided the original adaptation of feathers, implying that all later functions of feathers, such as thermoregulation and flight, were co-opted. This hypothesis has been supported by the discovery of pigmented feathers in multiple species. Supporting the display hypothesis is the fact that fossil feathers have been observed in a ground-dwelling herbivorous dinosaur clade, making it unlikely that feathers functioned as predatory tools or as a means of flight. Additionally, some specimens have iridescent feathers. Pigmented and iridescent feathers may have provided greater attractiveness to mates, providing enhanced reproductive success when compared to non-colored feathers. Current research shows that it is plausible that theropods would have had the visual acuity necessary to see the displays. In a study by Stevens (2006), the binocular field of view for Velociraptor has been estimated to be 55 to 60 degrees, which is about that of modern owls. Visual acuity for Tyrannosaurus has been predicted to be anywhere from about that of humans to 13 times that of humans. Paleontological and evolutionary developmental studies show that feathers or feather-like structures were converting back to scales.
The idea that precursors of feathers appeared before they were co-opted for insulation is already stated in Gould and Vrba (1982). The original benefit might have been metabolic. Feathers are largely made of the keratin protein complex, which has disulfide bonds between amino acids that give it stability and elasticity. The metabolism of amino acids containing sulfur can be toxic; however, if the sulfur amino acids are not catabolized as the final products of urea or uric acid but used for the synthesis of keratin instead, the release of hydrogen sulfide is extremely reduced or avoided. For an organism whose metabolism works at high internal temperatures of or greater, it can be extremely important to prevent the excess production of hydrogen sulfide. This hypothesis could be consistent with the need for high metabolic rate of theropod dinosaurs.
The point in archosaur phylogeny at which the earliest simple "protofeathers" arose is not known with certainty, nor is it known whether they arose once or independently multiple times. Filamentous structures are clearly present in pterosaurs, and long, hollow quills have been reported in specimens of the ornithischian dinosaurs Psittacosaurus and Tianyulong, although there has been disagreement. In 2009, Xu et al. noted that the hollow, unbranched, stiff integumentary structures found on a specimen of Beipiaosaurus were strikingly similar to the integumentary structures of Psittacosaurus and pterosaurs. They suggested that all of these structures may have been inherited from a common ancestor much earlier in the evolution of archosaurs, possibly in an ornithodire from the Middle Triassic or earlier. More recently, findings in Russia of the basal neornithischian Kulindadromeus report that although the lower leg and tail seemed to be scaled, "varied integumentary structures were found directly associated with skeletal elements, supporting the hypothesis that simple filamentous feathers, as well as compound feather-like structures comparable to those in theropods, were widespread amongst the whole dinosaur clade." In contrast, a 2016 study published in the Journal of Geology suggested that the integumentary structures found on Kulindadromeus and Psittacosaurus may be highly deformed scales rather than filamentous feathers.
Display feathers are also known from dinosaurs that are very primitive members of the bird lineage, or Avialae. The most primitive example is Epidexipteryx, which had a short tail with extremely long, ribbon-like feathers. Oddly enough, the fossil does not preserve wing feathers, suggesting that Epidexipteryx was either secondarily flightless, or that display feathers evolved before flight feathers in the bird lineage. Plumaceous feathers are found in nearly all lineages of Theropoda common in the northern hemisphere, and pennaceous feathers are attested as far down the tree as the Ornithomimosauria. The fact that only adult Ornithomimus had wing-like structures suggests that pennaceous feathers evolved for mating displays.
Phylogeny and inference of feathers in other dinosaurs
Because feathers are rarely preserved, the presence and type of feathers in a fossil species can often be inferred from the condition in its closest relatives on the evolutionary tree. This technique, called phylogenetic bracketing, can also be used to infer the type of feathers a species may have had, since the developmental history of feathers is now reasonably well-known. All feathered species had filamentaceous or plumaceous (downy) feathers, with pennaceous feathers found among the more bird-like groups. The following cladogram is adapted from Godefroit et al., 2013.
Grey denotes a clade that is not known to contain any feathered specimen at the time of writing, some of which have fossil evidence of scales. The presence or lack of feathered specimens in a given clade does not confirm that all members in a clade have the specified integument, unless corroborated with representative fossil evidence within clade members.
The following cladogram is from Xu (2020).
Slender monofilamentous integument
Broad monofilamentous integument
Basally joining filamentous feather
Basally joining shafted filamentous feather
Radially branched shafted filamentous feather
Bilaterally branched filamentous feather
Basally joining branched filamentous feather
Basally joining membranous-based filamentous feather
Symmetrical open-vaned feather
Symmetrical close-vaned feather
Asymmetrical close-vaned feather
Proximally ribbon-like close-vaned feather
Rachis-dominant close-vaned feather
| Biology and health sciences | Dinosaurs | Animals |
8035060 | https://en.wikipedia.org/wiki/Ecosystem%20model | Ecosystem model | An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system.
Using data gathered from the field, ecological relationships—such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations—are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e. simulating a process that takes centuries in reality, can be done in a matter of minutes in a computer model).
Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example, by combining ecological models with archaeological models to explain the diversity and mobility of stone tools.
Types of models
There are two major types of ecological models, which are generally applied to different types of problems: (1) analytic models and (2) simulation / computational models. Analytic models are typically relatively simple (often linear) systems, that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models on the other hand, use numerical techniques to solve problems for which analytic solutions are impractical or impossible. Simulation models tend to be more widely used, and are generally considered more ecologically realistic, while analytic models are valued for their mathematical elegance and explanatory power. Ecopath is a powerful software system which uses simulation and computational methods to model marine ecosystems. It is widely used by marine and fisheries scientists as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems.
Model design
The process of model design begins with a specification of the problem to be solved, and the objectives for the model.
Ecological systems are composed of an enormous number of biotic and abiotic factors that interact with each other in ways that are often unpredictable, or so complex as to be impossible to incorporate into a computable model. Because of this complexity, ecosystem models typically simplify the systems they are studying to a limited number of components that are well understood, and deemed relevant to the problem that the model is intended to solve.
The process of simplification typically reduces an ecosystem to a small number of state variables and mathematical functions that describe the nature of the relationships between them. The number of ecosystem components that are incorporated into the model is limited by aggregating similar processes and entities into functional groups that are treated as a unit.
After establishing the components to be modeled and the relationships between them, another important factor in ecosystem model structure is the representation of space used. Historically, models have often ignored the confounding issue of space. However, for many ecological problems spatial dynamics are an important part of the problem, with different spatial environments leading to very different outcomes. Spatially explicit models (also called "spatially distributed" or "landscape" models) attempt to incorporate a heterogeneous spatial environment into the model. A spatial model is one that has one or more state variables that are a function of space, or can be related to other spatial variables.
Validation
After construction, models are validated to ensure that the results are acceptably accurate or realistic. One method is to test the model with multiple sets of data that are independent of the actual system being studied. This is important since certain inputs can cause a faulty model to output correct results. Another method of validation is to compare the model's output with data collected from field observations. Researchers frequently specify beforehand how much of a disparity they are willing to accept between parameters output by a model and those computed from field data.
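As a minimal illustration of the validation step described above, the sketch below compares a model's output against field observations using the root-mean-square error and a tolerance specified beforehand. Both data series and the tolerance value are hypothetical.

```python
# Sketch of a simple validation check: compare model output with field data and
# test the disparity against a threshold chosen before the comparison is made.
import math

field_obs  = [12.1, 15.4, 18.9, 22.3, 20.7]   # hypothetical field measurements
model_pred = [11.8, 15.9, 18.2, 23.1, 21.5]   # hypothetical model output
tolerance  = 1.0                              # acceptable root-mean-square error

rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(field_obs, model_pred)) / len(field_obs))
verdict = "acceptable" if rmse <= tolerance else "needs revision"
print(f"RMSE = {rmse:.2f} -> model {verdict}")
```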
Examples
The Lotka–Volterra equations
One of the earliest, and most well-known, ecological models is the predator-prey model of Alfred J. Lotka (1925) and Vito Volterra (1926). This model takes the form of a pair of ordinary differential equations, one representing a prey species, the other its predator.
dx/dt = αx − βxy,  dy/dt = δxy − γy,

where x is the number (or density) of prey, y is the number of predators, t is time, and α, β, δ and γ are positive parameters representing, respectively, the prey's growth rate, the rate of predation, the predator's reproduction rate per prey consumed, and the predator's mortality rate.
Volterra originally devised the model to explain fluctuations in fish and shark populations observed in the Adriatic Sea after the First World War (when fishing was curtailed). However, the equations have subsequently been applied more generally. Although simple, they illustrate some of the salient features of ecological models: modelled biological populations experience growth, interact with other populations (as either predators, prey or competitors) and suffer mortality.
A credible, simple alternative to the Lotka–Volterra predator-prey model and its common prey-dependent generalizations is the ratio-dependent or Arditi–Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka–Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi–Ginzburg model as the first approximation.
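The sketch below numerically integrates the classic prey-dependent equations given above, which is how such models are typically explored in practice. The parameter values, initial densities and time step are purely illustrative assumptions; a comment notes where a ratio-dependent variant would differ.

```python
# Minimal explicit-Euler integration of the Lotka-Volterra equations above.
# Parameter values and initial densities are illustrative only.
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # assumed demonstration parameters
x, y = 10.0, 5.0                                  # initial prey and predator densities
dt, t_end = 0.001, 50.0

steps = int(t_end / dt)
for step in range(steps + 1):
    if step % int(10.0 / dt) == 0:                # report every 10 time units
        print(f"t = {step * dt:5.1f}   prey = {x:8.2f}   predator = {y:7.2f}")
    dx = alpha * x - beta * x * y                 # prey growth minus predation
    dy = delta * x * y - gamma * y                # predator growth minus mortality
    x, y = x + dt * dx, y + dt * dy

# Replacing the x*y interaction terms with functions of the ratio x/y would give
# a ratio-dependent (Arditi-Ginzburg-type) variant of the same simulation.
```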
Others
The theoretical ecologist Robert Ulanowicz has used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow, and eutrophication).
Conway's Game of Life and its variations model ecosystems where the proximity of the members of a population is a factor in population growth.
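For readers unfamiliar with it, the sketch below implements one generation step of Conway's Game of Life and advances a small "glider" pattern, showing how purely local neighbour counts drive birth and death in this kind of grid-based model. The pattern and grid representation are just an example.

```python
# Minimal Game of Life step on an unbounded grid, applied to a "glider".
# Live cells are stored as a set of (row, column) coordinates.
from collections import Counter

def step(cells):
    """Return the next generation: birth on 3 neighbours, survival on 2 or 3."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for r, c in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {pos for pos, n in neighbour_counts.items()
            if n == 3 or (n == 2 and pos in cells)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for generation in range(4):
    print(f"generation {generation}: {sorted(glider)}")
    glider = step(glider)
```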
| Biology and health sciences | Ecology | Biology |
8040039 | https://en.wikipedia.org/wiki/Vault%20%28architecture%29 | Vault (architecture) | In architecture, a vault (French voûte, from Italian volta) is a self-supporting arched form, usually of stone or brick, serving to cover a space with a ceiling or roof. As in building an arch, a temporary support is needed while rings of voussoirs are constructed and the rings placed in position. Until the topmost voussoir, the keystone, is positioned, the vault is not self-supporting. Where timber is easily obtained, this temporary support is provided by centering consisting of a framed truss with a semicircular or segmental head, which supports the voussoirs until the ring of the whole arch is completed.
Vault types
Corbelled vaults, also called false vaults, built of horizontally joined layers of stone, have been documented since prehistoric times, for example at Mycenae in the 14th century BC. They continued to be built regionally until modern times.
True vault construction, with radially joined stones, was already known to the Egyptians and Assyrians and was introduced into the building practice of the West by the Etruscans. The Romans in particular developed vault construction further and built barrel, cross and dome vaults. Some outstanding examples have survived in Rome, e.g. the Pantheon and the Basilica of Maxentius.
Brick vaults were in widespread use in Egypt from the early 3rd millennium BC, and from the end of the 8th century BC keystone vaults were built. However, the monumental temple buildings of the pharaonic culture in the Nile Valley did not use vaults, since even the huge portals, with widths of more than 7 meters, were spanned with cut stone beams.
Dome
One of the earliest known examples of any form of vaulting is found in the neolithic village of Khirokitia on Cyprus. Dating from BCE, the circular buildings supported beehive-shaped corbel-domed vaults of unfired mud-bricks and also represent the first evidence for settlements with an upper floor. Similar beehive tombs, called tholoi, exist in Crete and Northern Iraq. Their construction differs from that at Khirokitia in that most appear partially buried and make provision for a dromos entry.
The inclusion of domes, however, represents a wider sense of the word vault. The distinction between the two is that a vault is essentially an arch which is extruded into the third dimension, whereas a dome is an arch revolved around its vertical axis.
Pitched brick barrel vault
Pitched-brick vaults are named for their construction: the bricks are installed vertically (not radially) and lean (are pitched) at an angle, which allows their construction to be completed without the use of centering. Examples set in gypsum mortar have been found in archaeological excavations in Mesopotamia dating to the 3rd and 2nd millennia BCE.
Barrel vault
A barrel vault is the simplest form of a vault and resembles a barrel or tunnel cut lengthwise in half. The effect is that of a structure composed of continuous semicircular or pointed sections.
The earliest known examples of barrel vaults were built by the Sumerians, possibly under the ziggurat at Nippur in Babylonia, which was built of fired bricks cemented with clay mortar.
The earliest barrel vaults in ancient Egypt are thought to be those in the granaries built by the 19th dynasty Pharaoh Ramesses II, the ruins of which are behind the Ramesseum, at Thebes. The span was and the lower part of the arch was built in horizontal courses, up to about one-third of the height, and the rings above were inclined back at a slight angle, so that the bricks of each ring, laid flatwise, adhered till the ring was completed, no centering of any kind being required; the vault thus formed was elliptic in section, arising from the method of its construction. A similar system of construction was employed for the vault over the great hall at Ctesiphon, where the material employed was fired bricks or tiles of great dimensions, cemented with mortar; but the span was close upon , and the thickness of the vault was nearly at the top, there being four rings of brickwork.
Assyrian palaces used pitched-brick vaults, made with sun-dried mudbricks, for gates, subterranean graves and drains. During the reign of King Sennacherib they were used to construct aqueducts, such as those at Jerwan. In the provincial city of Dūr-Katlimmu they were used to create vaulted platforms. The tradition of their erection, however, would seem to have been handed down to their successors in Mesopotamia, viz. to the Sassanians, who in their palaces in Sarvestan and Firouzabad built domes of similar form to those shown in the Nimrud sculptures, the chief difference being that, constructed in rubble stone and cemented with mortar, they still exist, though probably abandoned on the Islamic invasion in the 7th century.
Groin vaults
A groin vault is formed by the intersection of two or more barrel vaults, resulting in the formation of angles or groins along the lines of transition between the webs. In these bays the longer transverse arches are semi-circular, as are the shorter longitudinal arches. The curvatures of these bounding arches were apparently used as the basis for the web centrings, which were created in the form of two intersecting tunnels, as though each web were an arch projected horizontally in three dimensions.
The earliest example is thought to be over a small hall at Pergamum, in Asia Minor, but its first employment over halls of great dimensions is due to the Romans. When two semicircular barrel vaults of the same diameter cross one another their intersection (a true ellipse) is known as a groin vault, down which the thrust of the vault is carried to the cross walls; if a series of two or more barrel vaults intersect one another, the weight is carried on to the piers at their intersection and the thrust is transmitted to the outer cross walls; thus in the Roman reservoir at Baiae, known as the Piscina Mirabilis, a series of five aisles with semicircular barrel vaults are intersected by twelve cross aisles, the vaults being carried on 48 piers and thick external walls. The width of these aisles being only about there was no great difficulty in the construction of these vaults, but in the Roman Baths of Caracalla the tepidarium had a span of , more than twice that of an English cathedral, so that its construction both from the statical and economical point of view was of the greatest importance.
The researches of M. Choisy (L'Art de bâtir chez les Romains), based on a minute examination of those portions of the vaults which still remain in situ, have shown that, on a comparatively slight centering, consisting of trusses placed about apart and covered with planks laid from truss to truss, were laid – to begin with – two layers of the Roman brick (measuring nearly square and 2 in. thick); on these and on the trusses transverse rings of brick were built with longitudinal ties at intervals; on the brick layers and embedding the rings and cross ties concrete was thrown in horizontal layers, the haunches being filled in solid, and the surface sloped on either side and covered over with a tile roof of low pitch laid direct on the concrete. The rings relieved the centering from the weight imposed, and the two layers of bricks carried the concrete till it had set.
As the walls carrying these vaults were also built in concrete with occasional bond courses of brick, the whole structure was homogeneous. One of the important ingredients of the mortar was a volcanic deposit found near Rome, known as pozzolana, which, when the concrete had set, not only made the concrete as solid as the rock itself, but to a certain extent neutralized the thrust of the vaults, which formed shells equivalent to that of a metal lid; the Romans, however, do not seem to have recognized the value of this pozzolana mixture, for they otherwise provided amply for the counteracting of any thrust which might exist by the erection of cross walls and buttresses. In the tepidaria of the Thermae and in the basilica of Constantine, in order to bring the thrust well within the walls, the main barrel vault of the hall was brought forward on each side and rested on detached columns, which constituted the principal architectural decoration. In cases where the cross vaults intersecting were not of the same span as those of the main vault, the arches were either stilted so that their soffits might be of the same height, or they formed smaller intersections in the lower part of the vault; in both of these cases, however, the intersections or groins were twisted, for which it was very difficult to form a centering, and, moreover, they were of disagreeable effect: though every attempt was made to mask this in the decoration of the vault by panels and reliefs modelled in stucco.
Rib vault
A rib vault is one in which all of the groins are covered by ribs or diagonal ribs in the form of segmental arches. Their curvatures are defined by the bounding arches. Whilst the transverse arches retain the same semi-circular profile as their groin-vaulted counterparts, the longitudinal arches are pointed with both arcs having their centres on the impost line. This allows the latter to correspond more closely to the curvatures of the diagonal ribs, producing a straight tunnel running from east to west.
Reference has been made to the rib vault in Roman work, where the intersecting barrel vaults were not of the same diameter. Their construction must at all times have been somewhat difficult, but where the barrel vaulting was carried round over the choir aisle and was intersected (as in St Bartholomew-the-Great in Smithfield, London) by semicones instead of cylinders, it became worse and the groins more complicated. This would seem to have led to a change of system and to the introduction of a new feature, which completely revolutionized the construction of the vault. Hitherto the intersecting features were geometrical surfaces, of which the diagonal groins were the intersections, elliptical in form, generally weak in construction and often twisting. The medieval builder reversed the process, and set up the diagonal ribs first, which were utilized as permanent centres, and on these he carried his vault or web, which henceforward took its shape from the ribs. Instead of the elliptical curve which was given by the intersection of two semicircular barrel vaults, or cylinders, he employed the semicircular arch for the diagonal ribs; this, however, raised the centre of the vaulted square bay above the level of the transverse arches and of the wall ribs, and thus gave the appearance of a dome to the vault, such as may be seen in the nave of Sant'Ambrogio, Milan. To meet this, at first the transverse and wall ribs were stilted, or the upper part of their arches was raised, as in the Abbaye-aux-Hommes at Caen, and the Abbey of Lessay, in Normandy. The problem was ultimately solved by the introduction of the pointed arch for the transverse and wall ribs – the pointed arch had long been known and employed, on account of its much greater strength and of the lesser thrust it exerted on the walls. When employed for the ribs of a vault, however narrow the span might be, by adopting a pointed arch, its summit could be made to range in height with the diagonal rib; and, moreover, when utilized for the ribs of the annular vault, as in the aisle round the apsidal termination of the choir, it was not necessary that the half ribs on the outer side should be in the same plane as those of the inner side; for when the opposite ribs met in the centre of the annular vault, the thrust was equally transmitted from one to the other, and being already a broken arch the change of its direction was not noticeable.
The first introduction of the pointed arch rib took place at Cefalù Cathedral and pre-dated the abbey of Saint-Denis. Whilst the pointed rib-arch is often seen as an identifier for Gothic architecture, Cefalù is a Romanesque cathedral whose masons experimented with the possibility of Gothic rib-arches before the form was widely adopted by western church architecture. Besides Cefalù Cathedral, the introduction of the pointed arch rib would seem to have taken place in the choir aisles of the abbey of Saint-Denis, near Paris, built by the abbot Suger in 1135. It was in the church at Vezelay (1140) that it was extended to the square bay of the porch. As has been pointed out, the aisles had already in the early Christian churches been covered over with groined vaults, the only advance made in the later developments being the introduction of transverse ribs dividing the bays into square compartments. In the 12th century the first attempts were made to vault over the naves, which were twice the width of the aisles, so it became necessary to include two bays of the aisles to form one rectangular bay in the nave (although this is often mistaken for square). It followed that every alternate pier served no purpose, so far as the support of the nave vault was concerned, and this would seem to have suggested the alternative of providing a supplementary rib across the church and between the transverse ribs. This resulted in what is known as a sexpartite, or six-celled vault, of which one of the earliest examples is found in the Abbaye-aux-Hommes at Caen. This church, built by William the Conqueror, was originally constructed to carry a timber roof only, but nearly a century later the upper part of the nave walls was partly rebuilt, in order that it might be covered with a vault. The immense size, however, of the square vault over the nave necessitated some additional support, so that an intermediate rib was thrown across the church, dividing the square compartment into six cells, and called the sexpartite vault.
The intermediate rib, however, had the disadvantage of partially obscuring one side of the clerestory windows, and it threw unequal weights on the alternate piers, so that in the cathedral of Soissons (1205) a quadripartite or four-celled vault was introduced, the width of each bay being half the span of the nave, and corresponding therefore with the aisle piers. To this there are some exceptions, in Sant' Ambrogio, Milan, and San Michele, Pavia (the original vault), and in the cathedrals of Speyer, Mainz and Worms, where the quadripartite vaults are nearly square, the intermediate piers of the aisles being of much smaller dimensions. In England sexpartite vaults exist at Canterbury (1175) (set out by William of Sens), Rochester (1200), Lincoln (1215), Durham (east transept), and St. Faith's chapel, Westminster Abbey.
In the earlier stage of rib vaulting, the arched ribs consisted of independent or separate voussoirs down to the springing; the difficulty, however, of working the ribs separately led to two other important changes: (1) the lower parts of the transverse, diagonal and wall ribs were all worked out of one stone; and (2) the lower courses were laid in horizontal beds, constituting what is known as the tas-de-charge or solid springer. The tas-de-charge, or solid springer, had two advantages: (1) it enabled the stone courses to run straight through the wall, so as to bond the whole together much better; and (2) it lessened the span of the vault, which then required a centering of smaller dimensions. As soon as the ribs were completed, the web or stone shell of the vault was laid on them. In some English work each course of stone was of uniform height from one side to the other; but, as the diagonal rib was longer than either the transverse or wall rib, the courses dipped towards the former, and at the apex of the vault were cut to fit one another. In the early English Gothic period, in consequence of the great span of the vault and the very slight rise or curvature of the web, it was thought better to simplify the construction of the web by introducing intermediate ribs between the wall rib and the diagonal rib and between the diagonal and the transverse ribs; in order to meet the thrust of these intermediate ribs a ridge rib was required, and its prolongation to the wall rib hid the junction of the web at the summit, which was not always very sightly. In France, on the other hand, the web courses were always laid horizontally, and they are therefore of unequal height, increasing towards the diagonal rib. Each course also was given a slight rise in the centre, so as to increase its strength; this enabled the French masons to dispense with the intermediate rib, which was not introduced by them till the 15th century, and then more as a decorative than a constructive feature, as the domical form given to the French web rendered unnecessary the ridge rib, which, with some few exceptions, exists only in England. In both English and French vaulting centering was rarely required for the building of the web, a template (Fr. cerce) being employed to support the stones of each ring until it was complete. In Italy, Germany and Spain the French method of building the web was adopted, with horizontal courses and a domical form. Sometimes, in the case of comparatively narrow compartments, and more especially in clerestories, the wall rib was stilted, and this caused a peculiar twisting of the web; to these twisted surfaces the term ploughshare vaulting is given.
One of the earliest examples of the introduction of the intermediate rib is found in the nave of Lincoln Cathedral, and there the ridge rib is not carried to the wall rib. It was soon found, however, that the construction of the web was much facilitated by additional ribs, and consequently there was a tendency to increase their number, so that in the nave of Exeter Cathedral three intermediate ribs were provided between the wall rib and the diagonal rib. In order to mask the junction of the various ribs, their intersections were ornamented with richly carved bosses, and this practice increased on the introduction of another short rib, known as the lierne, a term in France given to the ridge rib. Lierne ribs are short ribs crossing between the main ribs, and were employed chiefly as decorative features, as, for instance, in the Liebfrauenkirche (1482) of Mühlacker, Germany. One of the best examples of Lierne ribs exists in the vault of the oriel window of Crosby Hall, London. The tendency to increase the number of ribs led to singular results in some cases, as in the choir of Gloucester Cathedral, where the ordinary diagonal ribs become mere ornamental mouldings on the surface of an intersected pointed barrel vault, and again in the cloisters, where the introduction of the fan vault, forming a concave-sided conoid, returned to the principles of the Roman geometrical vault. This is further shown in the construction of these fan vaults, for although in the earliest examples each of the ribs above the tas-de-charge was an independent feature, eventually it was found easier to carve them and the web out of the solid stone, so that the rib and web were purely decorative and had no constructional or independent functions.
Fan vault
This form of vaulting is found in English late Gothic in which the vault is constructed as a single surface of dressed stones, with the resulting conoid forming an ornamental network of blind tracery.
The fan vault would seem to have owed its origin to the employment of centerings of one curve for all the ribs, instead of having separate centerings for the transverse, diagonal wall and intermediate ribs; it was facilitated also by the introduction of the four-centred arch, because the lower portion of the arch formed part of the fan, or conoid, and the upper part could be extended at pleasure with a greater radius across the vault. These ribs were often cut from the same stones as the webs, with the entire vault being treated as a single jointed surface covered in interlocking tracery.
The earliest example is perhaps the east walk of the cloister at Gloucester, with its surface consisting of intricately decorated panels of stonework forming conical structures that rise from the springers of the vault. In later examples, as in King's College Chapel, Cambridge, on account of the great dimensions of the vault, it was found necessary to introduce transverse ribs, which were required to give greater strength. Similar transverse ribs are found in Henry VII's chapel and in the Divinity School at Oxford, where a new development presented itself. One of the defects of the fan vault at Gloucester is the appearance it gives of being half sunk in the wall; to remedy this, in the two buildings just quoted, the complete conoid is detached and treated as a pendant.
Byzantine vaults and domes
The vault of the Basilica of Maxentius, completed by Constantine, was the last great work carried out in Rome before its fall, and two centuries pass before the next important development is found in the Church of the Holy Wisdom (Hagia Sophia) at Constantinople. It is probable that the realization of the great advance in the science of vaulting shown in this church owed something to the eastern tradition of dome vaulting seen in the Assyrian domes, which are known to us only by the representations in the bas-relief from Nimrud, because in the great water cisterns in Istanbul, known as the Basilica Cistern and Bin bir direk (cistern with a thousand and one columns), we find the intersecting groin vaults of the Romans already replaced by small cupolas or domes. These domes, however, are of small dimensions when compared with that projected and carried out by Justinian in the Hagia Sophia. Previous to this the greatest dome was that of the Pantheon at Rome, but this was carried on an immense wall thick, and with the exception of small niches or recesses in the thickness of the wall could not be extended, so that Justinian apparently instructed his architect to provide an immense hemicycle or apse at the eastern end, a similar apse at the western end, and great arches on either side, the walls under which would be pierced with windows. Unlike the Pantheon dome, the upper portions of which are made of concrete, Byzantine domes were made of brick, which were lighter and thinner, but more vulnerable to the forces exerted onto them.
The solution of the problem can be outlined geometrically. If a hemispherical dome is cut by four vertical planes, the intersection gives four semicircular arches; if cut in addition by a horizontal plane tangent to the top of these arches, it describes a circle; that portion of the sphere which is below this circle and between the arches, forming a spherical spandrel, is the pendentive, and its radius is equal to half the diagonal of the square on which the four arches rest. Having obtained a circle for the base of the dome, it is not necessary that the upper portion of the dome should spring from the same level as the arches, or that its domical surface should be a continuation of that of the pendentive. The first and second dome of the Hagia Sophia apparently fell down, so that Justinian determined to raise it, possibly to give greater lightness to the structure, but mainly in order to obtain increased light for the interior of the church. This was effected by piercing it with forty windows – the effect of which, with the light streaming through these windows, was to give the dome the appearance of being suspended in the air. The pendentive which carried the dome rested on four great arches, the thrust of those crossing the church being counteracted by immense buttresses which traversed the aisles, and the other two partly by smaller arches in the apse, the thrust being carried to the outer walls, and to a certain extent by the side walls which were built under the arches. From the description given by Procopius we gather that the centering employed for the great arches consisted of a wall erected to support them during their erection. The construction of the pendentives is not known, but it is surmised that to the top of the pendentives they were built in horizontal courses of brick, projecting one over the other, the projecting angles being cut off afterwards and covered with stucco in which the mosaics were embedded; this was the method employed in the erection of the Périgordian domes, to which we shall return; these, however, were of less diameter than those of the Hagia Sophia, being only about 40 to instead of . The apotheosis of Byzantine architecture, in fact, was reached in Hagia Sophia, for although it formed the model on which all subsequent Byzantine churches were based, so far as their plan was concerned, no domes approaching the former in dimensions were even attempted. The principal difference in some later examples is that which took place in the form of the pendentive on which the dome was carried. Instead of the spherical spandrel of Hagia Sophia, large niches were formed in the angles, as in the Mosque of Damascus, which was built by Byzantine workmen for Al-Walid I in CE 705; these gave an octagonal base on which the hemispherical dome rested; or again, as in the Sassanian palaces of Sarvestan and Firouzabad of the 4th and 5th centuries, when a series of concentric arch rings, projecting one in front of the other, were built, giving also an octagonal base; each of these pendentives is known as a squinch.
There is one other remarkable vault, also built by Justinian, in the Church of the Saints Sergius and Bacchus in Constantinople. The central area of this church was octagonal on plan, and the dome is divided into sixteen compartments; of these eight consist of broad flat bands rising from the centre of each of the walls, and the alternate eight are concave cells over the angles of the octagon, which externally and internally give to the roof the appearance of an umbrella.
Romanesque
Although the dome constitutes the principal characteristic of the Byzantine church, throughout Asia Minor are numerous examples in which the naves are vaulted with the semicircular barrel vault, and this is the type of vault found throughout the south of France in the 11th and 12th centuries, the only change being the occasional substitution of the pointed barrel vault, adopted not only on account of its exerting a less thrust, but because, as pointed out by Fergusson (vol. ii. p. 46), the roofing tiles were laid directly on the vault and a less amount of filling in at the top was required.
The continuous thrust of the barrel vault in these cases was met either by semicircular or pointed barrel vaults on the aisles, which had only half the span of the nave (of this there is an interesting example in the Chapel of Saint John in the Tower of London), or sometimes by half-barrel vaults. The great thickness of the walls required in such constructions, however, would seem to have led to another solution of the problem of roofing over churches with incombustible material, viz. that which is found throughout Périgord and La Charente, where a series of domes carried on pendentives covered over the nave, the chief peculiarities of these domes being the fact that the arches carrying them form part of the pendentives, which are all built in horizontal courses.
The intersecting and groined vault of the Romans was employed in the early Christian churches in Rome, but only over the aisles, which were comparatively of small span; in these there was a tendency to raise the centres of the vaults, which became slightly domical, and in all these cases centering was employed.
Gothic Revival and the Renaissance
One good example of the fan vault is that over the staircase leading to the hall of Christ Church, Oxford, where the complete conoid is displayed in its centre carried on a central column. This vault, not built until 1640, is an example of traditional workmanship, probably transmitted in Oxford in consequence of the late vaulting of the entrance gateways to the colleges. Fan vaulting is peculiar to England, the only example approaching it in France being the pendant of the Lady-chapel at Caudebec-en-Caux, in Normandy.
In France, Germany, and Spain the multiplication of ribs in the 15th century led to decorative vaults of various kinds, but with some singular modifications. Thus, in Germany, masons, recognizing that the rib was no longer a necessary constructive feature, cut it off abruptly, leaving a stump only; in France, on the other hand, they gave still more importance to the rib, by making it of greater depth, piercing it with tracery and hanging pendants from it, and the web became a horizontal stone paving laid on the top of these decorated vertical webs. This is the characteristic of the great Renaissance work in France and Spain; but it soon gave way to Italian influence, when the construction of vaults reverted to the geometrical surfaces of the Romans, without, however, always that economy in centering to which they had attached so much importance, and more especially in small structures. In large vaults, where it constituted an important expense, the chief boast of some of the most eminent architects has been that centering was dispensed with, as in the case of the dome of Santa Maria del Fiore in Florence, built by Filippo Brunelleschi, and Fergusson cites as an example the great dome of the church at Mousta in Malta, erected in the first half of the 19th century, which was built entirely without centering of any kind.
Vaulting and faux-vaulting in the Renaissance and after
It is important to note that whereas Roman vaults, like that of the Pantheon, and Byzantine vaults, like that at Hagia Sophia, were not protected from above (i.e. the vault seen from the inside was the same as the one seen from the outside), the European architects of the Middle Ages protected their vaults with wooden roofs. In other words, one will not see a Gothic vault from the outside. The reasons for this development are hypothetical, but the fact that the roofed basilica form preceded the era when vaults began to be made is certainly to be taken into consideration. That is, the traditional image of a roof took precedence over the vault.
The separation between interior and exterior – and between structure and image – was to be developed very purposefully in the Renaissance and beyond, especially once the dome became reinstated in the Western tradition as a key element in church design. Michelangelo's dome for St. Peter's Basilica in Rome, as redesigned between 1585 and 1590 by Giacomo della Porta, for example, consists of two domes of which, however, only the inner is structural. Balthasar Neumann, in his baroque churches, perfected light-weight plaster vaults supported by wooden frames. These vaults, which exerted no lateral pressures, were perfectly suited for elaborate ceiling frescoes. In St Paul's Cathedral in London there is a highly complex system of vaults and faux-vaults. The dome that one sees from the outside is not a vault, but a relatively light-weight wooden-framed structure resting on an invisible – and for its age highly original – catenary vault of brick, below which is another dome (the dome that one sees from the inside), but of plaster supported by a wood frame. From the inside, one can easily assume that one is looking at the same vault that one sees from the outside.
India
There are two distinctive "other ribbed vaults" (called "Karbandi" in Persian) in India which form no part of the development of European vaults, but have some unusual features; one carries the central dome of the Jumma Musjid at Bijapur (A.D. 1559), and the other is Gol Gumbaz, the tomb of Muhammad Adil Shah II (1626–1660) in the same town. The vault of the latter was constructed over a hall square, to carry a hemispherical dome. The ribs, instead of being carried across the angles only, thus giving an octagonal base for the dome, are carried across to the further pier of the octagon and consequently intersect one another, reducing the central opening to in diameter, and, by the weight of the masonry they carry, serving as counterpoise to the thrust of the dome, which is set back so as to leave a passage about wide round the interior. The internal diameter of the dome is , its height and the ribs struck from four centres have their springing from the floor of the hall. The Jumma Musjid dome was of smaller dimensions, on a square of with a diameter of , and was carried on piers only instead of immensely thick walls as in the tomb; but any thrust which might exist was counteracted by its transmission across aisles to the outer wall.
Islamic architecture
The Muqarnas is a form of vaulting common in Islamic architecture.
Modern vaults
Hyperbolic paraboloids
The 20th century saw great advances in reinforced concrete design. The advent of shell construction and the better mathematical understanding of hyperbolic paraboloids allowed very thin, strong vaults to be constructed with previously unseen shapes. The vaults in the Church of Saint Sava are made of prefabricated concrete boxes. They were built on the ground and lifted to 40 m on chains.
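For reference (not from the article itself), the saddle surface in question can be written in a standard Cartesian form; the scale factors a and b below are arbitrary illustration values. Because a hyperbolic paraboloid is doubly ruled, i.e. it contains two families of straight lines, formwork for such shells can be assembled largely from straight members, which is part of what made thin concrete vaults of this shape practical to build.

```latex
% Standard form of a hyperbolic paraboloid ("hypar") surface; a and b set the two curvatures.
z = \frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}}
```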
Vegetal vault
When made by plants or trees, either artificially or grown on purpose by humans, structures of this type are called tree tunnels.
| Technology | Architectural elements | null |
6110795 | https://en.wikipedia.org/wiki/Passive%20cooling | Passive cooling | Passive cooling is a building design approach that focuses on heat gain control and heat dissipation in a building in order to improve the indoor thermal comfort with low or no energy consumption. This approach works either by preventing heat from entering the interior (heat gain prevention) or by removing heat from the building (natural cooling).
Natural cooling utilizes on-site energy, available from the natural environment, combined with the architectural design of building components (e.g. building envelope), rather than mechanical systems to dissipate heat. Therefore, natural cooling depends not only on the architectural design of the building but on how the site's natural resources are used as heat sinks (i.e. everything that absorbs or dissipates heat). Examples of on-site heat sinks are the upper atmosphere (night sky), the outdoor air (wind), and the earth/soil.
Passive cooling is an important tool for the design of buildings for climate change adaptation, reducing dependency on energy-intensive air conditioning in warming environments.
Overview
Passive cooling covers all natural processes and techniques of heat dissipation and modulation without the use of energy. Some authors consider that minor and simple mechanical systems (e.g. pumps and economizers) can be integrated into passive cooling techniques, as long as they are used to enhance the effectiveness of the natural cooling process. Such applications are also called 'hybrid cooling systems'. The techniques for passive cooling can be grouped into two main categories:
Preventive techniques that aim to provide protection and/or prevention of external and internal heat gains.
Modulation and heat dissipation techniques that allow the building to store and dissipate heat gain through the transfer of heat from heat sinks to the climate. This technique can be the result of thermal mass or natural cooling.
Preventive techniques
Protection from or prevention of heat gains encompasses all the design techniques that minimize the impact of solar heat gains through the building's envelope and of internal heat gains that are generated inside the building due to occupancy and equipment. It includes the following design techniques:
Microclimate and site design - By taking into account the local climate and the site context, the cooling strategies most appropriate for preventing overheating through the envelope of the building can be selected. The microclimate can play a huge role in determining the most favorable building location, which can be assessed by analyzing the combined availability of sun and wind. The bioclimatic chart, the solar diagram and the wind rose are relevant analysis tools in the application of this technique.
Solar control - A properly designed shading system can effectively contribute to minimizing the solar heat gains. Shading both transparent and opaque surfaces of the building envelope will minimize the amount of solar radiation that induces overheating in both indoor spaces and building's structure. By shading the building structure, the heat gain captured through the windows and envelope will be reduced.
Building form and layout - Building orientation and an optimized distribution of interior spaces can prevent overheating. Rooms can be zoned within the buildings in order to reject sources of internal heat gain and/or allocating heat gains where they can be useful, considering the different activities of the building. For example, creating a flat, horizontal plan will increase the effectiveness of cross-ventilation across the plan. Locating the zones vertically can take advantage of temperature stratification. Typically, building zones in the upper levels are warmer than the lower zones due to stratification. Vertical zoning of spaces and activities uses this temperature stratification to accommodate zone uses according to their temperature requirements. Form factor (i.e. the ratio between volume and surface) also plays a major role in the building's energy and thermal profile. This ratio can be used to shape the building form to the specific local climate. For example, more compact forms tend to preserve more heat than less compact forms because the ratio of the internal loads to envelope area is significant.
Thermal insulation - Insulation in the building's envelope will decrease the amount of heat transferred through the facades. This principle applies both to the opaque (walls and roof) and transparent surfaces (windows) of the envelope. Since roofs can be a large contributor to the interior heat load, especially in lighter constructions (e.g. buildings and workshops with roofs made of metal structures), providing thermal insulation there can effectively decrease heat transfer from the roof.
Behavioral and occupancy patterns - Some building management policies, such as limiting the number of people in a given area of the building, can also contribute effectively to the minimization of heat gains inside a building. Building occupants can also contribute to indoor overheating prevention by shutting off the lights and equipment of unoccupied spaces, operating shading when necessary to reduce solar heat gains through windows, or dressing lighter in order to adapt better to the indoor environment by increasing their thermal comfort tolerance.
Internal gain control - More energy-efficient lighting and electronic equipment tend to release less energy thus contributing to less internal heat loads inside the space.
Modulation and heat dissipation techniques
The modulation and heat dissipation techniques rely on natural heat sinks to store and remove the internal heat gains. Examples of natural sinks are night sky, earth soil, and building mass. Therefore, passive cooling techniques that use heat sinks can act to either modulate heat gain with thermal mass or dissipate heat through natural cooling strategies.
Thermal mass - Heat gain modulation of an indoor space can be achieved by the proper use of the building's thermal mass as a heat sink. The thermal mass will absorb and store heat during daytime hours and return it to the space at a later time. Thermal mass can be coupled with night ventilation natural cooling strategy if the stored heat that will be delivered to the space during the evening/night is not desirable.
Natural cooling - Natural cooling refers to the use of ventilation or natural heat sinks for heat dissipation from indoor spaces. Natural cooling can be separated into five different categories: ventilation, night flushing, radiative cooling, evaporative cooling, and earth coupling.
Ventilation
Ventilation as a natural cooling strategy uses the physical properties of air to remove heat or provide cooling to occupants. In select cases, ventilation can be used to cool the building structure, which subsequently may serve as a heat sink.
Cross ventilation - The strategy of cross ventilation relies on wind to pass through the building for the purpose of cooling the occupants. Cross ventilation requires openings on two sides of the space, called the inlet and outlet. The sizing and placement of the ventilation inlets and outlets will determine the direction and velocity of cross ventilation through the building. Generally, an equal (or greater) area of outlet openings must also be provided to ensure adequate cross ventilation.
Stack ventilation - Cross ventilation is an effective cooling strategy; however, wind is an unreliable resource. Stack ventilation is an alternative design strategy that relies on the buoyancy of warm air to rise and exit through openings located at ceiling height. Cooler outside air replaces the rising warm air through carefully designed inlets placed near the floor. A rough sizing sketch for this buoyancy-driven airflow is given below.
These two strategies are part of the ventilative cooling strategies.
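As a rough illustration of the stack ventilation item above, the sketch below uses one common simplified buoyancy equation, Q = Cd * A * sqrt(2 * g * H * (Ti - To) / Ti), to estimate the airflow through an opening. The discharge coefficient, opening area, stack height and temperatures are all assumed demonstration values, not design guidance.

```python
# Rough, simplified estimate of buoyancy-driven (stack) airflow through an opening.
import math

Cd = 0.6      # discharge coefficient of the openings (assumed)
A  = 1.5      # free opening area in m^2 (assumed)
H  = 3.0      # height between inlet and outlet in m (assumed)
Ti = 299.0    # indoor air temperature in kelvin (26 degC, assumed)
To = 291.0    # outdoor air temperature in kelvin (18 degC, assumed)
g  = 9.81     # gravitational acceleration, m/s^2

Q = Cd * A * math.sqrt(2 * g * H * (Ti - To) / Ti)   # volumetric flow, m^3/s
print(f"stack-driven airflow ~ {Q:.2f} m^3/s ({Q * 3600:.0f} m^3/h)")
```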
One specific application of natural ventilation is night flushing.
Night flushing
Night flushing (also known as night ventilation, night cooling, night purging, or nocturnal convective cooling) is a passive or semi-passive cooling strategy that requires increased air movement at night to cool the structural elements of a building. A distinction may be made between free cooling to chill water and night flushing to cool down building thermal mass. To execute night flushing, one typically keeps the building envelope closed during the day. The building structure's thermal mass acts as a sink through the day and absorbs heat gains from occupants, equipment, solar radiation, and conduction through walls, roofs, and ceilings. At night, when the outside air is cooler, the envelope is opened, allowing cooler air to pass through the building so the stored heat can be dissipated by convection. This process reduces the temperature of the indoor air and of the building's thermal mass, allowing convective, conductive, and radiant cooling to take place during the day when the building is occupied. Night flushing is most effective in climates with a large diurnal swing, i.e. a large difference between the daily maximum and minimum outdoor temperature. For optimal performance, the nighttime outdoor air temperature should fall well below the daytime comfort zone limit of , and the air should have low absolute or specific humidity. In hot, humid climates the diurnal temperature swing is typically small and the nighttime humidity stays high, so night flushing has limited effectiveness; it can also introduce high humidity that causes problems and leads to high energy costs if the moisture has to be removed by active systems during the day. Thus, night flushing's effectiveness is limited to sufficiently dry climates. For the night flushing strategy to be effective at reducing indoor temperature and energy usage, the thermal mass must be sized sufficiently and distributed over a wide enough surface area to absorb the space's daily heat gains. Also, the total air change rate must be high enough to remove the internal heat gains from the space at night.
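A back-of-envelope sketch of the sizing question raised in the last two sentences is given below: how much heat a given nighttime airflow can carry away per hour, compared with the heat stored in a concrete slab during the day. All quantities (airflow, temperature difference, slab mass, temperature swing) are illustrative assumptions only.

```python
# Back-of-envelope night-flushing estimate: ventilation cooling power versus the
# sensible heat stored in a concrete slab over the day. All values are assumed.
rho_air, cp_air = 1.2, 1005.0        # air density (kg/m^3) and specific heat (J/kg.K)
airflow = 1.0                        # nighttime ventilation rate, m^3/s (assumed)
dT_air  = 6.0                        # indoor-outdoor temperature difference, K (assumed)

cooling_power = rho_air * cp_air * airflow * dT_air           # watts removed

slab_mass, cp_conc, dT_slab = 20000.0, 880.0, 3.0             # kg, J/kg.K, K swing (assumed)
stored_heat = slab_mass * cp_conc * dT_slab                   # joules to purge overnight

hours = stored_heat / cooling_power / 3600
print(f"cooling power ~ {cooling_power / 1000:.1f} kW; "
      f"stored slab heat ~ {stored_heat / 3.6e6:.1f} kWh; "
      f"purge time ~ {hours:.1f} h")
```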
There are three ways night flushing can be achieved in a building:
Natural night flushing by opening windows at night, letting wind-driven or buoyancy-driven airflow cool the space, and then closing windows during the day.
Mechanical night flushing by forcing air mechanically through ventilation ducts at night at a high airflow rate and supplying air to the space during the day at a code-required minimum airflow rate.
Mixed-mode night flushing through a combination of natural ventilation and mechanical ventilation, also known as mixed-mode ventilation, by using fans to assist the natural nighttime airflow.
These three strategies are part of the ventilative cooling strategies.
There are numerous benefits to using night flushing as a cooling strategy for buildings, including improved comfort and a shift in peak energy load. Energy is most expensive during the day. By implementing night flushing, the usage of mechanical ventilation is reduced during the day, leading to energy and money savings.
There are also a number of limitations to using night flushing, such as usability, security, reduced indoor air quality, humidity, and poor room acoustics. For natural night flushing, the process of manually opening and closing windows every day can be tiresome, especially in the presence of insect screens. This problem can be eased with automated windows or ventilation louvers, such as in the Manitoba Hydro Place. Natural night flushing also requires windows to be open at night when the building is most likely unoccupied, which can raise security issues. If outdoor air is polluted, night flushing can expose occupants to harmful conditions inside the building. In noisy city locations, the opening of windows can create poor acoustical conditions inside the building. In humid climates, night flushing can introduce humid air, typically above 90% relative humidity during the coolest part of the night. This moisture can accumulate in the building overnight, increasing indoor humidity during the day and leading to comfort problems and even mold growth.
Radiative cooling
Evaporative cooling
This design relies on the evaporative process of water to cool the incoming air while simultaneously increasing the relative humidity. A saturated filter is placed at the supply inlet so the natural process of evaporation can cool the supply air. Apart from the energy to drive the fans, water is the only other resource required to provide conditioning to indoor spaces. The effectiveness of evaporative cooling is largely dependent on the humidity of the outside air; drier air produces more cooling. A study of field performance results in Kuwait revealed that power requirements for an evaporative cooler are approximately 75% less than the power requirements for a conventional packaged unit air-conditioner. As for interior comfort, a study found that evaporative cooling reduced inside air temperature by 9.6 °C compared to outdoor temperature. An innovative passive system uses evaporating water to cool the roof so that a major portion of solar heat does not come inside.
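The dependence on outdoor humidity can be sketched with the usual effectiveness relation for a direct evaporative cooler, in which the supply temperature approaches the wet-bulb temperature: T_supply = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb). The effectiveness value and the temperature pairs below are assumptions for illustration only.

```python
# Simple direct evaporative cooling estimate: the supply air approaches the
# wet-bulb temperature according to the cooler's effectiveness (assumed 0.8).
def supply_temp(t_dry_bulb, t_wet_bulb, effectiveness=0.8):
    """T_supply = T_db - eff * (T_db - T_wb); drier air (lower T_wb) cools more."""
    return t_dry_bulb - effectiveness * (t_dry_bulb - t_wet_bulb)

print(supply_temp(38.0, 22.0))   # hot, dry outdoor air   -> roughly 25 degC supply
print(supply_temp(32.0, 28.0))   # hot, humid outdoor air -> only about 29 degC supply
```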
Ancient Egypt used evaporative cooling; for instance, reeds were hung in windows and were moistened with trickling water.
Evaporation from the soil and transpiration from plants also provides cooling; the water released from the plant evaporates. Gardens and potted plants are used to drive cooling, as in the of a , the of a , and so on.
Earth coupling
Earth coupling uses the moderate and consistent temperature of the soil to act as a heat sink to cool a building through conduction. This passive cooling strategy is most effective when earth temperatures are cooler than ambient air temperature, such as in hot climates.
Direct coupling or earth sheltering occurs when a building uses earth as a buffer for the walls. The earth acts as a heat sink and can effectively mitigate temperature extremes. Earth sheltering improves the performance of building envelopes by reducing heat losses and also reduces heat gains by limiting infiltration.
Indirect coupling means that a building is coupled with the earth by means of earth ducts. An earth duct is a buried tube that acts as an avenue for supply air to travel through before entering the building. The supply air is cooled by conductive heat transfer between the tubes and surrounding soil. Therefore, earth ducts will not perform well as a source of cooling unless the soil temperature is lower than the desired room air temperature. Earth ducts typically require long tubes to cool the supply air to an appropriate temperature before entering the building. A fan is required to draw the air from the earth duct into the building. Some of the factors that affect the performance of an earth duct are: duct length, number of bends, thickness of duct wall, depth of duct, diameter of the duct, and air velocity.
In conventional buildings
There are "smart-roof coatings" and "smart windows" for cooling that switches to warming during cold temperatures. The whitest paint formulation can reflect up to 98.1% of sunlight.
| Technology | Heating and cooling | null |
6115906 | https://en.wikipedia.org/wiki/Super%20star%20cluster | Super star cluster | A super star cluster (SSC) is a very massive young open cluster that is thought to be the precursor of a globular cluster. These clusters called "super" because they are relatively more luminous and contain more mass than other young star clusters. The SSC, however, does not have to physically be larger than other clusters of lower mass and luminosity. They typically contain a very large number of young, massive stars that ionize a surrounding HII region or a so-called "Ultra dense HII region (UDHII)" in the Milky Way Galaxy or in other galaxies (however, SSCs do not always have to be inside an HII region). An SSC's HII region is in turn surrounded by a cocoon of dust. In many cases, the stars and the HII regions will be invisible to observations in certain wavelengths of light, such as the visible spectrum, due to high levels of extinction. As a result, the youngest SSCs are best observed and photographed in radio and infrared. SSCs, such as Westerlund 1 (Wd1), have been found in the Milky Way Galaxy. However, most have been observed in farther regions of the universe. In the galaxy M82 alone, 197 young SSCs have been observed and identified using the Hubble Space Telescope.
Generally, SSCs have been seen to form in the interactions between galaxies and in regions of high amounts of star formation with high enough pressures to satisfy the properties needed for the formation of a star cluster. These regions can include newer galaxies with much new star formation, dwarf starburst galaxies, arms of a spiral galaxy that have a high star formation rate, and merging galaxies. In an Astronomical Journal article published in 1996, using ultraviolet (UV) images of star-forming rings in five different barred galaxies taken by the Hubble Space Telescope, numerous star clusters were found in clumps within the rings which had high rates of star formation. These clusters were found to have masses of about to , ages of about 100 Myr, and radii of about 5 pc, and are thought to evolve into globular clusters later in their lifetimes. These properties match those found in SSCs.
Characteristics and properties
The typical characteristics and properties of SSCs:
Mass
Radius ≈ 5 pc ≈
Age ≈ 100 Myr (although other sources state that observed SSCs have an age of 1 Gyr)
Large electron densities = – (this is a property of the HII region associated with the SSC)
Pressures = –. (this is a property of the HII region associated with the SSC)
Hubble Space Telescope contributions
Given the relatively small size of SSCs compared to their host galaxies, astronomers have had trouble finding them in the past due to the limited resolution of the ground-based and space telescopes at the time. With the introduction of the Hubble Space Telescope (HST) in the 1990s, finding SSCs (as well as other astronomical objects) became much easier thanks to the higher resolution of the HST (angular resolution of ~1/10 arcsecond). This has not only allowed astronomers to see SSCs, but also allowed for them to measure their properties as well as the properties of the individual stars within the SSC. Recently, a massive star, Westerlund 1-26, was discovered in the SSC Westerlund 1 in the Milky Way. The radius of this star is thought to be larger than the radius of Jupiter's orbit around the Sun. Essentially, the HST searches the night sky, specifically nearby galaxies, for star clusters and "dense stellar objects" to see if any have the properties similar to that of a SSC or an object that would, in its lifetime, evolve into a globular cluster.
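To illustrate why sub-arcsecond resolution matters here, the sketch below uses the small-angle approximation to estimate the apparent angular size of a roughly 5 pc cluster at a few distances. The example distances (a few kiloparsecs for a Milky Way cluster such as Westerlund 1, a few megaparsecs for an M82-like galaxy) are approximate assumptions for illustration.

```python
# Apparent angular size of a ~5 pc cluster at different (approximate) distances,
# using the small-angle approximation theta ~ size / distance.
def angular_size_arcsec(size_pc, distance_pc):
    return 206265.0 * size_pc / distance_pc   # 206265 arcseconds per radian

examples = {
    "within the Milky Way (~4 kpc, roughly Westerlund 1)": 4.0e3,
    "a nearby starburst galaxy (~3.5 Mpc, roughly M82)":   3.5e6,
    "a more distant galaxy (~20 Mpc)":                     2.0e7,
}
for label, d_pc in examples.items():
    print(f"{label}: {angular_size_arcsec(5.0, d_pc):8.3f} arcsec")
```

At megaparsec distances the whole cluster subtends only a few tenths of an arcsecond, comparable to HST's ~1/10 arcsecond resolution, which is why such objects were hard to identify with earlier instruments.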
List of SSCs
| Physical sciences | Stellar astronomy | Astronomy |
3461736 | https://en.wikipedia.org/wiki/Data%20and%20information%20visualization | Data and information visualization | Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization). When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentational or explanatory visualization), it is typically called information graphics.
Data visualization is concerned with visually presenting sets of primarily quantitative raw data in a schematic form. The visual formats used in data visualization include tables, charts and graphs (e.g. pie charts, bar charts, line charts, area charts, cone charts, pyramid charts, donut charts, histograms, spectrograms, cohort charts, waterfall charts, funnel charts, bullet graphs, etc.), diagrams, plots (e.g. scatter plots, distribution plots, box-and-whisker plots), geospatial maps (such as proportional symbol maps, choropleth maps, isopleth maps and heat maps), figures, correlation matrices, percentage gauges, etc., which sometimes can be combined in a dashboard.
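As a minimal illustration of two of the chart types listed above, the sketch below builds a bar chart and a line chart from a small made-up dataset, assuming the matplotlib library is available; the file name and data values are arbitrary.

```python
# Minimal example of two common data visualization formats (bar and line charts),
# built with matplotlib from a small hypothetical dataset.
import matplotlib.pyplot as plt

years  = [2019, 2020, 2021, 2022, 2023]
values = [120, 95, 140, 160, 155]            # hypothetical yearly measurements

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(years, values, color="steelblue")    # bar chart: compare discrete categories
ax1.set_title("Bar chart")
ax2.plot(years, values, marker="o")          # line chart: emphasise the trend over time
ax2.set_title("Line chart")
for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Value")
fig.tight_layout()
plt.savefig("example_charts.png")            # or plt.show() in an interactive session
```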
Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information, and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization include maps (such as tree maps), animations, infographics, Sankey diagrams, flow charts, network diagrams, semantic networks, entity-relationship diagrams, Venn diagrams, timelines, mind maps, etc.
Emerging technologies like virtual, augmented and mixed reality have the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user's visual perception and cognition. In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected from databases, information systems, file systems, documents, business data, etc. (presentational and exploratory visualization) which is different from the field of scientific visualization, where the goal is to render realistic images based on physical and spatial scientific data to confirm or reject hypotheses (confirmatory visualization).
Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion. Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider group of non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research. In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models. In business, data and information visualization can constitute a part of data storytelling, where they are paired with a coherent narrative structure or storyline to contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience into making a decision or taking an action in order to create business value. This can be contrasted with the field of statistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them perform exploratory data analysis or to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important.
The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines of descriptive statistics (as early as the 18th century), visual communication, graphic design, cognitive science and, more recently, interactive computer graphics and human-computer interaction. Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science. The neighboring field of visual analytics marries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information. On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminate misinformation, manipulate public perception and divert public opinion toward a certain agenda. Thus data visualization literacy has become an important component of data and information literacy in the information age akin to the roles played by textual, mathematical and visual literacy in the past.
Overview
The field of data and information visualization has emerged "from research in human–computer interaction, computer science, graphics, visual design, psychology, and business methods. It is increasingly applied as a critical component in scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, and drug discovery".
Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."
Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.), statistics (hypothesis test, regression, PCA, etc.), data mining (association mining, etc.), and machine learning methods (clustering, classification, decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and is typically followed by more analytical or formal analysis, such as statistical hypothesis testing.
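The idea of visualization as a hypothesis-generation step followed by more formal analysis can be sketched as follows (a hedged example assuming Python with numpy, matplotlib and scipy; the data are synthetic): an exploratory histogram hints at two sub-populations, and a two-sample t-test then checks that visual hunch.

```python
# Visualization as hypothesis generation, followed by a formal test.
# The data and group labels are synthetic, invented for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=200)
group_b = rng.normal(loc=6.5, scale=1.0, size=200)
combined = np.concatenate([group_a, group_b])

# Exploratory step: the bimodal histogram suggests two groups.
plt.hist(combined, bins=30)
plt.title("Exploratory view: possible bimodality")
plt.show()

# Confirmatory step: a two-sample t-test on the suspected groups.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```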
To communicate information clearly and efficiently, data visualization uses statistical graphics, plots, information graphics and other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable, and usable, but can also be reductive. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps in data analysis or data science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".
Indeed, Fernanda Viegas and Martin M. Wattenberg suggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.
Data visualization is closely related to information graphics, information visualization, scientific visualization, exploratory data analysis and statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.
In the commercial environment data visualization is often referred to as dashboards. Infographics are another very common form of data visualization.
Principles
Characteristics of effective graphical displays
Edward Tufte has explained that users of information displays are executing particular analytical tasks such as making comparisons. The design principle of the information graphic should support the analytical task. As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.
In his 1983 book The Visual Display of Quantitative Information, Edward Tufte defines 'graphical displays' and principles for effective graphical display in the following passage:
"Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should:
show the data
induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
avoid distorting what the data has to say
present many numbers in a small space
make large data sets coherent
encourage the eye to compare different pieces of data
reveal the data at several levels of detail, from a broad overview to the fine structure
serve a reasonably clear purpose: description, exploration, tabulation, or decoration
be closely integrated with the statistical and verbal descriptions of a data set.
Graphics reveal data. Indeed, graphics can be more precise and revealing than conventional statistical computations."
For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."
Not applying these principles may result in misleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte, chartjunk refers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.
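Tufte's advice to maximize the data-ink ratio can be illustrated with a small, hedged sketch (assuming Python and matplotlib; the figures are invented): the same bar chart is drawn once with decorative extras and once with most non-data ink removed.

```python
# The same invented bar chart drawn twice: once with decorative extras
# ("chartjunk"), once with non-data ink (spines, grid, tick marks) removed.
import matplotlib.pyplot as plt

labels = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 135, 128, 150]

fig, (ax_junk, ax_clean) = plt.subplots(1, 2, figsize=(10, 3))

# Decorated version: heavy grid and redundant colouring add no information.
ax_junk.bar(labels, sales, color=["red", "green", "blue", "orange"])
ax_junk.grid(True, which="both", linewidth=1.5)
ax_junk.set_title("Decorated")

# Cleaner version: remove non-data ink and label the bars directly.
ax_clean.bar(labels, sales, color="gray")
for side in ("top", "right", "left"):
    ax_clean.spines[side].set_visible(False)
ax_clean.set_yticks([])
for i, value in enumerate(sales):
    ax_clean.text(i, value, str(value), ha="center", va="bottom")
ax_clean.set_title("Higher data-ink ratio")

plt.tight_layout()
plt.show()
```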
The Congressional Budget Office summarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report.
Quantitative messages
Author Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message:
Time-series: A single variable is captured over a period of time, such as the unemployment rate or temperature measures over a 10-year period. A line chart may be used to demonstrate the trend over time.
Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by sales persons (the category, with each sales person a categorical subdivision) during a single period. A bar chart may be used to show the comparison across the sales persons.
Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market.
Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show comparison of the actual versus the reference amount.
Frequency distribution: Shows the number of observations of a particular variable for a given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis. A boxplot helps visualize key statistics about the distribution, such as median, quartiles, outliers, etc.
Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message.
Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison.
Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used.
Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part of exploratory data analysis.
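As a hedged illustration of two of the message types above (assuming Python with matplotlib; all figures are invented), the sketch below renders a time series as a line chart and a ranking as a sorted bar chart:

```python
# Two of Few's message types: a time series (line chart) and a ranking
# (sorted bar chart). All values and names are invented for illustration.
import matplotlib.pyplot as plt

# Time-series message: one variable captured over time.
years = list(range(2015, 2025))
rate = [5.3, 4.9, 4.4, 3.9, 3.7, 8.1, 5.4, 3.6, 3.6, 3.9]

# Ranking message: a measure per category, sorted in descending order.
salespeople = ["Ana", "Bo", "Cam", "Dee"]
sales = [310, 450, 275, 390]
order = sorted(zip(sales, salespeople), reverse=True)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))

ax1.plot(years, rate, marker="o")
ax1.set_title("Time series (line chart)")

ax2.bar([name for _, name in order], [value for value, _ in order])
ax2.set_title("Ranking (sorted bar chart)")

plt.tight_layout()
plt.show()
```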
Visual perception and data visualization
A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.
Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).
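A small sketch (assuming Python and matplotlib; values invented) makes the point concrete: for nearly equal values, the length-encoded bar chart is easier to rank by eye than the area- and angle-encoded pie chart.

```python
# Nearly equal values shown as lengths (bar chart) and as angles/areas
# (pie chart); the bars make the small differences easier to perceive.
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D", "E"]
values = [20, 21, 19, 22, 18]   # invented, deliberately close together

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(9, 3))

ax_bar.bar(labels, values)       # differences in length: easy to rank
ax_bar.set_title("Bar chart (length)")

ax_pie.pie(values, labels=labels)  # differences in angle/area: harder to rank
ax_pie.set_title("Pie chart (area)")

plt.tight_layout()
plt.show()
```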
Human perception/cognition and data visualization
Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations. Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving. Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means of data exploration.
Studies have shown that individuals used on average 19% fewer cognitive resources, and were 4.5% better able to recall details, when working with data visualizations than with text.
History
The modern study of visualization started with computer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization in Scientific Computing. Since then there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH". They have been devoted to the general topics of data visualization, information visualization and scientific visualization, and more specific areas such as volume visualization.
In 1786, William Playfair published the first presentation graphics.
There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines. Michael Friendly and Daniel J. Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Stellar data, such as the locations of stars, were visualized on the walls of caves (such as those found in Lascaux Cave in southern France) as early as the Pleistocene era. Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information.
The first documented data visualization can be traced back to 1160 BC with the Turin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about quarrying of those resources. Such maps can be categorized as thematic cartography, a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, as well as ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example, Linear B tablets of Mycenae provided a visualization of information regarding Late Bronze Age trade in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns, earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC, and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy [–] in Alexandria would serve as reference standards until the 14th century.
The invention of paper and parchment allowed further development of visualizations throughout history. One surviving example is a graph from the 10th or possibly 11th century, intended as an illustration of planetary movement and used in an appendix of a textbook in monastery schools. The graph apparently was meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, since the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time.
By the 16th century, techniques and instruments for precise observation and measurement of physical quantities and of geographic and celestial position were well developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately. Very early, the measurement of time led scholars to develop innovative ways of visualizing data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596).
The French philosopher and mathematician René Descartes, together with Pierre de Fermat, developed analytic geometry and the two-dimensional coordinate system, which heavily influenced the practical methods of displaying and calculating values. Fermat and Blaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data. According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics. In the second half of the 20th century, Jacques Bertin used quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".
John Tukey and Edward Tufte pushed the bounds of data visualization: Tukey with his new statistical approach of exploratory data analysis, and Tufte with his book "The Visual Display of Quantitative Information", which paved the way for refining data visualization techniques for audiences beyond statisticians. With the progression of technology came the progression of data visualization, starting with hand-drawn visualizations and evolving into more technical applications, including interactive designs leading to software visualization.
Programs like SAS, SOFA, R, Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other, more focused data visualization tools tailored to individuals, built with programming languages and libraries such as D3, Python and JavaScript, also make the visualization of quantitative data possible. Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs like The Data Incubator or paid programs like General Assembly.
Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization. The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.
Terminology
Data visualization involves specific terminology, some of which is derived from statistics. For example, author Stephen Few defines two types of data, which are used in combination to support a meaningful analysis or visualization:
Categorical: Represent groups of objects with a particular characteristic. Categorical variables can be either nominal or ordinal. Nominal variables, such as gender, have no inherent order between their categories. Ordinal variables are categories with an order, for example the age group someone falls into.
Quantitative: Represent measurements, such as the height of a person or the temperature of an environment. Quantitative variables can be either continuous or discrete. Continuous variables capture the idea that measurements can always be made more precisely, while discrete variables have only a finite number of possibilities, such as a count of some outcome or an age measured in whole years.
The distinction between quantitative and categorical variables is important because the two types require different methods of visualization.
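A minimal sketch of these variable types (assuming Python with the pandas library, which the article does not prescribe; column names and values are invented) might look like this:

```python
# Nominal and ordinal categorical columns alongside discrete and continuous
# quantitative columns; the data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "F"],                       # nominal: no order
    "age_group": pd.Categorical(
        ["18-29", "30-44", "45-64"],
        categories=["18-29", "30-44", "45-64"],
        ordered=True,                                 # ordinal: ordered categories
    ),
    "children": [0, 2, 1],                            # discrete quantitative
    "height_cm": [162.5, 178.1, 170.3],               # continuous quantitative
})

print(df.dtypes)
print(df["age_group"].cat.ordered)  # True: pandas records the category order
```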
Two primary types of information displays are tables and graphs.
A table contains quantitative data organized into rows and columns with categorical labels. It is primarily used to look up specific values. For example, a table might have categorical column labels representing the name (a qualitative variable) and age (a quantitative variable), with each row of data representing one person (the sampled experimental unit or category subdivision).
A graph is primarily used to show relationships among data and portrays values encoded as visual objects (e.g., lines, bars, or points). Numerical values are displayed within an area delineated by one or more axes. These axes provide scales (quantitative and categorical) used to label and assign values to the visual objects. Many graphs are also referred to as charts.
Eppler and Lengler have developed the "Periodic Table of Visualization Methods," an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound. In "Visualization Analysis and Design" Tamara Munzner writes "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."
Techniques
Other techniques
Cartogram
Cladogram (phylogeny)
Concept Mapping
Dendrogram (classification)
Information visualization reference model
Grand tour
Graph drawing
Heatmap
HyperbolicTree
Multidimensional scaling
Parallel coordinates
Problem solving environment
Treemapping
Interactivity
Interactive data visualization enables direct actions on a graphical plot to change elements and link between multiple plots.
Interactive data visualization has been a pursuit of statisticians since the late 1960s. Examples of the developments can be found on the American Statistical Association video lending library.
Common interactions include:
Brushing: works by using the mouse to control a paintbrush, directly changing the color or glyph of elements of a plot. The paintbrush is sometimes a pointer and sometimes works by drawing an outline of sorts around points; the outline is sometimes irregularly shaped, like a lasso. Brushing is most commonly used when multiple plots are visible and some linking mechanism exists between the plots. There are several different conceptual models for brushing and a number of common linking mechanisms. Brushing scatterplots can be a transient operation, in which points in the active plot retain their new characteristics only while they are enclosed or intersected by the brush, or a persistent operation, so that points retain their new appearance after the brush has been moved away. Transient brushing is usually chosen for linked brushing, as just described.
Painting: Persistent brushing is useful when we want to group the points into clusters and then proceed to use other operations, such as the tour, to compare the groups. It is becoming common terminology to call the persistent operation painting.
Identification: which could also be called labeling or label brushing, is another plot manipulation that can be linked. Bringing the cursor near a point or edge in a scatterplot, or a bar in a barchart, causes a label to appear that identifies the plot element. It is widely available in many interactive graphics, and is sometimes called mouseover.
Scaling: maps the data onto the window, and changes in the area of the mapping function help us learn different things from the same plot. Scaling is commonly used to zoom in on crowded regions of a scatterplot, and it can also be used to change the aspect ratio of a plot, to reveal different features of the data.
Linking: connects elements selected in one plot with elements in another plot. The simplest kind of linking is one-to-one, where both plots show different projections of the same data and a point in one plot corresponds to exactly one point in the other. When using area plots, brushing any part of an area has the same effect as brushing it all and is equivalent to selecting all cases in the corresponding category. Even when some plot elements represent more than one case, the underlying linking rule still links one case in one plot to the same case in other plots. Linking can also be by a categorical variable, such as a subject id, so that all data values corresponding to that subject are highlighted in all the visible plots. A static sketch of linked brushing appears after this list.
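The following static, non-interactive sketch (assuming Python with numpy and matplotlib; the data and the brush rectangle are invented) illustrates the one-to-one linking rule: points falling inside a fixed rectangular "brush" in one scatterplot are highlighted in a second, linked scatterplot. Real brushing would update the selection as the pointer moves; here the brush is fixed so the example stays self-contained.

```python
# A static sketch of linked brushing: cases inside a fixed brush rectangle
# in the left plot are highlighted in both plots. Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 300))   # three linked variables per case

# The "brush": a fixed rectangle in (x, y) space.
x0, x1, y0, y1 = -0.5, 1.0, -0.5, 1.0
selected = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)

fig, (ax_left, ax_right) = plt.subplots(1, 2, figsize=(9, 4))

# Left plot: brushing happens here.
ax_left.scatter(x[~selected], y[~selected], color="lightgray")
ax_left.scatter(x[selected], y[selected], color="crimson")
ax_left.add_patch(Rectangle((x0, y0), x1 - x0, y1 - y0,
                            fill=False, linestyle="--"))
ax_left.set_title("Brushed plot (x vs y)")

# Right plot: the same cases highlighted via one-to-one linking.
ax_right.scatter(x[~selected], z[~selected], color="lightgray")
ax_right.scatter(x[selected], z[selected], color="crimson")
ax_right.set_title("Linked plot (x vs z)")

plt.tight_layout()
plt.show()
```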
Other perspectives
There are different approaches to the scope of data visualization. One common focus is on information presentation, as in Friedman (2008). Friendly (2008) presumes two main parts of data visualization: statistical graphics and thematic cartography. Along these lines, the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization:
Articles & resources
Displaying connections
Displaying data
Displaying news
Displaying websites
Mind maps
Tools and services
All these subjects are closely related to graphic design and information representation.
On the other hand, from a computer science perspective, Frits H. Post in 2002 categorized the field into sub-fields:
Information visualization
Interaction techniques and architectures
Modelling techniques
Multiresolution methods
Visualization algorithms and techniques
Volume visualization
Within the Harvard Business Review, Scott Berinato developed a framework for approaching data visualisation. To start thinking visually, users must consider two questions: 1) what you have, and 2) what you're doing. The first step is identifying what you want visualised: it may be data-driven, such as profit over the past ten years, or a conceptual idea, such as how a specific organisation is structured. Once this question is answered, one can then focus on whether they are trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Scott Berinato combines these questions to give four types of visual communication that each have their own goals.
These four types of visual communication are as follows:
idea illustration (conceptual & declarative).
Used to teach, explain and/or simplify concepts. For example, organisation charts and decision trees.
idea generation (conceptual & exploratory).
Used to discover, innovate and solve problems. For example, a whiteboard after a brainstorming session.
visual discovery (data-driven & exploratory).
Used to spot trends and make sense of data. This type of visual is more common with large and complex data where the dataset is somewhat unknown and the task is open-ended.
everyday data-visualisation (data-driven & declarative).
The most common and simple type of visualisation used for affirming and setting context. For example, a line graph of GDP over time.
Applications
Data and information visualization insights are being applied in areas such as:
Scientific research
Digital libraries
Data mining
Information graphics
Financial data analysis
Health care
Market studies
Manufacturing production control
Crime mapping
eGovernance and Policy Modeling
Digital Humanities
Data Art
Organization
Notable academic and industry laboratories in the field are:
Adobe Research
IBM Research
Google Research
Microsoft Research
Panopticon Software
Scientific Computing and Imaging Institute
Tableau Software
University of Maryland Human-Computer Interaction Lab
Conferences in this field, ranked by significance in data visualization research, are:
IEEE Visualization: An annual international conference on scientific visualization, information visualization, and visual analytics. Conference is held in October.
ACM SIGGRAPH: An annual international conference on computer graphics, convened by the ACM SIGGRAPH organization. Conference dates vary.
Conference on Human Factors in Computing Systems (CHI): An annual international conference on human–computer interaction, hosted by ACM SIGCHI. Conference is usually held in April or May.
Eurographics: An annual Europe-wide computer graphics conference, held by the European Association for Computer Graphics. Conference is usually held in April or May.
For further examples, see: :Category:Computer graphics organizations
Data presentation architecture
Data presentation architecture (DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge.
Historically, the term data presentation architecture is attributed to Kelly Lautt: "Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value of Business Intelligence. Data presentation architecture weds the science of numbers, data and statistics in discovering valuable information from data and making it usable, relevant and actionable with the arts of data visualization, communications, organizational psychology and change management in order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
Objectives
DPA has two main objectives:
To use data to provide knowledge in the most efficient manner possible (minimize noise, complexity, and unnecessary data or detail given each audience's needs and roles)
To use data to provide knowledge in the most effective manner possible (provide relevant, timely and complete data to each audience member in a clear and understandable manner that conveys important meaning, is actionable and can affect understanding, behavior and decisions)
Scope
With the above objectives in mind, the actual work of data presentation architecture consists of:
Creating effective delivery mechanisms for each audience member depending on their role, tasks, locations and access to technology
Defining important meaning (relevant knowledge) that is needed by each audience member in each context
Determining the required periodicity of data updates (the currency of the data)
Determining the right timing for data presentation (when and how often the user needs to see the data)
Finding the right data (subject area, historical reach, breadth, level of detail, etc.)
Utilizing appropriate analysis, grouping, visualization, and other presentation formats
Related fields
DPA work shares commonalities with several other fields, including:
Business analysis in determining business goals, collecting requirements, mapping processes.
Business process improvement in that its goal is to improve and streamline actions and decisions in furtherance of business goals
Data visualization in that it uses well-established theories of visualization to add or highlight meaning or importance in data presentation.
Digital humanities explores more nuanced ways of visualising complex data.
Information architecture, but information architecture's focus is on unstructured data and therefore excludes both analysis (in the statistical/data sense) and direct transformation of the actual content (data, for DPA) into new entities and combinations.
HCI and interaction design, since many of the principles in how to design interactive data visualisation have been developed cross-disciplinary with HCI.
Visual journalism and data-driven journalism or data journalism: Visual journalism is concerned with all types of graphic facilitation of the telling of news stories, and data-driven and data journalism are not necessarily told with data visualisation. Nevertheless, the field of journalism is at the forefront in developing new data visualisations to communicate data.
Graphic design, conveying information through styling, typography, position, and other aesthetic concerns.
| Mathematics | Statistics and probability | null |
3464219 | https://en.wikipedia.org/wiki/Medical%20genetics | Medical genetics | Medical genetics is the branch of medicine that involves the diagnosis and management of hereditary disorders. Medical genetics differs from human genetics in that human genetics is a field of scientific research that may or may not apply to medicine, while medical genetics refers to the application of genetics to medical care. For example, research on the causes and inheritance of genetic disorders would be considered within both human genetics and medical genetics, while the diagnosis, management, and counselling people with genetic disorders would be considered part of medical genetics.
In contrast, the study of typically non-medical phenotypes such as the genetics of eye color would be considered part of human genetics, but not necessarily relevant to medical genetics (except in situations such as albinism). Genetic medicine is a newer term for medical genetics and incorporates areas such as gene therapy, personalized medicine, and the rapidly emerging new medical specialty, predictive medicine.
Scope
Medical genetics encompasses many different areas, including clinical practice of physicians, genetic counselors, and nutritionists, clinical diagnostic laboratory activities, and research into the causes and inheritance of genetic disorders. Examples of conditions that fall within the scope of medical genetics include birth defects and dysmorphology, intellectual disabilities, autism, mitochondrial disorders, skeletal dysplasia, connective tissue disorders, cancer genetics, and prenatal diagnosis. Medical genetics is increasingly becoming relevant to many common diseases. Overlaps with other medical specialties are beginning to emerge, as recent advances in genetics are revealing etiologies for morphologic, endocrine, cardiovascular, pulmonary, ophthalmologic, renal, psychiatric, and dermatologic conditions. The medical genetics community is increasingly involved with individuals who have undertaken elective genetic and genomic testing.
Subspecialties
In some ways, many of the individual fields within medical genetics are hybrids between clinical care and research. This is due in part to recent advances in science and technology (for example, see the Human Genome Project) that have enabled an unprecedented understanding of genetic disorders.
Clinical genetics
Clinical genetics is a medical specialty with particular attention to hereditary disorders. Branches of clinical genetics include:
1. Prenatal genetics
Couples at risk of having a child with a genetic disorder, seen preconception or during pregnancy
High risk prenatal screening results
Abnormal fetal ultrasound
2. Pediatric genetics
Birth defects
developmental delay, autism, epilepsy
short stature and skeletal dysplasia
3. Adult genetics
cardiomyopathy and cardiac dysrhythmias
inherited kidney disease
dementia and neurodegeneration
connective tissue disease
4. Cancer genetics
breast/ovarian cancer
bowel cancer
endocrine tumors
Examples of genetic syndromes that are commonly seen in the genetics clinic include chromosomal rearrangements (e.g. Down syndrome, 22q11.2 deletion syndrome, Turner syndrome, Williams syndrome), Fragile X syndrome, Marfan syndrome, neurofibromatosis, Huntington disease, familial adenomatous polyposis, and many more.
Training and qualification
In Europe, the training of physicians in Clinical/Medical Genetics is overseen by the Union Européenne des Médecins Spécialistes (UEMS). This organization aims to harmonize and raise the standards of medical specialist training across Europe. The UEMS has established European Training Requirements (ETR) for Medical Genetics to guide the education and training of medical geneticists.
Individuals seeking acceptance into clinical genetics training programs must hold an MD, or in some countries, an MB ChB or MB BS degree. These qualifications ensure that trainees have the foundational medical knowledge required to specialize in Medical Genetics. The optimal training program involves a total of five years: one year of general medical training (the "common trunk", often covering fields such as general practice, pediatrics, obstetrics and gynecology, neurology, psychiatry, and internal medicine) followed by four years of specialized training in Medical Genetics. This specialized training should include at least two years of clinical patient care and at least six months in genetic laboratory diagnostics. Trainees' progress is evaluated through a structured program that begins with observation and progresses to independent practice under supervision, culminating in the ability to manage complex cases independently.
Final certification involves a comprehensive assessment, which may include national examinations or the European Certificate in Medical Genetics and Genomics (ECMGG). This certificate serves as a benchmark for high standards in the specialty across Europe and is increasingly recognized by various national regulatory authorities.
In the United States, physicians who practice clinical genetics are accredited by the American Board of Medical Genetics and Genomics (ABMGG). In order to become a board-certified practitioner of Clinical Genetics, a physician must complete a minimum of 24 months of training in a program accredited by the ABMGG. Individuals seeking acceptance into clinical genetics training programs must hold an M.D. or D.O. degree (or their equivalent) and have completed a minimum of 12 months of training in an ACGME-accredited residency program in internal medicine, pediatrics, obstetrics and gynecology, or other medical specialty.
In Australia and New Zealand, clinical genetics is a three-year advanced training program for those who already have their primary medical qualification (MBBS or MD) and have successfully completed basic training in either paediatric medicine or adult medicine. Training is overseen by the Royal Australasian College of Physicians with the Australasian Association of Clinical Geneticists contributing to authorship of the curriculum via their parent organization, the Human Genetics Society of Australasia.
Metabolic/biochemical genetics
Metabolic (or biochemical) genetics involves the diagnosis and management of inborn errors of metabolism in which patients have enzymatic deficiencies that perturb biochemical pathways involved in metabolism of carbohydrates, amino acids, and lipids. Examples of metabolic disorders include galactosemia, glycogen storage disease, lysosomal storage disorders, metabolic acidosis, peroxisomal disorders, phenylketonuria, and urea cycle disorders.
Cytogenetics
Cytogenetics is the study of chromosomes and chromosome abnormalities. While cytogenetics historically relied on microscopy to analyze chromosomes, new molecular technologies such as array comparative genomic hybridization are now becoming widely used. Examples of chromosome abnormalities include aneuploidy, chromosomal rearrangements, and genomic deletion/duplication disorders.
Molecular genetics
Molecular genetics involves the discovery of and laboratory testing for DNA mutations that underlie many single gene disorders. Examples of single gene disorders include achondroplasia, cystic fibrosis, Duchenne muscular dystrophy, hereditary breast cancer (BRCA1/2), Huntington disease, Marfan syndrome, Noonan syndrome, and Rett syndrome. Molecular tests are also used in the diagnosis of syndromes involving epigenetic abnormalities, such as Angelman syndrome, Beckwith-Wiedemann syndrome, Prader-Willi syndrome, and uniparental disomy.
Mitochondrial genetics
Mitochondrial genetics concerns the diagnosis and management of mitochondrial disorders, which have a molecular basis but often result in biochemical abnormalities due to deficient energy production.
There exists some overlap between medical genetic diagnostic laboratories and molecular pathology.
Genetic counseling
Genetic counseling is the process of providing information about genetic conditions, diagnostic testing, and risks in other family members, within the framework of nondirective counseling. Genetic counselors are non-physician members of the medical genetics team who specialize in family risk assessment and counseling of patients regarding genetic disorders. The precise role of the genetic counselor varies somewhat depending on the disorder.
When working alongside geneticists, genetic counselors normally specialize in pediatric genetics, which focuses on developmental abnormalities present in newborns, infants or children. The major goal of pediatric counseling is to explain the genetic basis behind the child's developmental concerns in a compassionate and articulate manner that allows potentially distressed or frustrated parents to easily understand the information. Genetic counselors also normally take a family pedigree, which summarizes the medical history of the patient's family. This aids the clinical geneticist in the differential diagnosis process and helps determine which further steps should be taken to help the patient.
History
Although genetics has its roots back in the 19th century with the work of the Bohemian monk Gregor Mendel and other pioneering scientists, human genetics emerged later. It started to develop, albeit slowly, during the first half of the 20th century. Mendelian (single-gene) inheritance was studied in a number of important disorders such as albinism, brachydactyly (short fingers and toes), and hemophilia. Mathematical approaches were also devised and applied to human genetics. Population genetics was created.
Medical genetics was a late developer, emerging largely after the close of World War II (1945) when the eugenics movement had fallen into disrepute. The Nazi misuse of eugenics sounded its death knell. Shorn of eugenics, a scientific approach could be used and was applied to human and medical genetics. Medical genetics saw an increasingly rapid rise in the second half of the 20th century and continues in the 21st century.
Current practice
The clinical setting in which patients are evaluated determines the scope of practice, diagnostic, and therapeutic interventions. For the purposes of general discussion, the typical encounters between patients and genetic practitioners may involve:
Referral to an out-patient genetics clinic (pediatric, adult, or combined) or an in-hospital consultation, most often for diagnostic evaluation.
Specialty genetics clinics focusing on management of inborn errors of metabolism, skeletal dysplasia, or lysosomal storage diseases.
Referral for counseling in a prenatal genetics clinic to discuss risks to the pregnancy (advanced maternal age, teratogen exposure, family history of a genetic disease), test results (abnormal maternal serum screen, abnormal ultrasound), and/or options for prenatal diagnosis (typically non-invasive prenatal screening, diagnostic amniocentesis or chorionic villus sampling).
Multidisciplinary specialty clinics that include a clinical geneticist or genetic counselor (cancer genetics, cardiovascular genetics, craniofacial or cleft lip/palate, hearing loss clinics, muscular dystrophy/neurodegenerative disorder clinics).
Diagnostic evaluation
Each patient will undergo a diagnostic evaluation tailored to their own particular presenting signs and symptoms. The geneticist will establish a differential diagnosis and recommend appropriate testing. These tests might evaluate for chromosomal disorders, inborn errors of metabolism, or single gene disorders.
Chromosome studies
Chromosome studies are used in the general genetics clinic to determine a cause for developmental delay or intellectual disability, birth defects, dysmorphic features, or autism. Chromosome analysis is also performed in the prenatal setting to determine whether a fetus is affected with aneuploidy or other chromosome rearrangements. Finally, chromosome abnormalities are often detected in cancer samples. A large number of different methods have been developed for chromosome analysis:
Chromosome analysis using a karyotype involves special stains that generate light and dark bands, allowing identification of each chromosome under a microscope.
Fluorescence in situ hybridization (FISH) involves fluorescent labeling of probes that bind to specific DNA sequences, used for identifying aneuploidy, genomic deletions or duplications, characterizing chromosomal translocations and determining the origin of ring chromosomes.
Chromosome painting is a technique that uses fluorescent probes specific for each chromosome to differentially label each chromosome. This technique is more often used in cancer cytogenetics, where complex chromosome rearrangements can occur.
Array comparative genomic hybridization is a newer molecular technique that involves hybridization of an individual DNA sample to a glass slide or microarray chip containing molecular probes (ranging from large ~200kb bacterial artificial chromosomes to small oligonucleotides) that represent unique regions of the genome. This method is particularly sensitive for detection of genomic gains or losses across the genome but does not detect balanced translocations or distinguish the location of duplicated genetic material (for example, a tandem duplication versus an insertional duplication).
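The detection principle behind array comparative genomic hybridization can be illustrated with a deliberately simplified sketch (assuming Python with numpy; probe counts, thresholds and signal values are invented, and real analyses rely on dedicated normalization and segmentation algorithms rather than a fixed cut-off):

```python
# A simplified sketch of the array CGH idea: compare patient and reference
# signal per probe as a log2 ratio and flag large deviations as possible
# copy-number gains or losses. All values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_probes = 1000
reference = np.ones(n_probes)                       # normalised reference intensity
patient = np.ones(n_probes) + rng.normal(scale=0.05, size=n_probes)
patient[200:250] *= 1.5                             # simulate a duplicated region (gain)
patient[600:650] *= 0.5                             # simulate a deleted region (loss)

log2_ratio = np.log2(patient / reference)

# Crude fixed thresholds, purely for illustration.
gains = np.where(log2_ratio > 0.3)[0]
losses = np.where(log2_ratio < -0.3)[0]
print("probes flagged as possible gains:", gains)
print("probes flagged as possible losses:", losses)
```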
Basic metabolic studies
Biochemical studies are performed to screen for imbalances of metabolites in the bodily fluid, usually the blood (plasma/serum) or urine, but also in cerebrospinal fluid (CSF). Specific tests of enzyme function (either in leukocytes, skin fibroblasts, liver, or muscle) are also employed under certain circumstances. In the US, the newborn screen incorporates biochemical tests to screen for treatable conditions such as galactosemia and phenylketonuria (PKU). Patients suspected to have a metabolic condition might undergo the following tests:
Quantitative amino acid analysis is typically performed using the ninhydrin reaction, followed by liquid chromatography to measure the amount of amino acid in the sample (either urine, plasma/serum, or CSF). Measurement of amino acids in plasma or serum is used in the evaluation of disorders of amino acid metabolism such as urea cycle disorders, maple syrup urine disease, and PKU. Measurement of amino acids in urine can be useful in the diagnosis of cystinuria or renal Fanconi syndrome as can be seen in cystinosis.
Urine organic acid analysis can be either performed using quantitative or qualitative methods, but in either case the test is used to detect the excretion of abnormal organic acids. These compounds are normally produced during bodily metabolism of amino acids and odd-chain fatty acids, but accumulate in patients with certain metabolic conditions.
The acylcarnitine combination profile detects compounds such as organic acids and fatty acids conjugated to carnitine. The test is used for detection of disorders involving fatty acid metabolism, including MCAD.
Pyruvate and lactate are byproducts of normal metabolism, particularly during anaerobic metabolism. These compounds normally accumulate during exercise or ischemia, but are also elevated in patients with disorders of pyruvate metabolism or mitochondrial disorders.
Ammonia is an end product of amino acid metabolism and is converted in the liver to urea through a series of enzymatic reactions termed the urea cycle. Elevated ammonia can therefore be detected in patients with urea cycle disorders, as well as other conditions involving liver failure.
Enzyme testing is performed for a wide range of metabolic disorders to confirm a diagnosis suspected based on screening tests.
Molecular studies
DNA sequencing is used to directly analyze the genomic DNA sequence of a particular gene. In general, only the parts of the gene that code for the expressed protein (exons) and small amounts of the flanking untranslated regions and introns are analyzed. Therefore, although these tests are highly specific and sensitive, they do not routinely identify all of the mutations that could cause disease.
DNA methylation analysis is used to diagnose certain genetic disorders that are caused by disruptions of epigenetic mechanisms such as genomic imprinting and uniparental disomy.
Southern blotting is an early technique based on the detection of DNA fragments separated by size through gel electrophoresis and identified using radiolabeled probes. This test was routinely used to detect deletions or duplications in conditions such as Duchenne muscular dystrophy but is being replaced by high-resolution array comparative genomic hybridization techniques. Southern blotting is still useful in the diagnosis of disorders caused by trinucleotide repeats.
Treatments
Each cell of the body contains the hereditary information (DNA) wrapped up in structures called chromosomes. Since genetic syndromes are typically the result of alterations of the chromosomes or genes, there is no treatment currently available that can correct the genetic alterations in every cell of the body. Therefore, there is currently no "cure" for genetic disorders. However, for many genetic syndromes there is treatment available to manage the symptoms. In some cases, particularly inborn errors of metabolism, the mechanism of disease is well understood and offers the potential for dietary and medical management to prevent or reduce the long-term complications. In other cases, infusion therapy is used to replace the missing enzyme. Current research is actively seeking to use gene therapy or other new medications to treat specific genetic disorders.
Management of metabolic disorders
In general, metabolic disorders arise from enzyme deficiencies that disrupt normal metabolic pathways. For instance, in the hypothetical example:
Normal pathway:

   X       Y       Z
A ---> B ---> C ---> D

With enzyme "Z" missing or insufficient:

        X            Y
AAAA ---> BBBBBB ---> CCCCCCCCCC ---> (no D)
                           |
                         EEEEE
Compound "A" is metabolized to "B" by enzyme "X", compound "B" is metabolized to "C" by enzyme "Y", and compound "C" is metabolized to "D" by enzyme "Z".
If enzyme "Z" is missing, compound "D" will be missing, while compounds "A", "B", and "C" will build up. The pathogenesis of this particular condition could result from lack of compound "D", if it is critical for some cellular function, or from toxicity due to excess "A", "B", and/or "C", or from toxicity due to the excess of "E", which is normally present only in small amounts and accumulates only when "C" is in excess. Treatment of the metabolic disorder could be achieved through dietary supplementation of compound "D" and dietary restriction of compounds "A", "B", and/or "C", or by treatment with a medication that promotes disposal of excess "A", "B", "C" or "E". Another approach is enzyme replacement therapy, in which a patient is given an infusion of the missing enzyme "Z", or cofactor therapy to increase the efficacy of any residual "Z" activity.
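The reasoning in this hypothetical pathway can be made concrete with a toy numerical sketch (assuming Python; the rate constants, influx and step counts are arbitrary illustrative values, not biochemical data):

```python
# A toy simulation of the hypothetical pathway A -> B -> C -> D described
# above. All rates and quantities are arbitrary illustrative values.
def simulate(enzyme_z_present, steps=100, influx=1.0, rate=0.5):
    a = b = c = d = 0.0
    for _ in range(steps):
        a += influx                                      # steady supply of compound A
        flow_ab = rate * a                               # enzyme X: A -> B
        flow_bc = rate * b                               # enzyme Y: B -> C
        flow_cd = rate * c if enzyme_z_present else 0.0  # enzyme Z: C -> D
        a -= flow_ab
        b += flow_ab - flow_bc
        c += flow_bc - flow_cd
        d += flow_cd
    return {name: round(value, 1) for name, value in
            {"A": a, "B": b, "C": c, "D": d}.items()}

# With enzyme Z, material flows through to D; without it, C accumulates and
# D is never produced, mirroring the reasoning in the text above.
print("with enzyme Z:   ", simulate(enzyme_z_present=True))
print("without enzyme Z:", simulate(enzyme_z_present=False))
```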
Diet
Dietary restriction and supplementation are key measures taken in several well-known metabolic disorders, including galactosemia, phenylketonuria (PKU), maple syrup urine disease, organic acidurias and urea cycle disorders. Such restrictive diets can be difficult for the patient and family to maintain, and require close consultation with a nutritionist who has special experience in metabolic disorders. The composition of the diet will change depending on the caloric needs of the growing child and special attention is needed during a pregnancy if a woman is affected with one of these disorders.
Medication
Medical approaches include enhancement of residual enzyme activity (in cases where the enzyme is made but is not functioning properly), inhibition of other enzymes in the biochemical pathway to prevent buildup of a toxic compound, or diversion of a toxic compound to another form that can be excreted. Examples include the use of high doses of pyridoxine (vitamin B6) in some patients with homocystinuria to boost the activity of the residual cystathionine synthase enzyme, administration of biotin to restore activity of several enzymes affected by deficiency of biotinidase, treatment with NTBC in tyrosinemia to inhibit the production of succinylacetone which causes liver toxicity, and the use of sodium benzoate to decrease ammonia build-up in urea cycle disorders.
Enzyme replacement therapy
Certain lysosomal storage diseases are treated with infusions of a recombinant enzyme (produced in a laboratory), which can reduce the accumulation of the compounds in various tissues. Examples include Gaucher disease, Fabry disease, the mucopolysaccharidoses and glycogen storage disease type II. Such treatments are limited by the ability of the enzyme to reach the affected areas (the blood-brain barrier prevents enzyme from reaching the brain, for example), and can sometimes be associated with allergic reactions. The long-term clinical effectiveness of enzyme replacement therapies varies widely among different disorders.
Other examples
Angiotensin receptor blockers in Marfan syndrome & Loeys-Dietz
Bone marrow transplantation
Gene therapy
Career paths and training
There are a variety of career paths within the field of medical genetics, and naturally the training required for each area differs considerably. The information included in this section applies to the typical pathways in the United States and there may be differences in other countries. US practitioners in clinical, counseling, or diagnostic subspecialties generally obtain board certification through the American Board of Medical Genetics.
Ethical, legal and social implications
Genetic information provides a unique type of knowledge about an individual and his/her family, fundamentally different from a typical laboratory test that provides a "snapshot" of an individual's health status. The unique status of genetic information and inherited disease has a number of ramifications with regard to ethical, legal, and societal concerns.
On 19 March 2015, scientists urged a worldwide ban on clinical use of methods, particularly CRISPR and zinc finger nucleases, to edit the human genome in a way that can be inherited. In April 2015 and April 2016, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. In February 2016, British scientists were given permission by regulators to genetically modify human embryos by using CRISPR and related techniques on condition that the embryos were destroyed within seven days. In June 2016 the Dutch government was reported to be planning to follow suit with similar regulations which would specify a 14-day limit.
Societies
The more empirical approach to human and medical genetics was formalized by the founding in 1948 of the American Society of Human Genetics. The Society first began annual meetings that year (1948) and its international counterpart, the International Congress of Human Genetics, has met every 5 years since its inception in 1956. The Society publishes the American Journal of Human Genetics on a monthly basis.
Medical genetics is recognized as a distinct medical specialty. In the U.S., medical genetics has its own approved board (the American Board of Medical Genetics) and clinical specialty college (the American College of Medical Genetics). The college holds an annual scientific meeting, publishes a monthly journal, Genetics in Medicine, and issues position papers and clinical practice guidelines on a variety of topics relevant to human genetics.
In Australia and New Zealand, medical geneticists are trained and certified under the auspices of the Royal Australasian College of Physicians, but professionally belong to the Human Genetics Society of Australasia and its special interest group, the Australasian Association of Clinical Geneticists, for ongoing education, networking and advocacy.
Research
The broad range of research in medical genetics reflects the overall scope of this field, including basic research on genetic inheritance and the human genome, mechanisms of genetic and metabolic disorders, translational research on new treatment modalities, and the impact of genetic testing.
Basic genetics research
Basic research geneticists usually undertake research in universities, biotechnology firms and research institutes.
Allelic architecture of disease
Sometimes the link between a disease and an unusual gene variant is more subtle. The genetic architecture of common diseases is an important factor in determining the extent to which patterns of genetic variation influence group differences in health outcomes. According to the common disease/common variant hypothesis, common variants present in the ancestral population before the dispersal of modern humans from Africa play an important role in human diseases. Genetic variants associated with Alzheimer disease, deep venous thrombosis, Crohn disease, and type 2 diabetes appear to adhere to this model. However, the generality of the model has not yet been established and, in some cases, is in doubt. Some diseases, such as many common cancers, appear not to be well described by the common disease/common variant model.
Another possibility is that common diseases arise in part through the action of combinations of variants that are individually rare. Most of the disease-associated alleles discovered to date have been rare, and rare variants are more likely than common variants to be differentially distributed among groups distinguished by ancestry. However, groups could harbor different, though perhaps overlapping, sets of rare variants, which would reduce contrasts between groups in the incidence of the disease.
The number of variants contributing to a disease and the interactions among those variants also could influence the distribution of diseases among groups. The difficulty that has been encountered in finding contributory alleles for complex diseases and in replicating positive associations suggests that many complex diseases involve numerous variants rather than a moderate number of alleles, and the influence of any given variant may depend in critical ways on the genetic and environmental background. If many alleles are required to increase susceptibility to a disease, the odds are low that the necessary combination of alleles would become concentrated in a particular group purely through drift.
Population substructure in genetics research
One area in which population categories can be important considerations in genetics research is in controlling for confounding between population substructure, environmental exposures, and health outcomes. Association studies can produce spurious results if cases and controls have differing allele frequencies for genes that are not related to the disease being studied, although the magnitude of this problem in genetic association studies is subject to debate. Various methods have been developed to detect and account for population substructure, but these methods can be difficult to apply in practice.
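One simple workflow of the kind described above, sketched under invented assumptions (the allele frequencies, sample sizes, prevalences, and the use of two principal components are all illustrative choices, not a prescribed protocol), is to include top genotype principal components as covariates in the association model:

```python
# Sketch of PCA-based correction for population substructure in a case-control
# association test. All numbers here are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_per_group, n_snps = 500, 200

# Two ancestral groups with systematically different allele frequencies.
freq1 = rng.uniform(0.1, 0.9, n_snps)
freq2 = np.clip(freq1 + rng.normal(0, 0.15, n_snps), 0.05, 0.95)
G = np.vstack([rng.binomial(2, freq1, (n_per_group, n_snps)),
               rng.binomial(2, freq2, (n_per_group, n_snps))]).astype(float)

# Disease prevalence differs between the groups for non-genetic reasons (confounding).
y = np.concatenate([rng.binomial(1, 0.10, n_per_group),
                    rng.binomial(1, 0.30, n_per_group)])

snp = G[:, 0]   # a test SNP with no causal effect on the disease

# Naive test: SNP alone (prone to a spurious association).
naive = sm.Logit(y, sm.add_constant(snp)).fit(disp=0)

# Adjusted test: include the top principal components of the genotype matrix,
# which capture the group structure, as covariates.
U, S, _ = np.linalg.svd(G - G.mean(axis=0), full_matrices=False)
pcs = U[:, :2] * S[:2]
adjusted = sm.Logit(y, sm.add_constant(np.column_stack([snp, pcs]))).fit(disp=0)

print("naive SNP p-value:      ", naive.pvalues[1])
print("PC-adjusted SNP p-value:", adjusted.pvalues[1])
```

With this construction the naive test is prone to a spuriously small p-value, while the PC-adjusted test should sit closer to the null for the non-causal SNP.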
Population substructure also can be used to advantage in genetic association studies. For example, populations that represent recent mixtures of geographically separated ancestral groups can exhibit longer-range linkage disequilibrium between susceptibility alleles and genetic markers than is the case for other populations. Genetic studies can use this admixture linkage disequilibrium to search for disease alleles with fewer markers than would be needed otherwise. Association studies also can take advantage of the contrasting experiences of racial or ethnic groups, including migrant groups, to search for interactions between particular alleles and environmental factors that might influence health.
| Biology and health sciences | Fields of medicine | Health |
3464580 | https://en.wikipedia.org/wiki/Vein%20%28geology%29 | Vein (geology) | In geology, a vein is a distinct sheetlike body of crystallized minerals within a rock. Veins form when mineral constituents carried by an aqueous solution within the rock mass are deposited through precipitation. The hydraulic flow involved is usually due to hydrothermal circulation.
Veins are classically thought of as being planar fractures in rocks, with the crystal growth occurring normal to the walls of the cavity, and the crystal protruding into open space. This certainly is the method for the formation of some veins. However, it is rare in geology for significant open space to remain open in large volumes of rock, especially several kilometers below the surface. Thus, there are two main mechanisms considered likely for the formation of veins: open-space filling and crack-seal growth.
Open space filling
Open space filling is the hallmark of epithermal vein systems, such as a stockwork, in greisens or in certain skarn environments. For open space filling to take effect, the confining pressure is generally considered to be below 0.5 GPa. Veins formed in this way may exhibit a colloform, agate-like habit, of sequential selvages of minerals which radiate out from nucleation points on the vein walls and appear to fill up the available open space. Often evidence of fluid boiling is present. Vugs, cavities and geodes are all examples of open-space filling phenomena in hydrothermal systems.
Alternatively, hydraulic fracturing may create a breccia which is filled with vein material. Such breccia vein systems may be quite extensive, and can form the shape of tabular dipping sheets, diatremes or laterally extensive mantos controlled by boundaries such as thrust faults, competent sedimentary layers, or cap rocks.
Crack-seal veins
On the macroscopic scale, the formation of veins is controlled by fracture mechanics, providing the space for minerals to precipitate. Failure modes are classified as (1) shear fractures, (2) extensional fractures, and (3) hybrid fractures, and can be described by the Mohr-Griffith-Coulomb fracture criterion. The fracture criterion defines both the stress required for fracturing and the fracture orientation, as it is possible to construct on a Mohr diagram the shear fracture envelope that separates stable from unstable states of stress. The shear fracture envelope is approximated by a pair of lines that are symmetric across the σn axis. As soon as the Mohr circle touches the lines of the fracture envelope, which represent a critical state of stress, a fracture is generated. The point of the circle that first touches the envelope represents the plane along which the fracture forms. A newly formed fracture changes the stress field and tensile strength of the fractured rock and causes a drop in stress magnitude. If the stress increases again, a new fracture will most likely be generated along the same fracture plane. This process is known as the crack-seal mechanism.
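Written out explicitly (a standard statement of the linear Coulomb envelope and of the Mohr-circle tangency condition, included here for clarity rather than taken from a specific source cited in this article), with C the cohesion, φ the angle of internal friction, σn the normal stress and τ the shear stress:

```latex
\lvert \tau \rvert = C + \sigma_n \tan\phi ,
\qquad
\frac{\sigma_1 - \sigma_3}{2} = \left( C\cot\phi + \frac{\sigma_1 + \sigma_3}{2} \right)\sin\phi
```

The second relation gives the critical state at which the Mohr circle for principal stresses σ1 ≥ σ3 first touches the shear-failure envelope.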
Crack-seal veins are thought to form quite quickly during deformation by precipitation of minerals within incipient fractures. This happens swiftly by geologic standards, because pressure and deformation mean that large open spaces cannot be maintained; generally the opening is on the order of millimeters or micrometers. Veins grow in thickness by repeated reopening of the vein fracture and progressive deposition of minerals on the growth surface.
Tectonic implications
Veins generally need either hydraulic pressure in excess of hydrostatic pressure (to form hydraulic fractures or hydrofracture breccias) or they need open spaces or fractures, which requires a plane of extension within the rock mass.
In all cases except brecciation, therefore, a vein records the plane of extension within the rock mass, within a sizeable margin of error. Measurement of enough veins will statistically define the plane of principal extension.
In ductilely deforming compressional regimes, this can in turn give information on the stresses active at the time of vein formation. In extensionally deforming regimes, the veins occur roughly normal to the axis of extension.
Mineralization and veining
Veins are common features in rocks and are evidence of fluid flow in fracture systems. Veins provide information on stress, strain, pressure, temperature, fluid origin and fluid composition during their formation. Typical examples include gold lodes, as well as skarn mineralisation. Hydrofracture breccias are classic targets for ore exploration as there is plenty of fluid flow and open space to deposit ore minerals.
Ores related to hydrothermal mineralisation, which are associated with vein material, may be composed of vein material and/or the rock in which the vein is hosted.
Gold-bearing veins
In many gold mines exploited during the gold rushes of the 19th century, vein material alone was typically sought as ore material. In most of today's mines, ore material is primarily composed of the veins and some component of the wall rocks which surrounds the veins.
The difference between 19th-century and 21st-century mining techniques and the type of ore sought is based on the grade of material being mined and the methods of mining which are used. Historically, hand-mining of gold ores permitted the miners to pick out the lode quartz or reef quartz, allowing the highest-grade portions of the lodes to be worked, without dilution from the unmineralised wall rocks.
Today's mining, which uses larger machinery and equipment, forces the miners to take low-grade waste rock in with the ore material, resulting in dilution of the grade.
However, today's mining and assaying allows the delineation of lower-grade bulk tonnage mineralisation, within which the gold is invisible to the naked eye. In these cases, veining is the subordinate host to mineralisation and may only be an indicator of the presence of metasomatism of the wall-rocks which contains the low-grade mineralisation.
For this reason, veins within hydrothermal gold deposits are no longer the exclusive target of mining, and in some cases gold mineralisation is restricted entirely to the altered wall rocks within which entirely barren quartz veins are hosted.
| Physical sciences | Structural geology | Earth science |
3465676 | https://en.wikipedia.org/wiki/Fluoroantimonic%20acid | Fluoroantimonic acid | Fluoroantimonic acid is a mixture of hydrogen fluoride and antimony pentafluoride, containing various cations and anions (the simplest being H2F+ and SbF6−). This mixture is a superacid that, in terms of corrosiveness, is trillions of times stronger than pure sulfuric acid when measured by its Hammett acidity function. It even protonates some hydrocarbons to afford pentacoordinate carbocations (carbonium ions). Like its precursor hydrogen fluoride, it attacks glass, but can be stored in containers lined with PTFE (Teflon) or PFA.
Chemical composition
Fluoroantimonic acid is formed by combining hydrogen fluoride and antimony pentafluoride:
SbF5 + 2 HF → SbF6− + H2F+
The speciation (i.e., the inventory of components) of fluoroantimonic acid is complex. Spectroscopic measurements show that fluoroantimonic acid consists of a mixture of HF-solvated protons, [H(HF)n]+ (such as H2F+ and H3F2+), and SbF5-adducts of fluoride, [(SbF5)nF]− (such as SbF6− and Sb2F11−). Thus, the formula "HSbF6" is a convenient but oversimplified approximation of the true composition. Nevertheless, the extreme acidity of this mixture is evident from the inferior proton-accepting ability of the species present in the solution. Hydrogen fluoride, a weak acid in aqueous solution that is normally not thought to have any appreciable Brønsted basicity at all, is in fact the strongest Brønsted base in the mixture, protonating to H2F+ in the same way water protonates to H3O+ in aqueous acid. It is the fluoronium ion H2F+ that accounts for fluoroantimonic acid's extreme acidity. The protons easily migrate through the solution, moving from H2F+ to HF, when present, by the Grotthuss mechanism.
Two related products have been crystallized from HF-SbF5 mixtures, and both have been analyzed by single crystal X-ray crystallography. These salts have the formulas [H2F]+[Sb2F11]− and [H3F2]+[Sb2F11]−. In both salts, the anion is Sb2F11−. As mentioned above, SbF6− is weakly basic; the larger anion Sb2F11− is expected to be a still weaker base.
Acidity
Fluoroantimonic acid is the strongest superacid based on the measured value of its Hammett acidity function (H0), which has been determined for various ratios of HF:SbF5. The H0 of HF alone is −15. A solution of HF containing 1 mol % of SbF5 has an H0 of −20, and at 10 mol % the H0 is −21. For more than 50 mol % SbF5, the H0 is between −21 and −23; the lowest H0 attained is about −28. The following H0 values show that fluoroantimonic acid is stronger than other superacids. Increased acidity is indicated by lower (in this case, more negative) values of H0.
Fluoroantimonic acid (−23 > H0 > −28)
Magic acid (H0 = −23)
Carborane acid (H0 < −18)
Fluorosulfuric acid (H0 = −15)
Triflic acid (H0 = −15)
Perchloric acid (H0 = −13)
Of the above, only the carborane acids, whose H0 could not be directly determined due to their high melting points, may be stronger acids than fluoroantimonic acid.
The H0 value measures the protonating ability of the bulk, liquid acid, and this value has been directly determined or estimated for various compositions of the mixture. The pKa on the other hand, measures the equilibrium of proton dissociation of a discrete chemical species when dissolved in a particular solvent. Since fluoroantimonic acid is not a single chemical species, its pKa value is not well-defined.
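For context, the Hammett acidity function is the standard textbook quantity (not a measurement specific to fluoroantimonic acid), defined from the protonation equilibrium of a weak indicator base B:

```latex
H_0 = \mathrm{p}K_{\mathrm{BH^+}} - \log\frac{[\mathrm{BH^+}]}{[\mathrm{B}]}
```

It reduces to the pH in dilute aqueous solution; the more negative H0, the greater the protonating power of the medium.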
The gas-phase acidities (GPAs) of individual species present in the mixture have been calculated using density functional theory methods. (Solution-phase pKas of these species can, in principle, be estimated by taking into account solvation energies, but do not appear to be reported in the literature as of 2019.) For example, the ion pair [H2F]+·[Sb2F11]− was estimated to have a GPA of 254 kcal/mol. For comparison, the commonly encountered superacid triflic acid, TfOH, is a substantially weaker acid by this measure, with a GPA of 299 kcal/mol. However, certain carborane superacids have GPAs lower than that of [H2F]+·[Sb2F11]−. For example, H(CHB11Cl11) has an experimentally determined GPA of 241 kcal/mol.
Reactions
Fluoroantimonic acid solution is so reactive that it is challenging to identify media with which it is unreactive. Materials compatible with fluoroantimonic acid as a solvent include SO2ClF, and sulfur dioxide; some chlorofluorocarbons have also been used. Containers for HF/SbF5 are made of PTFE.
Fluoroantimonic acid solutions decompose when heated, generating free hydrogen fluoride gas and liquid antimony pentafluoride at a temperature of 40 °C.
As a superacid, fluoroantimonic acid solutions protonate nearly all organic compounds, often causing dehydrogenation, or dehydration. In 1967, Bickel and Hogeveen showed that 2HF·SbF5 reacts with isobutane and neopentane to form carbenium ions:
(CH3)3CH + H+ → (CH3)3C+ + H2
(CH3)4C + H+ → (CH3)3C+ + CH4
It is also used in the synthesis of tetraxenonogold complexes.
Safety
HF/SbF5 is a highly corrosive substance that reacts violently with water. Heating it is dangerous as well, as it decomposes into toxic hydrogen fluoride gas. With superacids that are fuming and toxic, proper personal protective equipment should be used. In addition to the obligatory gloves and goggles, the use of a face shield and respirator are also required. Regular lab gloves are not recommended, as this acid can react with the gloves. Safety gear must be worn at all times when handling or going anywhere near this corrosive substance, as fluoroantimonic acid can protonate every compound in the human body.
| Physical sciences | Specific acids | Chemistry |
3468246 | https://en.wikipedia.org/wiki/Vaccinium%20myrtillus | Vaccinium myrtillus | Vaccinium myrtillus or European blueberry is a holarctic species of shrub with edible fruit of blue color, known by the common names bilberry, blaeberry, wimberry, and whortleberry. It is more precisely called common bilberry or blue whortleberry to distinguish it from other Vaccinium relatives.
Description
Vaccinium myrtillus is a small deciduous shrub that grows tall, heavily branched with upright, angular to narrow winged, green-colored branches that are glabrous. It grows rhizomes, creating extensive patches. The shrub can live up to 30 years, with roots reaching depths of up to . It has light green leaves that turn red in autumn and are simple and alternate in arrangement. The leaves are long and ovate to lanceolate or broadly elliptic in shape, with glandular to finely toothed margins; they are prominently veined on the lower surface. In winter, the foliage turns deep red and becomes deciduous.
Small, hermaphrodite flowers with thick stems (about long) grow individually from the leaf axils and nod downward. These flowers, blooming from April to May, have crowns 4 to 6 mm long that are greenish to reddish. The small calyx is fused with minimal lobes on the cup-shaped flower. The rounded, urn-shaped, white-to-pink petals have short, curved lobes. The 8–10 stamens are short, and the anthers are awned and horned. The four- or five-chambered ovary is inferior with a long style.
From July to September, the plants produce black-blue, flattened, round fruits with a diameter up to 1 cm. These multi-seeded berries have calyx remnants on the tip and a blue-gray frosted appearance. Rarely, forms with white, yellow, red, or reddish-spotted berries occur. The small, brownish seeds are crescent-shaped. This species differs from V. corymbosum in that its anthocyanins, which produce color, are found in both the peel and the flesh.
Chromosome count is 2n=24.
Chemistry
Bilberry and the related V. uliginosum both produce lignins, in part because they are used as defensive chemicals. Although many plants change their lignin production – usually to increase it – to handle the stresses of climate change, lignin levels of both Vaccinium species appear to be unaffected. The leaves contain catechins, tannins, quinic acid, arbutin, chlorogenic acid and various glycosides; the fruits contain anthocyanins, pectin, ursolic acid, chlorogenic acid, and ascorbic acid.
V. myrtillus contains a high concentration of triterpenes which remain under laboratory research for their possible biological effects.
Common names
Regional names include blaeberry (Scotland), urts or hurts (Cornwall and Devon), hurtleberry, myrtleberry, wimberry, whinberry, winberry, and fraughan.
Distribution and habitat
Vaccinium myrtillus is a Holarctic species native to almost every country in Europe, north and central Asia, Japan, Greenland, Western Canada, and the Western United States. Within Europe it is only absent from Sardinia, Sicily, the European portion of Turkey, Crete, the Aegean Islands, Cyprus, Crimea, and southern European Russia. It occurs in the acidic soils of heaths, boggy barrens, moorlands, degraded meadows, open forests at the base of pine and mountain spruce forest, and parklands, slopes, and moraines at elevations up to .
Toxicity
Consuming the leaves may be unsafe.
Uses
Fruit
The berry is edible. The fruit stains hands, teeth and tongue deep blue or purple when eaten, and so was traditionally used as a dye for food and clothes in Britain.
Vaccinium myrtillus has been used for centuries in traditional medicine, particularly in traditional Austrian medicine as a tea or liqueur in attempts to treat various disorders. Bilberry dietary supplements are marketed in the United States, although there is little evidence these products have any effect on health or diseases.
In cooking, the bilberry fruit is commonly used for pies, tarts and flans, cakes, jams, muffins, cookies, sauces, syrups, juices, and candies.
Although bilberries are in high demand by consumers in Northern Europe, the berries are harvested in the wild without any cultivation. Some authors state that opportunities exist to improve the crop if cultivated using common agricultural practices.
Leaves
In traditional medicine, the (potentially toxic) leaves were mainly used for treating skin disorders.
| Biology and health sciences | Berries | Plants |
19444970 | https://en.wikipedia.org/wiki/Dark%20flow | Dark flow | In astrophysics, dark flow is a controversial hypothesis to explain certain non-random measurements of peculiar velocity of galaxy clusters. The actual measured velocity is the sum of the velocity predicted by Hubble's law plus a possible small velocity flowing in a common direction. Very large scale correlated flow, called bulk flow, is proposed in this model to be related to certain models of inflationary cosmology.
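Schematically (a standard kinematic decomposition, with symbols chosen here for illustration rather than taken from the papers discussed below), the measured line-of-sight velocity of a cluster at distance d separates into a Hubble-flow term and a peculiar velocity, and the dark-flow proposal is that the peculiar velocities of widely separated clusters share a coherent bulk component:

```latex
v_{\mathrm{measured}} = H_0\, d + v_{\mathrm{pec}},
\qquad
v_{\mathrm{pec}} = v_{\mathrm{bulk}} + \delta v
```

Here v_bulk is the proposed common "dark flow" (of order 600–1000 km/s in the claimed detections discussed below) and δv is the ordinary, randomly oriented component expected in standard cosmological models.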
According to standard cosmological models, the motion of galaxy clusters with respect to the cosmic microwave background should be randomly distributed in all directions. However, analyzing the three-year Wilkinson Microwave Anisotropy Probe (WMAP) data using the kinematic Sunyaev–Zeldovich effect, a team of astronomers led by Alexander Kashlinsky found evidence of a "surprisingly coherent" 600–1000 km/s flow of clusters toward a 20-degree patch of sky between the constellations of Centaurus and Vela.
The researchers had suggested that the motion may be a remnant of the influence of no-longer-visible regions of the universe prior to inflation. Telescopes cannot see events earlier than about 380,000 years after the Big Bang, when the universe became transparent (the cosmic microwave background); this corresponds to the particle horizon at a distance of about 46 billion (4.6×10¹⁰) light years. Since the matter causing the net motion in this proposal is outside this range, it would in a certain sense be outside our visible universe; however, it would still be in our past light cone.
The results appeared in the October 20, 2008, issue of Astrophysical Journal Letters.
In 2013, data from the Planck space telescope showed no evidence of "dark flow" on that sort of scale, discounting the claims of evidence for either gravitational effects reaching beyond the visible universe or existence of a multiverse. However, in 2015 Atrio-Barandela et al. claim to have found support for its existence using both Planck and WMAP data. The paper stated that a more complete analysis was in preparation to exploit the full Planck cluster sample to further build evidence; however, the team have published no further papers on the topic.
Location
The dark flow was determined to be flowing in the direction of the Centaurus and Hydra constellations. This corresponds with the direction of the Great Attractor, which is a gravitational mystery originally discovered in 1973. However, the source of the Great Attractor's attraction was thought to originate from a massive cluster of galaxies called the Norma Cluster, located about 250 million light-years away from Earth.
In a study from March 2010, Kashlinsky extended his work from 2008, by using the 5-year WMAP results rather than the 3-year results, and doubling the number of galaxy clusters observed from 700. The team also sorted the cluster catalog into four "slices" representing different distance ranges. They then examined the preferred flow direction for the clusters within each slice. The report concluded that while the size and exact position of this direction display some variation, the overall trends among the slices exhibit remarkable agreement. "We detect motion along this axis, but right now our data cannot state as strongly as we'd like whether the clusters are coming or going," Kashlinsky said.
The team has so far catalogued the effect as far out as 2.5 billion light-years, and hopes to expand its catalog out further still to twice the current distance.
Criticisms
Astrophysicist Ned Wright posted an online response to the study arguing that its methods are flawed. The original authors released a statement in return, claiming that the criticism is largely invalid.
A more recent statistical work done by Ryan Keisler claims to rule out the possibility that the dark flow is a physical phenomenon because Kashlinsky et al. did not consider the primary anisotropies of the cosmic microwave background (CMB) to be as important as they are.
Some have suggested that this could be the effect of a sibling universe or a region of space-time fundamentally different from the observable universe. Data on more than 1,000 galaxy clusters have been measured, including some as distant as 3 billion light-years. Alexander Kashlinsky claims these measurements show the universe's steady flow is clearly not a statistical fluke. Kashlinsky said: "At this point we don't have enough information to see what it is, or to constrain it. We can only say with certainty that somewhere very far away the world is very different than what we see locally. Whether it's 'another universe' or a different fabric of space-time we don't know." Laura Mersini-Houghton and Rich Holman observe that some anisotropy is predicted both by theories involving interaction with another universe, or when the frame of reference of the CMB does not coincide with that of the universe's expansion.
In 2013, data from the European Space Agency's Planck satellite was claimed to show no statistically significant evidence of existence of dark flow. However, another analysis by a member of the Planck collaboration, Fernando Atrio-Barandela, suggested the data were consistent with the earlier findings from WMAP. Popular media continued to be interested in the idea, with Mersini-Houghton claiming the Planck results support existence of a multiverse.
| Physical sciences | Basics_2 | Astronomy |
19452043 | https://en.wikipedia.org/wiki/Health%20facility | Health facility | A health facility is, in general, any location where healthcare is provided. Health facilities range from small clinics and doctor's offices to urgent care centers and large hospitals with elaborate emergency rooms and trauma centers. The number and quality of health facilities in a country or region is one common measure of that area's prosperity and quality of life. In many countries, health facilities are regulated to some extent by law; licensing by a regulatory agency is often required before a facility may open for business. Health facilities may be owned and operated by for-profit businesses, non-profit organizations, governments, and, in some cases, individuals, with proportions varying by country. | Biology and health sciences | Health facilities | Health |
4664479 | https://en.wikipedia.org/wiki/Corypha%20umbraculifera | Corypha umbraculifera | Corypha umbraculifera, the talipot palm, is a species of palm native to eastern and southern India and Sri Lanka. It is also grown in Cambodia, Myanmar, Thailand, Mauritius and the Andaman Islands. It is one of the five accepted species in the genus Corypha. It is a flowering plant with the largest inflorescence in the world. It lives up to 60 years before bearing flowers and fruits. It dies shortly after.
Description
It is one of the largest palms with individual specimens having reached heights of up to with stems up to in diameter. It is a fan palm (Arecaceae tribe Corypheae), with large, palmate leaves up to in diameter, with a petiole up to , and up to 130 leaflets.
The talipot palm bears the largest inflorescence of any plant, long, consisting of one to several million small flowers borne on a branched stalk that forms at the top of the trunk (the titan arum, Amorphophallus titanum, from the family Araceae, has the largest unbranched inflorescence, and the species Rafflesia arnoldii has the world's largest single flower). The talipot palm is monocarpic, flowering only once, when it is 30 to 80 years old. It takes about a year for the fruit to mature, producing thousands of round, yellow-green fruit in diameter, each containing a single seed. The plant dies after fruiting.
Distribution
The talipot palm is cultivated in South India and Sri Lanka. It is also cultivated in Southeast Asian countries of Cambodia, Myanmar, Thailand and the Andaman Islands. It is also grown sparsely in China.
Uses
Historically, the leaves were written upon in various South Asian and South-East Asian cultures using an iron stylus to create palm leaf manuscripts. In the Philippines, it is locally known as buri or buli. The leaves are also used for thatching, and the sap is tapped to make palm wine. In South India, the palm leaves are used to make umbrellas for agricultural workers. The tree is known as kudapana (കുടപ്പന) in Malayalam, talo (, ତାଳ) in Odia, sreetalam (శ్రీతాళం) in Telugu and kudaipanai (குடைப்பனை) in Tamil, which means umbrella palm. The plant is known as tala (තල) in Sri Lanka, by local Sinhalese people.
In Cambodia, the palm is known as tréang (it was also known by the French name latanier), and as noted above was extensively used in the past to write religious manuscripts. In recent times the leaf media has been used by traditional healers and soothsayers. The mature leaves are used to make thatches, mats and hats. The petioles can be used in the manufacture of canes, arrows and netting needles. At low tide, fishers use the fruit to stupefy fish.
Gallery
| Biology and health sciences | Arecales (inc. Palms) | Plants |
4664489 | https://en.wikipedia.org/wiki/Coprinus%20comatus | Coprinus comatus | Coprinus comatus, commonly known as the shaggy ink cap, lawyer's wig, or shaggy mane, is a common fungus often seen growing on lawns, along gravel roads and waste areas. The young fruit bodies first appear as white cylinders emerging from the ground, then the bell-shaped caps open out. The caps are white, and covered with scales—this is the origin of the common names of the fungus. The gills beneath the cap are white, then pink, then turn black and deliquesce ('melt') into a black liquid filled with spores (hence the "ink cap" name). This mushroom is unusual because it will turn black and dissolve itself in a matter of hours after being picked or depositing spores.
When young it is an excellent edible mushroom provided that it is eaten soon after being collected (it keeps very badly because of the autodigestion of its gills and cap). If long-term storage is desired, microwaving, sauteing or simmering until limp will allow the mushrooms to be stored in a refrigerator for several days or frozen. Also, placing the mushrooms in a glass of ice water will delay the decomposition for a day or two so that one has time to incorporate them into a meal. Whether for eating or storage, processing or icing must be done within four to six hours of harvest to prevent undesirable changes to the mushroom. The species is cultivated in China as food.
Taxonomy
The shaggy ink cap was first described by Danish naturalist Otto Friedrich Müller in 1780 as Agaricus comatus, before being given its current binomial name in 1797 by Christiaan Hendrik Persoon. Its specific name derives from coma, or "hair", hence comatus, "hairy" or "shaggy". Other common names include lawyer's wig, and shaggy mane.
Coprinus comatus is the type species for the genus Coprinus. This genus was formerly considered to be a large one with well over 100 species. However, molecular analysis of DNA sequences showed that the former species belonged in two families, the Agaricaceae and the Psathyrellaceae. Coprinus comatus is the best known of the true Coprinus.
Description
The shaggy ink cap is easily recognizable from its almost cylindrical cap which initially covers most of its stem. The cap ranges from in width and in height. It is mostly white with shaggy scales, which are more pale brown at the apex. The free gills change rapidly from white to pink, then to black. It is deliquescent. The white and fairly thick stipe measures high by in diameter and has a loose ring near the bottom. Microscopically, the mushroom lacks pleurocystidia. The spore print is black-brown and the spores measure 10–13 by 6.5–8 μm. The flesh is white and the taste mild.
Similar species
The mushroom can sometimes be confused with the magpie fungus which is poisonous. In America, the 'vomiter' mushroom Chlorophyllum molybdites is responsible for most cases of mushroom poisoning due to its similarity with shaggy mane and other edible mushrooms. Coprinopsis atramentaria (the common Ink Cap) is similar, and contains coprine and can induce coprine poisoning, particularly when consumed with alcohol. Podaxis pistillaris is also similar.
Distribution, habitat and ecology
It grows in groups in places which are often unexpected, such as green areas in towns. It occurs widely in grasslands and meadows in Europe and North America, from June through to November in the UK. It appears to have been introduced to Australia, New Zealand and Iceland. In Australia the species is sufficiently common to have been featured on a postage stamp issued by Australia Post in 1981.
Coprinus comatus is a nematophagous fungus capable of killing and digesting the nematode species Panagrellus redivivus and Meloidogyne arenaria.
Edibility
The young mushrooms, before the gills start to turn black, are a choice edible mushroom, but should be prepared soon after being collected as the black areas quickly turn bitter. The taste is mild; cooking produces a large quantity of liquid. It can sometimes be used in mushroom soup with parasol mushroom. Large quantities of microwaved-then-frozen shaggy manes can be used as the liquid component of risotto, replacing the usual chicken stock.
Coprinus comatus is not to be confused with Coprinopsis atramentaria, which can induce coprine poisoning, particularly when consumed with alcohol. Symptoms of coprine poisoning include vomiting, diarrhoea, palpitations and a metallic taste in the mouth.
Gallery
| Biology and health sciences | Edible fungi | Plants |
4665849 | https://en.wikipedia.org/wiki/Silver%20azide | Silver azide | Silver azide is the chemical compound with the formula AgN3. It is a silver(I) salt of hydrazoic acid. It forms colorless crystals. Like most azides, it is a primary explosive.
Structure and chemistry
Silver azide can be prepared by treating an aqueous solution of silver nitrate with sodium azide. The silver azide precipitates as a white solid, leaving sodium nitrate in solution.
X-ray crystallography shows that AgN3 is a coordination polymer with square planar Ag+ coordinated by four azide ligands. Correspondingly, each end of each azide ligand is connected to a pair of Ag+ centers. The structure consists of two-dimensional layers stacked one on top of the other, with weaker Ag–N bonds between layers. The coordination of Ag+ can alternatively be described as highly distorted 4 + 2 octahedral, the two more distant nitrogen atoms being part of the layers above and below.
In its most characteristic reaction, the solid decomposes explosively, releasing nitrogen gas: 2 AgN3 (s) → 2 Ag (s) + 3 N2 (g)
The first step in this decomposition is the production of free electrons and azide radicals; thus the reaction rate is increased by the addition of semiconducting oxides. Pure silver azide explodes at 340 °C, but the presence of impurities lowers this down to 270 °C. This reaction has a lower activation energy and initial delay than the corresponding decomposition of lead azide.
Safety
AgN3, like most heavy metal azides, is a dangerous primary explosive. Decomposition can be triggered by exposure to ultraviolet light or by impact. Ceric ammonium nitrate is used as an oxidising agent to destroy AgN3 in spills.
| Physical sciences | Nitride salts | Chemistry |
20637356 | https://en.wikipedia.org/wiki/Closure%20temperature | Closure temperature | In radiometric dating, closure temperature or blocking temperature refers to the temperature of a system, such as a mineral, at the time given by its radiometric date. In physical terms, the closure temperature is the temperature at which a system has cooled so that there is no longer any significant diffusion of the parent or daughter isotopes out of the system and into the external environment. The concept's initial mathematical formulation was presented in a seminal paper by Martin H. Dodson,
"Closure temperature in cooling geochronological and petrological systems" in the journal Contributions to Mineralogy and Petrology, 1973, with refinements to a usable experimental formulation by other scientists in later years. This temperature varies broadly among different minerals and also differs depending on the parent and daughter atoms being considered. It is specific to a particular material and isotopic system.
The closure temperature of a system can be experimentally determined in the lab by artificially resetting sample minerals using a high-temperature furnace. As the mineral cools, the crystal structure begins to form and diffusion of isotopes slows. At a certain temperature, the crystal structure has formed sufficiently to prevent diffusion of isotopes. This temperature is what is known as blocking temperature and represents the temperature below which the mineral is a closed system to measurable diffusion of isotopes. The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to blocking temperature.
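The age itself then follows the usual radiometric decay relation (standard decay arithmetic, not specific to any one isotopic system), with P the parent atoms remaining, D* the radiogenic daughter atoms accumulated since closure, and λ the decay constant:

```latex
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D^{*}}{P}\right)
```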
These temperatures can also be determined in the field by comparing them to the dates of other minerals with well-known closure temperatures.
Closure temperatures are used in geochronology and thermochronology to date events and determine rates of processes in the geologic past.
Table of values
The following table represents the closure temperatures of some materials. These values are the approximate values of the closure temperatures of certain minerals listed by the isotopic system being used. These values are approximations; better values of the closure temperature require more precise calculations and characterizations of the diffusion characteristics of the mineral grain being studied.
Potassium-argon method
Uranium-lead method
Electron spin resonance dating
| Physical sciences | Geochronology | Earth science |
446216 | https://en.wikipedia.org/wiki/Decision%20theory | Decision theory | Decision theory or the theory of rational choice is a branch of probability, economics, and analytic philosophy that uses the tools of expected utility and probability to model how individuals would behave rationally under uncertainty. It differs from the cognitive and behavioral sciences in that it is mainly prescriptive and concerned with identifying optimal decisions for a rational agent, rather than describing how people actually make decisions. Despite this, the field is important to the study of real human behavior by social scientists, as it lays the foundations to mathematically model and analyze individuals in fields such as sociology, economics, criminology, cognitive science, moral philosophy and political science.
Branches
Normative decision theory is concerned with identification of optimal decisions where optimality is often determined by considering an ideal decision maker who is able to calculate with perfect accuracy and is in some sense fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis and is aimed at finding tools, methodologies, and software (decision support systems) to help people make better decisions.
In contrast, descriptive decision theory is concerned with describing observed behaviors often under the assumption that those making decisions are behaving under some consistent rules. These rules may, for instance, have a procedural framework (e.g. Amos Tversky's elimination by aspects model) or an axiomatic framework (e.g. stochastic transitivity axioms), reconciling the Von Neumann-Morgenstern axioms with behavioral violations of the expected utility hypothesis, or they may explicitly give a functional form for time-inconsistent utility functions (e.g. Laibson's quasi-hyperbolic discounting).
Prescriptive decision theory is concerned with predictions about behavior that positive decision theory produces to allow for further tests of the kind of decision-making that occurs in practice. In recent decades, there has also been increasing interest in "behavioral decision theory", contributing to a re-evaluation of what useful decision-making requires.
Types of decisions
Choice under uncertainty
The area of choice under uncertainty represents the heart of decision theory. Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is contained in his Pensées, published in 1670), the idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes, determine their values (positive or negative) and the probabilities that will result from each course of action, and multiply the two to give an "expected value", or the average expectation for an outcome; the action to be chosen should be the one that gives rise to the highest total expected value. In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he uses the St. Petersburg paradox to show that expected value theory must be normatively wrong. He gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St. Petersburg in winter. In his solution, he defines a utility function and computes expected utility rather than expected financial value.
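A minimal numerical sketch of Bernoulli's argument, applied to the St. Petersburg game (the 40-term truncation is an arbitrary computational cutoff, not part of the paradox):

```python
# St. Petersburg game: win 2**k with probability 2**-k for k = 1, 2, 3, ...
# The expected monetary value diverges, while Bernoulli's expected log-utility converges.
import math

ev  = sum((0.5 ** k) * (2 ** k) for k in range(1, 41))            # each term equals 1 -> diverges
elu = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, 41))    # converges to 2*ln(2)

print(f"expected value (first 40 terms): {ev:.1f}   (grows without bound)")
print(f"expected log-utility:            {elu:.3f} (close to {2 * math.log(2):.3f})")
```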
In the 20th century, interest was reignited by Abraham Wald's 1939 paper pointing out that the two central procedures of sampling-distribution-based statistical-theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem. Wald's paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent distributions, Bayesian procedures, and minimax procedures. The phrase "decision theory" itself was used in 1950 by E. L. Lehmann.
The revival of subjective probability theory, from the work of Frank Ramsey, Bruno de Finetti, Leonard Savage and others, extended the scope of expected utility theory to situations where subjective probabilities can be used. At the time, von Neumann and Morgenstern's theory of expected utility proved that expected utility maximization followed from basic postulates about rational behavior.
The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization (Allais paradox and Ellsberg paradox). The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. It describes a way by which people make decisions when all of the outcomes carry a risk. Kahneman and Tversky found three regularities – in actual human decision-making, "losses loom larger than gains"; people focus more on changes in their utility-states than they focus on absolute utilities; and the estimation of subjective probabilities is severely biased by anchoring.
Intertemporal choice
Intertemporal choice is concerned with the kind of choice where different actions lead to outcomes that are realized at different stages over time. It is also described as cost-benefit decision making since it involves the choices between rewards that vary according to magnitude and time of arrival. If someone received a windfall of several thousand dollars, they could spend it on an expensive holiday, giving them immediate pleasure, or they could invest it in a pension scheme, giving them an income at some time in the future. What is the optimal thing to do? The answer depends partly on factors such as the expected rates of interest and inflation, the person's life expectancy, and their confidence in the pensions industry. However even with all those factors taken into account, human behavior again deviates greatly from the predictions of prescriptive decision theory, leading to alternative models in which, for example, objective interest rates are replaced by subjective discount rates.
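A small sketch of one such alternative, the beta-delta (quasi-hyperbolic) discounting mentioned earlier; the reward sizes, delays and parameter values are invented purely for illustration:

```python
# Exponential vs. quasi-hyperbolic (beta-delta) discounting of delayed rewards.
# Quasi-hyperbolic discounting can produce a preference reversal that exponential
# discounting cannot: "smaller-sooner" wins only when the reward is immediate.

def exponential(value, delay, delta=0.95):
    return value * delta ** delay

def quasi_hyperbolic(value, delay, beta=0.6, delta=0.95):
    return value if delay == 0 else beta * value * delta ** delay

near_pair = ((100, 0), (120, 1))      # $100 now vs $120 tomorrow
far_pair  = ((100, 30), (120, 31))    # the same trade-off, viewed 30 days ahead

for name, f in (("exponential", exponential), ("quasi-hyperbolic", quasi_hyperbolic)):
    near = "smaller-sooner" if f(*near_pair[0]) > f(*near_pair[1]) else "larger-later"
    far  = "smaller-sooner" if f(*far_pair[0])  > f(*far_pair[1])  else "larger-later"
    print(f"{name:17s}: immediate choice -> {near}, distant choice -> {far}")
```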
Interaction of decision makers
Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is more often treated under the label of game theory, rather than decision theory, though it involves the same mathematical methods. In the emerging field of socio-cognitive engineering, the research is especially focused on the different types of distributed decision-making in human organizations, in normal and abnormal/emergency/crisis situations.
Complex decisions
Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity, or the complexity of the organization that has to make them. Individuals making decisions are limited in resources (i.e. time and intelligence) and are therefore boundedly rational; the issue is thus, more than the deviation between real and optimal behavior, the difficulty of determining the optimal behavior in the first place. Decisions are also affected by whether options are framed together or separately; this is known as the distinction bias.
Heuristics
Heuristics are procedures for making a decision without working out the consequences of every option. Heuristics decrease the amount of evaluative thinking required for decisions, focusing on some aspects of the decision while ignoring others. While quicker than step-by-step processing, heuristic thinking is also more likely to involve fallacies or inaccuracies.
One example of a common and erroneous thought process that arises through heuristic thinking is the gambler's fallacy — believing that an isolated random event is affected by previous isolated random events. For example, if flips of a fair coin give repeated tails, the coin still has the same probability (i.e., 0.5) of tails in future turns, though intuitively it might seem that heads has become more likely. In the long run, heads and tails should occur equally often; people commit the gambler's fallacy when they use this heuristic to predict that a result of heads is "due" after a run of tails. Another example is that decision-makers may be biased towards preferring moderate alternatives to extreme ones. The compromise effect operates under a mindset that the most moderate option carries the most benefit. In an incomplete information scenario, as in most daily decisions, the moderate option will look more appealing than either extreme, independent of the context, based only on the fact that it has characteristics that can be found at either extreme.
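A quick simulation of the fair-coin point (the run length of three tails and the sample size are arbitrary choices made for this sketch):

```python
# After a run of tails, the empirical frequency of heads on the next flip of a
# fair coin is still about 0.5, contrary to the gambler's-fallacy intuition.
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(1_000_000)]   # True = heads

after_three_tails = [flips[i + 3] for i in range(len(flips) - 3)
                     if not flips[i] and not flips[i + 1] and not flips[i + 2]]

print(f"P(heads) overall:                 {sum(flips) / len(flips):.3f}")
print(f"P(heads | three preceding tails): {sum(after_three_tails) / len(after_three_tails):.3f}")
```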
Alternatives
A highly controversial issue is whether one can replace the use of probability in decision theory with something else.
Probability theory
Advocates for the use of probability theory point to:
the work of Richard Threlkeld Cox for justification of the probability axioms,
the Dutch book paradoxes of Bruno de Finetti as illustrative of the theoretical difficulties that can arise from departures from the probability axioms, and
the complete class theorems, which show that all admissible decision rules are equivalent to the Bayesian decision rule for some utility function and some prior distribution (or for the limit of a sequence of prior distributions). Thus, for every decision rule, either the rule may be reformulated as a Bayesian procedure (or a limit of a sequence of such), or there is a rule that is sometimes better and never worse.
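Concretely, the Bayes decision rule invoked in the last point is the action that minimizes posterior expected loss; for observed data x, loss function L(θ, a), and prior π (a standard textbook statement, included here for clarity):

```latex
\delta_{\pi}(x) = \arg\min_{a} \int L(\theta, a)\, \pi(\theta \mid x)\, \mathrm{d}\theta
```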
Alternatives to probability theory
The proponents of fuzzy logic, possibility theory, Dempster–Shafer theory, and info-gap decision theory maintain that probability is only one of many alternatives and point to many examples where non-standard alternatives have been implemented with apparent success. Notably, probabilistic decision theory can sometimes be sensitive to assumptions about the probabilities of various events, whereas non-probabilistic rules, such as minimax, are robust in that they do not make such assumptions.
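As a toy illustration of that robustness point (the loss table and the assumed probabilities below are invented, not drawn from the literature), a minimax choice depends only on worst-case losses, while an expected-loss choice changes with the assumed probabilities:

```python
# Minimax needs no probabilities over the states of the world; an expected-loss
# (probabilistic) rule does, and its choice shifts if those probabilities are wrong.
loss = {                                   # loss[action][state]
    "hedge":      {"boom": 2, "bust": 2},
    "aggressive": {"boom": 0, "bust": 10},
}
states = ("boom", "bust")
assumed_probs = {"boom": 0.9, "bust": 0.1}  # an assumption the modeller might get wrong

minimax_choice = min(loss, key=lambda a: max(loss[a][s] for s in states))
expected_choice = min(loss, key=lambda a: sum(assumed_probs[s] * loss[a][s] for s in states))

print("minimax picks:       ", minimax_choice)    # 'hedge', whatever the probabilities
print("expected-loss picks: ", expected_choice)   # 'aggressive' under these assumed odds
```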
Ludic fallacy
A general criticism of decision theory based on a fixed universe of possibilities is that it considers the "known unknowns", not the "unknown unknowns": it focuses on expected variations, not on unforeseen events, which some argue have outsized impact and must be considered – significant events may be "outside model". This line of argument, called the ludic fallacy, is that there are inevitable imperfections in modeling the real world by particular models, and that unquestioning reliance on models blinds one to their limits.
History
The roots of decision theory lie in probability theory, developed by Blaise Pascal and Pierre de Fermat in the 17th century, which was later refined by others like Christiaan Huygens. These developments provided a framework for understanding risk and uncertainty, which are central to decision-making.
In the 18th century, Daniel Bernoulli introduced the concept of "expected utility" in the context of gambling, which was later formalized by John von Neumann and Oskar Morgenstern in the 1940s. Their work on Game Theory and Expected Utility Theory helped establish a rational basis for decision-making under uncertainty.
After World War II, decision theory expanded into economics, particularly with the work of economists like Milton Friedman and others, who applied it to market behavior and consumer choice theory. This era also saw the development of Bayesian decision theory, which incorporates Bayesian probability into decision-making models.
By the late 20th century, scholars like Daniel Kahneman and Amos Tversky challenged the assumptions of rational decision-making. Their work in behavioral economics highlighted cognitive biases and heuristics that influence real-world decisions, leading to the development of prospect theory, which modified expected utility theory by accounting for psychological factors.
| Mathematics | Applied mathematics | null |
446372 | https://en.wikipedia.org/wiki/Herto%20Man | Herto Man | Herto Man refers to human remains (Homo sapiens) discovered in 1997 from the Upper Herto member of the Bouri Formation in the Afar Triangle, Ethiopia. The remains have been dated as between 154,000 and 160,000 years old. The discovery of Herto Man was especially significant at the time, falling within a long gap in the fossil record between 300 and 100 thousand years ago and representing the oldest dated H. sapiens remains then described.
In the original description paper, these 12 (at minimum) individuals were described as falling just outside the umbrella of "anatomically modern human". Thus, Herto Man was classified into a new subspecies as "Homo sapiens idaltu" (from the Afar word idàltu, "elder"). It supposedly represented a transitional morph between the more archaic H. (s.?) rhodesiensis and H. s. sapiens (that is, a stage in a chronospecies). Subsequent researchers have rejected this classification. The validity of such subspecies is difficult to justify because of the vague definitions of "species" and "subspecies", especially when discussing a chronospecies, as the exact end-morphology and start-morphology of the ancestor and descendant species are inherently unresolvable.
Herto Man produced many stone tools which can fit into the vaguely defined "Transitional Acheulean", the long-lasting cultural tradition with both characteristically Acheulean (made by archaic humans) and Middle Stone Age (made by modern humans) tools. They seem to have been butchering mainly hippo, but also bovines, in a lakeside environment. The three most complete skulls (one a 6- to 7-year-old child) bear manmade cut marks and other alterations, which could be evidence of mortuary practices like excarnation.
Research history
Discovery
Fossils of Herto Man were first recovered in 1997 from the Upper Herto Member of the Bouri Formation in the Middle Awash site of the Afar Triangle, Ethiopia. The materials are: BOU-VP-16/1, a nearly complete skull missing the left skullcap; BOU-VP-16/2, skull fragments; BOU-VP-16/3, a parietal bone fragment; BOU-VP-16/4, a parietal fragment; BOU-VP-16/5, a nearly complete skull of a 6- or 7-year old; BOU-VP-16/6, a right upper molar; BOU-VP-16/7, a parietal fragment; BOU-VP-16/18, parietal fragments; BOU-VP-16/42, an upper premolar; and BOU-VP-16/43, a parietal fragment. Further excavation has yielded a total of 12 individuals.
This region of the world is famous for yielding a series of ancient human and hominin species stretching as far back as 6 million years. In 2003, using argon–argon dating, the Upper Herto Member was dated to 160 to 154 thousand years ago. Herto Man was a major fossil find because, at the time, there was a significant gap in the human fossil record between 300 and 100 thousand years ago, obscuring the evolution of "Homo (sapiens?) rhodesiensis" into H. s. sapiens.
By the time Herto Man was discovered, based on genetic analyses and the fossil record after 120,000 years ago, it was largely agreed that modern humans H. s. sapiens evolved in Africa (recent African origin model), but it was debated if this was a continent-wide or localised process. In regard to the localised model, the antiquity of the Herto Man and the several similar specimens of presumably equal or even older age distributed across East Africa shifted the focus to that region. In 2017, the Jebel Irhoud remains were dated to 315,000 years ago, making them the oldest specimens classified as H. sapiens. Because this date overlaps with "H. rhodesiensis", the Irhoud remains also demonstrate that these transitional morphs, including Herto Man, represent a rapid evolution of the sapiens face, with gradual modifications to the braincase among populations distributed across Africa, beginning as early as 300,000 years ago.
"H. s. idaltu"
In a simultaneously published paper, anthropologists Tim D. White, Berhane Asfaw, David DeGusta, Henry Gilbert, Gary D. Richards, Gen Suwa, and Francis Clark Howell described the material as falling just outside what is considered an "anatomically modern human" (AMH), beyond the range of variation for any present-day human. They instead considered the earliest AMHs to be specimens from Klasies River Caves, South Africa, or Qafzeh Cave, Israel. They did this by comparing BOU-VP-16/1 with the Qafzeh 6 skull, the La Ferrassie 1 skull (a male Neanderthal, H. (s.?) neanderthalensis), the Kabwe 1 skull ("H. (s.?) rhodesiensis"), and 28 present-day male skulls. Consequently, they classified Herto Man as a new palaeosubspecies of H. sapiens, "H. s. idaltu" (with the presumed male BOU-VP-16/1 as the holotype), representing an intermediary morph between "H. (s.?) rhodesiensis" and present-day H. s. sapiens. The name comes from the local Afar language idàltu "elder". Similarly transitional specimens (at the time, not well-dated) tentatively assigned to "late archaic H. sapiens" had been reported from Ngaloba, Tanzania; Omo, Ethiopia; Eliye Springs, Kenya; and Jebel Irhoud, Morocco.
In another simultaneously published paper, British physical anthropologist Chris Stringer doubted the validity of "H. s. idaltu", saying the material was similar to some Late Pleistocene Australasian specimens. White et al. made note of this, but still considered Herto Man "clearly distinct". In 2011, American anthropologists Kyle Lubsen and Robert Corruccini compared BOU-VP-16/1 with Skhul 5 from Es-Skhul Cave, Israel (temporally close to the Qafzeh material), and reported that these two skulls are closely allied with each other. That is, their analysis found no support for Herto Man's position as a transitional morph, nor for the nomen idaltu. In 2014, anthropologists Robert McCarthy and Lynn Lucas considered a much larger sample than White et al.—using several specimens representing "archaic Homo", Neanderthal, "early modern H. s. sapiens", and Late Pleistocene H. s. sapiens—and arrived at the same conclusion as Lubsen and Corruccini. Citing these two studies in his 2016 review of the literature on the derivation of H. s. sapiens, Stringer said the name idaltu "does not seem justified."
The main issue with palaeosubspecies validity lies in the vague definitions of "species" and "subspecies", especially when discussing a chronospecies (an unbroken lineage which gradually changes, making the exact end-morphology and start-morphology of the ancestor and descendant species unresolvable). In 2019, the original describers still upheld the name "H. s. idaltu" because their argument "depended largely on discrete traits", whereas McCarthy and Lucas had "focused only on the gross cranial metrics"; they also stated that debating exact taxonomic names and labels is overall less important than understanding trends in human evolution.
Anatomy
Like what could be considered an "anatomically modern human", the Herto skull has a high cranial vault (a raised forehead), an overall globular shape in side-view, and a flat face. The brain volume was about 1,450 cc. The skull is quite robust in having a projecting brow ridge, weakly curved parietal bones, and a strongly flexed occipital at the back of the skull. These traits are well within the range of variation of modern humans. Compared to the average present-day human skull, the Herto skull is notably long and has overall large dimensions, although the cheekbones are relatively weak.
Culture
Technology
The Upper Herto Member is a sandy fluvial (deposited by rivers) unit recording a freshwater lake environment, and has yielded archaeologically relevant remains across a stretch. Locality BOU-A19 preserved 71 artefacts, BOU-A26 331, BOU-A29 194, BOU-A19B 29, and BOU-A19H 15, for a total of 640. The tool assemblage contains tools made using the Levallois technique (associated with the African "Middle Stone Age"), as well as cleavers and other bifaces (associated with the earlier Acheulean). Though bifaces and blades are rare (respectively less than 5% and 1% of the tools), it is more likely that these tools were routinely made by Herto Man at another location than that they were genuinely rare in the toolkit. Such an assemblage is typically labelled as the vaguely defined "Transitional Acheulean", which is found as far back as 280,000 years ago. The Herto site thus indicates the transitional phase was long-lived, and the derivation of the "Middle Stone Age" proper was neither gradual nor simple.
Points and blades were made with obsidian, and other tools with fine-grained basalt, though a few scrapers were made with cryptocrystalline rock. Of the pool of 640, 48 flakes, blades, and points were made with the Levallois technique. The 28 bifaces include ovates, elongate ovates, triangulars, cleavers, and a pick, a scraper, and a biface core. All 17 handaxes were made from flakes and finished with soft hammering. Of the 25 side scrapers, 22 were simple (with a scraping edge on one side only). There were 15 end-scrapers (with a scraping edge on one or both ends), and a few were rounded off, somewhat resembling Aurignacian (40,000 years ago) end-scrapers.
Both the Lower and Upper Herto Members preserve several bovine and hippo carcasses with manmade cut marks, recording a long-lasting butchering tradition with a predilection for hippo. One location records the accumulation of numerous hippo calves (newborn to a few weeks old) and adults.
Mortuary practices
The adult BOU-VP-16/1 shows a weak, thin vertical cut on the bottom corner of his right parietal bone, and another smaller vertical line across the right temporal line. The adult BOU-VP-16/2 bears intense modification of 15 of his 24 associated skullcap fragments, as well as deep cut marks consistent with defleshing on his parietals, left cheekbone, frontal bone, and occipital bone. BOU-VP-16/2 also presents evidence of repetitive scraping around the circumference of the braincase (generally interpreted as a symbolic modification rather than for consumption), and the lack of fragments from the base of the skull may mean the specimen was deposited as an isolated skullcap to begin with. The juvenile BOU-VP-16/5 has deep cut marks consistent with defleshing all along the undersides of the sphenoid and temporal bones, likely after the jawbone was removed. The occipital bone and foramen magnum (the base of the skull) were broken into, and the edges were polished and smoothed off, which is similar to the mortuary practices of some Papuan tribes. These could indicate that Herto Man was symbolically preparing the dead in some mortuary ritual.
| Biology and health sciences | Homo | Biology |
446457 | https://en.wikipedia.org/wiki/Iron%28III%29%20chloride | Iron(III) chloride | Iron(III) chloride describes the inorganic compounds with the formula FeCl3(H2O)x. Also called ferric chloride, these compounds are some of the most important and commonplace compounds of iron. They are available both in anhydrous and in hydrated forms, which are both hygroscopic. They feature iron in its +3 oxidation state. The anhydrous derivative is a Lewis acid, while all forms are mild oxidizing agents. It is used in water treatment and as an etchant for metals.
Electronic and optical properties
All forms of ferric chloride are paramagnetic, owing to the presence of unpaired electrons residing in 3d orbitals. Although Fe(III) chloride can be octahedral or tetrahedral (or both, see structure section), all of these forms have five unpaired electrons, one per d-orbital. The high-spin d5 electronic configuration means that d-d electronic transitions are spin forbidden, in addition to violating the Laporte rule. Because the transitions are doubly forbidden, the absorptions are weak and the solutions are only pale coloured. Aqueous ferric sulfate and ferric nitrate, which contain the hexaaqua ion [Fe(H2O)6]3+, are nearly colorless, whereas the chloride solutions are yellow. Thus, the chloride ligands significantly influence the optical properties of the iron center.
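For reference, the two selection rules invoked above can be stated compactly; this is a textbook ligand-field summary offered as an illustration, not a result taken from the sources cited for this article:

```latex
% Spin rule: electronic transitions must conserve total spin.
% Laporte rule: in a centrosymmetric (e.g. octahedral) complex,
% g <-> g transitions such as d-d excitations are forbidden.
\[
\Delta S = 0 \qquad \text{and} \qquad g \not\leftrightarrow g .
\]
% For high-spin d^5 Fe(III), every d-d excitation requires pairing two
% electrons (\Delta S \neq 0) and is g -> g, so both rules are violated
% and the corresponding absorption bands are very weak.
```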
Structure
Iron(III) chloride can exist as an anhydrous material and a series of hydrates, which results in distinct structures.
Anhydrous
The anhydrous compound is a hygroscopic crystalline solid with a melting point of 307.6 °C. The colour depends on the viewing angle: by reflected light, the crystals appear dark green, but by transmitted light, they appear purple-red. Anhydrous iron(III) chloride adopts the BiI3 structure, with octahedral Fe(III) centres interconnected by two-coordinate chloride ligands.
Iron(III) chloride has a relatively low melting point and boils at around 315 °C. The vapor consists of the dimer Fe2Cl6, much like aluminium chloride. This dimer dissociates into monomeric FeCl3 (with D3h point group molecular symmetry) at higher temperatures, in competition with its reversible decomposition to give iron(II) chloride and chlorine gas.
Hydrates
Ferric chloride forms hydrates upon exposure to water, reflecting its Lewis acidity. All hydrates exhibit deliquescence, meaning that they become liquid by absorbing moisture from the air. Hydration invariably gives derivatives of aquo complexes with the formula [FeCl2(H2O)4]+. This cation can adopt either trans or cis stereochemistry, reflecting the relative location of the chloride ligands on the octahedral Fe center. Four hydrates have been characterized by X-ray crystallography: the dihydrate FeCl3·2H2O, the disesquihydrate FeCl3·2.5H2O, the trisesquihydrate FeCl3·3.5H2O, and finally the hexahydrate FeCl3·6H2O. These species differ with respect to the stereochemistry of the octahedral iron cation, the identity of the anions, and the presence or absence of water of crystallization. The structural formulas are [FeCl2(H2O)4][FeCl4], [FeCl2(H2O)4][FeCl4]·H2O, [FeCl2(H2O)4][FeCl4]·3H2O, and [FeCl2(H2O)4]Cl·2H2O. The first three members of this series have the tetrahedral tetrachloroferrate ([FeCl4]−) anion.
Solution
Like the solid hydrates, aqueous solutions of ferric chloride also consist of the octahedral complex [FeCl2(H2O)4]+ of unspecified stereochemistry. Detailed speciation of aqueous solutions of ferric chloride is challenging because the individual components do not have distinctive spectroscopic signatures. Iron(III) complexes, with a high-spin d5 configuration, are kinetically labile, which means that ligands rapidly dissociate and reassociate. A further complication is that these solutions are strongly acidic, as expected for aquo complexes of a tricationic metal. Iron aquo complexes are prone to olation, the formation of polymeric oxo derivatives. Dilute solutions of ferric chloride produce soluble nanoparticles with molecular weights of roughly 10^4, which exhibit "aging", i.e., their structures change or evolve over the course of days. The polymeric species formed by the hydrolysis of ferric chlorides are key to the use of ferric chloride for water treatment.
In contrast to the complicated behavior of its aqueous solutions, solutions of iron(III) chloride in diethyl ether and tetrahydrofuran are well-behaved. Both ethers form 1:2 adducts of the general formula FeCl3(ether)2. In these complexes, the iron is pentacoordinate.
Preparation
Several hundred tons of anhydrous iron(III) chloride are produced annually. The principal method, called direct chlorination, uses scrap iron as a precursor:
2 Fe + 3 Cl2 → 2 FeCl3
The reaction is conducted at several hundred degrees such that the product is gaseous. Using excess chlorine guarantees that the intermediate ferrous chloride is converted to the ferric state. A similar but laboratory-scale process also has been described.
Aqueous solutions of iron(III) chloride are also produced industrially from a number of iron precursors, including iron oxides:
Fe2O3 + 6 HCl → 2 FeCl3 + 3 H2O
In a complementary route, iron metal can be oxidized by hydrochloric acid, followed by chlorination of the resulting ferrous chloride:
Fe + 2 HCl → FeCl2 + H2
2 FeCl2 + Cl2 → 2 FeCl3
A number of variables apply to these processes, including the oxidation of iron by ferric chloride and the hydration of intermediates. Hydrates of iron(III) chloride do not readily yield anhydrous ferric chloride: attempted thermal dehydration instead yields hydrochloric acid and iron oxychloride. In the laboratory, hydrated iron(III) chloride can be converted to the anhydrous form by treatment with thionyl chloride or trimethylsilyl chloride, for example:
FeCl3·6H2O + 6 SOCl2 → FeCl3 + 6 SO2 + 12 HCl
Reactions
Having a high-spin d5 electronic configuration, iron(III) chlorides are labile, meaning that their Cl− and H2O ligands exchange rapidly with free chloride and water. In contrast to their kinetic lability, iron(III) chlorides are thermodynamically robust, as reflected by the vigorous methods applied to their synthesis, as described above.
Anhydrous FeCl3
Aside from lability, which applies to anhydrous and hydrated forms, the reactivity of anhydrous ferric chloride reveals two trends: It is a Lewis acid and an oxidizing agent.
Reactions of anhydrous iron(III) chloride reflect its description as both oxophilic and a hard Lewis acid. Myriad manifestations of the oxophilicity of iron(III) chloride are available. When heated with iron(III) oxide at 350 °C, it reacts to give iron oxychloride:
FeCl3 + Fe2O3 → 3 FeOCl
Alkali metal alkoxides react to give the iron(III) alkoxide complexes. These products have more complicated structures than anhydrous iron(III) chloride. In the solid phase, a variety of multinuclear complexes have been described for the nominal stoichiometric reaction between FeCl3 and sodium ethoxide.
Iron(III) chloride forms 1:2 adducts with Lewis bases such as triphenylphosphine oxide, e.g., FeCl3(OPPh3)2. The related 1:2 complex has been crystallized from ether solution.
Iron(III) chloride also reacts with tetraethylammonium chloride to give the yellow salt of the tetrachloroferrate ion ([FeCl4]−). Similarly, combining FeCl3 with NaCl and KCl gives NaFeCl4 and KFeCl4, respectively.
In addition to these simple stoichiometric reactions, the Lewis acidity of ferric chloride enables its use in a variety of acid-catalyzed reactions as described below in the section on organic chemistry.
In terms of its being an oxidant, iron(III) chloride oxidizes iron powder to form iron(II) chloride via a comproportionation reaction:
Fe + 2 FeCl3 → 3 FeCl2
A traditional synthesis of anhydrous ferrous chloride is the reduction of FeCl3 with chlorobenzene:
2 FeCl3 + C6H5Cl → 2 FeCl2 + C6H4Cl2 + HCl
Iron(III) chloride releases chlorine gas when heated above 160 °C, generating ferrous chloride:
2 FeCl3 → 2 FeCl2 + Cl2
To suppress this reaction, the preparation of iron(III) chloride requires an excess of chlorinating agent, as discussed above.
Hydrated FeCl3
Unlike the anhydrous material, hydrated ferric chloride is not a particularly strong Lewis acid since water ligands have quenched the Lewis acidity by binding to Fe(III).
Like the anhydrous material, hydrated ferric chloride is oxophilic. For example, oxalate salts react rapidly with aqueous iron(III) chloride to give [Fe(C2O4)3]3−, known as ferrioxalate. Other carboxylate sources, e.g., citrate and tartrate, bind as well to give carboxylate complexes. The affinity of iron(III) for oxygen ligands was the basis of qualitative tests for phenols. Although superseded by spectroscopic methods, the ferric chloride test is a traditional colorimetric test. The affinity of iron(III) for phenols is exploited in the Trinder spot test.
Aqueous iron(III) chloride serves as a one-electron oxidant, as illustrated by its reaction with copper(I) chloride to give copper(II) chloride and iron(II) chloride:
FeCl3 + CuCl → FeCl2 + CuCl2
This fundamental reaction is relevant to the use of ferric chloride solutions in etching copper.
Organometallic chemistry
The interaction of anhydrous iron(III) chloride with organolithium and organomagnesium compounds has been examined often. These studies are enabled by the solubility of FeCl3 in ethereal solvents, which avoids the possibility of hydrolysis of the nucleophilic alkylating agents. Such studies may be relevant to the mechanism of FeCl3-catalyzed cross-coupling reactions. The isolation of organoiron(III) intermediates requires low-temperature reactions, lest the [FeR4]− intermediates degrade. Using methylmagnesium bromide as the alkylating agent, salts of [Fe(CH3)4]− have been isolated. Illustrating the sensitivity of these reactions, methyllithium reacts with iron(III) chloride to give lithium tetrachloroferrate(II), Li2FeCl4.
To a significant extent, iron(III) acetylacetonate and related beta-diketonate complexes are more widely used than FeCl3 as ether-soluble sources of ferric ion. These diketonate complexes have the advantages that they do not form hydrates, unlike iron(III) chloride, and they are more soluble in relevant solvents.
Cyclopentadienyl magnesium bromide undergoes a complex reaction with iron(III) chloride, resulting in ferrocene.
This conversion, although not of practical value, was important in the history of organometallic chemistry where ferrocene is emblematic of the field.
Uses
Water treatment
The largest applications of iron(III) chloride are sewage treatment and drinking water production. By forming highly dispersed networks of Fe-O-Fe containing materials, ferric chlorides serve as coagulants and flocculants. In this application, an aqueous solution of FeCl3 is treated with base to form a floc of iron(III) hydroxide (Fe(OH)3), also formulated as FeO(OH) (ferrihydrite). This floc facilitates the separation of suspended materials, clarifying the water.
Iron(III) chloride is also used to remove soluble phosphate from wastewater. Iron(III) phosphate is insoluble and thus precipitates as a solid. One potential advantage of its use in water treatment, is that the ferric ion oxidizes (deodorizes) hydrogen sulfide.
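To make the dosing arithmetic behind phosphate removal concrete, the sketch below estimates a ferric chloride dose from the idealized 1:1 precipitation Fe3+ + PO4(3−) → FePO4(s); the function name, the excess factor, and the example concentration are hypothetical illustrations, since real plants dose above stoichiometry and site-specifically.

```python
# Back-of-the-envelope dosing estimate for phosphate removal with FeCl3,
# assuming the idealized 1:1 precipitation  Fe3+ + PO4(3-) -> FePO4(s).
# Illustrative only: real treatment plants dose above stoichiometry.

M_FECL3 = 55.85 + 3 * 35.45   # g/mol for FeCl3 (Fe + 3 Cl)
M_P = 30.97                   # g/mol for phosphorus

def fecl3_dose_mg_per_l(p_mg_per_l: float, excess_factor: float = 1.0) -> float:
    """FeCl3 (mg/L) needed to precipitate a phosphate load given as mg/L of P."""
    mmol_p = p_mg_per_l / M_P            # mmol/L of phosphorus
    return mmol_p * M_FECL3 * excess_factor

# Example: 5 mg/L of P at exact stoichiometry -> roughly 26 mg/L of FeCl3.
print(f"{fecl3_dose_mg_per_l(5.0):.1f} mg/L FeCl3")
```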
Etching and metal cleaning
It is also used as a leaching agent in chloride hydrometallurgy, for example in the production of Si from FeSi (Silgrain process by Elkem).
In another commercial application, a solution of iron(III) chloride is useful for etching copper according to the following equation:
Cu + 2 FeCl3 → CuCl2 + 2 FeCl2
The soluble copper(II) chloride is rinsed away, leaving a copper pattern. This chemistry is used in the production of printed circuit boards (PCB).
Iron(III) chloride is also used by hobbyists to etch metallic objects such as knife blades.
Organic chemistry
In industry, iron(III) chloride is used as a catalyst for the reaction of ethylene with chlorine, forming ethylene dichloride (1,2-dichloroethane):
H2C=CH2 + Cl2 → ClCH2CH2Cl
Ethylene dichloride is a commodity chemical, which is mainly used for the industrial production of vinyl chloride, the monomer for making PVC.
Illustrating its use as a Lewis acid, iron(III) chloride catalyses electrophilic aromatic substitution and chlorinations. In this role, its function is similar to that of aluminium chloride. In some cases, mixtures of the two are used.
Organic synthesis research
Although iron(III) chlorides are seldom used in practical organic synthesis, they have received considerable attention as reagents because they are inexpensive, earth-abundant, and relatively nontoxic. Many experiments probe both their redox activity and their Lewis acidity. For example, iron(III) chloride oxidizes naphthols to naphthoquinones. 3-Alkylthiophenes are polymerized to polythiophenes upon treatment with ferric chloride. Iron(III) chloride has also been shown to promote C-C coupling reactions.
Several reagents have been developed based on supported iron(III) chloride. On silica gel, the anhydrous salt has been applied to certain dehydration and pinacol-type rearrangement reactions. A similar reagent but moistened induces hydrolysis or epimerization reactions. On alumina, ferric chloride has been shown to accelerate ene reactions.
When pretreated with sodium hydride, iron(III) chloride gives a hydride reducing agent that converts alkenes and ketones into alkanes and alcohols, respectively.
Histology
Iron(III) chloride is a component of useful stains, such as Carnoy's solution, a histological fixative with many applications. Also, it is used to prepare Verhoeff's stain.
Natural occurrence
Like many metal halides, iron(III) chloride occurs naturally as a trace mineral. Its rare mineral form, molysite, is usually associated with volcanoes and fumaroles.
FeCl3-based aerosols are produced by a reaction between iron-rich dust and hydrochloric acid from sea salt. This iron salt aerosol causes about 1–5% of the naturally occurring oxidation of methane and is thought to have a range of cooling effects; thus, it has been proposed as a catalyst for atmospheric methane removal.
The clouds of Venus are hypothesized to contain approximately 1% iron(III) chloride dissolved in sulfuric acid.
Safety
Iron(III) chlorides are widely used in the treatment of drinking water, so at low concentrations they pose few problems as poisons. Nonetheless, anhydrous iron(III) chloride, as well as concentrated aqueous solutions, is highly corrosive and must be handled with proper protective equipment.
| Physical sciences | Halide salts | Chemistry |
446558 | https://en.wikipedia.org/wiki/Solo%20Man | Solo Man | Solo Man (Homo erectus soloensis) is a subspecies of H. erectus that lived along the Solo River in Java, Indonesia, about 117,000 to 108,000 years ago in the Late Pleistocene. This population is the last known record of the species. It is known from 14 skullcaps, two tibiae, and a piece of the pelvis excavated near the village of Ngandong, and possibly three skulls from Sambungmacan and a skull from Ngawi depending on classification. The Ngandong site was first excavated from 1931 to 1933 under the direction of Willem Frederik Florus Oppenoorth, Carel ter Haar, and Gustav Heinrich Ralph von Koenigswald, but further study was set back by the Great Depression, World War II and the Indonesian War of Independence. In accordance with historical race concepts, Indonesian H. erectus subspecies were originally classified as the direct ancestors of Aboriginal Australians, but Solo Man is now thought to have no living descendants because the remains far predate modern human immigration into the area, which began roughly 55,000 to 50,000 years ago.
The Solo Man skull is oval-shaped in top view, with heavy brows, inflated cheekbones, and a prominent bar of bone wrapping around the back. The brain volume was quite large, measuring from , which is within the range of variation for present-day modern humans. One potentially female specimen may have been tall and weighed ; males were probably much bigger than females. Solo Man was in many ways similar to the Java Man (H. e. erectus) that had earlier inhabited Java, but was far less archaic.
Solo Man likely inhabited an open woodland environment much cooler than present-day Java, along with elephants, tigers, wild cattle, water buffalo, tapirs, and hippopotamuses, among other megafauna. They manufactured simple flakes and choppers (hand-held stone tools), and possibly spears or harpoons from bones, daggers from stingray stingers, as well as bolas or hammerstones from andesite. They may have descended from or were at least closely related to Java Man. The Ngandong specimens likely died during a volcanic eruption. The species probably went extinct with the takeover of tropical rainforest and loss of preferred habitat, beginning by 125,000 years ago. The skulls sustained damage, but it is unclear if it resulted from an assault, cannibalism, the volcanic eruption, or the fossilisation process.
Research history
Despite what English naturalist Charles Darwin had hypothesised in his 1871 book Descent of Man, many late-19th century evolutionary naturalists postulated that Asia, not Africa, was the birthplace of humankind as it is midway between Europe and America, providing optimal dispersal routes throughout the world (the Out of Asia theory). Among these was German naturalist Ernst Haeckel who argued that the first human species (which he named "Homo primigenius") evolved on the now-disproven hypothetical continent "Lemuria" in what is now Southeast Asia, from a genus he termed "Pithecanthropus" ("ape-man"). "Lemuria" had supposedly sunk below the Indian Ocean, so no fossils could be found to prove this. Nevertheless, Haeckel's model inspired Dutch scientist Eugène Dubois to join the Royal Netherlands East Indies Army (KNIL) and search for his "missing link" in the Indonesian Archipelago. On Java, he found a skullcap and a femur (Java Man) dating to the late Pliocene or early Pleistocene at the Trinil site along the Solo River, which he named "P." erectus (using Haeckel's hypothetical genus name) in 1893. He attempted unsuccessfully to convince the European scientific community that he had found an upright-walking ape-man. They largely dismissed his findings as a malformed non-human ape.
The "apeman of Java" nonetheless stirred up academic interest and, to find more remains, the Prussian Academy of Sciences in Berlin tasked German zoologist Emil Selenka with continuing the excavation of Trinil. Following his death in 1907, excavation was carried out by his wife and fellow zoologist Margarethe Lenore Selenka. Among the members was Dutch geologist Willem Frederik Florus Oppenoorth. The yearlong expedition was unfruitful, but the Geological Survey of Java continued to sponsor the excavation along the Solo River. Some two decades later, the Survey funded several expeditions to update maps of the island. Oppenoorth was made the head of the Java Mapping Program in 1930. One of their missions was to firmly distinguish Tertiary and Quaternary deposits, among the relevant sites a bed dating to the Pleistocene discovered by Dutch geologist Carel ter Haar in 1931, downriver from the Trinil site, near the village of Ngandong.
From 1931 to 1933, 12 human skull pieces (including well-preserved skullcaps), as well as two right tibiae (shinbones), one of which was essentially complete, were recovered under the direction of Oppenoorth, ter Haar, and German-Dutch geologist Gustav Heinrich Ralph von Koenigswald. Midway through excavation, Oppenoorth retired from the Survey and returned to the Netherlands, replaced by Polish geologist Józef Zwierzycki in 1933. At the same time, because of the Great Depression, the Survey's focus shifted to economically relevant geology, namely petroleum deposits, and the excavation of Ngandong ceased completely. In 1934, ter Haar published important summaries of the Ngandong operations before contracting tuberculosis. He returned to the Netherlands and died two years later. Von Koenigswald, who was hired principally to study Javan mammals, was fired in 1934. After much lobbying by Zwierzycki in the Survey, and after receiving funding from the Carnegie Institution for Science, von Koenigswald regained his position in 1937, but was too preoccupied with the Sangiran site to continue research at Ngandong.
In 1935, the Solo Man remains were transported to Batavia (today, Jakarta, Java, Indonesia) in the care of local university professor Willem Alphonse Mijsberg, with the hope he would take over study of the specimens. Before he had the opportunity, the fossils were moved to Bandung, West Java in 1942 because of the Japanese occupation of the Dutch East Indies. Japanese forces interned von Koenigswald for 32 months. At the cessation of the war, he was released, but the Indonesian War of Independence erupted. Jewish-German anthropologist Franz Weidenreich (who fled China before the Japanese invasion in 1941) arranged with the Rockefeller Foundation and The Viking Fund for von Koenigswald, his wife Luitgarde, and the Javan human remains (including Solo Man) to come to New York. Von Koenigswald and Weidenreich studied the material at the American Museum of Natural History until Weidenreich's death in 1948 (leaving behind a monograph on Solo Man posthumously published in 1951). In his 1956 book Meeting Prehistoric Men, von Koenigswald included a 14-page account of the Ngandong project with several unpublished results. The Solo Man remains came to be stored at Utrecht University, the Netherlands. In 1967, von Koenigswald gave the material to Teuku Jacob for his doctoral research. Jacob oversaw the excavation of Ngandong from 1976 to 1978 and recovered two more skull specimens and a pelvic fragment. In 1978, von Koenigswald returned the material to Indonesia, and the Solo Man remains were moved to the Gadjah Mada University, Special Region of Yogyakarta (south-central Java).
The specimens are:
Skull I, an almost complete skullcap probably belonging to an elderly female;
Skull II, a frontal bone probably belonging to a three to seven-year-old child;
Skull III, a warped skullcap probably belonging to an elderly individual;
Skull IV, a skullcap probably belonging to a middle-aged female;
Skull V, a probable male skullcap—indicated by its great length of ;
Skull VI, an almost complete skullcap probably belonging to an adult female;
Skull VII, a right parietal bone fragment probably belonging to a young, possibly female, individual;
Skull VIII, both parietal bones (separated) possibly belonging to a young male;
Skull IX, a skullcap missing the base probably belonging to an elderly individual (the small size is consistent with a female, but the heaviness is consistent with a male);
Skull X, a shattered skullcap probably belonging to a robust elderly female;
Skull XI, a nearly complete skullcap;
Tibia A, a few fragments of the shaft, measuring in diameter at the mid-shaft, probably belonging to an adult male;
Tibia B, a nearly complete right tibia measuring in length and in diameter at the mid-shaft, probably belonging to an adult female;
Ngandong 15, a partial skullcap;
Ngandong 16, a left parietal fragment; and
Ngandong 17, a left acetabulum (on the pelvis which forms part of the hip joint).
Age and taphonomy
The location of these fossils in the Solo terrace at the time of discovery was poorly documented. Oppenoorth, ter Haar, and von Koenigswald were only on site for 24 days of the 27 months of operation as they needed to oversee other Tertiary sites for the Survey. They left their geological assistants — Samsi and Panudju — to oversee the dig; their records are now lost. The Survey's site map remained unpublished until 2010 (over 75 years later) and is of limited use now, so the taphonomy and geological age of Solo Man have been contentious matters. All 14 specimens were reported to have been found in the upper section of Layer II (of six layers), which is a -thick stratum with gravelly sand and volcaniclastic hypersthene andesite. They are thought to have been deposited at around the same time, probably in a now-dry arm of the Solo River, about above the modern river. The site is about above sea level.
Volcaniclastic rock indicates deposition occurred soon after a volcanic eruption. Because of the sheer volume of fossils, humans and animals may have concentrated in great numbers in the valley upstream from the site due to the eruption or extreme drought. The ash would have poisoned the vegetation, or at least impeded its growth, leading to starvation and death among herbivores and humans, accumulating a mass of carcasses decomposing over several months. A lack of carnivore damage may indicate sufficient feeding was possible without having to resort to crunching through the bone. When the monsoon season came, lahars streaming from the volcano through the river channels swept the carcasses to the Ngandong site, where they and other debris created a jam because of the channel narrowing there. The H. erectus fossils from Sambungmacan, also along the Solo River, were possibly deposited in the same event.
The dating attempts are:
In 1932, based on the site's height above the present-day river, Oppenoorth suggested Solo Man dated to the Eemian interglacial, which at the time was roughly constrained to 150 to 100 thousand years ago from the Middle/Late Pleistocene transition. Later biochronological studies (using the animal remains to constrain the age) within the next few years by Oppenoorth in 1932, von Koenigswald in 1934, and ter Haar in 1936 agreed with a Late Pleistocene date.
The Solo Man remains were first radiometrically dated in 1988 and again in 1989, using uranium–thorium dating, to 200 to 30 thousand years ago, a wide error range.
In 1996, Solo Man teeth were dated, using electron spin resonance dating (ESR) and uranium–thorium isotope-ratio mass spectrometry, to 53.3 to 27 thousand years ago; this would mean Solo Man outlasted continental H. erectus by at minimum 250,000 years and was contemporaneous with modern humans in Southeast Asia, who immigrated roughly 55 to 50 thousand years ago.
In 2008, gamma spectroscopy on three of the skulls showed they experienced uranium leaching, and the Solo Man remains were re-dated to roughly 70 to 40 thousand years ago. This would still make it possible Solo Man was contemporaneous with modern humans.
In 2011, argon–argon dating of pumice hornblende yielded a maximum age of 546 ± 12 thousand years ago, and ESR and uranium–thorium dating of a mammal bone just downstream at the Jigar I site a minimum age of 143 to 77 thousand years ago. This extended interval would make it possible Solo Man was contemporaneous with continental H. erectus, long before modern humans dispersed across the continent.
In 2020, the first comprehensive chronology of the Ngandong site was published which found the Solo River was diverted through the site 500,000 years ago; the Solo terrace was deposited over 316 to 31 thousand years ago; the Ngandong terrace 141 to 92 thousand years ago; and the H. erectus bone bed 117 to 108 thousand years ago. This would mean Solo Man is indeed the last known H. erectus population and did not interact with modern humans.
Classification
Multiregional model
The racial classification of Aboriginal Australians, because of the robustness of the skull compared to that of other modern-day populations, has historically been a complicated question for European science since Johann Friedrich Blumenbach (the founder of physical anthropology) introduced the topic in 1795 in his De Generis Humani Varietate Nativa ("On the Natural History of Mankind"). Following the conception of evolution by Darwin, English anthropologist Thomas Henry Huxley suggested an ancestor–descendant relationship between European Neanderthals and Aboriginal Australians in 1863, which was furthered by later racial anthropologists until the discovery of Indonesian archaic humans.
In 1932, Oppenoorth preliminarily drew parallels between the Solo Man skull and that of Rhodesian Man from Africa, Neanderthals, and modern day Aboriginal Australians. At the time, humans were generally believed to have originated in Central Asia, as championed primarily by American palaeontologist Henry Fairfield Osborn and his protégé William Diller Matthew. They believed Asia was the "mother of continents" and the rising of the Himalayas and Tibet and subsequent drying of the region forced human ancestors to become terrestrial and bipedal. They maintained that populations which retreated to the tropics – namely Dubois's Java Man and the "Negroid race" — substantially regressed (degeneration theory). They also rejected Raymond Dart's South African Taung child (Australopithecus africanus) as a human ancestor, favouring the hoax Piltdown Man from Britain. At first, Oppenoorth believed the Ngandong material represented an Asian type of Neanderthal which was more closely allied with the Rhodesian Man (also considered a Neanderthal type), and gave it a generic distinction as "Javanthropus soloensis". Dubois considered Solo Man to be more or less identical to the East Javan Wajak Man (now classified as a modern human), so Oppenoorth subsequently began using the name "Homo (Javanthropus) soloensis". Oppenoorth hypothesised that the Java Man evolved in Indonesia and was the predecessor of modern day Aboriginal Australians, Solo Man being a transitional fossil. He considered Rhodesian Man a member of this same group. As for the Chinese Peking Man (now H. e. pekinensis), he believed it dispersed west and gave rise to the Neanderthals.
Thus, the ancient Java Man, Solo Man, and Rhodesian Man were commonly grouped together in the "Pithecanthropoid-Australoid" lineage. "Australoid" includes Aboriginal Australians and Melanesians. This was an extension of the multiregional origin of modern humans championed by Weidenreich and American racial anthropologist Carleton S. Coon, who believed that all modern races and ethnicities (which were classified into separate subspecies or even species until the mid-20th century) evolved independently from a local archaic human species (polygenism). Aboriginal Australians were considered the most primitive race alive. In the 1950s, German evolutionary biologist Ernst Mayr entered the field of palaeoanthropology, and, surveying a "bewildering diversity of names", decided to define only three species of Homo: "H. transvaalensis" (the australopithecines), H. erectus (including Solo Man and several putative African and Asian taxa), and Homo sapiens (including anything younger than H. erectus, such as modern humans and Neanderthals). Mayr defined them as a sequential lineage, each species evolving into the next (chronospecies). Though Mayr later changed his opinion on the australopithecines (recognising Australopithecus), and a few species have since been named or regained some acceptance, his more conservative view of archaic human diversity became widely adopted in the subsequent decades.
Though Mayr did not expand upon the subspecies of H. erectus, subsequent authors began formally sinking species from all parts of the Old World into it. Solo Man was placed into the "Neanderthal/Neanderthalien/Neanderthaloid group" by Weidenreich in the 1940s, which he reserved for specimens apparently transitional between H. erectus and H. sapiens. The group could also be classified under the now-defunct genus "Palaeoanthropus". Solo Man was first classified as a subspecies of H. erectus by Coon in his 1962 book The Origin of Races.
Assimilation model
The claim that Aboriginal Australians were descended from Asian H. erectus was expanded upon in the 1960s and 1970s as some of the oldest known (modern) human fossils were being recovered from Australia, primarily under the direction of Australian anthropologist Alan Thorne. He noted some populations were prominently more robust than others, so he suggested Australia was colonised in two waves ("di-hybrid model"): the first wave being highly robust and descending from nearby H. erectus, and the second wave more gracile (less robust) and descending from anatomically modern East Asians (who, in turn, descended from Chinese H. erectus). It was subsequently discovered that some of the more robust specimens are geologically younger than the gracile ones.
By the 1980s, as African species like A. africanus became widely accepted as human ancestors and race became less salient in anthropology, the Out of Africa theory overturned the Out of Asia and multiregional models. The multiregional model was consequently reworked into local populations of archaic humans having interbred and contributed at least some ancestry to modern populations in their respective regions, otherwise known as the assimilation model. Solo Man fits into this by having hybridised with the fully modern ancestors of Aboriginal Australians travelling south through Southeast Asia. The assimilation model was not ubiquitously supported.
In 2006, Australian palaeoanthropologist Steve Webb speculated instead that Solo Man was the first human species to reach Australia, and more robust modern Australian specimens represent hybrid populations.
Present
The date of 117 to 108 thousand years ago for Solo Man, predating modern human dispersal through Southeast Asia (and eventually into Australia), is at odds with this conclusion. Such an ancient date leaves Solo Man with no living descendants. Similarly, a 2021 genomic study looking at the genomes of over 400 modern humans (of which 200 came from Island Southeast Asia) found no evidence of any "super-archaic" (i. e. H. erectus) introgression.
Solo Man has generally been considered to have descended from Java Man (H. e. erectus, typified by the Sangiran/Trinil populations), and the three skulls from Sambungmacan and the skull from Ngawi have been assigned to H. e. soloensis or some intermediary stage between H. e. erectus and H. e. soloensis. It is largely unclear if there was gene flow from the continent. The alternate hypothesis, first proposed by Jacob in 1973, is that the Sangiran/Trinil and Ngandong/Ngawi/Sambungmacan populations were sister groups that evolved parallel to each other. If the alternate is correct, this could warrant species distinction as "H. soloensis", but the definitions of species and subspecies, especially in palaeoanthropology, are poorly drawn.
Anatomy
The identification as adult or juvenile was based on the closure of the cranial sutures, assuming they closed at a rate similar to modern humans (though they may have closed at earlier ages in H. erectus). Characteristic of H. erectus, the skull is exceedingly thick in Solo Man, ranging from double to triple what would be seen in modern humans. Male and female specimens were distinguished by assuming males were more robust than females, though both males and females are exceptionally robust compared to other Asian H. erectus. The adult skulls average in length times breadth, and are proportionally similar to that of the Peking Man but have a much larger circumference. Skull V is the longest at . For comparison, the dimensions of modern human skulls average for men and for women.
The Solo Man remains are characterised by more derived traits than more archaic Javan H. erectus, most notably a larger brain size, an elevated cranial vault, reduced postorbital constriction, and less developed brow ridges. They still closely resemble earlier H. erectus. Like Peking Man, there was a slight sagittal keel running across the midline of the skull. Compared to other Asian H. erectus, the forehead is proportionally low and also has a low angle of inclination. The brow ridges do not form a continuous bar like in Peking Man, but curve downwards at the midpoint, forming a nasal bridge. The brows are quite thick, especially at the lateral ends (nearest the edge of the face). Like Peking Man, the frontal sinuses are confined to between the eyes rather than extending into the brow region. Compared to Neanderthals and modern humans, the area the temporal muscle would have covered is rather flat. The brow ridges merge into markedly thickened cheek bones. The skull is phenozygous, in that the skullcap is proportionally narrow compared to the cheekbones, so that the latter are still visible when looking down at the skull in top-view. The squamous part of the temporal bone is triangular like that of Peking Man, and the infratemporal crest is quite sharp. Like earlier Javan H. erectus, the inferior and superior temporal lines (on the parietal bone) diverge towards the back of the skull.
At the back of the skull, there is a sharp, thick occipital torus (a projecting bar of bone) which marks a clear separation between the occipital and nuchal planes. The occipital torus projects the most at the part corresponding to the external occipital protuberance in modern humans. The base of the temporal bone is consistent with Java Man and Peking Man rather than Neanderthals and modern humans. Unlike Neanderthals and modern humans, there is a defined bony pyramid structure near the root of the pterygoid bone. The mastoid part of the temporal bone at the base of the skull notably juts out. The occipital condyles (which connect the skull to the spine) are proportionally small compared to the foramen magnum (where the spinal cord passes into the skull). Large, irregular bony projections lie directly behind the occipital condyles.
The brain volumes of the six Ngandong specimens for which the metric is calculable range from . The Ngawi I skull measures ; and the three Sambungmacan skulls . This makes for an average of over . Overall, Asian H. erectus are big-brained, averaging roughly . For comparison, a 1955 survey of 63 Aboriginal Australians reported a brain volume range of ; that is, Asian H. erectus brain volume fits within the modern human range of variation. The base of the braincase, and thus the brain, seems to have been flat rather than curved. The sella turcica at the base of the skull, near the pituitary gland, is much larger than that of modern humans, which Weidenreich in 1951 cautiously attributed to an enlarged gland which caused the extraordinary thickening of the bones.
Of the two known tibiae, Tibia A is much more robust than Tibia B and is consistent overall with Neanderthal tibiae. Like other H. erectus, the tibiae are thick and heavy. Based on the reconstructed length of , Tibia B may have belonged to a tall, individual. Tibia A is assumed to have belonged to a larger individual. Asian H. erectus, for which height estimates are taken (a rather small sample size), typically range from , with Indonesian H. erectus in tropical environments typically scoring on the higher end, and continental specimens in colder latitudes on the lower end. The single pelvic fragment from Ngandong has not yet been described formally.
Culture
Palaeohabitat
At the species level, the Ngandong fauna is similar overall to the older Kedung Brubus fauna roughly 800 to 700 thousand years ago, a time of mass immigration of large mammal species to Java, including Asian elephants and Stegodon. Other Ngandong fauna include the tiger Panthera tigris soloensis, Malayan tapir, the hippo Hexaprotodon, sambar deer, water buffalo, the cow Bos palaesondaicus, pigs, and crab-eating macaque. These are consistent with an open woodland environment. The presence of the common crane in the nearby contemporaneous Watualang site could indicate much cooler conditions than today. The driest conditions probably corresponded to the glacial maximum roughly 135,000 years ago, exposing the Sunda shelf and connecting the major Indonesian islands to the continent. By 125,000 years ago, the climate became much wetter, making Java an island, and allowing for the expansion of tropical rainforests. This caused the succession of the Ngandong fauna by the Punung fauna, which represents the modern day animal assemblage of Java, though more typical Punung fauna — namely orangutans and gibbons — probably could not penetrate the island until it was reconnected to the continent after 80,000 years ago. H. erectus, a specialist in woodland and savannah biomes, likely went extinct with the loss of the last open-habitat refugia.
H. e. soloensis was the last population of a long occupation history of the island of Java by H. erectus, beginning 1.51 to 0.93 million years ago at the Sangiran site, continuing 540 to 430 thousand years ago at the Trinil site, and finally 117 to 108 thousand years ago at Ngandong. If the date is correct for Solo Man, then they would represent a terminal population of H. erectus which sheltered in the last open-habitat refuges of East Asia before the rainforest takeover. Before the immigration of modern humans, Late Pleistocene Southeast Asia was also home to H. floresiensis, endemic to the island of Flores, Indonesia, and H. luzonensis, endemic to the island of Luzon, the Philippines. Genetic analysis of present-day Southeast Asian populations indicates the widespread dispersal of the Denisovans (a species currently recognisable only by their genetic signature) across Southeast Asia, whereupon they interbred with immigrating modern humans between 45.7 and 29.8 thousand years ago. A 2021 genomic study indicates that, aside from the Denisovans, modern humans never interbred with any of these endemic human species, unless the offspring were unviable or the hybrid lineages have since died out.
Judging by the sheer number of specimens deposited at Ngandong at the same time, there may have been a sizeable population of H. e. soloensis before the volcanic eruption which resulted in their interment, but population size is difficult to estimate with certainty. The Ngandong site was some distance away from the northern coast of the island, but it is unclear where the southern shoreline and the mouth of the Solo River would have been.
Technology
In 1936, while studying photos taken by Dutch archaeologist , Oppenoorth made note of several broken animal bone remains, most notably damage to a large tiger skull and some deer antlers, which he considered evidence of bone technology. He suggested some deer antlers had a carved bird skull hafted onto the end to be used as axes. In 1951, Weidenreich voiced his scepticism—as the bones were invariably damaged by the river, and perhaps crocodiles and other natural processes—arguing instead that none of the bones reliably show any evidence of human modification. Oppenoorth further suggested a long piece of bone carved with an undulating pattern on both sides was used as a harpoon, similar to harpoons manufactured in the Magdalenian of Europe, but Weidenreich interpreted it as a spearhead. Weidenreich made note of anomalous inland stingray stingers at Ngandong, which he supposed were collected by Solo Man for use as daggers or arrowheads, similar to some recent South Pacific peoples. It is unclear if this apparent bone technology can be associated with Solo Man or later modern human activity, though the Trinil H. e. erectus population seems to have worked with such material, manufacturing scrapers from Pseudodon shells and possibly opening them up with shark teeth.
Oppenoorth also identified a perfectly round andesite stone ball from Ngandong, a common occurrence in the Solo Valley, ranging in diameter from . As well, similar balls have been identified in contemporaneous and younger European Mousterian and African Middle Stone Age sites, as ancient as African Acheulean sites (notably Olorgesailie, Kenya). On Java, they have been found at Watualang (contemporaneous with Ngandong) and Sangiran. Traditionally, these have been interpreted as bolas (tied together in twos or threes and flung as a hunting weapon), but also individually thrown projectiles, club heads, or plant-processing or bone-breaking tools. In 1993, American archaeologists Kathy Schick and Nicholas Toth demonstrated the spherical shape could be reproduced simply if the stone is used as a hammer for an extended period.
In 1938, von Koenigswald returned to the Ngandong site along with archaeologists Helmut de Terra, Hallam L. Movius and Pierre Teilhard de Chardin to collect lithic cores and flakes (i.e. stone tools). Because of wear caused by the river, it is difficult to identify with confidence that some of these rocks are actual tools. They are small and simple, usually smaller than and made most commonly of chalcedony (but also chert and jasper) washed up by the river. A few volcanic rocks and wood fragments seem to have been modified into heavy duty chopping tools. In 1973, the nearby Sambungmacan site yielded a unifacial chopper (as well as a flake) made of andesite. Because of how few tools have been recovered, it is impossible to categorise Solo Man into any distinct industry. Like many other Southeast Asian sites predating modern humans, the Ngandong site lacks sophisticated choppers, hand axes, or any other complex chopping tool characteristic of the Acheulean of Western Eurasian and African sites. In 1948, Movius suggested this was because of a great technological divide between western and eastern H. erectus (the "Movius Line") caused by a major difference in habitat (open area vs. tropical rainforest), as the chopping tools are generally interpreted as evidence of big game hunting, which he believed was only possible when humans spread out onto open plains.
Though a strict "Movius Line" is not well supported anymore with the discovery of some hand axe technology in Middle Pleistocene East Asia, handaxes are still conspicuously rare and crude in East Asia compared to western contemporaries. This has been explained as: the Acheulean emerged in Africa after human dispersal through East Asia (but this would require that the two populations remained separated for nearly two million years); East Asia had poorer quality raw materials, namely quartz and quartzite (but some Chinese localities produced handaxes from these materials and East Asia is not completely void of higher-quality minerals); East Asian H. erectus used biodegradable bamboo instead of stone for chopping tools (but this is difficult to test); or East Asia had a lower population density, leaving few tools behind in general (though demography is difficult to approximate in the fossil record).
Possible cannibalism
In 1951, Weidenreich and von Koenigswald made note of major injuries in Skulls IV and VI, which they believed were caused by a cutting instrument and a blunt instrument, respectively. They bear evidence of inflammation and healing, so the individuals probably survived the altercation. Weidenreich and von Koenigswald noted that only the skullcaps were found, lacking even the teeth, which is highly unusual. So, they interpreted at least Skulls IV and VI as victims of an "unsuccessful assault", and the other skulls where the base was broken out "the result of more successful attempts to slay the victims," presuming this was done by other humans to access and consume the brain. They were unsure if this was done by a neighbouring H. e. soloensis tribe, or "by more advanced human beings who would have given evidence of their 'superior' culture by slaying their more primitive fellowsman". The latter scenario had already been proposed for the Peking Man (which has similarly conspicuous pathology) by French palaeontologist Marcellin Boule in 1937. Nonetheless, Weidenreich and von Koenigswald conceded that some of the injuries could have been related to the volcanic eruption instead. Von Koenigswald suggested only skullcaps exist because Solo Man was modifying skulls into skull cups, but Weidenreich was sceptical of this as the jagged rims of especially Skulls I, V, and X are not well suited for this purpose.
Cannibalism and ritual headhunting have also been proposed for the Trinil, Sangiran, and Modjokerto sites (all in Java) based on the conspicuous lack of any remains other than the skullcap. This had been reinforced by the historic practice of headhunting and cannibalism in some modern Indonesian, Australian, and Polynesian groups, which at the time were believed to have descended from these H. erectus populations. In 1972, Jacob alternatively suggested that because the base of the skull is weaker than the skullcap, and since the remains had been transported through a river with large stone and boulders, this was a purely natural phenomenon. As for the lack of the rest of the skeleton, if tiger predation was a factor, tigers usually only leave the head since it has the least amount of meat on it. Further, the Ngandong material, especially Skulls I and IX, were damaged during excavation, cleaning, and preparation.
| Biology and health sciences | Homo | Biology |
446712 | https://en.wikipedia.org/wiki/Josephson%20effect | Josephson effect | In physics, the Josephson effect is a phenomenon that occurs when two superconductors are placed in proximity, with some barrier or restriction between them. The effect is named after the British physicist Brian Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link. It is an example of a macroscopic quantum phenomenon, where the effects of quantum mechanics are observable at ordinary, rather than atomic, scale. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements.
The Josephson effect produces a current, known as a supercurrent, that flows continuously without any voltage applied, across a device known as a Josephson junction (JJ). These consist of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-c-S).
Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. The NIST standard for one volt is achieved by an array of 20,208 Josephson junctions in series.
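As a rough illustration of how such an array realizes a voltage from a frequency, the sketch below applies the Josephson relation V = n·f/K_J (with K_J = 2e/h, the Josephson constant) to a series array; the assumption that every junction sits on its first constant-voltage (Shapiro) step, and the drive frequency derived from it, are illustrative only and not a description of NIST's actual operating configuration.

```python
# Illustrative arithmetic for a series-array Josephson voltage standard.
# Each junction driven at frequency f and biased on its n-th Shapiro step
# develops V = n * f / K_J, where K_J = 2e/h is the Josephson constant.
# The junction count comes from the text; the step index and the derived
# frequency are assumptions for illustration only.

h = 6.62607015e-34   # Planck constant, J*s (exact in the 2019 SI)
e = 1.602176634e-19  # elementary charge, C (exact in the 2019 SI)
K_J = 2 * e / h      # Josephson constant, ~4.836e14 Hz/V

N = 20208            # junctions in series (from the text)
n = 1                # assumed Shapiro step index
target_V = 1.0       # desired array voltage in volts

f = target_V * K_J / (N * n)   # microwave frequency needed per junction
v_per_junction = n * f / K_J   # voltage contributed by one junction

print(f"Josephson constant: {K_J:.6e} Hz/V")
print(f"Drive frequency for 1 V across {N} junctions: {f / 1e9:.2f} GHz")
print(f"Voltage per junction: {v_per_junction * 1e6:.2f} microvolts")
```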
History
The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors.
In 1962, Brian Josephson became interested in superconducting tunneling. He was then 23 years old and a second-year graduate student of Brian Pippard at the Mond Laboratory of the University of Cambridge. That year, Josephson took a many-body theory course with Philip W. Anderson, a Bell Labs employee on sabbatical leave for the 1961–1962 academic year. The course introduced Josephson to the idea of broken symmetry in superconductors, and he "was fascinated by the idea of broken symmetry, and wondered whether there could be any way of observing it experimentally". Josephson studied the experiments by Ivar Giaever and Hans Meissner, and theoretical work by Robert Parmenter. Pippard initially believed that the tunneling effect was possible but that it would be too small to be noticeable, but Josephson did not agree, especially after Anderson introduced him to a preprint of "Superconductive Tunneling" by Cohen, Falicov, and Phillips about the superconductor-barrier-normal metal system.
Josephson and his colleagues were initially unsure about the validity of Josephson's calculations. Anderson later remembered:
We were all—Josephson, Pippard and myself, as well as various other people who also habitually sat at the Mond tea and participated in the discussions of the next few weeks—very much puzzled by the meaning of the fact that the current depends on the phase.
After further review, they concluded that Josephson's results were valid. Josephson then submitted "Possible new effects in superconductive tunnelling" to Physics Letters in June 1962. The newer journal Physics Letters was chosen instead of the better-established Physical Review Letters due to their uncertainty about the results. John Bardeen, by then already a Nobel Prize winner, was initially publicly skeptical of Josephson's theory in 1962, but came to accept it after further experiments and theoretical clarifications. | Physical sciences | Electrical circuits | Physics |
446844 | https://en.wikipedia.org/wiki/Betula%20pendula | Betula pendula | Betula pendula, commonly known as silver birch, warty birch, European white birch, or East Asian white birch, is a species of tree in the family Betulaceae, native to Europe and parts of Asia, though in southern Europe, it is only found at higher altitudes. Its range extends into Siberia, China, and southwest Asia in the mountains of northern Turkey, the Caucasus, and northern Iran. It has been introduced into North America, where it is known as the European white birch or weeping birch and is considered invasive in some states in the United States and parts of Canada. The tree can also be found in more temperate regions of Australia.
The silver birch is a medium-sized deciduous tree that owes its common name to the white peeling bark on the trunk. The twigs are slender and often pendulous and the leaves are roughly triangular with doubly serrate margins and turn yellow and brown in autumn before they fall. The flowers are catkins and the light, winged seeds get widely scattered by the wind. The silver birch is a hardy tree, a pioneer species, and one of the first trees to appear on bare or fire-swept land. Many species of birds and animals are found in birch woodland; the tree supports a wide range of insects, and the light shade it casts allows shrubby and other plants to grow beneath its canopy. It is planted decoratively in parks and gardens and is used for forest products such as joinery timber, firewood, tanning, racecourse jumps, and brooms. Various parts of the tree are used in traditional medicine and the bark contains triterpenes, which have been shown to have medicinal properties.
Description
The silver birch typically reaches tall (exceptionally up to ), with a slender trunk usually under diameter. The bark on the trunk and branches is golden-brown at first, but later this turns to white as a result of papery tissue developing on the surface and peeling off in flakes, in a similar manner to the closely related paper birch (B. papyrifera). The bark remains smooth until the tree gets quite large, but in older trees, the bark thickens, becoming irregular, dark, and rugged. Young branches have whitish resin warts and the twigs are slender, hairless, and often pendulous. The buds are small and sticky, and development is sympodial – the terminal bud dies away and growth continues from a lateral bud. The species is monoecious with male and female catkins found on the same tree. Some shoots are long and bear the male catkins at the tip, while others are short and bear female catkins. The immature male catkins are present during the winter, but the female catkins develop in the spring, soon after the leaves unfurl.
The leaves have short, slender stalks and are long, triangular with broad, untoothed, wedge-shaped bases, slender pointed tips, and coarsely double-toothed, serrated margins. They are sticky with resin at first, but this dries as they age, leaving small, white scales. The foliage is a pale to medium green and turns yellow early in the autumn before the leaves fall. In midsummer, the female catkins mature and the male catkins expand and release pollen, and wind pollination takes place. A catkin of silver birch can produce an average of 1.66 million pollen grains. The small, 1- to 2-mm winged seeds ripen in late summer on pendulous, cylindrical catkins long and broad. The seeds are very numerous and are separated by scales, and when ripe, the whole catkin disintegrates and the seeds are spread widely by the wind.
Silver birch can easily be confused with the similar downy birch (Betula pubescens). Yet, downy birches are characterised by hairy leaves and young shoots, whereas the same parts on silver birch are hairless. The leaf base of silver birch is usually at a right angle to the stalk, while for downy birches, it is rounded. In terms of genetic structure, the trees are quite different, but they do occasionally hybridize.
Distribution and habitat
The silver birch grows naturally from western Europe eastwards to Kazakhstan, the Sakha Republic in Siberia, Mongolia, and the Xinjiang province in China, and southwards to the mountains of the Caucasus and northern Iran, Iraq, and Turkey. It is also native to northern Morocco and has become naturalised in some other parts of the world. In the southern parts of its range, it is mainly found in mountainous regions. Its light seeds are easily blown by the wind and it is a pioneer species, one of the first trees to sprout on bare land or after a forest fire. It needs plenty of light and does best on dry, acid soils and is found on heathland, mountainsides, and clinging to crags. Its tolerance of pollution makes it suitable for planting in industrial areas and exposed sites. It has been introduced into North America, where it is known as the European white birch, and is considered invasive in the states of Kentucky, Maryland, Washington, and Wisconsin. It is naturalised and locally invasive in parts of Canada.
Taxonomy
Three subspecies of silver birch are accepted:
Betula pendula subsp. pendula – Europe and eastwards to central Asia
Betula pendula subsp. mandshurica (Regel) Ashburner & McAll. – eastern Asia and western North America; treated by some botanists as Betula platyphylla
Betula pendula subsp. szechuanica (C.K.Schneid.) Ashburner & McAll. – western China, from Qinghai and Gansu to Yunnan and southeast Xizang (Tibet), treated by some botanists as Betula szechuanica
B. pendula is distinguished from the related B. pubescens, the other common European birch, in having hairless, warty shoots (hairy and without warts in downy birch), more triangular leaves with double serration on the margins (more ovoid and with single serrations in downy birch), and whiter bark often with scattered black fissures (greyer, less fissured, in downy birch). It is also distinguished cytologically, silver birch being diploid (with two sets of chromosomes), whereas downy birch is tetraploid (four sets of chromosomes). Hybrids between the two are known, but are very rare, and being triploid, are sterile. The two have differences in habitat requirements, with silver birch found mainly on dry, sandy soils, and downy birch more common on wet, poorly drained sites such as clay soils and peat bogs. Silver birch also demands slightly more summer warmth than does downy birch, which is significant in the cooler parts of Europe. Many North American texts treat the two species as conspecific (and cause confusion by combining the downy birch's alternative vernacular name 'white birch', with the scientific name B. pendula of the other species), but they are regarded as distinct species throughout Europe.
Several varieties of B. pendula are no longer accepted, including B. pendula var. carelica, fontqueri, laciniata, lapponica, meridionalis, microlepis, and parvibracteata, as well as the forms Betula pendula f. bircalensis, crispa, and palmeri. The species also has a number of other synonyms.
Ecology
The silver birch has an open canopy which allows plenty of light to reach the ground. This allows a variety of mosses, grasses, and flowering plants to grow beneath, which in turn attract insects. Flowering plants often found in birch woods include primrose (Primula vulgaris), violet (Viola riviniana), bluebell (Hyacinthoides non-scripta), wood anemone (Anemone nemorosa), and wood sorrel (Oxalis acetosella). Small shrubs that grow on the forest floor include blaeberry (Vaccinium myrtillus) and cowberry (Vaccinium vitis-idaea).
Birds found in birch woodland include the chaffinch, tree pipit, willow warbler, nightingale, robin, woodcock, redpoll, and green woodpecker.
The branches of the silver birch often have tangled masses of twigs known as witch's brooms growing among them, caused by the fungus Taphrina betulina. Old trees are often killed by the decay fungus Fomitopsis betulina and fallen branches rot rapidly on the forest floor. This tree commonly grows with the mycorrhizal fungus Amanita muscaria in a mutualistic relationship. This applies particularly to acidic or nutrient-poor soils. Other mycorrhizal associates include Leccinum scabrum and Cantharellus cibarius. In addition to mycorrhiza, the presence of microfauna in the soil assists the growth of the tree, as it enhances the mobilization of nutrients.
The larvae of a large number of species of butterflies, moths, and other insects feed on the leaves and other parts of the silver birch. In Germany, almost 500 species of insects have been found on silver and downy birch including 106 beetles and 105 lepidopterans, with 133 insect species feeding almost exclusively on birch. Birch dieback disease can affect planted trees, while naturally regenerated trees seem less susceptible. This disease also affects B. pubescens and in 2000 was reported at many of the sites planted with birch in Scotland during the 1990s. In the United States, the wood is attacked by the bronze birch borer (Agrilus anxius), an insect pest to which it has no natural resistance.
Conservation
Betula pendula is considered a species of least concern by the IUCN Red List. The synonym Betula oycowiensis (as B. oycoviensis) was previously listed on the Red List as vulnerable, though it is now considered a synonym of B. pendula subsp. pendula. B. szaferi was previously considered extinct in the wild on the Red List, but is now considered a form of B. pendula carrying a mutant gene that causes it to grow weakly and fruit heavily.
Uses
The silver birch is Finland's national tree. Leafy, fragrant bunches of young silver birch boughs (called vihta or vasta) are used to gently beat oneself while bathing in the Finnish sauna. Silver birch is often planted in parks and gardens, grown for its white bark and gracefully drooping shoots, sometimes even in warmer-than-optimum places such as Los Angeles and Sydney. In Scandinavia and other regions of northern Europe, it is grown for forest products such as lumber and pulp, as well as for aesthetic purposes and ecosystem services. It is sometimes used as a pioneer and nurse tree elsewhere.
Silver birch wood is pale in colour with a light reddish-brown heartwood and is used in making furniture, plywood, veneers, parquet blocks, skis, and kitchen utensils, and in turnery. It makes a good firewood, but is quickly consumed by the flames. Slabs of bark are used for making roof shingles and strips are used for handicrafts such as bast shoes and small containers. Historically, the bark was used for tanning. Bark can be heated and the resin collected; the resin is an excellent waterproof glue and useful for starting fires. The thin sheets of bark that peel off young wood contain a waxy resin and are easy to ignite even when wet. The dead twigs are also useful as kindling for outdoor fires. The removal of bark was at one time so widespread that Carl Linnaeus expressed his concern for the survival of the woodlands.
Birch brushwood is used for racecourse jumps and besom brooms. In the spring, large quantities of sap rise up the trunk and this can be tapped. It contains around 1% sugars and can be used in a similar way to maple syrup, being drunk fresh, concentrated by evaporation, or fermented into a "wine".
Phytochemicals
The outer part of the bark contains up to 20% betulin. The main components in the essential oil of the buds are α-copaene (~10%), germacrene D (~15%), and δ-cadinene (~13%). Also present in the bark are other triterpene substances which have been used in laboratory research to identify its possible biological properties.
Medical uses
Standardized allergen extract of white birch, sold under the brand name Itulatek, is indicated for the treatment of tree pollen allergy (to birch, alder and/or hazel) in people who have allergic rhinitis (with or without conjunctivitis).
A combination of Betula pendula and Betula pubescens is used to treat epidermolysis bullosa and to make Episalvan gel, which is used to treat wounds in the upper layers of the skin.
Leaf extracts of Betula pendula have been used to treat both rheumatoid arthritis and osteoarthritis. The extracts inhibit cell growth and cell division of the activated T lymphocytes by inducing apoptosis in the cell. This causes a decrease in inflammation caused by arthritis.
Betula pendula and Betula pubescens have the potential to treat cancer because of their anti-carcinogenic properties. Their buds contain santin and cirsimaritin. Santin is a flavonol that expresses anti-inflammatory characteristics, which suppresses genes associated with cancer. Both santin and cirsimaritin induce apoptosis of cancer cells. Betula pendula bark extracts inhibit growth of malignant human cell lines in vitro: skin epidermoid carcinoma, ovarian carcinoma, cervix adenocarcinoma, and breast adenocarcinoma. Betula pendula bark extract is also effective for treating actinic keratosis.
Cultivation
Successful birch cultivation requires a climate cool enough for at least the occasional winter snowfall. As they are shallow-rooted, they may require water during dry periods. They grow best in full sun planted in deep, well-drained soil.
Cultivars and varieties
'Carelica' or "curly birch" is called visakoivu in Finland. The wood is hard and burled throughout; it is prized for its decorative appearance and is used in wood-carving and as veneer.
'Laciniata' (commonly misidentified as 'Dalecarlica') has deeply incised leaves and weeping branches
'Purpurea' has dark purple leaves
'Tristis' has an erect trunk with weeping branchlets
'Youngii' has dense, twiggy, weeping growth with no central leader and requires being grafted onto a standard stem of normal silver birch.
Some of the cultivars listed above have gained the Royal Horticultural Society's Award of Garden Merit.
| Biology and health sciences | Fagales | Plants |
446976 | https://en.wikipedia.org/wiki/GALEX | GALEX | Galaxy Evolution Explorer (GALEX or Explorer 83 or SMEX-7) was a NASA orbiting space telescope designed to observe the universe in ultraviolet wavelengths to measure the history of star formation in the universe. In addition to paving the way for future ultraviolet missions, the space telescope allowed astronomers to uncover mysteries about the early universe and how it evolved, as well as better characterize phenomena like black holes and dark matter. GALEX was launched on 28 April 2003; the mission was extended three times over a period of 10 years before the spacecraft was decommissioned in June 2013.
Spacecraft
The spacecraft was three-axis stabilized, with power coming from four fixed solar panels. The satellite bus was from Orbital Sciences Corporation, based on OrbView 4. The telescope was a modified Ritchey–Chrétien with a rotating grism. GALEX used the first ever UV-light dichroic beam-splitter flown in space to direct photons to the Near UV (175–280 nanometers) and Far UV (135–174 nanometers) microchannel plate detectors. Each of the two detectors has a diameter. The target orbit was circular and inclined at 29.00° to the equator.
Launch
An air launched Pegasus launch vehicle, launched on 28 April 2003 at 11:59:57 UTC, placed the craft into a nearly circular orbit at an altitude of and an orbital inclination to the Earth's equator of 29.00°.
Mission
The Galaxy Evolution Explorer (GALEX) explored the origin and evolution of galaxies, and the origins of stars and heavy elements, over the redshift range z = 0 to 2. GALEX conducted an all-sky imaging survey, a deep imaging survey, and a survey of the 200 galaxies nearest to the Milky Way galaxy. As well, GALEX performed three spectroscopic surveys over the 135–300 nanometre band. GALEX had a planned 29-month mission, and was a part of the Small Explorer (SMEX) program.
The first observation was dedicated to the crew of the Space Shuttle Columbia, and consisted of images of a region in the constellation Hercules taken on 21 May 2003. This region was selected because it had been directly above the shuttle at the time of its last contact with the NASA Mission Control Center in Houston, Texas.
After its primary mission of 29 months, observation operations were extended. In 2009, one of its detectors, which observed in far-ultraviolet light, stopped functioning. Late in the mission, observations of more intense UV sources were allowed, including the Kepler field.
Observation operations were extended to almost 9 years, with NASA placing it into standby mode on 7 February 2012. NASA cut off financial support for operations of GALEX in early February 2011 as it was ranked lower than other projects which were seeking a limited supply of funding. The mission's life-cycle cost to NASA was US$150.6 million. The California Institute of Technology (Caltech) negotiated with NASA to transfer control of GALEX and its associated ground control equipment to the institute in keeping with the Stevenson-Wydler Technology Innovation Act. Under this Act, excess research equipment owned by the U.S. government can be transferred to educational institutions and non-profit organizations. On 17 May 2012, GALEX operations were transferred to Caltech.
On 28 June 2013, NASA decommissioned GALEX. It is expected that the spacecraft will remain in orbit until at least 2068 before it will re-enter the atmosphere.
Science mission
The telescope made observations in ultraviolet wavelengths to measure the history of star formation in the universe 80% of the way back to the Big Bang. Since scientists have evidence that the universe is about 13.8 billion years old, the mission studied galaxies and stars across about 10 billion years of cosmic history.
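The "about 10 billion years of cosmic history" figure can be checked with a standard cosmology calculator; the following is a minimal sketch, assuming the astropy package and its Planck 2018 parameters (an illustration, not part of the GALEX analysis pipeline):

    # Lookback time out to the survey's nominal redshift limit of z = 2.
    from astropy.cosmology import Planck18

    print(Planck18.age(0))            # ~13.8 Gyr, present age of the universe
    print(Planck18.lookback_time(2))  # ~10.5 Gyr, roughly three-quarters of the way back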
The spacecraft's mission was to observe hundreds of thousands of galaxies, with the goal of determining the distance of each galaxy from Earth and the rate of star formation in each galaxy. Near-UV (NUV) and Far-UV (FUV) emissions as measured by GALEX can indicate the presence of young stars, but may also originate from old stellar populations (e.g. sdB stars).
Partnering with the NASA Jet Propulsion Laboratory (JPL) on the mission were the California Institute of Technology, Orbital Sciences Corporation, University of California, Berkeley, Yonsei University, Johns Hopkins University, Columbia University, and Laboratoire d'Astrophysique de Marseille, France.
The observatory participated in GOALS with Spitzer Space Telescope, Chandra X-ray Observatory, and Hubble Space Telescope. GOALS stands for Great Observatories All-sky LIRG Survey, and Luminous Infrared Galaxies were studied at the multiple wavelengths allowed by the telescopes.
Science objectives
The primary objective of the Galaxy Evolution Explorer was to learn what factors trigger star formation inside galaxies; how quickly stars form, evolve and die; and how heavy chemical elements form in stars. Additional goals include:
Determining how fast stars are forming inside each galaxy
Determining when and how the stars we see today formed
Creating the first map of the ultraviolet universe
Helping scientists find and understand ultraviolet bright quasars. These objects can serve as background sources for the Hubble Space Telescope and FUSE as it probes the gases from which galaxies form stars
To accomplish its objectives, the Galaxy Evolution Explorer conducted eight surveys, grouped into two broad categories – a local universe investigation and a star formation history investigation. The local universe investigation includes the following four surveys:
All-sky imaging survey – will look at the entire sky and develop a comprehensive catalogue of ultraviolet galaxy images, useful to map the distribution of star formation within the local universe
Nearby galaxy survey – will study about 150 nearby galaxies that are familiar to scientists to understand how stars formed in individual galaxies
Wide-field spectroscopic survey – will analyze the light wavelengths of galaxies in a wide swath of the sky
Medium spectroscopic survey – will examine the light properties of galaxies within a narrower portion of the sky
The star formation history investigation will take information gathered by the local universe investigation and apply it to more distant galaxies by looking further back in time. It includes the following four surveys:
Deep imaging survey – will look at a portion of the sky to study the distribution of star formation in the deep universe
Deep spectroscopic survey – will look for the most distant galaxies
Ultra-deep imaging survey – will look as deep as possible at a very small portion of the sky
Medium imaging survey – will study star formation in galaxies beyond our local cosmic neighborhood, but not as deep as the deep imaging survey
Telescope specifications
The telescope had a diameter aperture primary, in a Ritchey–Chrétien f/6.0 configuration. It could see light at wavelengths from 135 to 280 nanometres, with a field of view 1.2° wide (larger than the full Moon). It had gallium arsenide (GaAs) solar cells which supplied nearly 300 watts to the spacecraft.
Experiment
Ultraviolet telescope
GALEX carries a single f/6.0, Ritchey–Chrétien telescope, with a diameter primary, and a secondary mirror. Beam-splitters direct the Near UV (NUV) and Far UV (FUV) components to separate photoelectric detectors of diameter . In each, the photoelectrons are multiplied by a microchannel plate, and detected by the anode grid. The grid enables determination of the exact position of electron impact, by the time delay of each pulse at the two ends. The telescope has a field of view (FoV) of 1.2°, and a resolution of five arcseconds, and enables either imaging or spectral composition of a single star/galaxy, by a rotatable wheel containing a clear window and a grism (a cross between a grating and a prism).
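For a sense of scale of the 1.2° field of view quoted above, a back-of-the-envelope comparison with the roughly 0.5° angular diameter of the full Moon, treating both as circular fields (an illustrative sketch, not an official mission figure):

    import math

    def circular_area_sq_deg(diameter_deg):
        # area of a circular field of view in square degrees
        return math.pi * (diameter_deg / 2.0) ** 2

    # GALEX's 1.2-degree field covers roughly (1.2/0.5)^2, about 5.8 times the Moon's area.
    print(circular_area_sq_deg(1.2) / circular_area_sq_deg(0.5))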
Pre-launch images
| Technology | Space-based observatories | null |
447559 | https://en.wikipedia.org/wiki/Alnus%20rubra | Alnus rubra | Alnus rubra, the red alder, is a deciduous broadleaf tree native to western North America (Alaska, Yukon, British Columbia, Washington, Oregon, California, Idaho and Montana).
Description
Alnus rubra is the largest species of alder in North America and one of the largest in the world, reaching heights of . The official tallest red alder (as of 1979) stands tall in Clatsop County, Oregon (US). The trunks range from in diameter. The bark is mottled, ashy-gray and smooth, often colonized by white lichen and moss. The leaves are ovate, long, with bluntly serrated edges and a distinct point at the end; the leaf margin is revolute, the very edge being curled under, a diagnostic character which distinguishes it from all other alders. Rather than turning yellow in autumn, its leaves darken in colour and wither before they are shed. The male flowers are dangling reddish catkins long in early spring. Female flowers occur in clusters of (3) 4–6 (8). Female catkins are erect during anthesis, but otherwise pendant. They develop into small, woody, superficially cone-like oval dry fruit long. The seeds develop between the woody bracts of the 'cones' and are shed in late autumn and winter. Red alder seeds have a membranous winged margin that allows long-distance dispersal.
Specimens can live to about 60 years of age before being seriously afflicted by heart rot.
Taxonomy
The name derives from the bright rusty red color that develops in bruised or scraped bark.
Distribution
Alnus rubra grows from Southeast Alaska to central coastal California, nearly always within about of the Pacific coast, except for an extension inland across Washington and Oregon into northernmost Montana. It can be found from sea level to elevations of .
Ecology
In southern Alaska, western British Columbia and the northwestern Pacific Coast Ranges of the United States, red alder grows on cool and moist slopes; inland and at the southern end of its range (California) it grows mostly along the margins of watercourses and wetlands. It is shade intolerant.
In moist forest areas, Alnus rubra will rapidly cover a former burn or clearcut, often preventing the establishment of conifers. It is a prolific seed producer, but the small, wind-dispersed seeds require an open area of mineral soil to germinate, and so skid trails and other areas disturbed by logging or fire are ideal seedbeds. Such areas may host several hundred thousand to several million seedlings per hectare in the first year after landscape disturbance.
Twigs and buds of alder are only fair browse for wildlife, but deer and elk browse the twigs in fall and twigs and buds in the winter and spring. Beavers occasionally eat the bark, though it is not a preferred species. Several finches eat alder seeds, notably common redpoll and pine siskin, as do deer mice. Tent caterpillars often feed on the leaves, but the trees usually recover within a year.
The tree hosts the nitrogen-fixing actinomycete Frankia in nodules on its roots. This association allows alder to grow in nitrogen-poor soils, and makes the species an important early colonizer of disturbed forests and riparian areas. This nitrogen-fixing trait allows red alder to grow rapidly, and makes it effective in covering disturbed and/or degraded land, such as mine spoils. Imported red alder has been found to form successful associations with Frankia strains present in the UK. Alder leaves, shed in the fall, decay readily to form a nitrogen-enriched humus, making the nitrogen available to other species.
Common associates
Red alder is associated with coast Douglas-fir (Pseudotsuga menziesii subsp. menziesii), western hemlock (Tsuga heterophylla), grand fir (Abies grandis), western redcedar (Thuja plicata), and Sitka spruce (Picea sitchensis) forests.
Along stream banks, it is commonly associated with willows (Salix spp.), red osier dogwood (Cornus stolonifera), Oregon ash (Fraxinus latifolia), and bigleaf maple (Acer macrophyllum).
To the southeast of its range it is replaced by white alder (Alnus rhombifolia), which is a tree of similar stature, but which differs in the leaf margins not being rolled under, lack of distinct lobes, and lack of membranous wings on seed margins. In the high mountains it is replaced by the smaller and more shrub-like Sitka alder (Alnus viridis subsp. sinuata), and east of the Cascade Mountains by thinleaf alder (Alnus incana subsp. tenuifolia).
Uses
As dye
A russet dye can be made from a decoction of the bark, apparently due to the tannin it contains, and was used by Native Americans to dye fishing nets so as to make them less visible underwater.
Medicine
Native Americans used red alder bark to treat poison oak reactions, insect bites, and skin irritations. Blackfeet Indians used an infusion made from the bark of red alder to treat lymphatic disorders and tuberculosis. Recent clinical studies have verified that red alder contains betulin and lupeol, compounds shown to be effective against a variety of tumors.
Restoration
In addition to its use as a nitrogen fixer, red alder is occasionally used as a rotation crop to discourage the conifer root pathogen Phellinus weirii (causing laminated root rot).
Alnus rubra is occasionally planted as an ornamental tree and will do well in swales, riparian areas, or on stream banks, in light-textured soils that drain well. Red alder does not thrive in heavy, wet clay soils. If planted domestically, alders should be planted well away from drainpipes, sewage pipes, and water lines, as the roots may invade and clog the lines.
Woodworking
Alder lumber is not considered to be a durable option for outdoor applications, but due to its workability and ease of finishing it is increasingly used for furniture and cabinetry. Because it is softer than other popular hardwoods such as maple, walnut and ash, alder has historically been considered of low value for timber. However it is now becoming one of the more popular hardwood alternatives as it is economically priced compared to many other hardwoods. In the world of musical instrument construction, red alder is valued by some electric guitar / electric bass builders for its balanced tonality. Alder is frequently used by Native Americans for making masks, bowls, tool handles, and other small goods.
The appearance of alder lumber ranges from white through pinkish to light brown, has a relatively soft texture, minimal grain, and has medium luster. It is easily worked, glues well, and takes a good finish.
Fish smoking
Because of its oily smoke, A. rubra is the wood of choice for smoking salmon.
As an environmental indicator
Red alder is often used by scientists as a biomonitoring organism to locate areas prone to ozone pollution, as the leaves react to the presence of high ozone levels by developing red to brown or purple discolorations.
Forestry
With a current inventory of about , red alder comprises 60% of the total hardwood volume in the Pacific Northwest, and is by far the most valuable hardwood in terms of diversity of products, commercial value, and manufacturing employment. The increasing value of alder logs, combined with a better understanding of the species' ecological role, has led some land managers to tolerate and, in some cases, manage for alder.
As an "aggressive pioneer" that was freely able to rapidly colonise areas to the detriment of the more valuable conifer species, it was regarded for a long time as a weed and was neglected for its timber potential, however breeding programmes to improve stem form and timber quality are now underway.
Since most forest land in the Northwest is managed for conifer production, over of timberland are sprayed with herbicides annually in Oregon alone to control red alder and other competing hardwood species. Red alder's rapid early growth can interfere with establishment of conifer plantations. Herbicide spraying of red alder over large areas of coastal Oregon and Washington has resulted in a number of lawsuits claiming it has caused health problems, including birth defects and other human health effects.
In addition to adding soil nitrogen, rotations of red alder are used to reduce laminated root rot in Douglas-fir forests. Nurse stands of red alder may also reduce spruce weevil damage in Sitka spruce stands on the Olympic Peninsula. Alder continues to attract interest as log values approach and often exceed those of Douglas-fir. This interest is limited by red alder's total stand productivity, which is significantly lower than that of Douglas-fir and western hemlock.
Gallery
| Biology and health sciences | Fagales | Plants |
447810 | https://en.wikipedia.org/wiki/Anomalopidae | Anomalopidae | Anomalopidae (lanterneye fishes or flashlight fishes) are a family of fish distinguished by bioluminescent organs located underneath their eyes, for which they are named. These light organs contain luminous bacteria and can be "shut off" by the fish using either a dark lid or by being drawn into a pouch. They are used to communicate, attract prey, and evade predators.
Flashlight fish are found in tropical ocean waters across the world. They are typically about in size, although some species can reach twice this length. They are nocturnal, feeding at night on small crustaceans. Some species move to shallow waters near coral reefs at night, but otherwise, they are exclusively deep water fish. This tends to make their collection difficult, and as such they are a poorly understood group.
Anomalopidae were originally divided into 5 distinct species: Anomalops katoptron and Photoblepharon palpebratus, widely distributed in the central and western Pacific Ocean; P. steinitzi from the Red Sea and Comoro Islands; Kryptophanaron alfredi from the Caribbean; and K. harveyi from Baja California. In 2019 the genus Photoblepharon was reduced to only 2 species: P. palpebratum from the eastern Indian Ocean and the western Pacific Ocean and P. steinitzi from the Red Sea, Oman, and western Indian Ocean. Other genera include Parmops and Phthanophaneron.
| Biology and health sciences | Acanthomorpha | Animals |
447832 | https://en.wikipedia.org/wiki/Helicase | Helicase | Helicases are a class of enzymes thought to be vital to all organisms. Their main function is to unpack an organism's genetic material. Helicases are motor proteins that move directionally along a nucleotidic backbone, separating two hybridized nucleic acid strands (hence helic- + -ase), using energy from ATP hydrolysis. There are many helicases, representing the great variety of processes in which strand separation must be catalyzed. Approximately 1% of eukaryotic genes code for helicases.
The human genome codes for 95 non-redundant helicases: 64 RNA helicases and 31 DNA helicases.
Many cellular processes, such as DNA replication, transcription, translation, recombination, DNA repair, and ribosome biogenesis involve the separation of nucleic acid strands that necessitates the use of helicases. Some specialized helicases are also involved in sensing of viral nucleic acids during infection and fulfill an immunological function. A helicase is an enzyme that plays a crucial role in the DNA replication and repair processes. Its primary function is to unwind the double-stranded DNA molecule by breaking the hydrogen bonds between the complementary base pairs, allowing the DNA strands to separate. This creates a replication fork, which serves as a template for synthesizing new DNA strands. Helicase is an essential component of cellular mechanisms that ensures accurate DNA replication and maintenance of genetic information. DNA helicase catalyzes regression. RecG and the enzyme PriA work together to rewind duplex DNA, creating a Holliday junction. RecG releases bound proteins and the PriA helicase facilitates DNA reloading to resume DNA replication. RecG replaces the single-strand binding protein (SSB), which regulates the helicase-fork loading sites during fork regression. The SSB protein interacts with DNA helicases PriA and RecG to recover stalled DNA replication forks. These enzymes must bind to the SSB-helicase to be loaded onto stalled forks. Thermal sliding and DNA duplex binding are possibly supported by the wedge domain of RecG's association with the SSB linker. In a regression reaction facilitated by RecG and ATP, Holliday junctions are created for later processing.
Function
Helicases are often used to separate strands of a DNA double helix or a self-annealed RNA molecule using the energy from ATP hydrolysis, a process characterized by the breaking of hydrogen bonds between annealed nucleotide bases. They also function to remove nucleic acid-associated proteins and catalyze homologous DNA recombination. Metabolic processes of RNA such as translation, transcription, ribosome biogenesis, RNA splicing, RNA transport, RNA editing, and RNA degradation are all facilitated by helicases. Helicases move incrementally along one nucleic acid strand of the duplex with a directionality and processivity specific to each particular enzyme.
Helicases adopt different structures and oligomerization states. Whereas DnaB-like helicases unwind DNA as ring-shaped hexamers, other enzymes have been shown to be active as monomers or dimers. Studies have shown that helicases may act passively, waiting for uncatalyzed unwinding to take place and then translocating between displaced strands, or can play an active role in catalyzing strand separation using the energy generated in ATP hydrolysis. In the latter case, the helicase acts comparably to an active motor, unwinding and translocating along its substrate as a direct result of its ATPase activity. Helicases may process much faster in vivo than in vitro due to the presence of accessory proteins that aid in the destabilization of the fork junction.
Activation barrier in helicase activity
Enzymatic helicase action, such as unwinding nucleic acids, is achieved through the lowering of the activation barrier (B) of each specific action. The activation barrier is a result of various factors, and can be defined by
B = N(ΔG_bp − ΔG_M − ΔG_F)
where
N = number of unwound base pairs (bps),
ΔG_bp = free energy of base pair formation,
ΔG_M = reduction of free energy due to helicase, and
ΔG_F = reduction of free energy due to unzipping forces.
Factors that contribute to the height of the activation barrier include: specific nucleic acid sequence of the molecule involved, the number of base pairs involved, tension present on the replication fork, and destabilization forces.
Active and passive helicases
The size of the activation barrier to overcome by the helicase contributes to its classification as an active or passive helicase. In passive helicases, a significant activation barrier exists (defined as B > k_B·T, where k_B is the Boltzmann constant and T is the temperature of the system). Due to this significant activation barrier, its unwinding progression is affected largely by the sequence of nucleic acids within the molecule to unwind, and the presence of destabilization forces acting on the replication fork. Certain nucleic acid combinations will decrease unwinding rates (i.e. guanine and cytosine), while various destabilizing forces can increase the unwinding rate. In passive systems, the rate of unwinding (V_un) is less than the rate of translocation (V_trans; translocation along the single-strand nucleic acid, ssNA), due to its reliance on the transient unraveling of the base pairs at the replication fork to determine its rate of unwinding.
In active helicases, B < k_B·T, where the system lacks a significant barrier, as the helicase can destabilize the nucleic acids, unwinding the double-helix at a constant rate, regardless of the nucleic acid sequence. In active helicases, V_un is closer to V_trans, due to the active helicase's ability to directly destabilize the replication fork to promote unwinding.
Active helicases show similar behaviour when acting on both double-strand nucleic acids, dsNA, or ssNA, in regards to the rates of unwinding and rates of translocation, where in both systems V_un and V_trans are approximately equal.
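A minimal numerical sketch of the active/passive distinction, using the activation-barrier expression given above with all energies expressed in units of k_B·T; the numbers are illustrative assumptions, not measured values for any particular helicase:

    def activation_barrier(n_bp, dG_bp, dG_helicase, dG_force):
        # B = N(dG_bp - dG_M - dG_F), all terms in units of k_B*T
        return n_bp * (dG_bp - dG_helicase - dG_force)

    def classify(barrier):
        # passive if the barrier is significant compared with thermal energy (k_B*T = 1 here)
        return "passive" if barrier > 1.0 else "active"

    weak = activation_barrier(n_bp=2, dG_bp=2.0, dG_helicase=0.4, dG_force=0.2)    # 2.8
    strong = activation_barrier(n_bp=2, dG_bp=2.0, dG_helicase=1.8, dG_force=0.2)  # 0.0
    print(classify(weak), classify(strong))  # passive active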
These two categories of helicases may also be modeled as mechanisms. In such models, the passive helicases are conceptualized as Brownian ratchets, driven by thermal fluctuations and subsequent anisotropic gradients across the DNA lattice. The active helicases, in contrast, are conceptualized as stepping motors – also known as powerstroke motors – utilizing either a conformational "inch worm" or a hand-over-hand "walking" mechanism to progress. Depending upon the organism, such helix-traversing progress can occur at rotational speeds in the range of 5,000 to 10,000 R.P.M.
History of DNA helicases
DNA helicases were discovered in E. coli in 1976. This helicase was described as a "DNA unwinding enzyme" that is "found to denature DNA duplexes in an ATP-dependent reaction, without detectably degrading". The first eukaryotic DNA helicase discovered was in 1978 in the lily plant. Since then, DNA helicases were discovered and isolated in other bacteria, viruses, yeast, flies, and higher eukaryotes. To date, at least 14 different helicases have been isolated from single celled organisms, 6 helicases from bacteriophages, 12 from viruses, 15 from yeast, 8 from plants, 11 from calf thymus, and approximately 25 helicases from human cells. Below is a history of helicase discovery:
1976 – Discovery and isolation of E. coli-based DNA helicase
1978 – Discovery of the first eukaryotic DNA helicases, isolated from the lily plant
1982 – "T4 gene 41 protein" is the first reported bacteriophage DNA helicase
1985 – First mammalian DNA helicases isolated from calf thymus
1986 – SV40 large tumor antigen reported as a viral helicase (1st reported viral protein that was determined to serve as a DNA helicase)
1986 – ATPaseIII, a yeast protein, determined to be a DNA helicase
1988 – Discovery of seven conserved amino acid domains determined to be helicase motifs
1989 – Designation of DNA helicase Superfamily I and Superfamily II
1989 – Identification of the DEAD box helicase family
1990 – Isolation of a human DNA helicase
1992 – Isolation of the first reported mitochondrial DNA helicase (from bovine brain)
1996 – Report of the discovery of the first purified chloroplast DNA helicase from the pea
2002 – Isolation and characterization of the first biochemically active malarial parasite DNA helicase – Plasmodium cynomolgi.
Structural features
The common function of helicases accounts for the fact that they display a certain degree of amino acid sequence homology; they all possess sequence motifs located in the interior of their primary structure, involved in ATP binding, ATP hydrolysis and translocation along the nucleic acid substrate. The variable portion of the amino acid sequence is related to the specific features of each helicase.
The presence of these helicase motifs allows putative helicase activity to be attributed to a given protein, but does not necessarily confirm it as an active helicase. Conserved motifs do, however, support an evolutionary homology among enzymes. Based on these helicase motifs, a number of helicase superfamilies have been distinguished.
Superfamilies
Helicases are classified into six groups (superfamilies) based on their shared sequence motifs. Helicases not forming a ring structure are in superfamilies 1 and 2, and ring-forming helicases form part of superfamilies 3 to 6. Helicases are also classified as α or β depending on whether they work with single- or double-strand DNA; α helicases work with single-strand DNA and β helicases work with double-strand DNA. They are also classified by translocation polarity: if translocation occurs 3’-5’ the helicase is type A; if translocation occurs 5’-3’ it is type B. A schematic summary follows the superfamily descriptions below.
Superfamily 1 (SF1): This superfamily can be further subdivided into SF1A and SF1B helicases. In this group helicases can have either 3’-5’ (SF1A subfamily) or 5’-3’ (SF1B subfamily) translocation polarity. The best-known SF1A helicases are Rep and UvrD in gram-negative bacteria and the PcrA helicase from gram-positive bacteria. The best-known helicases in the SF1B group are the RecD and Dda helicases. They have a RecA-like-fold core.
Superfamily 2 (SF2): This is the largest group of helicases that are involved in varied cellular processes. They are characterized by the presence of nine conserved motifs: Q, I, Ia, Ib, and II through VI. This group is mainly composed of DEAD-box RNA helicases. Some other helicases included in SF2 are the RecQ-like family and the Snf2-like enzymes. Most of the SF2 helicases are type A with a few exceptions such as the XPD family. They have a RecA-like-fold core.
Superfamily 3 (SF3): Superfamily 3 consists of AAA+ helicases encoded mainly by small DNA viruses and some large nucleocytoplasmic DNA viruses. They have a 3’-5’ translocation directionality, meaning that they are all type A helicases. The best-known SF3 helicase is the papillomavirus E1 helicase.
Superfamily 4 (SF4): All SF4 family helicases have a type B polarity (5’-3’). They have a RecA fold. The most studied SF4 helicase is gp4 from bacteriophage T7.
Superfamily 5 (SF5): Rho proteins make up the SF5 group. They have a RecA fold.
Superfamily 6 (SF6): They contain the core AAA+ that is not included in the SF3 classification. Some proteins in the SF6 group are: mini chromosome maintenance MCM, RuvB, RuvA, and RuvC.
All helicases are members of a P-loop, or Walker motif-containing family.
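The classification just described can also be summarized as a simple data structure; the sketch below collects the properties named above (illustrative and not exhaustive, with example members limited to the ones mentioned in this section):

    HELICASE_SUPERFAMILIES = {
        "SF1": {"ring_forming": False, "core": "RecA-like fold",
                "examples": ["Rep", "UvrD", "PcrA", "RecD", "Dda"]},
        "SF2": {"ring_forming": False, "core": "RecA-like fold",
                "examples": ["DEAD-box RNA helicases", "RecQ-like", "Snf2-like"]},
        "SF3": {"ring_forming": True, "core": "AAA+", "polarity": "3'-5' (type A)",
                "examples": ["papillomavirus E1"]},
        "SF4": {"ring_forming": True, "core": "RecA fold", "polarity": "5'-3' (type B)",
                "examples": ["bacteriophage T7 gp4"]},
        "SF5": {"ring_forming": True, "core": "RecA fold", "examples": ["Rho"]},
        "SF6": {"ring_forming": True, "core": "AAA+",
                "examples": ["MCM", "RuvB", "RuvA", "RuvC"]},
    }

    # e.g. list which superfamilies are ring-forming
    print([sf for sf, props in HELICASE_SUPERFAMILIES.items() if props["ring_forming"]])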
Helicase disorders and diseases
ATRX helicase mutations
The ATRX gene encodes the ATP-dependent helicase, ATRX (also known as XH2 and XNP) of the SNF2 subgroup family, that is thought to be responsible for functions such as chromatin remodeling, gene regulation, and DNA methylation. These functions assist in prevention of apoptosis, resulting in cortical size regulation, as well as a contribution to the survival of hippocampal and cortical structures, affecting memory and learning. This helicase is located on the X chromosome (Xq13.1-q21.1), in the pericentromeric heterochromatin and binds to heterochromatin protein 1. Studies have shown that ATRX plays a role in rDNA methylation and is essential for embryonic development. Mutations have been found throughout the ATRX protein, with over 90% of them being located in the zinc finger and helicase domains. Mutations of ATRX can result in X-linked-alpha-thalassaemia-mental retardation (ATR-X syndrome).
Various types of mutations found in ATRX have been found to be associated with ATR-X, including most commonly single-base missense mutations, as well as nonsense, frameshift, and deletion mutations. Characteristics of ATR-X include: microcephaly, skeletal and facial abnormalities, mental retardation, genital abnormalities, seizures, limited language use and ability, and alpha-thalassemia. The phenotype seen in ATR-X suggests that the mutation of ATRX gene causes the downregulation of gene expression, such as the alpha-globin genes. It is still unknown what causes the expression of the various characteristics of ATR-X in different patients.
XPD helicase point mutations
XPD (Xeroderma pigmentosum factor D, also known as protein ERCC2) is a 5'-3', Superfamily II, ATP-dependent helicase containing iron-sulphur cluster domains. Inherited point mutations in XPD helicase have been shown to be associated with accelerated aging disorders such as Cockayne syndrome (CS) and trichothiodystrophy (TTD). Cockayne syndrome and trichothiodystrophy are both developmental disorders involving sensitivity to UV light and premature aging, and Cockayne syndrome exhibits severe mental retardation from the time of birth. The XPD helicase mutation has also been implicated in xeroderma pigmentosum (XP), a disorder characterized by sensitivity to UV light and resulting in a several-thousand-fold increase in the development of skin cancer.
XPD is an essential component of the TFIIH complex, a transcription and repair factor in the cell. As part of this complex, it facilitates nucleotide excision repair by unwinding DNA. TFIIH assists in repairing DNA damage such as that caused by sunlight. A mutation in the XPD helicase that helps form this complex and contributes to its function causes the sensitivity to sunlight seen in all three diseases, as well as the increased risk of cancer seen in XP and premature aging seen in trichothiodystrophy and Cockayne syndrome.
XPD helicase mutations leading to trichothiodystrophy are found throughout the protein in various locations involved in protein-protein interactions. This mutation results in an unstable protein due to its inability to form stabilizing interactions with other proteins at the points of mutations. This, in turn, destabilizes the entire TFIIH complex, which leads to defects with transcription and repair mechanisms of the cell.
It has been suggested that XPD helicase mutations leading to Cockayne syndrome could be the result of mutations within XPD, causing rigidity of the protein and subsequent inability to switch from repair functions to transcription functions due to a "locking" in repair mode. This could cause the helicase to cut DNA segments meant for transcription. Although current evidence points to a defect in the XPD helicase resulting in a loss of flexibility in the protein in cases of Cockayne syndrome, it is still unclear how this protein structure leads to the symptoms described in Cockayne syndrome.
In xeroderma pigmentosum, the XPD helicase mutation exists at the site of ATP or DNA binding. This results in a structurally functional helicase able to facilitate transcription; however, it inhibits its function in unwinding DNA and in DNA repair. The lack of a cell's ability to repair mutations, such as those caused by sun damage, is the cause of the high cancer rate in xeroderma pigmentosum patients.
RecQ family mutations
RecQ helicases (3'-5') belong to the Superfamily II group of helicases, which help to maintain stability of the genome and suppress inappropriate recombination. Deficiencies and/or mutations in RecQ family helicases display aberrant genetic recombination and/or DNA replication, which leads to chromosomal instability and an overall decreased ability to proliferate. Mutations in RecQ family helicases BLM, RECQL4, and WRN, which play a role in regulating homologous recombination, have been shown to result in the autosomal recessive diseases Bloom syndrome (BS), Rothmund–Thomson syndrome (RTS), and Werner syndrome (WS), respectively.
Bloom syndrome is characterized by a predisposition to cancer with early onset, with a mean age-of-onset of 24 years. Cells of Bloom syndrome patients show a high frequency of reciprocal exchange between sister chromatids (SCEs) and excessive chromosomal damage. There is evidence to suggest that BLM plays a role in rescuing disrupted DNA replication at replication forks.
Werner syndrome is a disorder of premature aging, with symptoms including early onset of atherosclerosis and osteoporosis and other age related diseases, a high occurrence of sarcoma, and death often occurring from myocardial infarction or cancer in the 4th to 6th decade of life. Cells of Werner syndrome patients exhibit a reduced reproductive lifespan with chromosomal breaks and translocations, as well as large deletions of chromosomal components, causing genomic instability.
Rothmund-Thomson syndrome, also known as poikiloderma congenitale, is characterized by premature aging, skin and skeletal abnormalities, rash, poikiloderma, juvenile cataracts, and a predisposition to cancers such as osteosarcomas. Chromosomal rearrangements causing genomic instability are found in the cells of Rothmund-Thomson syndrome patients. RecQ is a family of DNA helicase enzymes that are found in various organisms including bacteria, archaea, and eukaryotes (like humans). These enzymes play important roles in DNA metabolism during DNA replication, recombination, and repair. There are five known RecQ helicase proteins in humans: RecQ1, BLM, WRN, RecQ4, and RecQ5. Mutations in some of these genes are associated with genetic disorders. For instance, mutations in the BLM gene cause Bloom syndrome, which is characterized by increased cancer risk and other health issues. Mutations in the WRN gene lead to Werner syndrome, a condition characterized by premature aging and an increased risk of age-related diseases. RecQ helicases are crucial for maintaining genomic stability and integrity. They help prevent the accumulation of genetic abnormalities that can lead to diseases like cancer. Genome integrity depends on the RecQ DNA helicase family, which functions in DNA repair, recombination, replication, and transcription. Genome instability and early aging are conditions that arise from mutations in human RecQ helicases. Yeast cells lacking the RecQ helicase Sgs1 are useful models for understanding human cell abnormalities and RecQ helicase function. The RecQ helicase family member RECQ1 is connected to a small number of uncommon genetic cancer disorders. It participates in transcription, the cell cycle, and DNA repair. According to recent research, missense mutations in the RECQ1 gene may play a role in the development of familial breast cancer. DNA helicases are frequently attracted to regions of DNA damage and are essential for cellular DNA replication, recombination, repair, and transcription. Chemical manipulation of their molecular processes can change the rate at which cancer cells divide, as well as the efficiency of transactions and cellular homeostasis. Small-molecule-induced entrapment of DNA helicases, a type of DNA metabolic protein, may have deleterious consequences on rapidly proliferating cancer cells, which could be effective in cancer treatment.
During meiosis, DNA double-strand breaks and other DNA damages in a chromatid are repaired by homologous recombination using either the sister chromatid or a homologous non-sister chromatid as a template. This repair can result in a crossover (CO) or, more frequently, a non-crossover (NCO) recombinant. In the yeast Schizosaccharomyces pombe, the FANCM-family DNA helicase Fml1 directs NCO recombination formation during meiosis. The RecQ-type helicase Rqh1 also directs NCO meiotic recombination. These helicases, through their ability to unwind D-loop intermediates, promote NCO recombination by the process of synthesis-dependent strand annealing.
In the plant Arabidopsis thaliana, FANCM helicase promotes NCO and antagonizes the formation of CO recombinants. Another helicase, RECQ4A/B, also independently reduces COs. It was suggested that COs are restricted because of the long term costs of CO recombination, that is, the breaking up of favourable genetic combinations of alleles built up by past natural selection.
RNA helicases
RNA helicases are essential for most processes of RNA metabolism such as ribosome biogenesis, pre-mRNA splicing, and translation initiation. They also play an important role in sensing viral RNAs. RNA helicases are involved in the mediation of antiviral immune response because they can identify foreign RNAs in vertebrates. About 80% of all viruses are RNA viruses and they contain their own RNA helicases. Defective RNA helicases have been linked to cancers, infectious diseases and neuro-degenerative disorders. Some neurological disorders associated with defective RNA helicases are: amyotrophic lateral sclerosis, spinal muscular atrophy, spinocerebellar ataxia type-2, Alzheimer disease, and lethal congenital contracture syndrome.
RNA helicases and DNA helicases can be found together in all the helicase superfamilies except for SF6. All the eukaryotic RNA helicases that have been identified to date are non-ring-forming and are part of SF1 and SF2. On the other hand, ring-forming RNA helicases have been found in bacteria and viruses. However, not all RNA helicases exhibit helicase activity as defined by enzymatic function, i.e., proteins of the Swi/Snf family. Although these proteins carry the typical helicase motifs, hydrolyze ATP in a nucleic acid-dependent manner, and are built around a helicase core, in general, no unwinding activity is observed.
RNA helicases that do exhibit unwinding activity have been characterized by at least two different mechanisms: canonical duplex unwinding and local strand separation. Canonical duplex unwinding is the stepwise directional separation of a duplex strand, as described above, for DNA unwinding. However, local strand separation occurs by a process wherein the helicase enzyme is loaded at any place along the duplex. This is usually aided by a single-strand region of the RNA, and the loading of the enzyme is accompanied by ATP binding. Once the helicase and ATP are bound, local strand separation occurs, which requires binding of ATP but not the actual process of ATP hydrolysis. Presented with fewer base pairs, the duplex then dissociates without further assistance from the enzyme. This mode of unwinding is used by the DEAD/DEAH box helicases.
An RNA helicase database is currently available online that contains a comprehensive list of RNA helicases with information such as sequence, structure, and biochemical and cellular functions.
Diagnostic tools for helicase measurement
Measuring and monitoring helicase activity
Various methods are used to measure helicase activity in vitro. These methods range from qualitative assays (which typically yield results that do not involve numerical values or measurements) to quantitative assays (which yield numerical results that can be used in statistical and numerical analysis). In 1982–1983, the first direct biochemical assay was developed for measuring helicase activity. This method was called a "strand displacement assay".
The strand displacement assay involves the radiolabeling of DNA duplexes. Following helicase treatment, the single-stranded DNA is detected as a species separate from the double-stranded DNA by non-denaturing PAGE electrophoresis. The amount of radioactive label on the single-stranded DNA is then quantified to give a numerical value for the extent of double-strand DNA unwinding. Although the strand displacement assay is acceptable for qualitative analysis, its inability to capture more than a single time point, its time consumption, and its dependence on radioactive compounds for labeling created the need for diagnostics that can monitor helicase activity in real time.
Other methods were later developed that incorporated some, if not all of the following: high-throughput mechanics, the use of non-radioactive nucleotide labeling, faster reaction time/less time consumption, real-time monitoring of helicase activity (using kinetic measurement instead of endpoint/single point analysis). These methodologies include: "a rapid quench flow method, fluorescence-based assays, filtration assays, a scintillation proximity assay, a time resolved fluorescence resonance energy transfer assay, an assay based on flashplate technology, homogenous time-resolved fluorescence quenching assays, and electrochemiluminescence-based helicase assays". With the use of specialized mathematical equations, some of these assays can be utilized to determine how many base paired nucleotides a helicase can break per hydrolysis of 1 ATP molecule.
Commercial diagnostic kits are also available. One such kit is the "Trupoint" diagnostic assay from PerkinElmer, Inc. This assay is a time-resolved fluorescence quenching assay that utilizes the PerkinElmer "SignalClimb" technology, which is based on two labels that bind in close proximity to one another but on opposite DNA strands. One label is a fluorescent lanthanide chelate, which serves as the label that is monitored through a suitable 96/384-well plate reader. The other label is an organic quencher molecule. The basis of this assay is the "quenching" or repressing of the lanthanide chelate signal by the organic quencher molecule when the two are in close proximity, as they would be when the DNA duplex is in its native state. Upon helicase activity on the duplex, the quencher and lanthanide labels are separated as the DNA is unwound. This loss in proximity negates the quencher's ability to repress the lanthanide signal, causing a detectable increase in fluorescence that is representative of the amount of unwound DNA and can be used as a quantifiable measurement of helicase activity.
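The quantitative logic behind such fluorescence-based readouts can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration, not the vendor's protocol: the baseline and control values, time points and normalization scheme are all assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical example: convert a time-resolved fluorescence signal into a
# fraction of unwound duplex.  Values are illustrative, not real assay data.

def fraction_unwound(signal, quenched_baseline, unwound_control):
    """Normalize the signal between the intact-duplex (quenched) baseline
    and a fully unwound control that gives the maximum signal."""
    return (signal - quenched_baseline) / (unwound_control - quenched_baseline)

times_s = np.array([0, 30, 60, 120, 240])           # time after adding helicase + ATP
signal  = np.array([1000, 2500, 4000, 5200, 5800])  # arbitrary fluorescence units

frac = fraction_unwound(signal, quenched_baseline=1000, unwound_control=6000)

# Crude initial unwinding rate (fraction of duplex per second) from the first interval
initial_rate = (frac[1] - frac[0]) / (times_s[1] - times_s[0])
print(frac)
print(initial_rate)
```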
Single-molecule fluorescence imaging techniques, such as optical trapping in conjunction with epifluorescent imaging and surface immobilization in conjunction with total internal reflection fluorescence visualization, can be combined with microchannel flow cells and microfluidic control to allow individual fluorescently labeled protein and DNA molecules to be imaged and tracked, affording measurement of DNA unwinding and translocation at single-molecule resolution.
Determining helicase polarity
Helicase polarity, also called "directionality", is defined as the direction (characterized as 5'→3' or 3'→5') of helicase movement along the DNA or RNA single strand on which it is moving. Determining polarity is vital in, for example, establishing whether a tested helicase attaches to the DNA leading strand or the DNA lagging strand. To characterize this feature, a partially duplex DNA is used as the substrate: it has a central single-stranded DNA region flanked by duplex regions of different lengths (one short region that runs 5'→3' and one longer region that runs 3'→5'). Once the helicase is loaded onto that central single-stranded region, the polarity is determined by characterizing the newly formed single-stranded DNA.
| Biology and health sciences | Molecular biology | Biology |
448010 | https://en.wikipedia.org/wiki/Septic%20shock | Septic shock | Septic shock is a potentially fatal medical condition that occurs when sepsis, which is organ injury or damage in response to infection, leads to dangerously low blood pressure and abnormalities in cellular metabolism. The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) defines septic shock as a subset of sepsis in which particularly profound circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than with sepsis alone. Patients with septic shock can be clinically identified by requiring a vasopressor to maintain a mean arterial pressure of 65 mm Hg or greater and having serum lactate level greater than 2 mmol/L (>18 mg/dL) in the absence of hypovolemia. This combination is associated with hospital mortality rates greater than 40%.
The primary infection is most commonly caused by bacteria, but also may be by fungi, viruses or parasites. It may be located in any part of the body, but most commonly in the lungs, brain, urinary tract, skin or abdominal organs. It can cause multiple organ dysfunction syndrome (formerly known as multiple organ failure) and death.
Frequently, people with septic shock are cared for in intensive care units. It most commonly affects children, immunocompromised individuals, and the elderly, as their immune systems cannot deal with infection as effectively as those of healthy adults. The mortality rate from septic shock is approximately 25–50%.
Causes
Septic shock is a result of a systemic response to infection or multiple infectious causes. The precipitating infections that may lead to septic shock if severe enough include but are not limited to appendicitis, pneumonia, bacteremia, diverticulitis, pyelonephritis, meningitis, pancreatitis, necrotizing fasciitis, MRSA and mesenteric ischemia.
According to the earlier definitions of sepsis updated in 2001, sepsis is a constellation of symptoms secondary to an infection that manifests as disruptions in heart rate, respiratory rate, temperature, and white blood cell count. If sepsis worsens to the point of end-organ dysfunction (kidney failure, liver dysfunction, altered mental status, or heart damage), then the condition is called severe sepsis. In septic shock, events within tissue capillaries induce distributive shock in which the recovery of blood pressure is not achieved upon the administration of additional intravenous fluids, and requires a vasoconstrictive agent such as noradrenaline and/or vasopressin.
Pathophysiology
The pathophysiology of septic shock is not entirely understood, but it is known that a key role in the development of severe sepsis is played by an immune and coagulation response to an infection. Both pro-inflammatory and anti-inflammatory responses play a role in septic shock. Septic shock involves a widespread inflammatory response that produces a hypermetabolic effect. This is manifested by increased cellular respiration, protein catabolism, and metabolic acidosis with a compensatory respiratory alkalosis.
Most cases of septic shock are caused by gram-positive bacteria, followed by endotoxin-producing gram-negative bacteria, although fungal infections are an increasingly prevalent cause of septic shock. Toxins produced by pathogens cause an immune response; in gram-negative bacteria these are endotoxins, which are bacterial membrane lipopolysaccharides (LPS).
Gram-positive
In gram-positive bacteria, these are exotoxins or enterotoxins, which may vary depending on the species of bacteria. These are divided into three types. Type I, cell surface-active toxins, disrupt cells without entering, and include superantigens and heat-stable enterotoxins. Type II, membrane-damaging toxins, destroy cell membranes in order to enter and include hemolysins and phospholipases. Type III, intracellular toxins or A/B toxins interfere with internal cell function and include shiga toxin, cholera toxin, and anthrax lethal toxin. (note that Shigella and Vibrio cholerae are Gram negative organisms).
Gram-negative
In gram-negative sepsis, free LPS attaches to a circulating LPS-binding protein, and the complex then binds to the CD14 receptor on monocytes, macrophages, and neutrophils. Engagement of CD14 (even at doses as minute as 10 pg/mL) results in intracellular signaling via an associated "Toll-like receptor" protein 4 (TLR-4). This signaling results in the activation of nuclear factor kappaB (NF-κB), which leads to transcription of a number of genes that trigger a proinflammatory response, with profound activation of mononuclear cells and the production of potent effector cytokines such as IL-1, IL-6, and TNF-α. TLR-mediated activation helps to trigger the innate immune system to efficiently eradicate invading microbes, but the cytokines they produce also act on endothelial cells. There, they have a variety of effects, including reduced synthesis of anticoagulation factors such as tissue factor pathway inhibitor and thrombomodulin. The effects of the cytokines may be amplified by TLR-4 engagement on endothelial cells.
In response to inflammation, a compensatory reaction occurs, with production of anti-inflammatory substances such as IL-4, IL-10, IL-1 receptor antagonist, and cortisol. This is called compensatory anti-inflammatory response syndrome (CARS).
Both the inflammatory and anti-inflammatory reactions are responsible for the course of sepsis and are described as MARS (Mixed Antagonist Response Syndrome). The aim of these processes is to keep inflammation at an appropriate level. CARS often leads to suppression of the immune system, which leaves patients vulnerable to secondary infection. It was once thought that SIRS or CARS could predominate in a septic individual, and it was proposed that CARS follows SIRS in a two-wave process. It is now believed that the systemic inflammatory response and the compensatory anti-inflammatory response occur simultaneously.
At high levels of LPS, the syndrome of septic shock supervenes; the same cytokine and secondary mediators, now at high levels, result in systemic vasodilation (hypotension), diminished myocardial contractility, widespread endothelial injury, activation causing systemic leukocyte adhesion and diffuse alveolar capillary damage in the lung, and activation of the coagulation system culminating in disseminated intravascular coagulation (DIC).
The hypoperfusion from the combined effects of widespread vasodilation, myocardial pump failure, and DIC causes multiorgan system failure that affects the liver, kidneys, and central nervous system, among other organ systems. Recently, severe damage to liver ultrastructure has been noticed from treatment with cell-free toxins of Salmonella. Unless the underlying infection (and LPS overload) is rapidly brought under control, the patient usually dies.
The ability of TLR4 to respond to a distinct LPS species is clinically important. Pathogenic bacteria may employ LPS with low biological activity to evade proper recognition by the TLR4/MD-2 system, dampening the host immune response and increasing the risk of bacterial dissemination. On the other hand, such LPS would not be able to induce septic shock in susceptible patients, rendering septic complications more manageable. Yet, defining and understanding how even the smallest structural differences between the very similar LPS species may affect the activation of the immune response may provide the mechanism for the fine tuning of the latter and new insights into immunomodulatory processes.
Diagnosis
According to current guidelines, requirements for diagnosis with sepsis are "the presence (probable or documented) of infection together with systemic manifestations of infection". These manifestations may include:
Tachypnea (fast rate of breathing), which is defined as more than 20 breaths per minute, or, when testing blood gas, a PaCO2 of less than 32 mm Hg, which signifies hyperventilation
White blood cell count either significantly low (< 4000 cells/mm3), or elevated (> 12000 cells/mm3)
Tachycardia (rapid heart rate), which in sepsis is defined as a rate greater than 90 beats per minute
Altered body temperature: fever or hypothermia
Documented evidence of infection may include positive blood culture, signs of pneumonia on chest x-ray, or other radiologic or laboratory evidence of infection. Signs of end-organ dysfunction are present in septic shock, including kidney failure, liver dysfunction, changes in mental status, or elevated serum lactate.
Septic shock is diagnosed if there is low blood pressure (BP) that does not respond to treatment. This means that intravenous fluid administration alone is not enough to maintain a patient's BP. Diagnosis of septic shock is made when systolic blood pressure is less than 90 mm Hg, mean arterial pressure (MAP) is less than 70 mm Hg, or systolic BP decreases by 40 mm Hg or more without another cause for low BP.
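Purely as an illustration of how the numerical thresholds quoted above fit together, the following Python sketch encodes them as simple boolean checks. It is not a clinical tool; the temperature cutoffs of 38 °C and 36 °C are assumed, commonly cited values rather than figures stated in this article.

```python
# Hedged sketch of the diagnostic thresholds quoted above; not a clinical decision tool.
# Temperature cutoffs (38.0 / 36.0 degrees C) are assumed common values, not from this article.

def systemic_manifestations(resp_rate, paco2_mmhg, wbc_per_mm3, heart_rate, temp_c):
    """Return which of the listed systemic manifestations of infection are present."""
    return {
        "tachypnea": resp_rate > 20 or paco2_mmhg < 32,
        "abnormal_wbc": wbc_per_mm3 < 4000 or wbc_per_mm3 > 12000,
        "tachycardia": heart_rate > 90,
        "abnormal_temperature": temp_c > 38.0 or temp_c < 36.0,
    }

def septic_shock_bp_criteria(systolic_bp, map_mmhg, systolic_drop):
    """Blood-pressure criteria for septic shock quoted above (unresponsive to fluids)."""
    return systolic_bp < 90 or map_mmhg < 70 or systolic_drop >= 40

print(systemic_manifestations(resp_rate=24, paco2_mmhg=30,
                              wbc_per_mm3=15000, heart_rate=110, temp_c=38.9))
print(septic_shock_bp_criteria(systolic_bp=85, map_mmhg=60, systolic_drop=45))
```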
Definition
Septic shock is a subclass of distributive shock, a condition in which abnormal distribution of blood flow in the smallest blood vessels results in inadequate blood supply to the body tissues, resulting in ischemia and organ dysfunction. Septic shock refers specifically to distributive shock due to sepsis as a result of infection.
Septic shock may be defined as sepsis-induced low blood pressure that persists despite treatment with intravenous fluids. Low blood pressure reduces tissue perfusion pressure, causing the tissue hypoxia that is characteristic of shock. Cytokines released in a large scale inflammatory response result in massive vasodilation, increased capillary permeability, decreased systemic vascular resistance, and low blood pressure. Finally, in an attempt to offset decreased blood pressure, ventricular dilatation and myocardial dysfunction occur.
Septic shock may be regarded as a stage of SIRS (Systemic Inflammatory Response Syndrome), in which sepsis, severe sepsis and multiple organ dysfunction syndrome (MODS) represent different stages of a pathophysiological process. If an organism cannot cope with an infection, it may lead to a systemic response - sepsis, which may further progress to severe sepsis, septic shock, organ failure, and eventually, result in death.
Treatment
Treatment primarily consists of the following:
Giving intravenous fluids
Early antibiotic administration
Early goal directed therapy
Rapid source identification and control
Support of major organ dysfunction
Fluids
Because lowered blood pressure in septic shock contributes to poor perfusion, fluid resuscitation is an initial treatment to increase blood volume. Patients demonstrating sepsis-induced hypoperfusion should be initially resuscitated with at least 30 ml/kg of intravenous crystalloid within the first three hours. Crystalloids such as normal saline and lactated Ringer's solution are recommended as the initial fluid of choice, while the use of colloid solutions such as hydroxyethyl starch has not shown any advantage or decrease in mortality. When large quantities of fluids are given, administering albumin has shown some benefit. However, an excessively high rate of fluid infusion can be harmful; the infusion rate of the particular fluid must be closely monitored, along with the patient's condition and vital signs.
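As a minimal arithmetic illustration of the weight-based figure above (30 ml/kg of crystalloid), the following sketch uses a hypothetical 70 kg patient; it is not dosing guidance.

```python
# Illustrative only: initial crystalloid volume from the 30 ml/kg figure quoted above.

def initial_crystalloid_ml(weight_kg, dose_ml_per_kg=30):
    return weight_kg * dose_ml_per_kg

print(initial_crystalloid_ml(70))  # 2100 ml over the first three hours
```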
Antibiotics
Treatment guidelines call for the administration of broad-spectrum antibiotics within the first hour following recognition of septic shock. Prompt antimicrobial therapy is important, as risk of dying increases by approximately 10% for every hour of delay in receiving antibiotics. Time constraints do not allow the culture, identification, and testing for antibiotic sensitivity of the specific microorganism responsible for the infection. Therefore, combination antimicrobial therapy, which covers a wide range of potential causative organisms, is tied to better outcomes. Antibiotics should be continued for 7–10 days in most patients, though treatment duration may be shorter or longer depending on clinical response.
Vasopressors
Among the choices for vasopressors, norepinephrine is superior to dopamine in septic shock. Norepinephrine is the preferred vasopressor, while epinephrine may be added to norepinephrine when needed. Low-dose vasopressin also may be used as an addition to norepinephrine, but is not recommended as a first-line treatment. Dopamine may cause rapid heart rate and arrhythmias, and is only recommended in combination with norepinephrine in those with slow heart rate and low risk of arrhythmia. In the initial treatment of low blood pressure in septic shock, the goal of vasopressor treatment is a mean arterial pressure (MAP) of 65 mm Hg. In 2017, the FDA approved angiotensin II injection for intravenous infusion to increase blood pressure in adults with septic or other distributive shock.
Methylene blue
Methylene blue has been found to be useful for this condition. Although use of methylene blue has mostly been in adults it has also been shown to work in children. Its mechanism of action is thought to be via the inhibition of the nitric oxide-cyclic guanosine monophosphate pathway. This pathway is excessively activated in septic shock. Methylene blue has been found to work in cases resistant to the usual agents. This effect was first reported in the early 1990s.
Other
While there is tentative evidence that β-blocker therapy may help control heart rate, the evidence is not strong enough to support its routine use. There is tentative evidence that steroids may be useful in improving outcomes.
Tentative evidence exists that Polymyxin B-immobilized fiber column hemoperfusion may be beneficial in treatment of septic shock. Trials are ongoing and it is currently being used in Japan and Western Europe.
Recombinant activated protein C (drotrecogin alpha) in a 2011 Cochrane review was found not to decrease mortality and to increase bleeding, and thus, was not recommended for use. Drotrecogin alfa (Xigris), was withdrawn from the market in October 2011.
Epidemiology
Sepsis has a worldwide incidence of more than 20 million cases a year, with mortality due to septic shock reaching up to 50 percent even in industrialized countries.
According to the U.S. Centers for Disease Control, septic shock is the thirteenth leading cause of death in the United States and the most frequent cause of death in intensive care units. There has been an increase in the rate of septic shock deaths in recent decades, which is attributed to an increase in invasive medical devices and procedures, increases in immunocompromised patients, and an overall increase in elderly patients.
Tertiary care centers (such as hospice care facilities) have 2–4 times the rate of bacteremia of primary care centers, and 75% of these cases are hospital-acquired infections.
The process of infection by bacteria or fungi may result in systemic signs and symptoms that are variously described. In rough order of increasing severity, these are: bacteremia or fungemia; sepsis, severe sepsis or sepsis syndrome; septic shock; refractory septic shock; multiple organ dysfunction syndrome; and death. Approximately 70% of septic shock cases were once traceable to gram-negative bacteria that produce endotoxins; however, with the emergence of MRSA and the increased use of arterial and venous catheters, gram-positive bacteria are implicated approximately as commonly as bacilli.
35% of septic shock cases derive from urinary tract infections, 15% from the respiratory tract, 15% from skin catheters (such as IVs), and more than 30% of all cases are idiopathic in origin.
The mortality rate from sepsis, especially if it is not treated rapidly with the needed medications in a hospital, is approximately 40% in adults and 25% in children. It is significantly greater when sepsis is left untreated for more than seven days.
| Biology and health sciences | Cardiovascular disease | Health |
448787 | https://en.wikipedia.org/wiki/Greater%20painted-snipe | Greater painted-snipe | The greater painted-snipe (Rostratula benghalensis) is a species of wader in the small painted-snipe family Rostratulidae. It is widely distributed across Africa and southern Asia and is found in a variety of wetland habitats, including swamps and the edges of larger water bodies such as lakes and rivers. This species is sexually dimorphic, with the female being larger and more brightly coloured than the male. The female is normally polyandrous, with the males incubating the eggs and caring for the young.
Taxonomy
The greater painted-snipe was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. He placed it with the rails in the genus Rallus and coined the binomial name Rallus benghalensis. Linnaeus based his account on the "Bengall water rail" that had been described and illustrated in 1738 by the English naturalist Eleazar Albin in his A Natural History of Birds. Albin had examined a drawing that had been sent to the English silk-pattern designer Joseph Dandridge from Bengal. The greater painted-snipe is now placed with the Australian painted-snipe in the genus Rostratula that was introduced in 1816 by the French ornithologist Louis Vieillot. The species is treated as monotypic: no subspecies are recognised.
The Australian painted-snipe (Rostratula australis) was formerly treated as a subspecies but was promoted to species status based on the differences in morphology and in the vocal calls.
Description
The greater painted-snipe is a medium-sized shorebird with an overall length of . The species is sexually dimorphic: females are larger, heavier, and have bolder plumage than males. The female has a black head with a buff stripe and a white eye-patch. The neck is dark rufous. The upperparts are mostly dark bronze-green finely barred with black. A white stripe curves around the shoulder mantle. The underbody is white. The male is much paler and less uniform with barring on the and wing-coverts. The juvenile resembles the male but lacks the darker band around the chest.
It is not a vocal species; apart from the breeding season, it is mostly silent. The female may make a "mellow hooting or booming" sound.
Distribution and habitat
Greater painted-snipe are very widely distributed; in mainland Africa as well as Madagascar and the Seychelles; in India, and Southeast Asia. Within Africa, they are found in the Nile River Valley and in the non-rainforested areas of Sub-Saharan Africa. They are notably absent from the eastern portion of Somalia, from the desert areas of Namibia, and from parts of Botswana and South Africa. Despite their wide distribution, they are uncommon within their range. There are between 31,000 and 1,000,000 mature individuals alive, according to BirdLife International.
Although this species inhabits a variety of wetland habitats, it prefers muddy areas with available cover (i.e., vegetation). It is also found on the edges of lakes and rivers, provided there is cover nearby, and in marshes and around swamps. They are usually found close to the fringes of reed beds along shorelines of marshes, swamps, ponds and streams.
Behaviour
Greater painted-snipe usually live solitarily or in pairs, but sometimes are found in large groups. They are rather shy and retiring, skulking close to the vegetation so that they can retreat to cover if disturbed. When flushed, the birds fly like rails, with their legs dangling.
Food and feeding
The greater painted-snipe feeds on insects, snails, earthworms and crustaceans, as well as vegetable matter such as plant seeds. It forages using a scythe-like action of the head and bill in shallow water. The birds are generally crepuscular, feeding in the early morning and near dusk.
Breeding
Greater painted-snipe are almost always polyandrous. The female initiates courtship and usually mates with two males in a season, but may mate with up to four; the males incubate the eggs and provide the parental care. The nest is a shallow scrape in soft ground, lined with plant material and situated among grass or reeds at the water's edge; sometimes it is a pad of vegetation or a nest of grass and weeds. It is usually well concealed. The clutch is normally 4 eggs. These have a light buff-yellow background and are covered with black-brown blotches, spots and lines. The eggs are incubated by the male for around 19 days. The young are precocial and nidifugous. They are brooded while they are small. The chicks are buff coloured and have black stripes running along their length.
Conservation status
The greater painted-snipe is assessed as "Least Concern" by the International Union for Conservation of Nature (IUCN), due to its large range and the relatively slow rate of population decrease.
Gallery
| Biology and health sciences | Charadriiformes | Animals |
449282 | https://en.wikipedia.org/wiki/Butterflyfish | Butterflyfish | The butterflyfish are a group of conspicuous tropical marine fish of the family Chaetodontidae; the bannerfish and coralfish are also included in this group. The approximately 129 species in 12 genera are found mostly on the reefs of the Atlantic, Indian, and Pacific Oceans. A number of species pairs occur in the Indian and Pacific Oceans, members of the huge genus Chaetodon.
Butterflyfish look like smaller versions of angelfish (Pomacanthidae), but unlike these, lack preopercle spines at the gill covers. Some members of the genus Heniochus resemble the Moorish idol (Zanclus cornutus) of the monotypic Zanclidae. Among the paraphyletic Perciformes, the former are probably not too distantly related to butterflyfish, whereas the Zanclidae seem far less close.
Description and ecology
Butterflyfish mostly range from in length. The largest species, the lined butterflyfish and the saddle butterflyfish, C. ephippium, grow to . The common name references the brightly coloured and strikingly patterned bodies of many species, bearing shades of black, white, blue, red, orange, and yellow. Other species are dull in colour. Butterflyfish are a widespread and diverse group of marine percoids, with representatives on practically all coral reef systems and in every tropical ocean. Their bright colours and patterns have attracted much attention, generating a wealth of information about their behaviour and ecology. Many have eyespots on their flanks and dark bands across their eyes, not unlike the patterns seen on butterfly wings. Their deep, laterally narrow bodies are easily noticed through the profusion of reef life. The conspicuous coloration of butterflyfish may be intended for interspecies communication. Butterflyfish have uninterrupted dorsal fins with tail fins that may be rounded or truncated, but are never forked.
Generally diurnal and frequenting waters less than deep (though some species descend to ), butterflyfish stick to particular home ranges. These corallivores are especially territorial, forming pairs and staking claim to a specific coral head. Contrastingly, the zooplankton feeders form large conspecific groups. By night, butterflyfish hide in reef crevices and exhibit markedly different coloration.
Their coloration also makes them popular aquarium fish. However, most species feed on coral polyps and sea anemones. Balancing the relative populations of prey and predator is complex, leading hobby aquarists to focus on the few generalists and specialist zooplankton feeders.
Butterflyfish are pelagic spawners; that is, they release many buoyant eggs into the water, which become part of the plankton, floating with the currents until hatching. The fry go through a tholichthys stage, wherein the body of the postlarval fish is covered in large, bony plates extending from the head. They lose their bony plates as they mature. Only one other family of fish, the scats (Scatophagidae), expresses such an armored stage.
Taxonomy, systematics and evolution
The Chaetodontidae can be, but are not usually, divided into two lineages that arguably are subfamilies. The subfamily name Chaetodontinae is a little-used leftover from the period when the Pomacanthidae and Chaetodontidae were united under the latter name as a single family. Hence, Chaetodontinae is today considered a junior synonym of Chaetodontidae. In any case, one lineage of Chaetodontidae (in the modern sense) contains the "typical" butterflyfish around Chaetodon, while the other unites the bannerfish and coralfish genera. As the Perciformes are highly paraphyletic, the precise relationships of the Chaetodontidae as a whole are badly resolved.
Chaetodontidae is classified within the suborder Percoidei by the 5th edition of Fishes of the World, but they are placed in an unnamed clade which sits outside the superfamily Percoidea. This clade contains 7 families which appear to have some relationship to Acanthuroidei, Monodactylidae, and Priacanthidae. Other authorities have placed the family in the order Chaetodontiformes alongside the family Leiognathidae.
Before DNA sequencing, it was often unclear whether particular forms should be treated as species or subspecies. Numerous subgenera have been proposed for splitting out of Chaetodon, and it is becoming clear how to subdivide the genus if that is desired.
The fossil record of this group is marginal. Their restriction to coral reefs means their carcasses are liable to be dispersed by scavengers, overgrown by corals, and any that do fossilize will not long survive erosion. However, Pygaeus, a very basal fossil from the mid- to late Eocene of Europe, dates from around the Bartonian 40–37 million years ago (Mya). Thus, the Chaetodontidae emerged probably in the early to mid-Eocene. A crude molecular clock in combination with the evidence given by Pygaeus allows placement of the initial split between the two main lineages to the middle to late Eocene, and together with the few other fossils, it allows the deduction that most living genera were probably distinct by the end of the Paleogene 23 Mya.
Genera
The bannerfish-coralfish lineage can be further divided in two groups; these might be considered tribes, but have not been formally named. Genera are listed in order of the presumed phylogeny, from the most ancient to the youngest:
Bannerfish/coralfish lineage 1:
Amphichaetodon Burgess, 1978
Coradion Kaup, 1860
Chelmon Cloquet, 1817
Chelmonops Bleeker, 1876
Bannerfish/coralfish lineage 2:
Forcipiger Jordan & McGregor, 1898
Hemitaurichthys Bleeker, 1876
Heniochus Cuvier, 1816
Johnrandallia Nalbant, 1974
The "typical" butterflyfishes may eventually come to contain more genera; see Chaetodon:
Chaetodon Linnaeus, 1758
Parachaetodon Bleeker, 1874
Prognathodes Gill, 1862
Roa Jordan, 1923
Timeline
Gallery
| Biology and health sciences | Acanthomorpha | Animals |
449568 | https://en.wikipedia.org/wiki/%E2%88%921 | −1 | In mathematics, −1 (negative one or minus one) is the additive inverse of 1, that is, the number that when added to 1 gives the additive identity element, 0. It is the negative integer greater than negative two (−2) and less than 0.
In mathematics
Algebraic properties
Multiplying a number by −1 is equivalent to changing the sign of the number – that is, for any $x$ we have $(-1) \cdot x = -x$. This can be proved using the distributive law and the axiom that 1 is the multiplicative identity:
$x + (-1) \cdot x = 1 \cdot x + (-1) \cdot x = (1 + (-1)) \cdot x = 0 \cdot x = 0.$
Here we have used the fact that any number times 0 equals 0, which follows by cancellation from the equation
$0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x.$
In other words,
$x + (-1) \cdot x = 0,$
so $(-1) \cdot x$ is the additive inverse of $x$, i.e. $(-1) \cdot x = -x$, as was to be shown.
The square of −1 (that is, −1 multiplied by −1) equals 1. As a consequence, a product of two negative numbers is positive. For an algebraic proof of this result, start with the equation
$0 = (-1) \cdot 0 = (-1) \cdot (1 + (-1)).$
The first equality follows from the above result, and the second follows from the definition of −1 as additive inverse of 1: it is precisely that number which when added to 1 gives 0. Now, using the distributive law, it can be seen that
$0 = (-1) \cdot (1 + (-1)) = (-1) \cdot 1 + (-1) \cdot (-1) = -1 + (-1) \cdot (-1).$
The third equality follows from the fact that 1 is a multiplicative identity. But now adding 1 to both sides of this last equation implies
$(-1) \cdot (-1) = 1.$
The above arguments hold in any ring, a concept of abstract algebra generalizing integers and real numbers.
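These identities are also mechanically checkable. The short Lean sketch below, assuming Lean 4 with Mathlib (where `neg_one_mul` states the general ring fact), verifies the two results; it is an illustrative check, not part of the original article.

```lean
import Mathlib

-- (-1) * x = -x holds in any ring; `neg_one_mul` is the general lemma.
example {R : Type*} [Ring R] (x : R) : (-1 : R) * x = -x := neg_one_mul x

-- (-1) * (-1) = 1, here checked over the integers with the `ring` tactic.
example : (-1 : ℤ) * (-1) = 1 := by ring
```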
Although there are no real square roots of −1, the complex number $i$ satisfies $i^2 = -1$, and as such can be considered a square root of −1. The only other complex number whose square is −1 is $-i$, because there are exactly two square roots of any non-zero complex number, which follows from the fundamental theorem of algebra. In the algebra of quaternions – where the fundamental theorem does not apply – which contains the complex numbers, the equation $x^2 = -1$ has infinitely many solutions.
Inverse and invertible elements
Exponentiation of a non-zero real number can be extended to negative integers, where raising a number to the power −1 has the same effect as taking its multiplicative inverse:
$x^{-1} = \frac{1}{x}.$
This definition is then applied to negative integers, preserving the exponential law $x^{a} x^{b} = x^{a+b}$ for real numbers $a$ and $b$.
A −1 superscript in $f^{-1}(x)$ takes the inverse function of $f(x)$, where $(f(x))^{-1}$ specifically denotes a pointwise reciprocal. Where $f$ is bijective, specifying an output codomain of every $y \in Y$ from every input domain $x \in X$, there will be
$f^{-1}(f(x)) = x$ and $f(f^{-1}(y)) = y.$
When a subset of the codomain is specified inside the function $f$, its inverse will yield an inverse image, or preimage, of that subset under the function.
Exponentiation to negative integers can be further extended to invertible elements of a ring by defining $x^{-1}$ as the multiplicative inverse of $x$; in this context, these elements are considered units.
In the polynomial domain $F[x]$ over any field $F$, the polynomial $x$ has no inverse. If it did have an inverse $q(x)$, then there would be
$x \cdot q(x) = 1,$
which is not possible (comparing degrees, the left-hand side has degree at least 1 while the right-hand side has degree 0), and therefore, $F[x]$ is not a field. More specifically, because the polynomial $x$ is not constant, it is not a unit in $F[x]$.
| Mathematics | Basics | null |
22131077 | https://en.wikipedia.org/wiki/GHS%20hazard%20pictograms | GHS hazard pictograms | Hazard pictograms form part of the international Globally Harmonized System of Classification and Labelling of Chemicals (GHS). Two sets of pictograms are included within the GHS: one for the labelling of containers and for workplace hazard warnings, and a second for use during the transport of dangerous goods. Either one or the other is chosen, depending on the target audience, but the two are not used together for the same hazard. The two sets of pictograms use the same symbols for the same hazards, although certain symbols are not required for transport pictograms. Transport pictograms come in a wider variety of colors and may contain additional information such as a subcategory number.
Hazard pictograms are one of the key elements for the labelling of containers under the GHS, along with:
an identification of the product;
a signal word – either Danger or Warning – where necessary
hazard statements, indicating the nature and degree of the risks posed by the product
precautionary statements, indicating how the product should be handled to minimize risks to the user (as well as to other people and the general environment)
the identity of the supplier (who might be a manufacturer or importer)
The GHS chemical hazard pictograms are intended to provide the basis for or to replace national systems of hazard pictograms. In the European Union, they were implemented by the CLP Regulation, which took effect in 2009.
The GHS transport pictograms are the same as those recommended in the UN Recommendations on the Transport of Dangerous Goods, widely implemented in national regulations such as the U.S. Federal Hazardous Materials Transportation Act (49 U.S.C. 5101–5128) and D.O.T. regulations at 49 C.F.R. 100–185.
Physical hazards pictograms
Health hazards pictograms
Physical and health hazard pictograms
Environmental hazards pictograms
Transport pictograms
Class 1: Explosives
Class 2: Gases
Classes 3 and 4: Flammable liquids and solids
Other GHS transport classes
Non-GHS transport pictograms
The following pictograms are included in the UN Model Regulations but have not been incorporated into the GHS because of the nature of the hazards.
| Physical sciences | Basics: General | Chemistry |
753962 | https://en.wikipedia.org/wiki/Circulation%20%28physics%29 | Circulation (physics) | In physics, circulation is the line integral of a vector field around a closed curve embedded in the field. In fluid dynamics, the field is the fluid velocity field. In electrodynamics, it can be the electric or the magnetic field.
In aerodynamics, circulation was first used independently by Frederick Lanchester, Ludwig Prandtl, Martin Kutta and Nikolay Zhukovsky. It is usually denoted Γ (Greek uppercase gamma).
Definition and properties
If $\mathbf{V}$ is a vector field and $d\mathbf{l}$ is a vector representing the differential length of a small element of a defined curve, the contribution of that differential length to circulation is $d\Gamma$:
$d\Gamma = \mathbf{V} \cdot d\mathbf{l} = |\mathbf{V}|\,|d\mathbf{l}|\cos\theta.$
Here, $\theta$ is the angle between the vectors $\mathbf{V}$ and $d\mathbf{l}$.
The circulation $\Gamma$ of a vector field $\mathbf{V}$ around a closed curve $C$ is the line integral:
$\Gamma = \oint_{C} \mathbf{V} \cdot d\mathbf{l}.$
In a conservative vector field this integral evaluates to zero for every closed curve. That means that a line integral between any two points in the field is independent of the path taken. It also implies that the vector field can be expressed as the gradient of a scalar function, which is called a potential.
Relation to vorticity and curl
Circulation can be related to curl of a vector field and, more specifically, to vorticity if the field is a fluid velocity field $\mathbf{v}$:
$\boldsymbol{\omega} = \nabla \times \mathbf{v}.$
By Stokes' theorem, the flux of curl or vorticity vectors through a surface S is equal to the circulation around its perimeter,
$\Gamma = \oint_{\partial S} \mathbf{V} \cdot d\mathbf{l} = \iint_{S} (\nabla \times \mathbf{V}) \cdot d\mathbf{S}.$
Here, the closed integration path is the boundary or perimeter of an open surface , whose infinitesimal element normal is oriented according to the right-hand rule. Thus curl and vorticity are the circulation per unit area, taken around a local infinitesimal loop.
In potential flow of a fluid with a region of vorticity, all closed curves that enclose the vorticity have the same value for circulation.
Uses
Kutta–Joukowski theorem in fluid dynamics
In fluid dynamics, the lift per unit span (L') acting on a body in a two-dimensional flow field is directly proportional to the circulation, i.e. it can be expressed as the product of the circulation Γ about the body, the fluid density $\rho$, and the speed of the body relative to the free-stream $V_\infty$:
$L' = \rho V_\infty \Gamma.$
This is known as the Kutta–Joukowski theorem.
This equation applies around airfoils, where the circulation is generated by airfoil action; and around spinning objects experiencing the Magnus effect where the circulation is induced mechanically. In airfoil action, the magnitude of the circulation is determined by the Kutta condition.
The circulation on every closed curve around the airfoil has the same value, and is related to the lift generated by each unit length of span. Provided the closed curve encloses the airfoil, the choice of curve is arbitrary.
Circulation is often used in computational fluid dynamics as an intermediate variable to calculate forces on an airfoil or other body.
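As a sketch of how circulation is evaluated numerically, the following Python fragment discretizes the line integral around a closed circular path in the velocity field of an ideal point vortex and then applies the Kutta–Joukowski relation. The vortex strength, density and free-stream speed are made-up example values, and the field is an analytical toy rather than CFD output.

```python
import numpy as np

# Illustrative example: circulation as a discretized line integral, then
# Kutta-Joukowski lift per unit span.  All numerical values are made up.

gamma_true = 5.0        # point-vortex strength, m^2/s
rho = 1.225             # fluid density, kg/m^3
v_inf = 10.0            # free-stream speed, m/s

def velocity(x, y):
    """Velocity of an ideal point vortex at the origin (counterclockwise)."""
    r2 = x**2 + y**2
    return np.array([-gamma_true * y, gamma_true * x]) / (2.0 * np.pi * r2)

# Closed circular path of radius 1 around the vortex
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
dl = np.roll(points, -1, axis=0) - points           # segment vectors closing the loop

v = np.array([velocity(x, y) for x, y in points])
circulation = np.sum(np.einsum("ij,ij->i", v, dl))  # sum of V . dl around the loop

lift_per_span = rho * v_inf * circulation            # L' = rho * V_inf * Gamma
print(circulation, lift_per_span)                    # circulation ~ gamma_true
```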
Fundamental equations of electromagnetism
In electrodynamics, the Maxwell-Faraday law of induction can be stated in two equivalent forms: that the curl of the electric field is equal to the negative rate of change of the magnetic field,
$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},$
or that the circulation of the electric field around a loop is equal to the negative rate of change of the magnetic field flux through any surface spanned by the loop, by Stokes' theorem:
$\oint_{\partial S} \mathbf{E} \cdot d\mathbf{l} = -\frac{d}{dt} \iint_{S} \mathbf{B} \cdot d\mathbf{S}.$
Circulation of a static magnetic field is, by Ampère's law, proportional to the total current enclosed by the loop:
$\oint_{\partial S} \mathbf{B} \cdot d\mathbf{l} = \mu_0 I_{\text{enc}}.$
For systems with electric fields that change over time, the law must be modified to include a term known as Maxwell's correction.
| Physical sciences | Fluid mechanics | Physics |
754487 | https://en.wikipedia.org/wiki/Permeability%20%28electromagnetism%29 | Permeability (electromagnetism) | In electromagnetism, permeability is the measure of magnetization produced in a material in response to an applied magnetic field. Permeability is typically represented by the (italicized) Greek letter μ. It is the ratio of the magnetic induction to the magnetizing field in a material. The term was coined by William Thomson, 1st Baron Kelvin in 1872, and used alongside permittivity by Oliver Heaviside in 1885. The reciprocal of permeability is magnetic reluctivity.
In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A2). The permeability constant μ0, also known as the magnetic constant or the permeability of free space, is the proportionality between magnetic induction and magnetizing force when forming a magnetic field in a classical vacuum.
A closely related property of materials is magnetic susceptibility, which is a dimensionless proportionality factor that indicates the degree of magnetization of a material in response to an applied magnetic field.
Explanation
In the macroscopic formulation of electromagnetism, there appear two different kinds of magnetic field:
the magnetizing field H which is generated around electric currents and displacement currents, and also emanates from the poles of magnets. The SI units of H are amperes per meter.
the magnetic flux density B which acts back on the electrical domain, by curving the motion of charges and causing electromagnetic induction. The SI units of B are volt-seconds per square meter, a ratio equivalent to one tesla.
The concept of permeability arises since in many materials (and in vacuum), there is a simple relationship between H and B at any location or time, in that the two fields are precisely proportional to each other:
$\mathbf{B} = \mu \mathbf{H},$
where the proportionality factor μ is the permeability, which depends on the material. The permeability of vacuum (also known as permeability of free space) is a physical constant, denoted μ0. The SI units of μ are volt-seconds per ampere-meter, equivalently henry per meter. Typically μ would be a scalar, but for an anisotropic material, μ could be a second rank tensor.
However, inside strong magnetic materials (such as iron, or permanent magnets), there is typically no simple relationship between H and B. The concept of permeability is then nonsensical or at least only applicable to special cases such as unsaturated magnetic cores. Not only do these materials have nonlinear magnetic behaviour, but often there is significant magnetic hysteresis, so there is not even a single-valued functional relationship between B and H. However, considering starting at a given value of B and H and slightly changing the fields, it is still possible to define an incremental permeability as:
$\mu_{\Delta} = \frac{\Delta B}{\Delta H},$
assuming B and H are parallel.
In the microscopic formulation of electromagnetism, where there is no concept of an H field, the vacuum permeability μ0 appears directly (in the SI Maxwell's equations) as a factor that relates total electric currents and time-varying electric fields to the B field they generate. In order to represent the magnetic response of a linear material with permeability μ, this instead appears as a magnetization M that arises in response to the B field: $\mathbf{M} = \left(\tfrac{1}{\mu_0} - \tfrac{1}{\mu}\right)\mathbf{B}$. The magnetization in turn is a contribution to the total electric current—the magnetization current.
Relative permeability and magnetic susceptibility
Relative permeability, denoted by the symbol $\mu_r$, is the ratio of the permeability of a specific medium to the permeability of free space μ0:
$\mu_r = \frac{\mu}{\mu_0},$
where $\mu_0 = 4\pi \times 10^{-7}$ H/m is the magnetic permeability of free space. In terms of relative permeability, the magnetic susceptibility is
$\chi_m = \mu_r - 1.$
The number χm is a dimensionless quantity, sometimes called volumetric or bulk susceptibility, to distinguish it from χp (magnetic mass or specific susceptibility) and χM (molar or molar mass susceptibility).
Diamagnetism
Diamagnetism is the property of an object which causes it to create a magnetic field in opposition of an externally applied magnetic field, thus causing a repulsive effect. Specifically, an external magnetic field alters the orbital velocity of electrons around their atom's nuclei, thus changing the magnetic dipole moment in the direction opposing the external field. Diamagnets are materials with a magnetic permeability less than μ0 (a relative permeability less than 1).
Consequently, diamagnetism is a form of magnetism that a substance exhibits only in the presence of an externally applied magnetic field. It is generally a quite weak effect in most materials, although superconductors exhibit a strong effect.
Paramagnetism
Paramagnetism is a form of magnetism which occurs only in the presence of an externally applied magnetic field. Paramagnetic materials are attracted to magnetic fields, hence have a relative magnetic permeability greater than one (or, equivalently, a positive magnetic susceptibility).
The magnetic moment induced by the applied field is linear in the field strength, and it is rather weak. It typically requires a sensitive analytical balance to detect the effect. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field, because thermal motion causes the spins to become randomly oriented without it. Thus the total magnetization will drop to zero when the applied field is removed. Even in the presence of the field, there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnets is non-linear and much stronger so that it is easily observed, for instance, in magnets on one's refrigerator.
Gyromagnetism
For gyromagnetic media (see Faraday rotation) the magnetic permeability response to an alternating electromagnetic field in the microwave frequency domain is treated as a non-diagonal tensor expressed by:
Values for some common materials
The following table should be used with caution as the permeability of ferromagnetic materials varies greatly with field strength and specific composition and fabrication. For example, 4% Si electrical steel has an initial relative permeability (at or near 0 T) of 2,000 and a maximum of 38,000 at 1 T, with different ranges of values for different percentages of Si and different manufacturing processes; and, indeed, the relative permeability of any material at a sufficiently high field strength trends toward 1 (at magnetic saturation).
A good magnetic core material must have high permeability.
For passive magnetic levitation a relative permeability below 1 is needed (corresponding to a negative susceptibility).
Permeability varies with a magnetic field. Values shown above are approximate and valid only at the magnetic fields shown. They are given for a zero frequency; in practice, the permeability is generally a function of the frequency. When the frequency is considered, the permeability can be complex, corresponding to the in-phase and out of phase response.
Complex permeability
A useful tool for dealing with high frequency magnetic effects is the complex permeability. While at low frequencies in a linear material the magnetic field and the auxiliary magnetic field are simply proportional to each other through some scalar permeability, at high frequencies these quantities will react to each other with some lag time. These fields can be written as phasors, such that
$H = H_0 e^{j\omega t}, \qquad B = B_0 e^{j(\omega t - \delta)},$
where $\delta$ is the phase delay of $B$ from $H$.
Understanding permeability as the ratio of the magnetic flux density to the magnetic field, the ratio of the phasors can be written and simplified as
$\mu = \frac{B}{H} = \frac{B_0 e^{j(\omega t - \delta)}}{H_0 e^{j\omega t}} = \frac{B_0}{H_0} e^{-j\delta},$
so that the permeability becomes a complex number.
By Euler's formula, the complex permeability can be translated from polar to rectangular form,
$\mu = \frac{B_0}{H_0} \cos\delta - j\,\frac{B_0}{H_0} \sin\delta = \mu' - j\mu''.$
The ratio of the imaginary to the real part of the complex permeability is called the loss tangent,
$\tan\delta = \frac{\mu''}{\mu'},$
which provides a measure of how much power is lost in the material versus how much is stored.
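A short numerical sketch of these relations is given below. The amplitudes, phase lag and the resulting loss tangent are invented example values, not measurements; the point is simply how μ′, μ″ and tan δ follow from the phasor ratio.

```python
import numpy as np

# Illustrative values only: extract complex permeability and loss tangent
# from sinusoidal B and H of the same frequency, with B lagging H by delta.

mu0 = 4e-7 * np.pi      # permeability of free space, H/m

H0 = 100.0              # A/m, magnetizing-field amplitude (example value)
B0 = 0.05               # T,   flux-density amplitude (example value)
delta = 0.1             # rad, phase lag of B behind H (example value)

mu = (B0 / H0) * np.exp(-1j * delta)   # phasor ratio: mu = (B0/H0) e^{-j delta}
mu_real = mu.real                      # mu'
mu_imag = -mu.imag                     # mu'' (positive for a lossy material)

loss_tangent = mu_imag / mu_real       # tan(delta) = mu'' / mu'
mu_r = abs(mu) / mu0                   # magnitude relative to free space

print(mu_real, mu_imag, loss_tangent, mu_r)
```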
| Physical sciences | Magnetostatics | Physics |
754740 | https://en.wikipedia.org/wiki/Steppe%20eagle | Steppe eagle | The steppe eagle (Aquila nipalensis) is a large bird of prey. Like all eagles, it belongs to the family Accipitridae. The steppe eagle's well-feathered legs mark it as a member of the subfamily Aquilinae, also known as the "booted eagles". This species was once considered to be closely related to the sedentary tawny eagle (Aquila rapax) and the two forms have previously been treated as conspecific. They were split based on pronounced differences in morphology and anatomy; two molecular studies, each based on a very small number of genes, indicate that the species are distinct but disagree over how closely related they are.
The steppe eagle is in many ways a peculiar species of eagle. It is a specialized predator of ground squirrels on the breeding ground, also taking other rather small mammals and other prey, doing so more often when ground squirrels are less consistently found. In rather treeless areas of the steppe habitats, these eagles tend to nest on a slight rise, often on or near an outcrop, but may even be found on flat, wide-open ground, in a rather flat nest. They are the only eagle to nest primarily on the ground. Usually one to three eggs are laid and, in successful nests, one to two young eagles fledge. The steppe eagle undertakes a massive migration from essentially its entire breeding range, moving en masse past major migration flyways, especially those of the Middle East, Red Sea and the Himalayas. In winter, though less closely studied than during breeding, the steppe eagle is remarkable for its sluggish and almost passive feeding ecology, focusing on insect swarms, landfills, carrion and the semi-altricial young of assorted animals, lacking the bold and predatory demeanor of their cousin species. Although still seen by the thousands at migration sites in larger numbers than other migrating eagles of these areas, the steppe eagle's entire population has declined precipitously. The threats to this species consist of increasing steppe fires and pests around the nests (both probably increased by the warming climate), which can cause a large volume of nest failures. Rivaling these factors, declines are being exacerbated by disturbance and persecution by humans, as well as trampling of nests by livestock. Free-flying steppe eagles are also being killed in alarmingly large numbers, especially in Kazakhstan, the species' breeding stronghold, by electrocutions on dangerous electrical wires and pylons. Due to these and other reasons, the decline of the species is thought to be considerably in excess of 50%. Therefore, the species is considered to be endangered by the IUCN. The steppe eagle appears on the flag of Kazakhstan and is the national bird of both Kazakhstan and Egypt.
Taxonomy
British naturalist Brian Houghton Hodgson described the steppe eagle in 1833. Aquila is Latin for "eagle" while nipalensis means "from Nepal", based on the location where the type specimen was collected, presumably while migrating. Samuel G. Gmelin, however, described a species of eagle from Tanais where they were found sitting on ancient mounds or graves of nomads. He called it Aquila mogilnik, the species name "mogilnik" meaning burial in Russian. This was included in the 13th edition of Systema Naturae by his cousin J.F. Gmelin in 1788. The identity of this bird was however confused with A. heliaca, but research in 2019 suggests that A. mogilnik can reliably be identified and that it would have been a valid senior name for A. nipalensis had it not been declared a nomen dubium (a doubtful name) by Ernst Hartert in 1914. The lack of usage of the name in literature now makes the name a nomen oblitum, a valid senior name lost by disuse. The steppe eagle is a member of the booted eagle subfamily within the Accipitridae family. The booted eagle clan is monophyletic, and study of karyotypes has indicated that they likely have few to no close external relations within the overall extant accipitrid family. The booted eagle subfamily all have feathers covering their legs and may be found to some extent on every continent that contains accipitrids. The genus Aquila has traditionally been thought to be composed of large and fairly dark eagles that generally occupy various open habitats. However, a significant division was determined to exist between superficially similar eagles such as the golden eagle (Aquila chrysaetos) and its three extant and similar-looking close cousins, as well as three very different smaller and pale-bellied eagles (the Bonelli's, African hawk- and Cassin's hawk-eagles) and the species complex which contains the steppe eagle. The steppe eagle is about as genetically distant from the golden eagle as are the spotted eagles, which have been deemed distinct enough to form a separate genus, Clanga. The steppe eagle genetically clusters closely with the tawny eagle as well as, albeit more distantly, with the eastern (Aquila heliaca) and Spanish imperial eagles (Aquila adalberti). However, the loci evidenced in the Aquila genera have been found to be relatively homogenous, with general studies of isoenzymes showing roughly ten times less genetic distance among them than among certain owl genera.
The steppe eagle has historically been considered conspecific with the tawny eagle, even until as recently as 1991. The latter species resides year-round in the African and Asian areas often used seasonally as wintering grounds by steppe eagle. The species were ultimately separated on the grounds of the differences in morphology, disparate coloring, distinct life histories and behaviours. Testing of genetic materials has reinforced the species distinction of the steppe and tawny eagles. Genetically, the steppe eagle is thought to be basal to related species such as the tawny and imperial eagles. A fossil species, Aquila nipaloides, has been found in Italy, Corsica, Sardinia and France and was hypothesized to most closely related to the steppe eagle based on osteology of the ramus (although did evidence some differences in leg morphology). Despite being even more strongly distinctive from the steppe eagle than the tawny eagle, the eastern imperial eagle has been seen to hybridize with the steppe eagles in the wild, once in Turkey and at least three times in Kazakhstan. Each hybrid with imperial eagles has been known to involve pairs of subadult or juvenile eagles and all known hybrid pairings were between male steppe eagles (or apparent steppe-imperial hybrids themselves) mated to female imperial eagles. Some of these hybrid pairs also produced seemingly healthy young with roughly intermediate characteristics.
The steppe eagle has been generally considered to contain two subspecies. One was the nominate subspecies, A. n. nipalensis, which breeds in the eastern portions of the range (perhaps from the East Kazakhstan Region to all points east) while the western breeding population, found in most of Kazakhstan and European Russia, was considered as the subspecies, A. n. orientalis. The separation of two subspecies was largely based on size, with the eastern population being larger and much heavier than the western eagles. The more eastern birds tend to be a shade darker and a have a more extensive nape patch, as well as having a more conspicuously deep gape-line. However, both western and Russian researchers have since made a convincing advocacy that the steppe eagle is actually a monotypical species. It was found that both previously claimed subspecies appear to broadly overlap in the breeding range and become indistinguishable at the Kazakh-Russian meeting point. The primary differences, i.e. in size and mildly in colour, can be explained as clinal variations due to the environment. The breeding populations of the eastern and western eagles are insufficiently allopatric and too extensively engage in introgression to be properly regarded as full subspecies. Erroneously, a checklist once included the former subspecies of A. n. orientalis as being part of the subspecies of tawny eagle from Asia, A. r. vindhiana, an error that was later corrected.
Description
The steppe eagle is a large, bulky and robust-looking eagle. It is mainly dark brown in colour with a longish but very thick neck and a relatively small head that nonetheless features a strong bill and long gape-line. It appears long-winged and has a longish and rather rounded tail and markedly well-feathered (almost with disheveled looking feathers) legs. Steppe eagles tend to perch somewhat upright and usually do so in the open, often utilizing isolated trees, posts, rocks or other suitable low lookouts such as mounds or straw-piles. The species often is seen on the ground where may stand for long periods of the day and walk with horizontal posture and with wingtips just exceed the tail-tip. Steppe eagles, like tawny eagles, can be relatively tame and approachable, at least compared to many of the other Aquila eagles. The adult is a somewhat variable brown with darker centers to the greater coverts. More pronouncedly in the eastern part of the range, adults have normally prominent pale rufous to dull orange-yellow to yellow-brown patches on the nape and hindcrown. Any other paler areas (such as the feather tips of the back and uppertail coverts) are obscured on perched adults. The massive gape-line runs to level with the rear of eye (further emphasized by dark border against paler chin) and is longer than in any other Aquila eagles including tawny eagles. Combined with their deep-set eyes, it lends steppe eagles an altogether rather fierce facial expression. Steppe eagle juveniles are almost invariably paler than adults, with some ranging overall from umber-brown to tawny-buff but then some are darker and more deeply brown. Juveniles tend to be brown to grey-brown on the upperparts but for generally rufous-buff nape patch (more so on eastern population). The juveniles bear conspicuously and broadly white-tipped black about the greater coverts, wings and tail and a bold but narrow cream band on the brown medians. The juvenile steppe eagle's white uppertail coverts is generally concealed when perched; the underparts are usually the same as the upperparts but may be somewhat paler tawny-buff hue. Upon their 2nd year, the plumage is still much as the 1st year appearance but show the pale tips to secondaries, median coverts and tail as often well-worn and narrower; by the start of 2nd winter the, tips of retained juvenile flight-feathers and coverts are heavily abraded and very thin. By the end of 2nd winter, often the immatures look very worn and have nearly lost pale tips altogether and from 3rd year onward manifest a variable mix of old and new feathers. Generally, immatures are often rather scruffy in appearance until adult-like plumage attained at year five, after which the feathers generally appear more compact. Adults have brown to hazel eyes, while juveniles have distinctly dark brown eyes; the cere and feet are yellow at all ages.
In flight, the steppe eagle appears as a large, impressive and visibly heavy raptor with a well-projecting large head and bill, a rather broad neck and long, broad wings. They show proportionately long arms, especially in the larger eastern birds. The wings tend to be held almost parallel-edged and square-ended with 7 very elongated emarginations. Juveniles can often appear somewhat narrower winged. The broad body of the species often looks suspended underneath, and the tail appears rounded or even wedge-shaped, measuring about 3/4 of the length of the wing-base. The wingspan is about 2.6 times greater than the total body length. On the upperwings, steppe eagles show a pale greyish primary patch that is often quite large and obvious (especially on non-adults), often being pale at the base of the greater primary coverts, but on adults (especially dark birds) it is much less marked. On the underwing, a very small carpal crescent may be present but can vary from invisible to slightly more marked. The flight feathers are greyish and all have 7–8 well-spaced blackish bars (albeit less conspicuous than on spotted eagles), while the fingers are plain blackish. Adults are basically fairly uniform dark brown (the wings can be negligibly greyer or rarely yellowish brown). Adults may show in flight some whitish patches on the back and tail coverts, varying from insignificant to fairly prominent. Adult eagles that do show a dark-barred greyish primary patch usually have it confined to a wedge on the inner primaries, though it can sometimes be rather more prominent. Below, adults show dark-barred grey flight feathers and tail, with the broad blackish trailing edges and wing ends being rather distinctive; the wing linings are often slightly paler to darker than the remiges and often bear an obscure remnant of a broken paler central band. Juveniles are quite distinctive in flight if seen in reasonable view. Above, juveniles are pale greyish-brown to yellow-brown about the body and forewing-coverts and have a broad whitish U above the tail. They possess broad white tips to the blackish greater coverts, flight feathers and tail, creating obvious whitish bars on the wings and trailing edges, as well as a large and prominent whitish patch covering much of the inner primaries (causing the barring to stand out more and offsetting the plain black wing end). On its underside, the juvenile is mid-brown to brownish-yellow with a paler throat and creamy crissum. Below, the creamy central wing band is even broader than above, while the greater coverts are all white with some dark centres on the primaries (rare extreme pale individuals appear to have an almost uniform paler colour on the entire wing lining, with the lesser and median coverts buffish-white to pale sandy and often whitish pale primary-wedges). Despite reports that some 1st-year juveniles have subtle or no central wing bands, these are believed to be cases where these feathers exist but are obscured by long median coverts. By the end of the first year, the young steppe eagle's pale tips to the wings, tail and upperwing coverts tend to become rather abraded; thereafter the developing young show much variation due to individual differences. Usually, by the end of the 2nd winter, the wing looks even more worn and uneven in pattern, with any newly acquired narrowly white-tipped quills clearly longer than the old worn juvenile ones that have lost their pale tips.
From the 3rd winter on, the pale parts are clearly reduced, the flight feathers and tail often appear quite ragged, and by the 4th year the birds start to more closely resemble adults. From the end of the 3rd year to when they obtain adult plumage, the eagles tend to have adult-like broad blackish trailing edges and tail, often coupled with a dark-barred grey base to the black fingers and traces of the pale band along the greater underwing-coverts. Maturity is attained between the 4th and 5th years, not at 6–7 years as previously reported, although some presumed five-year-old eagles still have flecks of pale on the wing coverts and the throat and more subtle nape patches than they will ultimately manifest.
Size
The steppe eagle is a large and impressive raptor and quite a large eagle, though as a member of the genus Aquila it is fairly medium-sized. Females can range up to 15% larger than males, with dimorphism more pronounced by weight than by linear dimensions. Total length can range from in fully-grown steppe eagles. Wingspan in full-grown eagles of this species is very variable, with the smallest steppe eagles spanning as little as while the largest ones can reportedly span up to . Although some sources list the maximum wingspan as only , the maximum wing dimensions were apparently confirmed for the most massive steppe eagles (i.e. from the Altai). Body mass, like wingspan, is also fairly variable as reported. Steppe eagles weighed for a Russian handbook were found to scale from in males, while in females weights were reported to range from . Elsewhere, the minimum full-grown weights for the smaller western eagles (formerly the subspecies A. n. orientalis) were for the smallest males, while the heaviest females were found to have attained a weight of around , and weights in the eastern part of the breeding range are around 20% heavier. In one sample of steppe eagles of possibly varied origins, males weighed a mean of and females a mean of . Wintering eagles of the species in southern Africa weighed a mean of in a sample of four. In Saudi Arabia, 21 steppe eagles at one study site weighed a mean of while 27 eagles at another study site there weighed a mean of . Unpublished weights from Israel were much lower, at a reported mean of ; as in other raptors during passage migration in Israel, weight loss may be significant relative to the other seasons. Steppe eagles diagnosed as being from the smaller-bodied, western part of the breeding range weighed a mean of in 13 males and a mean of just under in a sample of 18 females, while the mean weight of the larger, eastern breeding birds was listed as in 2 males and in 2 females. The maximum cited weight for steppe eagle males in the wild is while that for females is . Among standard measurements, the wing chord can measure from in males and from in females. The tail may measure from in both sexes and the tarsus may be from in males and from in females, both fairly short for the size of the eagle (although its fossil cousin species, A. nipaloides, was apparently even shorter-legged). Wing chord length averaged and in males and females in one study, respectively. The huge gape of a steppe eagle is from wide, with an average of in males and females, respectively, while the gape length is , averaging in the two sexes. The hallux claw, the enlarged killing talon on the rear foot of essentially all accipitrids, measures from , averaging , in males and from , averaging , in females. As in the tawny eagle and imperial eagles, the talon size is modest, whereas most species in the golden eagle clade are markedly larger-clawed relative to their size.
Confusion species
In many circumstances, the steppe eagle can be very difficult to distinguish from other similar eagles, especially during passage and winter. Adults are often confused with spotted eagles but are best separated by their much broader build, far greater wing area with longer, more rectangular or squarish wing tips and longer, more conspicuous fingers, larger head (rather than small and bull-headed) and larger overall size. Compared to the spotted eagles, the flight of the steppe eagle is more aquiline, i.e. more powerful, labored and deep, while spotted eagles tend to fly more like buzzards. The lesser spotted eagle (Clanga pomarina), the most similarly marked of the spotted eagles, is particularly less powerful looking, with a shorter neck, much smaller wing area, shorter fingers and tail and less extensive, baggy leg-feathering. The greater spotted eagle (Clanga clanga) is also smaller and slighter but to a lesser extent. When the plumage is clear to see, steppe eagles have more clearly and more extensively barred quills and lack the clear carpal arcs of the two widespread spotted eagles, but these differences are obscured at greater distances. Some subadult steppe eagles, with their paler brown wing-coverts above and below, only traces of white underwing bands and a clearly pale primary patch above, in particular quite resemble the plumage of older lesser spotted eagles. The white wing bars of steppe eagles are usually more conspicuous than those of the lesser spotted eagle. At close range, the steppe eagle has a deeper gape than the lesser and greater spotted eagles and has rounded rather than oval nostrils. When seen perched, either on a perch or on the ground, spotted eagles of all three species tend to stand quite tall and upright, emphasizing their more slender and lightly feathered legs, while the steppe eagle sits more horizontally and is always far bulkier than even the biggest greater spotted eagles. Some particularly dark adult and subadult steppe eagles with obscured paler wing feathers can greatly resemble adult greater spotted eagles (the latter species can appear almost blackish in certain lights) and would need to be identified by the differences in size and form. The Indian spotted eagle (Clanga hastata) has a deep gape reminiscent of the steppe eagle but is much slighter in overall size, being similar in size to or even smaller than a lesser spotted eagle, and has even less conspicuous whitish wing markings than the lesser spotted. As a result of these rough similarities, many young steppe eagles are misidentified as spotted eagles, particularly from a distance, although identification is generally possible via a combination of structure and plumage features. Juvenile steppe eagles are normally readily identified by distinctive plumage features but can recall juvenile eastern imperial eagles; the latter has a longer and less rounded tail, a more prominent (rather than deeply set) bill and a much paler and more buff overall colour, while the chest is overlaid with brown streaking and the quills are unbarred. Imperial and steppe eagles are often similar in size, with the more western breeding birds usually being somewhat smaller when seen side by side with an imperial eagle, and the eastern steppe eagles being of similar average size (but even larger maximum size) compared to full-grown imperial eagles. Steppe eagles are told from tawny eagles by the latter being smaller and less bulky, with shorter wings, a smaller gape, a more slender neck and a relatively longer tail.
Both the tawny and steppe eagle tend to have a distinct S-shaped curvature to the trailing edge of the wings. When perched on the ground, the tawny eagle tends to stand more upright, while the steppe eagle often assumes a more elongated, horizontal posture. Plumage variations of tawny eagles can render them a surprisingly close colour to the usually darker, duller and browner steppe eagle (especially so in south Asia), but they never obtain the distinct whitish wing band of the young steppe eagle nor the nape patch of most adult steppes. Despite slight individual and clinal variations, the steppe eagle, unlike the tawny eagle, is not polymorphic. These aforementioned eagles present the main possibilities for confusion; less likely mistakes can potentially range from the relatively dainty and much smaller Wahlberg's eagle (Hieraaetus wahlbergi) (generally quite different in features but somewhat similarly hued) in Africa to the somewhat bigger but differently structured golden eagle (much longer tail, relatively smaller bill and much smaller gape, different wing shape, more aquiline build and bigger feet and talons) in much of the range.
Distribution
Breeding range
Although the breeding range is rather extensive, the steppe eagle is essentially confined to nesting in only four large nations: Russia, Kazakhstan, Mongolia and China. However, the steppe eagle once bred in Europe, where it bred into the 20th century in at least southeasternmost Ukraine and perhaps elsewhere in eastern Europe. These eagles still rarely occur as breeders in southwest Russia from Stavropol to Astrakhan. The steppe eagle is still mapped as breeding down to Makhachkala and Maykop, as far west as Leningradskaya, up north as far as the lower Volga and down to the Caspian Sea nearly as far as Makhachkala and south of Fort-Shevchenko. The breeding range extends through appropriate habitat in much of Kazakhstan, from north of Nur-Sultan south (albeit spottily) to Kyzylorda as well as around the former Aral Sea. From their main breeding areas to the north, steppe eagles also breed marginally in northeastern Kyrgyzstan and perhaps northern Uzbekistan. The breeding distribution is essentially continuous, sweeping as far to the east in Russia as Transbaikal and the Altai. The steppe eagle also breeds in large stretches of western and northern China, such as the Tian Shan, Xinjiang, the Gobi area, Gansu, Ningxia, northern Tibet (by far its southernmost breeding area) and Inner Mongolia, reaching its eastern breeding limits in Manchuria and elsewhere in northeastern China. The species' breeding range is also quite broad in Mongolia, excluding the northern portion.
Wintering range
The steppe eagle is entirely migratory, wintering in eastern and, to a lesser extent, southern Africa. Their African range can extend west to southern Sudan, almost throughout east Africa, and to the easternmost part of the Democratic Republic of the Congo. The southern African wintering range extends to central Angola, northern and eastern Namibia south to Botswana, Zambia, Zimbabwe, Eswatini and northern South Africa, including the former Transvaal and northern Natal, as well as rarely south of the Orange River. In South Africa, steppe eagles are reportedly frequent only in the lowveld of the Kruger National Park area. The steppe eagle's wintering range also extends into the Middle East. They occur broadly in winter in several central and southern parts of the Arabian Peninsula as well as regularly in eastern Iraq and western Iran, with odd ones north to Turkey and Georgia. Although sometimes recorded as occurring only "somewhat" in Arabia, more extensive surveying has revealed that as many, if not more, steppe eagles wind up wintering in the peninsula as in Africa, and that the largest winter numbers ever were recorded in Saudi Arabia, where around 7200 individuals (or perhaps up to 9% of the current world population) were recorded near Riyadh. As many as 3000 have also been similarly recorded in Oman. Other nations to host wintering steppe eagles include Yemen, Azerbaijan and Syria as well as, albeit rarely, the United Arab Emirates, Lebanon and Kuwait.
Unusually, a few overwintering steppe eagles have now been recorded in Kazakhstan, apparently near Shymkent, in the Aksu-Zhabagly Nature Reserve, the valley of the Syr Darya, the Shardara Dam and towns of the East Kazakhstan Region. In south Asia, the species in winter may occur from Afghanistan (still rarely wintering in Nuristan Province) through much of the Indian subcontinent. Pakistan's Poonch and Jhelum valleys of Azad Kashmir are known to host a mean of 154 steppe eagles per study area. In India, they may occur mainly south to Madhya Pradesh, in the Indo-Gangetic Plain, the Deccan Peninsula and Himalayan zone, Mizoram, Assam and southern Orissa. Vagrants have been recorded in India at Periyar National Park, Mahendragiri, Kanyakumari Wildlife Sanctuary and Mudumalai National Park. The wintering range extends east to Tibet (although the species is said to be gone from Lhasa in recent years), Nepal, Burma and broadly in east China from southeastern Guizhou to Hainan and southwestern Guangdong. Recent wintering records show the species lingering seasonally at different points of the non-breeding season, albeit very seldom, in central and southern Myanmar, western Thailand, peninsular Malaysia and northern Vietnam. The species may have been aided in expanding its eastward wintering range by deforestation.
Migratory range
The steppe eagle appears broadly in many nations between its central Eurasian breeding areas and its generally tropical Indian and African wintering grounds. In fact, the largest concentrations of the species tend to occur at times of passage. The steppe eagle also turns up not infrequently as a vagrant far from traditional migration sites, and has been recorded in many areas from western Europe to as far east as Japan. Vagrant steppe eagles have been recorded in at least the following nations or regions: at least 6 nations in west Africa, Morocco, Tunisia, the Netherlands, Finland (at least 50 times), Spain and France, the Czech Republic, Bulgaria and Romania (in both of which they once bred but were extirpated), Greece, Mordovia, Yakutia, the Korean Peninsula and probably down to Borneo in Asia. Migration sites include both mountainous ridges and the larger seas along their routes. Steppe eagles predominantly use two main migration routes: one radiates across the Middle East and Arabia, with many birds stopping to winter there while many others migrate around the Red Sea to winter in Africa, whereas the other main path frequently involves eagles breeding farther east moving along many ridges and prominent flyways before radiating across a broad path through the Himalayas to reach the south Asian and other Asian wintering sites. Lesser-known or less frequently used migration paths, besides these well-known routes of passage, may lead steppe eagles around the Black Sea in the west and, much more frequently, around the Caspian Sea farther east. Nations known to be visited by steppe eagles almost exclusively on migratory passage include Egypt, most but not all of Syria, Turkmenistan and Afghanistan, and much of east China from Tuquan County to about Xiamen. Migration bottlenecks, where large numbers of steppe eagles are frequently recorded, are known in areas including Israel, especially around Eilat, Suez (in Egypt), Bab-el-Mandeb (in Yemen), some parts of Georgia and, in the Himalayan region, especially within Nepal but also sometimes en masse in Pakistan and northern India. Migration sites of minor significance are less known but include the Alborz.
Habitat
Breeding habitat
The steppe eagle tends to breed in open dry country, within the characteristic habitat it is named after: the steppe, in both upland and lowland areas. In Kazakhstan, it is known to generally occur in drier parts of the steppe than some other raptors such as harriers. This species generally avoids utilizing agricultural land such as arable fields and most other human-fragmented areas; however, it can be somewhat tolerant of nesting near roads. Associated habitats frequented when breeding include flat plains, arid grassland, semi-desert and even desert edge. Most members of the species breed at lower levels but, largely in the eastern part of the range, they will also nest on poorly vegetated, dry rocky hillsides such as granite massifs and in upland valleys, though they generally avoid truly mountainous areas.
Wintering habitat
Wintering steppe eagles often occur much more frequently in human-modified areas in order to access easy foods. These include landfills and livestock carcass dumps, which are used frequently everywhere from Arabia to India. More natural habitats used most often by wintering steppe eagles tend to be various wetlands or other waterways where they are available. In winter, savanna and grasslands are the predominant habitats used in Africa, along with some dry woodland. A study in Botswana indicated that wintering steppe eagles there appeared to be indifferent to land use changes by humans. In Zambia and Malawi, it was found that the steppe eagle was only frequent in high-elevation plateau areas from metres above sea level. Use of plateaus was also frequent in Zimbabwe, often where open savanna woods of Acacia stand, as well as in cultivated areas such as wheat stubble fields. Iraqi wintering steppe eagles often used dump sites as well as deserts and semi-arid areas, with more steppe, other grassland and mountain slopes used in northern Iraq in winter. In Armenia, steppe eagles are apparently frequent in old fields and orchards. In south Asia, they usually use open country and often frequent large lakes and other wetlands near arid areas, but may accept, or even prefer, more heavily wooded areas (however, the first records from peninsular Malaysia seem to be from open areas created by deforestation). Although usually a breeder of lowlands, it has been known to live at elevations of up to and locally to in mountains, and on passage can occur to over , sometimes even to , as was recorded on Mount Everest. Compared to other Palearctic migrating eagles, the steppe eagle seems perhaps to be slightly more tolerant of a wider range of climatic conditions, including rather humid conditions in India provided subsistence is available, as well as up to of snow cover in Kazakhstan (living off of urban pests).
Behaviour
The steppe eagle is sometimes regarded as solitary but is frequently seen in the company of conspecifics throughout the year. Besides the obvious breeding pair, they often flock during migration and aggregate in occasionally ample numbers during non-breeding times, usually at fruitful feeding sites, sometimes briefly cooperating with one another, especially to kleptoparasitize other birds of prey. Steppe eagles fly with slow, deep and stiff-looking wing beats, holding the wings fully extended on upstrokes, rendering a heavier flight pattern than spotted eagles. The flight of the steppe eagle has been well analyzed, through experiments with a captive male and observations of migrants in Israel. It appears that the underwing coverts operate as a high-lift device and probably provide stability through unsteady maneuvers, allowing positive loading on the wings to be maintained. Whilst soaring, the wings are generally held flattish or slightly flexed but sometimes with the hands lowered. About 90% of flight by these eagles in Israel was gliding or soaring. They often fly with the head dropped, with the hands arched in a glide or often with the arms straight out and the hands drooped. The drooping-wing flight method, peculiar to the steppe eagle as well as to the greater spotted eagle, is sometimes also called the "tuck", and is thought to be a gust response precipitated by a transient drop in aerodynamic loading. Steppe eagles adapt their flight to wind and thermal conditions, as was studied in Israel, increasing their gliding airspeed under strong thermal convection or opposing winds. This study determined that a combination of circling in thermals and inter-thermal gliding was interspersed with soaring in straight-line glides. Israeli migrants flew up to above the ground, but 90% were under and half were below . The Israeli steppe eagles were able to maintain a mean climbing rate of per second, a mean cross-country air speed of per second and a mean of per second in glides; the flight was similar to that of other common raptors there, but the steppe eagle attained the highest mean cross-country speeds. Steppe eagles tend not to be very vocal, especially when not breeding. Their main call is a raspy bark which is similar to that of a tawny eagle, though slightly deeper. In aerial displays, a loud whistle has been recorded, quite unlike any vocalization of a tawny eagle. Other calls recorded have included mainly low and croaking notes, aside from a high shriek when startled.
Migration
Steppe eagles appear to have evolved the strategy of migrating from their breeding grounds due in large part to the temporary seasonal availability of their main prey, ground squirrels. They probably migrate in greater numbers than any other eagle in the world and can be frequent enough at migration sites that less numerous migrating eagles may be overlooked in their ranks. The migratory behaviour of this species is arguably amongst the best-studied aspects of its entire biology. Autumn migration often begins around October on fairly broad fronts and may peak around late October. It usually ends in late November to December, but steppe eagles frequently travel somewhat nomadically while not breeding and so individuals may not reach their winter terminus until about January. Spring migration usually commences in February, peaking from late February to March, with likely all gone from Africa by the end of the latter month, then continuing in a diminishing trickle into April and May. In passage at Suez, the steppe eagle is on average one of the earlier migrating raptors alongside the long-legged buzzard (Buteo rufinus), averaging about a month sooner in passage than the common buzzard (Buteo buteo) (the most common migrant there) and slightly sooner than the lesser spotted eagle, as well as much sooner than some other raptors there. On average, the wintering period in Africa is relatively brief, at up to about 4 months (down to about 2), while adult steppe eagles spend up to 7 months (a maximum of around 5 months for a young eagle) on their breeding grounds. In autumn records from Africa, younger eagles migrate the earliest and adults the latest. Radio-tagging studies confirmed, much as in the lesser spotted eagle, that in spring juveniles migrated later, wandered about more and came back to the summering grounds much later. One young steppe eagle that was banded in passage in the United Arab Emirates wintered initially in Yemen before returning for the summer to Kazakhstan, then migrating to eastern Africa the following winter, showing that they can change their migratory habits over time. Many studies corroborate that steppe eagles generally migrate shorter distances as they age.
Peak movements around the Red Sea show as many as 76,000 steppe eagles moving over Bab-el-Mandeb in the fall of 1987, with up to 65,000 (in 1981) at Suez and up to 75,000 at Eilat, Israel in 1985. Once migrating steppe eagles enter Africa in autumn, no mass migrations have been recorded anywhere for the species in the continent. Although not large, some semi-significant spring movements were detected in Egypt, despite none being recorded in the fall. In autumn, steppe eagles usually pass over Bab-el-Mandeb in the south of the Red Sea, while in spring they predominantly cross to the north of the Red Sea around Suez. The mean number of steppe eagles that annually pass over Eilat in spring is estimated at 28,032, with a mean peak day of 10 March, making them roughly the fourth most common migrating raptor there in spring (and they often pass in intermingled flocks with other soaring raptors, but not those with powered flight). In Eilat, steppe eagles constitute 6.4% of all raptors seen and nearly all of the Aquila eagles seen; among those that could be aged, an estimated 60–70% of the steppes seen were thought to be adults. More unusually, the steppe eagle may be the only raptor to also use Israel as a common migratory flight path in autumn as well as spring, with even commoner migrating raptors such as common buzzards and European honey buzzards (Pernis apivorus) being rare there in the fall. In Nepal, over 2.5 weeks starting on 20 October, 7852 steppe eagles were tallied, making up more than 80% of the recorded migrating raptors, with peak times of movement being between 10:00 AM and 4:00 PM, especially between noon and 2:00 PM. Over 3 years of study in Nepal, 21,447 steppe eagles were recorded (as many as 1102 within a day and a mean of about 15.2 an hour) at the counting sites. Strong evidence of east-to-west migratory movements, rather than south- or northbound ones, has been found in the Kathmandu Valley. The directional studies indicated that juveniles, especially those from the eastern part of the breeding range, may more frequently migrate westbound to reach wintering areas such as the Middle East and Africa. On the contrary, juveniles and subadults during the wintering season seem to considerably outnumber adults in the Indian subcontinent, so many do head due south. Of 3381 ageable steppe eagles in passage in Nepal, 56% were juveniles or immatures and 44% were adults; of 7852 eagles, 58% migrated in groups of 1–5, 30% in groups of 5–20 and 12% in larger flocks. In Himachal Pradesh of India, about 11,000 steppe eagles were recorded in autumn migration in 2001 and about 40% fewer were counted the next spring. This study indicated different migratory paths being used in the two seasons, presumably following the predominant wind direction around the terrain, with the westerly autumn migration mostly in the western Himalayas and the easterly spring migration more so in the east of Nepal. Staging areas are not well-delineated in India but appear to concentrate around feeding sites such as landfills. A single female that was radio-tagged in Mongolia was recorded to travel southwest and stop in southeastern Tibet, which is also the southernmost part of the species' breeding range. The data from this female indicated that not all steppe eagles move to warmer climates and, given that she remained stationary until her return to Mongolia, that she was not nomadic as many eagles of the species are.
During return spring migration, the steppe eagles in passage in Nepal will reportedly amass into groups of approximately 5 to 20 eagles at only about above the terrain before rising up to cross between the snow-covered peaks.
16 radio-tagged eagles that returned in their first spring migration to their Kazakh summering grounds were recorded to winter as first-year juveniles, in roughly equal measure, either in the Arabian Peninsula or in southern Africa, and covered straight-line distances ranging from , although one eagle migrating from wintering grounds in Botswana individually meandered up to . Of the 16 returning Kazakhstan eagles, spring migration lasted an average of 40 days, ranging individually from 38 to 54 days, and covered a mean of each day. The migration path generally led the eagles around almost every direction of the Red Sea, many also passing over Israel and some wrapping around the Caspian Sea. A different radio-tagging study of 19 juveniles (about 57% of which survived) from Russian or Kazakh sites found that autumn movements in the 1st-year migration averaged and confirmed not only that they freely changed wintering sites anywhere from India to southern Africa but also that they never returned, surviving or not, to their natal site in the 1st year, instead returning to wander widely across the northern steppe. The 1st migration averaged 52 days and was much briefer for females than for males, with the discrepancies more pronounced for eagles originating from the Altai Mountains. 15 birds tracked in this study were found to have migrated most frequently to winter in south Pakistan (right along the borderlands with India) or in eastern Turkmenistan. Spring migration began on a mean date of 25 March for the 15 young eagles and lasted about 26 days on average, covering a mean of , with females initiating migration on average 18 days later than males and migrating more briefly, more quickly and with fewer stops than males. 9 eagles which were tracked successfully through their first spring passage in this study wandered widely, mostly in natural steppe, hunting for squirrels; 8 of these, tracked to their 2nd autumn migration, took about 1.5 times less time on their 2nd autumn passage and migrated about 17% less far on average.
Dietary biology
The steppe eagle is an opportunistic predator like other Aquila eagles but has a number of dietary and foraging peculiarities. It preys mainly on small mammals, with some birds (such as queleas) and reptiles, and (mostly in winter) frequently insects (such as termites and locusts) and carrion. Despite its opportunistic nature, the steppe eagle is a somewhat specialized predator on particular mammals such as ground squirrels while breeding and, during non-breeding times, feeds on various foods but is often peculiarly narrow in dietary selection, preferring massed food sources that require little effort to obtain. Various other small or medium-sized mammals can become the most significant prey locally on the breeding grounds, such as voles, pikas and zokors and, generally more secondarily, marmots, hares, gerbils, hedgehogs and others. During the breeding season, one source claimed that prey mostly weighs . Another account estimated that about 95% of prey weighed less than , although predominantly over . However, yet another source claimed that staple prey for steppe eagles could weigh anywhere from up to . Even the latter estimate may be conservative, with prey species varying widely in size from very small colonial insects to unexpectedly large mammals (and seldom birds) apparently killed near nests. On the other hand, a preference has indeed been detected for smaller burrowing mammals (i.e. probably under or so). Studies have determined that where only larger species of burrowing mammals are predominant (even the larger species of ground squirrel), steppe eagles appear to attain comparatively sparse nest densities, only occurring in high densities where the smaller burrowers are profuse. Ecological partitioning to limit interspecific competition may be a factor that dictates the steppe eagle's preference for relatively small prey. The breeding steppe eagle mainly hunts in a low soaring or gliding flight, at a maximum of , diving or making short, accelerated stoops onto its prey. Usually, they tend to capture their prey on the ground. Steppe eagles have been recorded in both Kazakhstan and Mongolia to tactfully avoid casting a shadow before descending onto prey and may drop stones to provide a distraction, a probable form of tool use. In the Kazakh observation, the steppe eagles quickly became used to agricultural activity adjacent to the prey accesses while they hunted. They also may hunt in any season on the ground, moving with a shambling gait as necessary, and may give chase on foot to both vertebrate and insect prey. Steppe eagles can often ambush prey by standing in wait next to burrows, suddenly pouncing onto the quarry upon its emergence. Steppe eagles have been seen in China to buzz through locust swarms on the wing as well as to take avian prey from over above the ground in a dive. Tandem hunting by pairs has been recorded during the breeding season while, in winter and migration, these may be the most social of all eagles, often sharing abundant food sources by up to the dozens. Non-breeding steppe eagle flocks may even seem to assist one another in procuring prey from which they themselves are not likely to be able to directly profit, and may repeatedly assist each other until all flock members are satiated. If confirmed, this mutually beneficial foraging strategy between presumably unrelated eagles is truly unique.
Much like the tawny eagle, the steppe eagle will readily rob other raptors of their catches, approaching from any angle and pursuing closely until the victim is forced to land or drop its food.
Summer diet
The single prey species most strongly associated with the steppe eagle is the little ground squirrel (Spermophilus pygmaeus). In some areas, as much as 98% of the diet reportedly can be little ground squirrels. This is a smallish ground squirrel, though it is actually not greatly smaller than many other Eurasian ground squirrels, at a mean adult weight of about . The little ground squirrel once reached densities of around 30–40 per hectare and provided a reliable food source for these eagles. However, this species has plummeted in population density, in Kalmykia for instance going from abundant in diverse habitats to perhaps locally extinct before gradually trickling back up in numbers (which continue to be a mere shadow of what they once were). The local steppe eagles of Kalmykia continue to show a strong preference for little ground squirrels. A continued primary reliance on little ground squirrels by steppe eagles was also found recently in studies from Saratov and Lake Baskunchak. Outside of Russia, in the Karaganda Region of Kazakhstan, little ground squirrels were again an important identified food source, at 19.25% of 400 prey items. In the general area between the Aral Sea and the Caspian Sea, 112 prey items were led by little ground squirrels, at just over 33%. However, in these data, the little ground squirrels were closely followed in number (29.7%) by the yellow ground squirrel (Spermophilus fulvus), which, with seasonal weights ranging from , is the largest of the Eurasian ground squirrels. The little ground squirrel is only found in a substantial portion of the western part of the range, so elsewhere steppe eagles tend to take different prey species while breeding, though they generally continue to favor small burrowing mammals.
Around Lake Balkhash in Kazakhstan, the main prey was reportedly the red-cheeked ground squirrel (Spermophilus erythrogenys), a slightly larger ground squirrel than the little species at a mean adult weight of . Other prey noted here included Pallas's pika (Ochotona pallasi), Libyan jird (Meriones libycus) and tolai hare (Lepus tolai). In Xinjiang, reportedly the main prey species is the long-tailed ground squirrel (Urocitellus undulatus). In the Altai region, the leading prey may be the Siberian zokor (Myospalax myospalax), which is the size of a large ground squirrel at an adult weight of about . However, some report that in the Altai region the main prey is the long-tailed ground squirrel, and migration arrival times do seem to correspond closely with this species' emergence from hibernation. Yet another primary prey resource reported for steppe eagles in the Altai is the much larger gray marmot (Marmota baibacina). All the primary prey in the previously little-reported Altai population, such as long-tailed ground squirrels, zokors and marmots as well as ptarmigan, are as adults well over what is considered the typical prey size range for this eagle, and in turn this may favor the large size of the steppe eagles from this region. On the contrary, other predominant prey in steppe eagle nests can be even smaller than ground squirrels. In Mongolia, the main prey by a large margin was reportedly Brandt's vole (Lasiopodomys brandtii), which weighs about . In the Transbaikal region, the main prey may be the Daurian pika (Ochotona dauurica), which weighs about . This pika can account for around 39% of the diet in the region, as was the case among 62 prey items (and perhaps up to 62% locally). Another study reported a very different primary food source for the Transbaikal, the young of the much larger Tarbagan marmot (Marmota sibirica), which were estimated in the study to be from 55 to 77% of the annual diet. Even more conflicting data found that some Transbaikal steppe eagles derived as much as 70% of their foods from long-tailed and Daurian ground squirrels (Spermophilus dauricus). It is possible that in both the Altai and Transbaikal the shifts to differing reported primary prey species are responses of the eagles to shifting prey availabilities, as many burrowing mammals are subject to population cycles as well as human-sourced depletions. While rodents and some lagomorphs are usually favored in the diet, in some areas steppe eagles can live at least in part off of quite different prey such as long-eared hedgehogs (Hemiechinus auritus). Other notable prey taken regularly whilst breeding by steppe eagles includes the steppe pika (Ochotona pusilla) (especially in the Volga region), alpine pika (Ochotona alpina), yellow steppe lemming (Eolagurus luteus) (especially in eastern Kazakhstan) and the slightly larger types of gerbil such as great gerbils (Rhombomys opimus) and Mongolian gerbils (Meriones unguiculatus). The study of the Karaganda region of Kazakhstan with 400 prey items illustrated that the steppe eagle is capable of deriving a living from a wide range of prey, with the foods led by rosy starling (Pastor roseus) (mostly fledglings) at 24%, unidentified Microtus voles at 19.75%, followed by little ground squirrels, unspecified pikas (8.25%), European hares (Lepus europaeus) (5%) and grey partridges (Perdix perdix) (4.5%).
An aptitude for avian prey was detected in the Transbaikal particularly, including Daurian partridge (Perdix dauurica) and Japanese quail (Coturnix japonica) (the latter at up to 15.6% of the diet). In the Altai, assorted corvids (at up to 24.2% of the diet), probably mostly rooks (Corvus frugilegus) and Eurasian magpies (Pica pica), were important to the diet, as were willow ptarmigan (Lagopus lagopus). Within the Saratov area, medium-sized birds were frequently reported in the diet, such as grey partridges, little bustards (Tetrax tetrax), northern lapwings (Vanellus vanellus) and rooks. A diversity of small passerines has been found in the diet, especially fledgling-age larks of various species, perhaps most frequently in Kazakhstan and Mongolia. A few reptiles found in the diet around nests have included at least the sand lizard (Lacerta agilis), Caspian whipsnake (Dolichophis caspius) and steppe viper (Vipera ursinii).
On occasion, during summer, a steppe eagle may be able to take exceptionally large prey. The most regular large prey to appear in their diets are usually tolai hares, at about , and assorted marmots. The upper size of marmot that the steppe eagle may attack is not well established, although this eagle tends to focus more on small emergent juveniles of around . Besides marmots and hares, the steppe eagle takes a diversity of largish mammalian carnivores. Corsac foxes (Vulpes corsac) and mustelids such as steppe polecats (Mustela eversmanii) are readily taken on some occasions, and Pallas's cats (Otocolobus manul) and Rüppell's foxes (Vulpes rueppellii) can be taken as well. Additionally, the remains of a red fox (Vulpes vulpes) have been found at a nest.
A broad range of young ungulates has also been found in small numbers, and it is likely that some are taken both as carrion and as kills, including goitered gazelle (Gazella subgutturosa), Mongolian gazelle (Procapra gutturosa), saiga antelope (Saiga tatarica) and domestic goat (Capra aegagrus hircus). In newborns of these species, weights can vary from around (in goats) to about (in saiga antelope). The taking of large birds is less well-documented than predation on large mammals, and in some cases, both in summer and during non-breeding times, it certainly pertains to predation on nestlings, such as those of storks and cranes, or to pilfering easy large fowl like chickens (Gallus gallus domesticus) or domestic turkeys (Meleagris gallopavo).
Non-breeding diet
The steppe eagle, despite being one of the most numerous and widely distributed of all eagles, is exceptionally poorly studied in its non-breeding dietary habits. This is due in large part to the nomadic behaviour displayed by most (but not all) steppe eagles during these times. Steppe eagles are fairly different from related species, being rather gregarious and largely non-predatory while away from their breeding grounds. Exceptionally, some steppe eagles have been known to overwinter in Altai Town, Kazakhstan, living reportedly off of brown rats (Rattus norvegicus) and rock doves (Columba livia). They are often seen congregating at feeding sites with easily obtained foods that are available in large quantities. In southern Africa, these eagles are often associated with rain fronts and the humidity that accompanies them. They do this largely to exploit a certain food source, termite alates. Termites are known to emerge more extensively in these conditions, and so the steppe eagle, not unlike other long-distance migrant raptors, can become locally rather insectivorous to the exclusion of virtually any other foods. Most often, these eagles will fly down when it is noticed that termites are emerging, or wait on foot, and then grab them. According to one account, these large eagles feed on termites "lumbering after their minuscule quarry in ludicrous fashion". They have also sometimes been seen to take termites in the air and feed on them in flight, not an easy task for such a large eagle. Roosts near termite colonies can contain several steppe eagles, which may remain over days but generally depart, whether well-fed or not, if the rains disperse. In Namibia, the roosts used were the tops of quite small trees of only height. Although tiny, with an average estimated weight of only , the harvester termite (Hodotermes mossambicus) (the main termite prey) has been deemed highly nutritious with a relatively high caloric value. It has been estimated that a steppe eagle would have to eat approximately 1600–2200 termites a day, which can be attainable in about 3 hours of feeding. The stomachs of 2 dissected steppe eagles contained 630 and 930 termite heads, respectively. In Zimbabwe, steppe eagles have also been seen in feeding masses in stubble fields picking out insects. However, it would be reductive to consider the steppe eagle largely insectivorous in winter, since the eagles seen feeding on termites in southern Africa were disproportionately juveniles and immatures, many of the species winter outside of southern Africa, and wintering steppe eagles from other areas often do not seem to live predominantly on insects. In east Africa, the diet of steppe eagles is poorly documented but is reported to consist largely of silvery mole-rats (Heliophobius argenteocinereus) and blesmols of the genus Cryptomys. Routine predation, probably on young or weak individuals, by steppe eagles has been recorded amongst flamingo colonies in east Africa. In several parts of Africa, steppe eagles may routinely visit and feed off of the colonies of a super-abundant bird, the red-billed quelea (Quelea quelea), with a noted focus on picking off the seemingly innumerable nestlings and fledglings of this small passerine. The steppe eagles will reportedly do so by ungracefully scrambling amongst the branches of the nesting colonies.
In the Indian subcontinent, the steppe eagle appears to fulfill the role of a weakly predatory opportunist. Individual wintering steppe eagles in India are reported to feed on prey at times of vulnerability, including injured birds, eggs and young water birds from heronries, while groups of the eagles often occur around carrion, masses of stranded fish, poultry farms, garbage dumps and livestock carcass dumps. In the Chari-Dhand wetlands, as many as 1000 steppe eagles have been seen to gather, presumably living largely off of vulnerable water birds. At the city dumps of Pune, as many as 200 steppe eagles have been known to gather and feed. A carcass dump in Jorbeer near Bikaner was recorded to host an average of 43 steppe eagles per day during winter, with peak numbers generally occurring in January and February (common dates from November to March and more rarely from September to May), with as many as 136 steppe eagles plus at least 9 other large raptors (mostly vultures), many of which are considered threatened species. It was found that the Jorbeer carcass dumps enticed the steppe eagles to venture away from the normal wetland or wetland-adjacent areas used by steppe eagles in the area into the desert-like region, but in some years feral dogs appeared to chase off the eagles and cause them to avoid this dump. A concentration of around 50 steppe eagles was seen to feed on swarms of locusts in Nepal. Perhaps to avoid competition (i.e. from vultures, jackals and so on) and to monopolize a food item, steppe eagles in India appear to come largely to smaller carcasses such as those of jungle cats (Felis chaus) and pythons. In the Banni Grasslands Reserve, steppe eagles are reported to largely hunt for food, unlike in many other Indian reports, mainly on lesser bandicoot rats (Bandicota bengalensis), although they also sometimes stole prey from other raptors. Similarly, active predation was unusually reported in Saurashtra, on larger prey including mongooses and Indian hare (Lepus nigricollis), as well as an unsuccessful attack on a mountain gazelle (Gazella gazella) fawn.
In the region of Bharatpur, Rajasthan, largely around Keoladeo National Park, the foraging activities of steppe eagles have been observed extensively. The steppe eagles seldom actively hunted, instead alternating between capturing nestlings from the heronries, especially nearly fledgling-age young of late-nesting painted storks (Mycteria leucocephala), and engaging in kleptoparasitism towards other birds of prey, often doing so in groups of about three to nine eagles. More infrequently, steppe eagles in Bharatpur have been seen hunting flocking birds, fish (usually stranded), lizards and snakes. The steppes have been observed feeding on freshly killed young water birds at Bharatpur at daybreak and during early mornings and so may hunt while taking advantage of bright moonlight. Piracy against other raptors often resulted in food wastage, since the steppe eagles often forced the other raptors to drop their catch but were unable to intercept it, and the kills were frequently lost into the water. In Bharatpur, the steppe eagles tended to perch relatively low compared to other eagles, at about in the trees, and to perch for longer periods than other raptors, apparently while closely watching the activity of the other birds of prey. Of a total of 49 observed hours of activity for steppe eagles in Bharatpur, 45% was spent foraging, with a maximum foraging time of 69% during January, reduced by March to only 17%. The daily food intake of individual steppe eagles was extremely low relative to their size, at only . Instead of piracy, the steppe eagles often engaged each other in what can be considered a play display, almost exclusively between juveniles. In it, two or more birds circled, the higher bird circling closer and dropping toward the lower bird with extended feet, forcing it to roll over and present talons; they either immediately disengaged, with or without locking talons, or descended locked together for a few metres before separating. Often steppes will fly purposely at a conspecific that is circling and fly up to a higher position so they can drop onto the other; in another incident, a steppe grabbed a plastic bag and let it go to be buffeted by the wind, then repeatedly caught it and let it go again, ultimately being joined by 5–6 other steppes in the "game".
Less study has been conducted on the feeding habits of wintering and migrating steppe eagles in Asia Minor, the Middle East and the Arabian Peninsula. What is known suggests that they, even more strongly than wintering steppe eagles in the Indian subcontinent, today frequent various waste food sources inadvertently provided to them by humans. In Muscat, Oman, migrants largely from Kazakhstan were recorded to live off a mixture of refuse from the region's main landfill and large-scale carcass dump sites. As in the carcass dump areas of the Indian subcontinent, these carcass dumps often host a wide array of large birds of prey, both migrating species and non-migratory ones. In keeping with their size, steppe eagles dominated slightly smaller eagles and vultures and were in turn dominated by slightly larger eagles and much larger vultures. High use of slaughterhouses and cattle dump sites was recorded in winter in Iran. Interestingly, the Iranian slaughterhouses and dump sites hosted no first-year juveniles and few adults, but many steppe eagles aged either 2 to 3 years (62.5%) or 4 to 5 years (33.3%). Foraging in both dump sites and available wetlands has been recorded in Iraq as well. Incidental feeding observations from Armenia suggest that steppe eagles in passage and in winter there are able to capture large quantities of voles or to pirate them or similar small prey from smaller species of birds of prey.
Interspecific predatory relationships
The steppe eagle shares its distribution with several other birds of prey that can compete for resources. Most similar in feeding niche are largely other eagles, many of which are also similarly migratory. One eagle of similar central distribution is the eastern imperial eagle. The imperial eagle has a similar morphology and can broadly overlap in food selection. It also takes many ground squirrels but is generally less specialized on them during breeding, and often takes similar or larger numbers of prey such as hares, hedgehogs, hamsters and assorted birds both large and medium-sized. In general, the dietary biology is better understood, prey is taken of more diverse sizes and the prey spectrum is far more diverse (perhaps nearly three times as many recorded prey species) in the imperial species. The average weight of prey like young marmots is similar in both eagles, averaging in the eastern imperial, while the steppe also takes marmots of around this size. Although not common, the imperial eagle can sometimes take prey weighing over , probably rather more frequently than the steppe eagle. It is possible that the steppe eagle gained its preference for relatively more numerous and social but quite small mammals as prey to avoid heavier competition over slightly larger but often more dispersed terrestrial mammals (i.e. hares, hedgehogs, etc.), especially those taken by imperial eagles. Also, the imperial eagle is rather more predatory in food obtainment while wintering, not infrequently eschewing the more vulnerable nestling water birds in the Indian subcontinent to take many adult birds such as waterfowl and coots. The eastern imperial eagle differs most significantly from steppe eagles in nesting habits, favoring tall trees, sometimes in fairly well-wooded areas, quite contrary to the steppe eagle's ground-nesting preferences. The migratory course used by imperial eagles is largely the same as that of the steppe eagle, but the imperial is the far less numerous migrant (also more frequently overwintering near its breeding grounds) and radiates less far in winter (especially in Africa). Despite the steppe eagle averaging scarcely smaller, data from both breeding and wintering areas indicate that the imperial eagle tends to be behaviorally dominant over steppe eagles. This has manifested in the full or partial displacement by imperial eagles of steppe eagles locally using pylons as nesting sites. Furthermore, at shared feeding sites, the steppe eagle tends to back down to the imperial eagle, often allowing it to feed first, despite occasional displacement of imperials with full crops. On occasion, in India, steppe eagles succeed in pirating prey from imperial eagles, normally in cooperating parties of steppe eagles. In at least one case in India, the steppe eagle was the aggressor in an interaction with an eastern imperial eagle, causing the two eagles to lock talons and cartwheel down with uncertain results. While the eagles are expected to follow a size-based hierarchy when nests are located in the same general area, with the steppe generally considered subordinate to the imperial, which is itself subordinate to the golden eagle, interactions in the Altai region suggest a more complex interspecific relationship. There, one study reported several aggressive interactions with both imperial and golden eagles, and the steppes were surprisingly the aggressors in each.
In one instance, a golden eagle was fiercely attacked by a steppe eagle which appeared to dominate the interaction, grabbing the more formidably armed golden in the air and driving it forcefully to the ground (although the golden eagle was not killed).
Much has been written about what separates the steppe from the tawny eagle, but few to no interactions between the species have been noted in the wild. Beyond being strongly allopatric, wintering steppe eagles usually use slightly different habitats, favoring available wetlands quite apart from the arid wooded savanna and semi-desert areas preferred by tawny eagles. The tawny eagle, despite being smaller and proportionately similar in talon size (with a considerably less massive gape), is a rather more powerful and bold predator than the steppe eagle, alternating between capturing relatively large prey, pirating prey from other raptors and scavenging on carrion. The prey sizes taken by tawny eagles are perhaps the most evenly distributed across all weight classes amongst Aquila and Clanga species besides the eastern imperial and golden eagles, with a focus on prey weighing from , i.e. well over the typical prey sizes for steppe eagles in any area. Tawny eagles occasionally attend the same food sources as wintering steppes, such as carcass dumps, other carrion and termite alates, and the two appear to largely ignore each other; on the other hand, an assertive steppe can sometimes displace a tawny eagle. Apart from imperial eagles, steppe eagles were said to be dominant over other Aquila eagles and spotted eagles in the guild of raptors in Bharatpur. The steppe eagle is quite similar to the lesser and greater spotted eagles in migratory behaviour but tends to specialize on entirely different prey during breeding. The spotted eagles tend to nest in well-wooded areas near water and catch diverse prey, although they usually focus on fairly small prey as does the steppe eagle. The mean prey size taken by greater spotted eagles, with a diet often focused on various water-friendly rodents and medium-sized birds, is probably quite similar to that of the steppe eagle, whilst that of the lesser spotted, focused on voles, frogs and small snakes, is expectedly smaller. Especially in Africa, lesser spotted eagles become locally specialized termite eaters very much like the steppe eagle. Spotted eagles appear to be almost invariably dominated by steppe eagles, as has been recorded at carcass dumps during winter. In Bharatpur, spotted eagle species (Indian and greater) are quite often the targets of piracy by steppe eagles. Egyptian vultures (Neophron percnopterus) appear to be subordinate to steppe eagles at carrion, but most other vultures are larger (sometimes considerably so) and may be avoided by steppe eagles, although they often feed at carcass dumps alongside assorted vultures. Many other diurnal raptors may share the ground squirrels and other prey that the steppe eagle often subsists on but are generally less specialized and tend to use different nesting habits, usually nesting in trees. These may include saker falcons (Falco cherrug), long-legged buzzards and other buzzards, while the larger golden eagle and smaller upland buzzard (Buteo hemilasius) often nest on rocks at considerably higher elevations (although the golden may also nest in trees and other habitats).
Smaller raptors like harriers are often the only other diurnal birds of prey to regularly nest on the ground and may co-occur over much of the range of steppe eagles, although they usually use damper parts of the steppe than the eagles as nesting habitat. Harriers also often use similar migration routes to those of the steppe eagles. In Africa, steppe eagles are often found feeding peaceably on termites in the midst of numerous yellow-billed kites (Milvus aegyptius). However, when interactions are of a more competitive nature, the steppe eagle tends to dominate any species of kite. Other raptors both large and small are not infrequently the victims of kleptoparasitism by steppe eagles. In India, Brahminy kites (Haliastur indus), black kites (Milvus migrans), laggar falcons (Falco jugger), Montagu's harriers (Circus pygargus) and western marsh harriers (Circus aeruginosus), among others, were robbed of their catches, as were spotted and even tawny eagles. However, house crows (Corvus splendens) often robbed the steppe eagles. In Armenia, common buzzards and Montagu's harriers were seen to be robbed of catches by steppe eagles. Even the golden eagle has been seen to have its prey stolen by steppe eagles in the Bale Mountains.
Predatory interactions with other carnivorous animals in which the steppe eagles are victims are largely restricted to the vulnerable young, with the nest sites often being highly vulnerable due to their often accessible positions on mildly elevated ground. Hungry canids are often especially detrimental predators, particularly grey wolves (Canis lupus) and dogs (often herding and feral ones) and, more infrequently, red foxes and other carnivores like cats and their kin. An unexpected source of steppe eagle nestling mortality was found to be the unusually aggressive pallid harrier (Circus macrourus), which attacked and killed two consecutive young eagles although it never fed on them (possibly due to delayed displacement by the parent eagles). Apart from the vulnerable young on the nesting ground, steppe eagles appear to be seldom killed by natural predators. However, one was reported as the victim of a caracal (Caracal caracal) in Saudi Arabia (probably in a nighttime ambush). More often, the steppe eagle is the predator rather than the victim in deadly contests against other predators. Besides the many aforementioned accounts of prey including carnivores like mustelids and foxes, steppe eagles can also on occasion kill other raptorial birds and seem to consider even quite formidable species as viable prey. In the Karaganda region alone, the local steppe eagles were recorded to prey on lesser kestrels (Falco naumanni), long-legged buzzards, Eurasian eagle-owls (Bubo bubo) and seven short-eared owls (Asio flammeus). In the Altai region, in addition to eagle-owls, the black kite has also been recorded as steppe eagle prey. In fact, the steppe eagle is apparently the only bird besides the golden eagle to have preyed upon Eurasian eagle-owls on multiple occasions. Although rarely observed to halt movements or to eat while migrating in Israel, one steppe eagle was seen to suddenly strike down and consume an adult common buzzard while both species were in passage there. A Brahminy kite that was seen attempting to mob a steppe eagle in Tamil Nadu was observed to be killed by the eagle, while at least one other Brahminy kite there was also injured by an aggressive steppe eagle.
Breeding
The steppe eagle, like most raptors, breeds in pairs. Otherwise, it displays a preference for solitude whilst summering on the steppe. Like other Aquila eagles, this species may engage in a territorial aerial display. The display of the steppe eagle is not well known but can be assumed to resemble that of sympatric eagles and is known to include high circling (though it perhaps engages in fewer aerial acrobatics than other Aquila). In Kalmykia, the mean number of pairs per was 1.7. The steppe eagle is rare in the Saratov area, with peak areas such as Alexandrovo-Gaysky District, Novouzensky District, Saint Petersburg and Ozinsky District holding a mean of about 3 pairs per , while elsewhere in Saratov the mean number of pairs per that area is about 0.8. Nearest neighbor distance in Transbaikal averaged . 85 nests in the Altai foothills were found to be distanced at a mean of , although not all of the nests were occupied. In the Ukok Plateau (within the Altais), the mean nearest neighbor distance was found to be , ranging from . Another study of the Altai found that there were about 0.51 to 3.11 pairs per , with a number of successful pairs of 0.35–1.35 per this area, and furthermore found that Khakassia and Krasnoyarsk Krai contained higher nesting densities but that the Tyva Republic contained lower densities. This study found that in the Altai the mean nearest neighbor distance was on average , ranging from . In the borderlands between Kazakhstan and Russia, i.e. Aktobe and Orenburg, there was an estimated 7.1 pairs per . In the southern part of the Aktobe region, such as between Bayganin District and Miyaly, the density of nesting steppes can occasionally reach as high as 2-2.5 pairs per . In the Atyrau Region of Kazakhstan, nests on utility towers were spaced at a mean of around , as opposed to around apart on other nesting substrates. In the Aral and Caspian areas of Kazakhstan, the mean nearest neighbor distance was , but the density of breeding pairs varied more than 50-fold based on habitat, with locally cliff habitat being the least productive and clay semi-desert the most productive. Within the Karaganda Region, the mean number of pairs per was 7.67 while the number of successful pairs per such an area was 3.24. In Xinjiang, home ranges were found to range in size from . The breeding season falls from late March or early April (occasionally not starting in earnest until late April) to roughly late August, although several steppe eagles can remain on their breeding grounds until at least October.
Nests
The nest is a large stick platform, varying greatly in size based on available materials but averaging flatter than those of other Aquila eagles, excepting the tawny eagle. Most nests are around in diameter and around deep. Nests in the Transbaikal averaged across, with nests located on cliffs or rocks getting larger, at up to deep. In the Saratov area, 14 nests measured on average in diameter by in depth, with ranges of and , respectively. Nests in Xinjiang could have diameters of as much as . The largest nest in the southern Aktobe region reached a diameter of ; while the height of the nest could vary there from , those in trees could reach up to . The nest is generally lined with twigs and much clutter. This is due to sparser nesting material in their habitat, so nest structures frequently include peculiar items: paper, polyethylene bags, pieces of wool and manure, bones, feathers, old rags and other human refuse. The nest is traditionally placed in an exposed site among stones, often on a hummock. Other nesting sites can include very low bushes and a spot on the ground which is usually raised slightly above the mean layout of the environment. Some other nest sites are known to have included haystacks or ruins for a slight prominence, also sometimes a non-steep cliff or, rarely, a tree. Although some older studies claimed that steppe eagles avoided nesting near human activity, this has been largely disproven. In Kalmykia, all nests were only from paved roads. A nest in the West Kazakhstan Region was found to be quite close to a village. However, in Transbaikal, the conversion of areas to farmland was a primary cause of a large population contraction for the eagles there. Of 14 nests in Kalmykia, 10 were located on the ground and 4 were in trees or shrubs, not to mention some located on transmission towers at a mean nest height of . In Transbaikal, 53.7% of 47 nests were on hills. Those located on rocky outcrops in Transbaikal averaged above the mean ground level. In the region of Lake Baskunchak, most of 16 nests found for the species were on rocky rubble and boulders, karst craters and cliffs, with two located in trees. In the Altai region, 62.4% of territories contained only one nest, but 27.4% contained 1 alternate nest, 4.7% contained 2 alternate nests and 5.9% contained 3 alternate nests. A study in Altai determined that nests were often in virgin or fallow steppe near larch forests, and were often reused in subsequent years. Altai nests were mostly located on gently sloping rocky outcrops or cuesta escarpments, which accounted for about 82% of known nest sites (with only 4% on flat ground). In the borderlands of Kazakhstan and Russia, of 418 total nests, 75.6% were on ledges of rocks and boulders or quartz ridges and only 15.8% were on flat ground. Within the region of Atyrau in Kazakhstan, 26% of raptor nests located on electrical transmission towers (or pylons) were those of steppe eagles. 38% of nests in the Aral-Caspian region were either on pylons or at the base of them. In the Karaganda Region, 75.6% of nests were on outcrops and a further 10.84% were on rock disintegrations, and all nest sites had a rocky substrate (including a very small number on low bushes as well). The Karaganda nests were at a mean elevation of . In the general West Kazakhstan Region, of 286 nests, 30.42% were on ground or rocks, 28.32% in trees or bushes and 27.27% on utility poles. 
The nests in the West Kazakh area that were on rocks and cliffs averaged above the ground, those on poles averaged above ground, in trees, and those in bushes averaged above the ground. One nest near the Ili River was noted to be in a small tree, a Haloxylon, while another was on a perhaps dangerously hot sandy dune. One successful nest near the Irtysh River was on the shrub Iberian meadowsweet (Spiraea hypericifolia) (while another, unsuccessful one was on a thick growth of Lonicera tatarica). Another unusual Kazakh nest was on the ground in rather loamy grassland and was probably unsafe due to excessive sun exposure (only blocked for 20% of the day) and a considerable local presence of red foxes. Of 49 nests found in Mongolia, 47.8% were on the ground, 32.6% were on rock columns or large boulders, 8.7% on cliffs, 8.7% on artificial substrates (including a car tire, an abandoned car cabin and an artificial nest platform) and 2.2% were in a tree. All Mongolian nests in this study were at elevations between , with a mean of . In Mongolia, the height of nests above the surrounding flat earth was .
Eggs and incubation
The clutch size is usually 2, ranging from 1 to 3, with some clutches very rarely including as many as 4–5 eggs. The clutch averaged 2.38 in nests in the Aral-Caspian area. In Kalmykia, the mean clutch size was 2.31. In the Lake Baskunchak area, 66.7% of nests contained 2 eggs, 25% contained 3 and 8.3% contained 1. Out of 30 inhabited nests in the Transbaikal, 77% had 2 eggs, 20% had 1 egg and 3% had 3 eggs. In the Volga area, the average clutch size was 2.2. One study in Altai found that the clutch size of a small sample averaged 1.67, while another placed the mean clutch size of 19 clutches at 2. Yet another Altai study found that the clutch size in 32 active nests was 2.33. In Aktobe and Orenburg, the mean clutch size was 1.94. In western Kazakhstan, the clutch size averaged 2.38, with up to 4 recorded; here 54.05% of clutches contained 2 eggs. The mean clutch size in a study in Mongolia was 1.9; amongst 43 egg-laying pairs, 58.1% laid 2 eggs, 23.3% laid 1 and 18.6% laid 3. The eggs are largely off-white in colour but may have faint brown or grey spots. The mean egg size in Kalmykia was , with a range of in height and a range of in width. The average was similar in the Volga area, at , with ranges of by . Eggs were larger in Transbaikal, where they measured on average and weighed a mean of . Two eggs in Altai weighed , respectively. The incubation stage lasts around 45 days, though it may be up to a week briefer in some cases. Hatching is often sometime in May, but can continue to early June.
Development of young and parental behaviour
The brood size in Kalmykia averaged 1.64. As for Transbaikal, the average number of chicks per occupied nest was 0.65 while, in successful nests, the average was 1.38. In the Altai, the mean brood size was reported as 2 in a sample of 9 one year and 1.4 in a sample of 10 the next year. A different study of the Altai found that there was a mean brood size of 1.86 per successful nest (0.86 per all occupied nests). In the trans-border area of Russia and Kazakhstan, the mean brood size per occupied nest was 1.03. The average brood size in the Aral-Caspian was 2.36. In the highlands of the East Kazakhstan Region, a mean of 1.9 eaglets were found in 15 occupied nests. In the west Kazakh region, the brood size averaged 2.22. One study found that sex can be identified via morphometric measurements in 90% of cases and that the larger eastern populations of the steppe eagle are conspicuously larger at all stages of development than the more westerly ones. The growth and development of a single chick in the Zhanybek District of Kazakhstan was well studied, in an area where little ground squirrels were broadly available (i.e. about 40 adults per ha). This eaglet weighed and was covered in white down on day 1, while, by day 6, it weighed and had white down as before but longer. By day 10, it weighed and by day 15 it weighed . At the age of 20 days, this eaglet weighed and manifested much more conspicuous emerging brown feathers. Once aged 25 days, it weighed and had juvenile feathers over a third of its body, and by day 30 it weighed . By day 35, it weighed and was almost all brown but with down still about the head. Full body size and juvenile plumage (but for fully developed wing and tail feathers) were attained at 40–43 days, when the chick weighed ; although fully grown, it still crouched down at threats and could not fly. Similar growth was tracked in Xinjiang, where it was noted that at around 20 days of age the young could stand, at 45 days could flap their wings frequently and move about the nest vicinity somewhat, and at 60 days old could eat unaided. In one case near the Irtysh River, when a nest was approached by humans, the eldest eaglet was seen to display aggressively and try to displace them while it appeared to protect its two younger siblings, which sheltered behind it. The fledging of the young eagles occurs relatively quickly, at somewhere between 55 and 65 days, probably due to the vulnerability of the nest sites; quickly leaving the nest is probably advantageous in avoiding dangers peculiar to these eagles' nests, such as predators, wildfires, cattle-trampling, humans and so on. Usually, the second fledgling initially flies somewhat later and more clumsily than the first. The difference in fledging times from the first to the second fledgling was recorded to be 8 to 10 days in Transbaikal. The mother steppe eagle in Kalmykia is not as tight a sitter as other Aquila eagles tend to be, flushing when approached within , and may also take a relatively long time to return. However, here the females were highly tolerant of automobiles. Along the Irtysh River in Kazakhstan, although steppe eagles flushed when approached within , they did not depart for long and returned quickly for this species relative to other reports. Those nesting in Atyrau on utility towers allowed approach to within via car. 
In the southern Aktobe region, the steppe eagles appear almost desensitized to humans, probably due to extensive exposure unlike in more remote areas, and allowed approach via motorcycle to within but flushed if a person was on foot within . Levels of hemoparasites appear to be low in nestling steppe eagles but the sample sizes of the only study known so far are small.
Nesting success rates
Out of 10 nests in two locations in Kalmykia, alarmingly, 7 contained dead embryos that never hatched. Thus, the breeding success rate here was quite low, at 30–40%. A study from the Altai region determined that occupied nests produced an average of 1.52 fledglings. Within the Ukok Plateau part of the Altais, 31.6% of 19 checked nests were found to be successful. The mean number of fledglings per nest in Mongolia was 0.9, with a fledging success rate of 42.2% and no strong annual variations. Nests located on cliffs were more successful in Mongolia, the reasons inferred being greater protection from the elements and from predators, while for those nesting on artificial substrates the success rate was lower (37.5%). Of 30 failures recorded in Mongolia, 37.5% were due to desertion by the parents, 16.7% due to infertile eggs, 6.7% due to predation (possibly from wolves and common ravens (Corvus corax)), 10% due to starvation, 3.3% from cannibalism and the remaining 26.7% from unknown causes. In the Lake Baskunchak area, several nests were abandoned due to reportedly accidental human intrusions. A study in Xinjiang indicated that about two-thirds of nesting attempts by steppe eagles there appear to fail. In Tuva, the dictating factors of nesting success were considered to be habitat quality, food supply, disturbance levels, and the ability to rapidly change home ranges as habitats were unnaturally altered via the felling of tall trees. Starvation and dehydration seem to be leading causes of chick mortality in Lake Baskunchak. In the Orenburg region, 41.1% of nestling mortality was due to starvation (after a decline of the little ground squirrel), 38.3% due to steppe fires, human disturbance at just under 10%, and more minor causes were parental inexperience and predation. Among active nests in the Karaganda Region, as of 2017, 42.26% were successful and a high rate of 54.46% completely failed, producing 0.61 fledglings per occupied nest and 1.45 per successful nest. Many Karaganda nests were noted to include infertile eggs, while many nests were destroyed in steppe fires. A follow-up study 2 years later found even more severe nest failure rates, with only 28.42% of nests succeeding. The reduction in the number of occupied nests here was 18.9% and, by number of successful nests, the reduction was 63.9%. The breeding of a large quantity of visibly younger eagles in a breeding population is generally thought to indicate stress on the population of a raptor species. The steppe eagle, in sync with its overall decline, has shown an alarming increase in subadult specimens breeding. In Kalmykia, the number of breeding subadults increased from 1.75% previously to 5.26% during the 2011–2015 period. This is a relatively small amount of breeding underaged eagles compared to some other populations. In the Aral and Caspian areas, 39.62% of 58 breeding pairs contained 1 subadult breeding bird of around 3–5 years of age. Similarly, within the Ukok Plateau, only 23.8% of 67 sighted steppe eagles were mature adults, indicating that the reduction of mature individuals is similarly severe there. Subadult breeding steppe eagles were also noticed in Mongolia.
Status
Densities of steppe eagles vary greatly both regionally and annually. This species has specialized food requirements, making it more dependent on food availability than many other raptors. European Russia in the 1990s was estimated to hold up to 20,000 pairs, while steppe eagles were considered very rare in some parts of the breeding range (i.e. central-south Siberia). Even in recent decades, the steppe eagle has been considered easily one of the most numerous migrating eagles in the world. The species has largely been regarded as "widespread and common" in winter in the Indian subcontinent. Estimates of the world population were projected based on a total breeding range encompassing over and an average density of very roughly , which would put the population in six figures, but densities were perceived to be slightly lower (i.e. 1 pair/), giving a total of 80,000 breeding pairs. A refined and more recent estimate of the global population posited that there were 53,000–86,000 remaining breeding pairs globally for steppe eagles, with 43,000–59,000 pairs estimated in Kazakhstan, 2000–3000 estimated in Russia, 6000–13,000 estimated in Mongolia, and 2000–6000 estimated in China. This study projects the global number to be between 185,000 and 344,000 individuals at peak times, i.e. the end of the breeding season, with only 17,700–43,000 remaining adults. The Aral-Caspian area is estimated to hold about 5742–7548 breeding pairs (51% of which are thought to breed in Uzbekistan, the remainder in Kazakhstan). Post-breeding, the Aral-Caspian is thought to hold about 10,000–14,000 individuals. It is thought that the West Kazakhstan Region and Kalmykia region are the epicenter of the world population, holding the maximum genetic diversity (via haplotype) in the world, with the genetic diversity narrowing farther east in the breeding range. The Altai and Sayan region is thought to hold 43–51% of the current breeding population of steppe eagles in Russia, with the Altai Republic estimated to hold 270–280 breeding pairs. A slight recovery has been noted in the Tuva Basin, going up to as much as 200–300 pairs between 2008 and 2014.
The breeding range of the steppe eagle had already contracted markedly early in the 20th century, especially in the west, largely as a result of habitat loss (in particular the appropriation of steppes for agriculture) but also persecution and predation, factors that may have driven some pairs to elevated nest sites. Steppe eagles once bred in Romania, Moldova and, more recently, Ukraine. Careless pesticide use in Europe depleted prey populations and caused the near collapse of the whole local ecosystem, which alongside habitat conversion and persecution drove the steppe eagle's European breeding population to extinction. Range contraction is very considerable today in the European Russia area, where it is almost locally extinct as well. Declines overall have been rapid and alarming. It is estimated that over the 20 years prior to 2015, the population worldwide declined by at minimum 58.6%. As a result, the steppe eagle was uplisted to endangered by the IUCN in 2015. In the borderlands of Russia and Kazakhstan, an estimated 11.9% population decline was detected in merely 6 years of study. Primary global threats appear to be habitat loss, persecution, wildfires, predation (and trampling by cattle) of chicks and electrocutions and wire collisions, especially the latter causes. Furthermore, the steppe eagle's genetic diversity may be rapidly declining as well. The diagnosed causes of decline in Xinjiang, Tibet and Qinghai were found mostly to be poaching, poisoning from rodent control programs (with systematic efforts dating back at least 60 years), poisoning also targeted towards predators, illegal trade, food shortages and wire collisions, but perhaps most of all habitat destruction, often with their former homes destroyed to make way for roadworks, tourism and mine exploration, with more destruction from the overharvest of trees and plants and overgrazing by livestock, and accidental are frequent. Poisoning is thought to be quite prevalent in the Altais, as well as powerline electrocutions. A mean of 13.3 individual steppe eagles in Kazakhstan were estimated to be killed by each of powerline. The steppe eagle can even be the most frequently electrocuted raptor in Kazakh data, at up to about 35% of 129 dead raptors or 49% of 223 dead raptors in a couple of relatively small stretches. Many birds of various families are killed by these powerlines, as was recorded in Central Kazakhstan, in addition to the various raptorial birds, which (due to their low reproductive rates and large territories) are often unable to withstand continuous powerline losses. It is estimated that in West Kazakhstan no fewer than 1635 nests (or 7.9% of the entire breeding Kazakh population) fail due to the parents being electrocuted on powerlines. Across the border in Orenburg, Russia, a high rate of electrocutions is known to be occurring as well. Furthermore, overgrazing and habitat alterations by humans have destroyed much of what remains around the Haloxylon stands of west Kazakhstan (in turn destroying about 50% of local nesting attempts), while some steppe eagles cannot nest locally due to presumed competitive exclusion by imperial eagles. Locally, predation and nest losses can claim up to 80% of chicks, but productivity is heavily dependent on food. In the Karaganda Region, 20.9% of nests were recorded to be ruined by steppe fires. Less well understood losses in West Kazakhstan may be due to continued poaching, poisonings and blackflies, which kill nestlings and seem to have increased with the warming temperatures. 
The number of steppe fires appears to be higher than ever before, which may also be due to increasing warmth and aridity. The probable and irrevocable extinction of this species is projected if the numerous detrimental factors are not reversed, namely through the mitigation of electrical lines and towers in breeding areas, the removal of grazing and manmade fires from breeding areas, and the curbing of habitat destruction and the several other threats. Ambiguities exist over whether Kazakhstan can institute protective laws strong enough to prevent the loss of the species, as even powerline alterations have not occurred at the national level. However, continued relative stability of the species has been detected in the more minor eastern part of the range, such as the Altai (based on largely unchanged numbers of migrants from here in Nepal and elsewhere in the Himalayas). One potential stopgap solution to mitigate some of the electrocutions may be to install T-shaped perches around transmission towers, which has been effective in reducing the more minor decline from powerlines in the Mongolian part of the range. Notably, the rate of decline in the western part of the range is pronounced, quite to the contrary of other similar raptors like eastern imperial eagles and long-legged buzzards, which have begun to recover in similar areas (an almost opposite pattern manifests in the imperial eagle, which is declining much more severely in the east, such as around Baikal).
The nomadic nature of steppe eagles in winter can make accurate counts of the species in that season difficult. However, the species is still considered relatively frequent during winter in Pakistan. Declines in migrating and wintering areas appear to be generally poorly documented. There is concern that landfills function as ecological traps for this species, due to poisoning being frequent and powerlines frequently being a present threat. From the years 1882 to 2013, an estimated 76,879 steppe eagles were recorded in 9 countries in the Indian subcontinent, including Afghanistan, Pakistan, Nepal, Tibet, Bhutan, Sikkim, Myanmar and Bangladesh, often gathering around garbage and carrion dumps. They may even be locally increasing somewhat in number in locales in India such as Kerala, possibly concentrated more so due to less competition from vultures. However, toxic levels of diclofenac were found in two dead steppe eagles at a cattle carcass dump in Rajasthan. A paper based on joint research conducted by the Bombay Natural History Society, the Royal Society for the Protection of Birds and the Indian Veterinary Research Institute, published in May 2014 in a journal of the Cambridge University Press, highlighted that steppe eagles are adversely affected by diclofenac and may fall prey to veterinary use of it. The research found the same signs of kidney failure as seen in the Gyps vultures killed by diclofenac. They found extensive visceral gout, lesions and uric acid deposits in the liver, kidney and spleen, as well as deposits of diclofenac residue in tissues. Steppe eagles are opportunistic scavengers, which may expose them to the risk of diclofenac poisoning. Declines have been pronounced in passage at Eilat, Israel, where the ratio of steppe eagle juveniles to adults has dropped 30% from the early 1980s and by 1.4% by 2000; all record low annual numbers in migration there have been subsequent to the 1990s. However, lower numbers in Eilat may be due in part to increasingly large portions of the steppe eagle population now wintering in Arabia rather than Africa. Persecution of raptors in Eilat appears to be still quite prevalent, with steppe eagles accounting for 9.1% of 77 raptors that were found killed by poachers (the birds often appeared to have been wrapped in rope and sometimes mutilated), apparently largely out of superstition. Some populations of migrants in Israel may also have been depleted by the Chernobyl disaster. Israeli biologists have strongly advocated that stricter protection be undertaken and a conserved greenbelt be instituted to accommodate the steppe eagle and other raptors in passage. The similar numbers seen in passage exiting Africa in spring as those entering in autumn indicate that mortality for the species is not high on that continent. Persecution through shooting likely continues to be of detriment to steppe eagles migrating or wintering in the countries of Georgia, Armenia, Iraq and Jordan, with the eagles being relatively vulnerable due to their sluggish, unwary demeanor; in Iraq, they, along with many raptors, end up being offered at local markets. In Saudi Arabia, the turnover to more intensive farming activity has depleted the available habitat usable for steppe eagles. In Saudi Arabia as well as Iraq and Armenia, other conservation concerns are similar to those elsewhere, including dangerous powerlines and potential poisonings.
| Biology and health sciences | Accipitrimorphae | Animals |
755375 | https://en.wikipedia.org/wiki/Pectoralis%20major | Pectoralis major | The pectoralis major () is a thick, fan-shaped or triangular convergent muscle of the human chest. It makes up the bulk of the chest muscles and lies under the breast. Beneath the pectoralis major is the pectoralis minor muscle.
The pectoralis major arises from parts of the clavicle and sternum, costal cartilages of the true ribs, and the aponeurosis of the abdominal external oblique muscle; it inserts onto the lateral lip of the bicipital groove. It receives double motor innervation from the medial pectoral nerve and the lateral pectoral nerve. The pectoralis major's primary functions are flexion, adduction, and internal rotation of the humerus. The pectoralis major may colloquially be referred to as "pecs", "pectoral muscle", or "chest muscle", because it is the largest and most superficial muscle in the chest area.
Structure
Origin
It arises from the anterior surface of the sternal half of the clavicle; from half the breadth of the anterior surface of the sternum, as low down as the attachment of the cartilage of the sixth or seventh rib; from the cartilages of all the true ribs, with the exception, frequently, of the first or seventh; and from the aponeurosis of the abdominal external oblique muscle.
Insertion
From this extensive origin the fibers converge toward their insertion; those arising from the clavicle pass obliquely downward and outwards (laterally), and are usually separated from the rest by a slight interval; those from the lower part of the sternum, and the cartilages of the lower true ribs, run upward and laterally, while the middle fibers pass horizontally.
They all end in a flat tendon, about 5 cm in breadth, which is inserted into the lateral lip of the bicipital groove (intertubercular sulcus) of the humerus.
This tendon consists of two laminae, placed one in front of the other, and usually blended together below:
The anterior lamina, which is thicker, receives the clavicular and the uppermost sternal fibers. They are inserted in the same order as that in which they arise: the most lateral of the clavicular fibers are inserted at the upper part of the anterior lamina; the uppermost sternal fibers pass down to the lower part of the lamina which extends as low as the tendon of the Deltoid and joins with it.
The posterior lamina of the tendon receives the attachment of the greater part of the sternal portion and the deep fibers, i.e., those from the costal cartilages.
These deep fibers, particularly those from the lower costal cartilages, ascend higher on the humeral insertion, turning backward successively behind the superficial and upper ones, so that the tendon appears to be twisted. The posterior lamina reaches higher on the humerus than the anterior one, and it gives an expansion which covers the intertubercular groove of the humerus and blends with the capsule of the shoulder-joint.
From the deepest fibers of this lamina at its insertion, an expansion is given off which lines the intertubercular groove, while from the lower border of the tendon, a third expansion passes downward to the fascia of the arm.
Nerve supply
The pectoralis major receives dual motor innervation by the medial pectoral nerve and the lateral pectoral nerve, also known as the lateral anterior thoracic nerve. The sternal head receives innervation from the C7, C8 and T1 nerve roots, via the lower trunk of the brachial plexus and the medial pectoral nerve. The clavicular head receives innervation from the C5 and C6 nerve roots via the upper trunk and lateral cord of the brachial plexus, which gives off the lateral pectoral nerve. The lateral pectoral nerve is distributed over the deep surface of the pectoralis major.
The sensory feedback from the pectoralis major follows the reverse path, returning via first-order neurons to the spinal nerves at C5, C6, C8, and T1 through the posterior rami. After the synapse in the posterior horn of the spinal cord, sensory information concerning movement of the muscle, proprioception, and pressure then travels through a second-order neuron in the dorsal column medial lemniscus tract to the medulla. There, the fibers decussate to form the medial lemniscus which carries the sensory information the rest of the way to the thalamus, the "gateway to the cortex". The thalamus diverts some sensory information to the cerebellum and the basal nuclei to complete the motor feedback loop while some sensory information ascends directly to the postcentral gyrus of the parietal lobe of the brain via third-order neurons. Sensory information for the pectoralis major is processed in the superior portion of the sensory homunculus, adjacent to the longitudinal fissure which divides the two hemispheres of the brain.
Electromyography suggests that it consists of at least six groups of muscle fibres that can be independently coordinated by the central nervous system.
Variation
The more frequent variations include greater or less extent of attachment to the ribs and sternum, varying size of the abdominal part or its absence, greater or less extent of separation of sternocostal and clavicular parts, fusion of clavicular part with deltoid, and decussation in front of the sternum. Deficiency or absence of the sternocostal part is not uncommon and more frequent than absence of the clavicular part.
Poland syndrome is a rare congenital condition in which the whole muscle is missing, most commonly on one side of the body. This may accompany absence of the breast in females. The sternalis muscle may be a variant form of the pectoralis major or the rectus abdominis. Submuscular and intramuscular surgical implants (similar to breast augmentation implants) may be available from plastic surgeons to modify aesthetic contours, mass, and asymmetry or variation in both males and females.
Function
The pectoralis major has four actions which are primarily responsible for movement of the shoulder joint.
The first action is flexion of the humerus, as in throwing a ball underhand, and in lifting a child.
Secondly, it adducts the humerus, as when flapping the arms.
Thirdly, it rotates the humerus medially, as occurs when arm-wrestling.
Fourthly the pectoralis major is also responsible for keeping the arm attached to the trunk of the body.
It has two different parts which are responsible for different actions.
The clavicular part is close to the deltoid muscle and contributes to flexion, horizontal adduction, and inward rotation of the humerus. When at an approximately 110-degree angle, it contributes to adduction of the humerus.
The sternocostal part is antagonistic to the clavicular part contributing to downward and forward movement of the arm and inward rotation when accompanied by adduction. The sternal fibers can also contribute to extension, but not beyond anatomical position.
Hypertrophy of the pectoralis major increases functionality. Maximal activation of the pectoralis major occurs in the transverse plane through pressing motions. Both multi-joint and single-joint exercises induce pectoralis major hypertrophy. A combination of both single-joint and multi-joint exercises will result in a maximum hypertrophic response. Aesthetic contours of regions in the muscle may be specifically addressed ("targeted") by specific exercises; for instance, the "plating" or "stitching" of the pectoralis major towards the center of the sternum may be targeted by a wider hand position. The pectoralis major can be targeted from numerous training angles along the sternum and clavicle. Exercises that include horizontal adduction and elbow extension, such as the barbell bench press, dumbbell bench press, and machine bench press, induce high activation of the pectoralis major in the sternocostal region. Heavy loads are strongly correlated with pectoralis major activation.
Clinical significance
Injuries and imaging
Tears of the pectoralis major are rare and typically affect otherwise healthy individuals. This type of injury is known to affect the athletic population, namely in high-impact contact sports such as powerlifting, and may result in pain, weakness, and disability. Most lesions are located at the musculotendinous junction and result from violent, eccentric contraction of the muscle, such as during bench press. A less frequent rupture site is the muscle belly, usually as a result of a direct blow. In developed countries, most lesions occur in male athletes, especially those practicing contact sports and weight-lifting (particularly during a bench press maneuver). Women are less susceptible to these tears because of larger tendon-to-muscle diameter, greater muscular elasticity, and less energetic injuries. The injury is characterized by sudden and acute pain in the chest wall and shoulder area, bruising and loss of strength of the muscle. High grade partial or full thickness tears warrant surgical repair as the preferred treatment if function is to be preserved, particularly in the athletic population.
Acting fast, obtaining the correct diagnosis, and getting surgical repair as soon as possible are key to successful recovery. Waiting can cause the acute injury to become chronic, and the chances of success are greatly diminished as a result. After surgery, the affected arm is immobilized with a sling for about six to eight weeks to minimize and avoid movement of the arm and potentially re-rupturing the surgery site. About two months after the surgery, physical therapy is typically introduced for about six months, after which point strengthening of the muscle is needed to achieve good results. Most patients are able to return to activity six months to a year following surgery, with high patient satisfaction and slightly reduced strength compared to pre-injury. Both US and MRI are useful to confirm the diagnosis, location and extent of a tear, though the first may be more cost-effective in experienced hands.
Poland syndrome
Poland syndrome is a congenital anomaly in which there is a malformation of the chest causing the pectoralis major on one side of the body to be absent. Other characteristics of this disease are "unilateral shortening of the index, long, and ring fingers, syndactyly of the affected digits, hypoplasia of the hand, and the absence of the sternocostal portion of the ipsilateral pectoralis major muscle". Although the absence of a pectoralis major is not life-threatening, it will have an effect on the person with Poland's syndrome. Adduction and medial rotation of the arm will be much harder to accomplish without the pectoralis major. The latissimus dorsi and teres major also aid in adduction and medial rotation of the arm, so they may be able to compensate for the lack of extra muscle. However, some patients with Poland's syndrome may also be lacking these muscles, which make these actions nearly impossible.
Researchers from the Department of Rehabilitation Medicine at the Yonsei University College of Medicine in Seoul, Korea reported a case of congenital absence of pectoralis major in 1990. According to Kakulas and Adams, pectoralis major is the most frequently congenitally absent muscle. The case involved a 22-year-old marine with an asymmetrical configuration of the chest wall who had never experienced difficulties performing daily activities, but who experienced difficulties in the military camp. He had difficulty in some training activities, especially those such as throwing a grenade or rope climbing. During a surgery performed to correct the sternal depression, it was found that the right pectoralis major was totally absent. However, previous physical exams did not show deficiencies in muscle strength, as the right shoulder was good for flexion, adduction, horizontal adduction and internal rotation. Moreover, his pain and touch sensation were normal. X-rays were also performed and showed normal pictures of the chest's bones. The fact that the absence of pectoralis major did not cause functional loss in ordinary activities in this case of congenital absence showed that other surrounding muscles played a compensatory role.
Other diseases
The pectoralis major muscle may on rare occasions develop intramuscular lipomas. Such rare tumors may mimic malignant breast tumors as they look like enlargements of the breasts. They are well-encapsulated radiolucent tumours of fat density. Their location can be accurately identified through computed tomography and magnetic resonance imaging (MRI). The treatment in these cases involves complete surgical excision because of the risk of liposarcoma they pose, especially with large intramuscular lipomas. Partial excision is risky because recurrence may occur.
| Biology and health sciences | Human anatomy | Health |
755947 | https://en.wikipedia.org/wiki/Ravelin | Ravelin | A ravelin is a triangular fortification or detached outwork, located in front of the innerworks of a fortress (the curtain walls and bastions). Originally called a demi-lune, after the lunette, the ravelin is placed outside a castle and opposite a fortification curtain wall.
The ravelin is the oldest and at the same time the most important outer work of the bastion fortification system. It originated from small forts that were supposed to cover the bridge that led across the moat to the city or fortress gate from a direct attack. From this original function of protecting the gate bridge also comes its original Italian name "rivellino" (meaning a small bank work, or, in the German expression common for it, Brückenkopf, "bridgehead"). Therefore, the ravelin was at first only a small work, intended only to make access to the bridge in front of the fortress gates more difficult.
When it was realized in the 16th century that this would generally provide better protection for the courtine, ravelins were also built in front of other courtines and these were gradually enlarged. However, it was not until the German fortress builder Daniel Specklin (1536–1589) that the principal importance of ravelins (which he still called "ledige Wehr" or "revelin") was fully recognized. He demanded that they be made as large as possible so that they fully covered the courtine and the flanks of the bastions and could place flanking fire in front of the bastion tops. In the following period, ravelins can be found in practically all fortresses built according to the bastion fortification system.
The outer edges of the ravelin are so configured that it divides an assault force, and guns in the ravelin can fire upon the attacking troops as they approach the curtain wall. It also impedes besiegers from using their artillery to batter a breach in the curtain wall. The side of the ravelin facing the inner fortifications has at best a low wall, if any, so as not to shelter attacking forces if they have overwhelmed it or the defenders have abandoned it. Frequently ravelins have a ramp or stairs on the curtain-wall side to facilitate the movement of troops and artillery onto the ravelin.
The first example of a ravelin appears in the fortifications of the Italian town of Sarzanello, and dates from 1497. The first ravelins were built of brick, but later, during the sixteenth century in the Netherlands, they were earthen (perhaps faced by stone or brick), the better to absorb the impact of cannonballs. The Italian origins of the system of fortifications (the star forts) of which ravelins were a part gave rise to the term trace Italienne.
The French 17th-century military engineer Vauban made great use of ravelins in his design of fortifications for Louis XIV, and his ideas were still being used in 1761 by Major William Green at Gibraltar.
| Technology | Fortification | null |
111194 | https://en.wikipedia.org/wiki/Pyroxene | Pyroxene | The pyroxenes (commonly abbreviated Px) are a group of important rock-forming inosilicate minerals found in many igneous and metamorphic rocks. Pyroxenes have the general formula XY(Si,Al)2O6, where X represents calcium (Ca), sodium (Na), iron (Fe(II)) or magnesium (Mg) and more rarely zinc, manganese or lithium, and Y represents ions of smaller size, such as chromium (Cr), aluminium (Al), magnesium (Mg), cobalt (Co), manganese (Mn), scandium (Sc), titanium (Ti), vanadium (V) or even iron (Fe(II) or Fe(III)). Although aluminium substitutes extensively for silicon in silicates such as feldspars and amphiboles, the substitution occurs only to a limited extent in most pyroxenes. They share a common structure consisting of single chains of silica tetrahedra. Pyroxenes that crystallize in the monoclinic system are known as clinopyroxenes and those that crystallize in the orthorhombic system are known as orthopyroxenes.
The name pyroxene is derived from the Ancient Greek words for 'fire' (πυρ, pyr) and 'stranger' (ξένος, xénos). Pyroxenes were so named due to their presence in volcanic lavas, where they are sometimes found as crystals embedded in volcanic glass; it was assumed they were impurities in the glass, hence the name meaning "fire stranger". However, they are simply early-forming minerals that crystallized before the lava erupted.
The upper mantle of Earth is composed mainly of olivine and pyroxene minerals. Pyroxene and feldspar are the major minerals in basalt, andesite, and gabbro rocks.
Structure
Pyroxenes are the most common single-chain silicate minerals. (The only other important group of single-chain silicates, the pyroxenoids, is much less common.) Their structure consists of parallel chains of negatively charged silica tetrahedra bonded together by metal cations. In other words, each silicon ion in a pyroxene crystal is surrounded by four oxygen ions forming a tetrahedron around the relatively small silicon ion. Each silicon ion shares two oxygen ions with neighboring silicon ions in the chain.
The tetrahedra in the chain all face in the same direction, so that two oxygen ions are located on one face of the chain for every oxygen ion on the other face of the chain. The oxygen ions on the narrower face are described as apical oxygen ions. Pairs of chains are bound together on their apical sides by Y cations, with each Y cation surrounded by six oxygen ions. The resulting pairs of single chains have sometimes been likened to I-beams. The I-beams interlock, with additional X cations bonding the outer faces of the I-beams to neighboring I-beams and providing the remaining charge balance. This binding is relatively weak and gives pyroxenes their characteristic cleavage.
Chemistry and nomenclature
The chain silicate structure of the pyroxenes offers much flexibility in the incorporation of various cations and the names of the pyroxene minerals are primarily defined by their chemical composition.
Pyroxene minerals are named according to the chemical species occupying the X (or M2) site, the Y (or M1) site, and the tetrahedral T site. Cations in Y (M1) site are closely bound to 6 oxygens in octahedral coordination. Cations in the X (M2) site can be coordinated with 6 to 8 oxygen atoms, depending on the cation size. Twenty mineral names are recognised by the International Mineralogical Association's Commission on New Minerals and Mineral Names and 105 previously used names have been discarded.
A typical pyroxene has mostly silicon in the tetrahedral site and predominantly ions with a charge of +2 in both the X and Y sites, giving the approximate formula XYSi2O6. The names of the common calcium-iron-magnesium pyroxenes are defined in the 'pyroxene quadrilateral'. The enstatite-ferrosilite series ([Mg,Fe]SiO3) includes the common rock-forming mineral hypersthene, contains up to 5 mol.% calcium and exists in three polymorphs, orthorhombic orthoenstatite and protoenstatite and monoclinic clinoenstatite (and the ferrosilite equivalents). Increasing the calcium content prevents the formation of the orthorhombic phases, and pigeonite ((Ca,Mg,Fe)(Mg,Fe)Si2O6) only crystallises in the monoclinic system. There is not complete solid solution in calcium content, and Mg-Fe-Ca pyroxenes with calcium contents between about 15 and 25 mol.% are not stable with respect to a pair of exsolved crystals. This leads to a miscibility gap between pigeonite and augite compositions. There is an arbitrary separation between augite and the diopside-hedenbergite (Ca(Mg,Fe)Si2O6) solid solution. The divide is taken at >45 mol.% Ca. As the calcium ion cannot occupy the Y site, pyroxenes with more than 50 mol.% calcium are not possible. A related mineral, wollastonite, has the formula of the hypothetical calcium end member (CaSiO3), but important structural differences mean that it is instead classified as a pyroxenoid.
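As a rough illustration of the quadrilateral naming rules just described, the following sketch assigns a name from calcium content alone; it is not from any cited source, the function name and cut-off handling are assumptions, and real classification also uses the Mg/Fe ratio (for example to separate enstatite from ferrosilite or diopside from hedenbergite), which is ignored here.

```python
# Illustrative sketch of the 'pyroxene quadrilateral' naming rules quoted in the text,
# keyed only on calcium content (mol.%); the Mg/Fe axis is deliberately ignored.

def quad_pyroxene_name(ca_mol_percent: float) -> str:
    ca = ca_mol_percent
    if not 0 <= ca <= 50:
        raise ValueError("Ca cannot exceed 50 mol.% in quadrilateral pyroxenes")
    if ca <= 5:
        return "enstatite-ferrosilite series"
    if ca < 15:
        return "pigeonite"
    if ca < 25:
        return "miscibility gap (exsolves to pigeonite + augite)"
    if ca <= 45:
        return "augite"
    return "diopside-hedenbergite series"

print(quad_pyroxene_name(2))    # enstatite-ferrosilite series
print(quad_pyroxene_name(48))   # diopside-hedenbergite series
```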
Magnesium, calcium and iron are by no means the only cations that can occupy the X and Y sites in the pyroxene structure. A second important series of pyroxene minerals are the sodium-rich pyroxenes, corresponding to the 'pyroxene triangle' nomenclature. The inclusion of sodium, which has a charge of +1, into the pyroxene implies the need for a mechanism to make up the "missing" positive charge. In jadeite and aegirine this is added by the inclusion of a +3 cation (aluminium and iron(III) respectively) on the Y site. Sodium pyroxenes with more than 20 mol.% calcium, magnesium or iron(II) components are known as omphacite and aegirine-augite. With 80% or more of these components the pyroxene is classified using the quadrilateral diagram.
A wide range of other cations can be accommodated in the different sites of pyroxene structures.
In assigning ions to sites, the basic rule is to work from left to right in this table, first assigning all silicon to the T site and then filling the site with the remaining aluminium and finally iron(III); extra aluminium or iron can be accommodated in the Y site and bulkier ions on the X site.
Not all the resulting mechanisms to achieve charge neutrality follow the sodium example above, and there are several alternative schemes:
Coupled substitutions of 1+ and 3+ ions on the X and Y sites respectively. For example, Na and Al give the jadeite (NaAlSi2O6) composition.
Coupled substitution of a 1+ ion on the X site and a mixture of equal numbers of 2+ and 4+ ions on the Y site. This leads to e.g., .
The Tschermak substitution where a 3+ ion occupies the Y site and a T site leading to e.g., .
In nature, more than one substitution may be found in the same mineral.
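To make the charge-balance bookkeeping behind these substitution schemes concrete, here is a minimal sketch, not taken from the text: in a pyroxene formula unit X Y T2 O6, the six O(2-) anions carry a total charge of -12, so the cation charges must sum to +12. The small charge table and the example compositions are illustrative assumptions.

```python
# Check whether a candidate set of pyroxene cations balances the -12 anion charge.
CHARGES = {"Na": 1, "Li": 1, "Ca": 2, "Mg": 2, "Fe2": 2, "Mn": 2,
           "Al": 3, "Fe3": 3, "Cr": 3, "Sc": 3, "Ti": 4, "Si": 4}

def is_charge_balanced(cations):
    """cations: dict mapping ion name -> atoms per formula unit (X + Y + 2 T sites)."""
    total = sum(CHARGES[ion] * n for ion, n in cations.items())
    return abs(total - 12) < 1e-9

# Jadeite, NaAlSi2O6: +1 + 3 + 2*4 = +12, so it balances.
print(is_charge_balanced({"Na": 1, "Al": 1, "Si": 2}))                # True
# A coupled substitution with equal 2+ and 4+ ions on Y, e.g. Na(Mg0.5Ti0.5)Si2O6.
print(is_charge_balanced({"Na": 1, "Mg": 0.5, "Ti": 0.5, "Si": 2}))   # True
```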
Pyroxene minerals
Clinopyroxenes (monoclinic)
Aegirine,
Augite,
Clinoenstatite,
Diopside,
Esseneite,
Hedenbergite,
Jadeite,
Jervisite,
Johannsenite,
Kanoite,
Kosmochlor,
Namansilite,
Natalyite,
Omphacite,
Petedunnite,
Pigeonite,
Spodumene,
Orthopyroxenes (orthorhombic)
Enstatite,
Bronzite, intermediate between enstatite and hypersthene
Hypersthene,
Eulite, intermediate between hypersthene and ferrosilite
Ferrosilite,
Donpeacorite,
Nchwaningite,
| Physical sciences | Mineralogy | null |
36563 | https://en.wikipedia.org/wiki/Magnetic%20field | Magnetic field | A magnetic field (sometimes called B-field) is a physical field that describes the magnetic influence on moving electric charges, electric currents, and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field. A permanent magnet's magnetic field pulls on ferromagnetic materials such as iron, and attracts or repels other magnets. In addition, a nonuniform magnetic field exerts minuscule forces on "nonmagnetic" materials by three other magnetic effects: paramagnetism, diamagnetism, and antiferromagnetism, although these forces are usually so small they can only be detected by laboratory equipment. Magnetic fields surround magnetized materials, electric currents, and electric fields varying in time. Since both strength and direction of a magnetic field may vary with location, it is described mathematically by a function assigning a vector to each point of space, called a vector field (more precisely, a pseudovector field).
In electromagnetics, the term magnetic field is used for two distinct but closely related vector fields denoted by the symbols B and H. In the International System of Units, the unit of B, magnetic flux density, is the tesla (in SI base units: kilogram per second squared per ampere), which is equivalent to newton per meter per ampere. The unit of H, magnetic field strength, is ampere per meter (A/m). B and H differ in how they take the medium and/or magnetization into account. In vacuum, the two fields are related through the vacuum permeability, μ0; in a magnetized material, the quantities on each side of this equation differ by the magnetization field of the material.
Magnetic fields are produced by moving electric charges and the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. Magnetic fields and electric fields are interrelated and are both components of the electromagnetic force, one of the four fundamental forces of nature.
Magnetic fields are used throughout modern technology, particularly in electrical engineering and electromechanics. Rotating magnetic fields are used in both electric motors and generators. The interaction of magnetic fields in electric devices such as transformers is conceptualized and investigated as magnetic circuits. Magnetic forces give information about the charge carriers in a material through the Hall effect. The Earth produces its own magnetic field, which shields the Earth's ozone layer from the solar wind and is important in navigation using a compass.
Description
The force on an electric charge depends on its location, speed, and direction; two vector fields are used to describe this force. The first is the electric field, which describes the force acting on a stationary charge and gives the component of the force that is independent of motion. The magnetic field, in contrast, describes the component of the force that is proportional to both the speed and direction of charged particles. The field is defined by the Lorentz force law and is, at each instant, perpendicular to both the motion of the charge and the force it experiences.
There are two different, but closely related vector fields which are both sometimes called the "magnetic field", written B and H. While both the best names for these fields and the exact interpretation of what these fields represent have been the subject of long-running debate, there is wide agreement about how the underlying physics works. Historically, the term "magnetic field" was reserved for H while using other terms for B, but many recent textbooks use the term "magnetic field" to describe B as well as or in place of H.
There are many alternative names for both (see sidebars).
The B-field
The magnetic field vector B at any point can be defined as the vector that, when used in the Lorentz force law, correctly predicts the force on a charged particle at that point:

F = qE + qv × B

Here F is the force on the particle, q is the particle's electric charge, E is the external electric field, v is the particle's velocity, B is the magnetic field, and × denotes the cross product. The direction of force on the charge can be determined by a mnemonic known as the right-hand rule (see the figure). Using the right hand, pointing the thumb in the direction of the current, and the fingers in the direction of the magnetic field, the resulting force on the charge points outwards from the palm. The force on a negatively charged particle is in the opposite direction. If both the speed and the charge are reversed then the direction of the force remains the same. For that reason a magnetic field measurement (by itself) cannot distinguish whether there is a positive charge moving to the right or a negative charge moving to the left. (Both of these cases produce the same current.) On the other hand, a magnetic field combined with an electric field can distinguish between these, see Hall effect below.
The first term in the Lorentz equation is from the theory of electrostatics, and says that a particle of charge q in an electric field E experiences an electric force:
F_electric = qE
The second term is the magnetic force:
F_magnetic = qv × B
Using the definition of the cross product, the magnetic force can also be written as a scalar equation:
F_magnetic = qvB sin(θ)
where F_magnetic, v, and B are the scalar magnitudes of their respective vectors, and θ is the angle between the velocity of the particle and the magnetic field. The vector B is defined as the vector field necessary to make the Lorentz force law correctly describe the motion of a charged particle. In other words, B is whatever vector field, when substituted into the Lorentz force law, correctly reproduces the observed force on a test charge.
The B field can also be defined by the torque on a magnetic dipole, m.
The SI unit of B is the tesla (symbol: T). The Gaussian-cgs unit of B is the gauss (symbol: G). (The conversion is 1 T ≘ 10000 G.) One nanotesla corresponds to 1 gamma (symbol: γ).
The H-field
The magnetic field H is defined:
H = B/μ0 − M
where μ0 is the vacuum permeability, and M is the magnetization vector. In a vacuum, B and H are proportional to each other. Inside a material they are different (see H and B inside and outside magnetic materials). The SI unit of the H-field is the ampere per metre (A/m), and the CGS unit is the oersted (Oe).
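A short sketch of the defining relation H = B/μ0 − M, using assumed values for B and M inside a magnetized material (the numbers are illustrative only):

```python
import numpy as np

mu_0 = 4e-7 * np.pi              # vacuum permeability, T*m/A
B = np.array([0.0, 0.0, 1.2])    # flux density inside the material, T
M = np.array([0.0, 0.0, 8.0e5])  # magnetization, A/m

H = B / mu_0 - M                 # H-field, A/m
print(H)                         # about [0, 0, 1.5e5] A/m

# In vacuum (M = 0) the two fields are simply proportional: B = mu_0 * H.
H_vac = B / mu_0
print(np.allclose(B, mu_0 * H_vac))   # True
```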
Measurement
An instrument used to measure the local magnetic field is known as a magnetometer. Important classes of magnetometers include induction magnetometers (or search-coil magnetometers), which measure only varying magnetic fields, rotating-coil magnetometers, Hall effect magnetometers, NMR magnetometers, SQUID magnetometers, and fluxgate magnetometers. The magnetic fields of distant astronomical objects are measured through their effects on local charged particles. For instance, electrons spiraling around a field line produce synchrotron radiation that is detectable in radio waves. The finest precision for a magnetic field measurement was attained by Gravity Probe B.
Visualization
The field can be visualized by a set of magnetic field lines, that follow the direction of the field at each point. The lines can be constructed by measuring the strength and direction of the magnetic field at a large number of points (or at every point in space). Then, mark each location with an arrow (called a vector) pointing in the direction of the local magnetic field with its magnitude proportional to the strength of the magnetic field. Connecting these arrows then forms a set of magnetic field lines. The direction of the magnetic field at any point is parallel to the direction of nearby field lines, and the local density of field lines can be made proportional to its strength. Magnetic field lines are like streamlines in fluid flow, in that they represent a continuous distribution, and a different resolution would show more or fewer lines.
An advantage of using magnetic field lines as a representation is that many laws of magnetism (and electromagnetism) can be stated completely and concisely using simple concepts such as the "number" of field lines through a surface. These concepts can be quickly "translated" to their mathematical form. For example, the number of field lines through a given surface is the surface integral of the magnetic field.
Various phenomena "display" magnetic field lines as though the field lines were physical phenomena. For example, iron filings placed in a magnetic field form lines that correspond to "field lines". Magnetic field "lines" are also visually displayed in polar auroras, in which plasma particle dipole interactions create visible streaks of light that line up with the local direction of Earth's magnetic field.
Field lines can be used as a qualitative tool to visualize magnetic forces. In ferromagnetic substances like iron and in plasmas, magnetic forces can be understood by imagining that the field lines exert a tension (like a rubber band) along their length, and a pressure perpendicular to their length on neighboring field lines. "Unlike" poles of magnets attract because they are linked by many field lines; "like" poles repel because their field lines do not meet, but run parallel, pushing on each other.
Magnetic field of permanent magnets
Permanent magnets are objects that produce their own persistent magnetic fields. They are made of ferromagnetic materials, such as iron and nickel, that have been magnetized, and they have both a north and a south pole.
The magnetic field of permanent magnets can be quite complicated, especially near the magnet. The magnetic field of a small straight magnet is proportional to the magnet's strength (called its magnetic dipole moment m). The equations are non-trivial and depend on the distance from the magnet and the orientation of the magnet. For simple magnets, m points in the direction of a line drawn from the south to the north pole of the magnet. Flipping a bar magnet is equivalent to rotating its m by 180 degrees.
The magnetic field of larger magnets can be obtained by modeling them as a collection of a large number of small magnets called dipoles each having their own m. The magnetic field produced by the magnet then is the net magnetic field of these dipoles; any net force on the magnet is a result of adding up the forces on the individual dipoles.
There are two simplified models for the nature of these dipoles: the magnetic pole model and the Amperian loop model. These two models produce two different magnetic fields, H and B. Outside a material, though, the two are identical (to a multiplicative constant) so that in many cases the distinction can be ignored. This is particularly true for magnetic fields, such as those due to electric currents, that are not generated by magnetic materials.
A realistic model of magnetism is more complicated than either of these models; neither model fully explains why materials are magnetic. The monopole model has no experimental support. The Amperian loop model explains some, but not all of a material's magnetic moment. The model predicts that the motion of electrons within an atom is connected to those electrons' orbital magnetic dipole moment, and these orbital moments do contribute to the magnetism seen at the macroscopic level. However, the motion of electrons is not classical, and the spin magnetic moment of electrons (which is not explained by either model) is also a significant contribution to the total moment of magnets.
Magnetic pole model
Historically, early physics textbooks would model the force and torques between two magnets as due to magnetic poles repelling or attracting each other in the same manner as the Coulomb force between electric charges. At the microscopic level, this model contradicts the experimental evidence, and the pole model of magnetism is no longer the typical way to introduce the concept. However, it is still sometimes used as a macroscopic model for ferromagnetism due to its mathematical simplicity.
In this model, a magnetic H-field is produced by fictitious magnetic charges that are spread over the surface of each pole. These magnetic charges are in fact related to the magnetization field M. The H-field, therefore, is analogous to the electric field E, which starts at a positive electric charge and ends at a negative electric charge. Near the north pole, therefore, all H-field lines point away from the north pole (whether inside the magnet or out) while near the south pole all H-field lines point toward the south pole (whether inside the magnet or out). A north pole, then, feels a force in the direction of the H-field while the force on the south pole is opposite to the H-field.
In the magnetic pole model, the elementary magnetic dipole m is formed by two opposite magnetic poles of pole strength q_m separated by a small distance vector d, such that m = q_m d. The magnetic pole model predicts correctly the field H both inside and outside magnetic materials, in particular the fact that H is opposite to the magnetization field M inside a permanent magnet.
Since it is based on the fictitious idea of a magnetic charge density, the pole model has limitations. Magnetic poles cannot exist apart from each other as electric charges can, but always come in north–south pairs. If a magnetized object is divided in half, a new pole appears on the surface of each piece, so each has a pair of complementary poles. The magnetic pole model does not account for magnetism that is produced by electric currents, nor the inherent connection between angular momentum and magnetism.
The pole model usually treats magnetic charge as a mathematical abstraction, rather than a physical property of particles. However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). In other words, it would possess a "magnetic charge" analogous to an electric charge. Magnetic field lines would start or end on magnetic monopoles, so if they exist, they would give exceptions to the rule that magnetic field lines neither start nor end. Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed.
Amperian loop model
In the model developed by Ampere, the elementary magnetic dipole that makes up all magnets is a sufficiently small Amperian loop with current I and loop area A. The dipole moment of this loop is m = IA.
These magnetic dipoles produce a magnetic B-field.
The magnetic field of a magnetic dipole is depicted in the figure. From outside, the ideal magnetic dipole is identical to that of an ideal electric dipole of the same strength. Unlike the electric dipole, a magnetic dipole is properly modeled as a current loop having a current I and an area A. Such a current loop has a magnetic moment of
m = IA
where the direction of m is perpendicular to the area of the loop and depends on the direction of the current using the right-hand rule. An ideal magnetic dipole is modeled as a real magnetic dipole whose area A has been reduced to zero and its current I increased to infinity such that the product m = IA is finite. This model clarifies the connection between angular momentum and magnetic moment, which is the basis of the Einstein–de Haas effect, rotation by magnetization, and its inverse, the Barnett effect, or magnetization by rotation. Rotating the loop faster (in the same direction) increases the current and therefore the magnetic moment, for example.
Interactions with magnets
Force between magnets
Specifying the force between two small magnets is quite complicated because it depends on the strength and orientation of both magnets and their distance and direction relative to each other. The force is particularly sensitive to rotations of the magnets due to magnetic torque. The force on each magnet depends on its magnetic moment and the magnetic field of the other.
To understand the force between magnets, it is useful to examine the magnetic pole model given above. In this model, the H-field of one magnet pushes and pulls on both poles of a second magnet. If this H-field is the same at both poles of the second magnet then there is no net force on that magnet since the force is opposite for opposite poles. If, however, the magnetic field of the first magnet is nonuniform (such as the H-field near one of its poles), each pole of the second magnet sees a different field and is subject to a different force. This difference in the two forces moves the magnet in the direction of increasing magnetic field and may also cause a net torque.
This is a specific example of a general rule that magnets are attracted (or repulsed depending on the orientation of the magnet) into regions of higher magnetic field. Any non-uniform magnetic field, whether caused by permanent magnets or electric currents, exerts a force on a small magnet in this way.
The details of the Amperian loop model are different and more complicated but yield the same result: that magnetic dipoles are attracted/repelled into regions of higher magnetic field. Mathematically, the force on a small magnet having a magnetic moment m due to a magnetic field B is:
F = ∇(m⋅B)
where the gradient ∇ is the change of the quantity m⋅B per unit distance and the direction is that of maximum increase of m⋅B. The dot product m⋅B = mB cos(θ), where m and B represent the magnitude of the m and B vectors and θ is the angle between them. If m is in the same direction as B then the dot product is positive and the gradient points "uphill", pulling the magnet into regions of higher B-field (more strictly, larger m⋅B). This equation is strictly only valid for magnets of zero size, but is often a good approximation for not too large magnets. The magnetic force on larger magnets is determined by dividing them into smaller regions each having their own m then summing up the forces on each of these very small regions.
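As a rough numerical illustration of F = ∇(m⋅B), the sketch below evaluates the gradient by finite differences for a small dipole in a field that weakens with height; the field model and the moment value are assumptions chosen only to show the "pulled toward the stronger field" behaviour:

```python
import numpy as np

m = np.array([0.0, 0.0, 0.1])            # dipole moment, A*m^2

def B(r):
    # toy field that weakens with height z, in teslas
    return np.array([0.0, 0.0, 1.0e-3 / r[2]**3])

def force(r, h=1e-6):
    # finite-difference gradient of the scalar m.B
    grad = np.zeros(3)
    for i in range(3):
        dr = np.zeros(3)
        dr[i] = h
        grad[i] = (m @ B(r + dr) - m @ B(r - dr)) / (2 * h)
    return grad

print(force(np.array([0.0, 0.0, 0.1])))  # negative z-component: pulled toward the stronger field
```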
Magnetic torque on permanent magnets
If two like poles of two separate magnets are brought near each other, and one of the magnets is allowed to turn, it promptly rotates to align itself with the first. In this example, the magnetic field of the stationary magnet creates a magnetic torque on the magnet that is free to rotate. This magnetic torque tends to align a magnet's poles with the magnetic field lines. A compass, therefore, turns to align itself with Earth's magnetic field.
In terms of the pole model, two equal and opposite magnetic charges experiencing the same H also experience equal and opposite forces. Since these equal and opposite forces are in different locations, this produces a torque proportional to the distance (perpendicular to the force) between them. With the definition of m as the pole strength times the distance between the poles, this leads to τ = μ0 m H sin(θ), where μ0 is a constant called the vacuum permeability, measuring 4π × 10^−7 V·s/(A·m), and θ is the angle between H and m.
Mathematically, the torque τ on a small magnet is proportional both to the applied magnetic field and to the magnetic moment m of the magnet:
τ = m × B
where × represents the vector cross product. This equation includes all of the qualitative information included above. There is no torque on a magnet if m is in the same direction as the magnetic field, since the cross product is zero for two vectors that are in the same direction. Further, all other orientations feel a torque that twists them toward the direction of the magnetic field.
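A small sketch of the torque relation τ = m × B with assumed values; it also checks that the torque vanishes when the moment is aligned with the field:

```python
import numpy as np

B = np.array([0.0, 0.0, 0.05])     # field along +z, T
m = np.array([0.02, 0.0, 0.0])     # moment along +x, A*m^2

print(np.cross(m, B))              # torque, N*m: twists m toward the field direction

m_aligned = np.array([0.0, 0.0, 0.02])
print(np.cross(m_aligned, B))      # [0, 0, 0]: no torque when m is parallel to B
```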
Interactions with electric currents
Currents of electric charges both generate a magnetic field and feel a force due to magnetic B-fields.
Magnetic field due to moving charges and electric currents
All moving charged particles produce magnetic fields. Moving point charges, such as electrons, produce complicated but well known magnetic fields that depend on the charge, velocity, and acceleration of the particles.
Magnetic field lines form in concentric circles around a cylindrical current-carrying conductor, such as a length of wire. The direction of such a magnetic field can be determined by using the "right-hand grip rule" (see figure at right). The strength of the magnetic field decreases with distance from the wire. (For an infinite length wire the strength is inversely proportional to the distance.)
Bending a current-carrying wire into a loop concentrates the magnetic field inside the loop while weakening it outside. Bending a wire into multiple closely spaced loops to form a coil or "solenoid" enhances this effect. A device so formed around an iron core may act as an electromagnet, generating a strong, well-controlled magnetic field. An infinitely long cylindrical electromagnet has a uniform magnetic field inside, and no magnetic field outside. A finite length electromagnet produces a magnetic field that looks similar to that produced by a uniform permanent magnet, with its strength and polarity determined by the current flowing through the coil.
The magnetic field generated by a steady current I (a constant flow of electric charges, in which charge neither accumulates nor is depleted at any point) is described by the Biot–Savart law:
B = (μ0 / 4π) ∫ (I dℓ × r̂) / r²
where the integral sums over the wire length, the vector dℓ is the vector line element with direction in the same sense as the current I, μ0 is the magnetic constant, r is the distance between the location of dℓ and the location where the magnetic field is calculated, and r̂ is a unit vector in the direction of r. For example, in the case of a sufficiently long, straight wire, this becomes:
B = μ0 I / (2πr)
where r = |r|. The direction is tangent to a circle perpendicular to the wire according to the right-hand rule.
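A quick numerical check of the long-straight-wire result B = μ0 I / (2πr), with an assumed current and distance:

```python
import math

mu_0 = 4e-7 * math.pi   # T*m/A
I = 10.0                # current, A
r = 0.01                # distance from the wire, m

B = mu_0 * I / (2 * math.pi * r)
print(f"{B:.2e} T")     # 2.00e-04 T, i.e. about 2 gauss
```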
A slightly more general way of relating the current I to the B-field is through Ampère's law:
∮ B ⋅ dℓ = μ0 I_enc
where the line integral is over any arbitrary loop and I_enc is the current enclosed by that loop. Ampère's law is always valid for steady currents and can be used to calculate the B-field for certain highly symmetric situations such as an infinite wire or an infinite solenoid.
In a modified form that accounts for time varying electric fields, Ampère's law is one of four Maxwell's equations that describe electricity and magnetism.
Force on moving charges and current
Force on a charged particle
A charged particle moving in a B-field experiences a sideways force that is proportional to the strength of the magnetic field, the component of the velocity that is perpendicular to the magnetic field and the charge of the particle. This force is known as the Lorentz force, and is given by
F = qv × B
where F is the force, q is the electric charge of the particle, v is the instantaneous velocity of the particle, and B is the magnetic field (in teslas).
The Lorentz force is always perpendicular to both the velocity of the particle and the magnetic field that created it. When a charged particle moves in a static magnetic field, it traces a helical path in which the helix axis is parallel to the magnetic field, and in which the speed of the particle remains constant. Because the magnetic force is always perpendicular to the motion, the magnetic field can do no work on an isolated charge. It can only do work indirectly, via the electric field generated by a changing magnetic field. It is often claimed that the magnetic force can do work on a non-elementary magnetic dipole, or on charged particles whose motion is constrained by other forces, but this is incorrect because the work in those cases is performed by the electric forces of the charges deflected by the magnetic field.
Force on current-carrying wire
The force on a current carrying wire is similar to that of a moving charge as expected since a current carrying wire is a collection of moving charges. A current-carrying wire feels a force in the presence of a magnetic field. The Lorentz force on a macroscopic current is often referred to as the Laplace force.
Consider a conductor of length ℓ, cross section A, and charge q due to electric current I. If this conductor is placed in a magnetic field of magnitude B that makes an angle θ with the velocity of charges in the conductor, the force exerted on a single charge q is
f = qvB sin(θ)
so, for N charges, where
N = nℓA,
the force exerted on the conductor is
F = fN = qvBnℓA sin(θ) = BIℓ sin(θ),
where I = nqvA, with n the number of charge carriers per unit volume.
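A minimal sketch of the resulting wire-force formula F = BIℓ sin(θ), with assumed values for the field, current, length, and angle:

```python
import math

B = 0.5                      # field magnitude, T
I = 3.0                      # current, A
L = 0.2                      # length of wire in the field, m
theta = math.radians(90.0)   # wire perpendicular to the field

F = B * I * L * math.sin(theta)
print(f"{F:.2f} N")          # 0.30 N
```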
Relation between H and B
The formulas derived for the magnetic field above are correct when dealing with the entire current. A magnetic material placed inside a magnetic field, though, generates its own bound current, which can be a challenge to calculate. (This bound current is due to the sum of atomic-sized current loops and the spin of the subatomic particles such as electrons that make up the material.) The H-field as defined above helps factor out this bound current; but to see how, it helps to introduce the concept of magnetization first.
Magnetization
The magnetization vector field M represents how strongly a region of material is magnetized. It is defined as the net magnetic dipole moment per unit volume of that region. The magnetization of a uniform magnet is therefore a material constant, equal to the magnetic moment m of the magnet divided by its volume. Since the SI unit of magnetic moment is A⋅m^2, the SI unit of magnetization M is ampere per meter, identical to that of the H-field.
The magnetization field of a region points in the direction of the average magnetic dipole moment in that region. Magnetization field lines, therefore, begin near the magnetic south pole and end near the magnetic north pole. (Magnetization does not exist outside the magnet.)
In the Amperian loop model, the magnetization is due to combining many tiny Amperian loops to form a resultant current called bound current. This bound current, then, is the source of the magnetic B field due to the magnet. Given the definition of the magnetic dipole, the magnetization field follows a similar law to that of Ampere's law:
∮ M ⋅ dℓ = I_b
where the integral is a line integral over any closed loop and I_b is the bound current enclosed by that closed loop.
In the magnetic pole model, magnetization begins at and ends at magnetic poles. If a given region, therefore, has a net positive "magnetic pole strength" (corresponding to a north pole) then it has more magnetization field lines entering it than leaving it. Mathematically this is equivalent to:
∮_S μ0 M ⋅ dA = −q_M
where the integral is a closed surface integral over the closed surface S and q_M is the "magnetic charge" (in units of magnetic flux) enclosed by S. (A closed surface completely surrounds a region with no holes to let any field lines escape.) The negative sign occurs because the magnetization field moves from south to north.
H-field and magnetic materials
In SI units, the H-field is related to the B-field by
H = B/μ0 − M
In terms of the H-field, Ampere's law is
∮ H ⋅ dℓ = I_f
where I_f represents the 'free current' enclosed by the loop, so that the line integral of H does not depend at all on the bound currents.
For the differential equivalent of this equation see Maxwell's equations. Ampere's law leads to the boundary condition
(H1∥ − H2∥) = K_f × n̂
where K_f is the surface free current density and the unit normal n̂ points in the direction from medium 2 to medium 1.
Similarly, a surface integral of H over any closed surface is independent of the free currents and picks out the "magnetic charges" within that closed surface:
∮_S μ0 H ⋅ dA = q_M
which does not depend on the free currents.
The H-field, therefore, can be separated into two independent parts:
H = H_0 + H_d
where H_0 is the applied magnetic field due only to the free currents and H_d is the demagnetizing field due only to the bound currents.
The magnetic H-field, therefore, re-factors the bound current in terms of "magnetic charges". The H-field lines loop only around "free current" and, unlike the magnetic B field, begin and end near magnetic poles as well.
Magnetism
Most materials respond to an applied B-field by producing their own magnetization M and therefore their own B-fields. Typically, the response is weak and exists only when the magnetic field is applied. The term magnetism describes how materials respond on the microscopic level to an applied magnetic field and is used to categorize the magnetic phase of a material. Materials are divided into groups based upon their magnetic behavior:
Diamagnetic materials produce a magnetization that opposes the magnetic field.
Paramagnetic materials produce a magnetization in the same direction as the applied magnetic field.
Ferromagnetic materials and the closely related ferrimagnetic materials and antiferromagnetic materials can have a magnetization independent of an applied B-field with a complex relationship between the two fields.
Superconductors (and ferromagnetic superconductors) are materials that are characterized by perfect conductivity below a critical temperature and magnetic field. They also are highly magnetic and can be perfect diamagnets below a lower critical magnetic field. Superconductors often have a broad range of temperatures and magnetic fields (the so-named mixed state) under which they exhibit a complex hysteretic dependence of M on H.
In the case of paramagnetism and diamagnetism, the magnetization M is often proportional to the applied magnetic field such that:
B = μH
where μ is a material dependent parameter called the permeability. In some cases the permeability may be a second rank tensor so that B may not point in the same direction as H. These relations between B and H are examples of constitutive equations. However, superconductors and ferromagnets have a more complex B-to-H relation; see magnetic hysteresis.
Stored energy
Energy is needed to generate a magnetic field both to work against the electric field that a changing magnetic field creates and to change the magnetization of any material within the magnetic field. For non-dispersive materials, this same energy is released when the magnetic field is destroyed so that the energy can be modeled as being stored in the magnetic field.
For linear, non-dispersive materials (such that B = μH where μ is frequency-independent), the energy density is:
u = (B ⋅ H) / 2 = B² / (2μ) = μH² / 2
If there are no magnetic materials around then μ can be replaced by μ0. The above equation cannot be used for nonlinear materials, though; a more general expression given below must be used.
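A one-line numerical example of the vacuum energy density u = B²/(2μ0), for an assumed 1 T field:

```python
import math

mu_0 = 4e-7 * math.pi
B = 1.0                       # field magnitude, T

u = B**2 / (2 * mu_0)         # energy density, J/m^3
print(f"{u:.3e} J/m^3")       # about 3.979e+05 J/m^3
```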
In general, the incremental amount of work per unit volume δW needed to cause a small change of magnetic field δB is:
δW = H ⋅ δB
Once the relationship between H and B is known this equation is used to determine the work needed to reach a given magnetic state. For hysteretic materials such as ferromagnets and superconductors, the work needed also depends on how the magnetic field is created. For linear non-dispersive materials, though, the general equation leads directly to the simpler energy density equation given above.
Appearance in Maxwell's equations
Like all vector fields, a magnetic field has two important mathematical properties that relate it to its sources. (For B the sources are currents and changing electric fields.) These two properties, along with the two corresponding properties of the electric field, make up Maxwell's Equations. Maxwell's Equations together with the Lorentz force law form a complete description of classical electrodynamics including both electricity and magnetism.
The first property is the divergence of a vector field A, ∇ ⋅ A, which represents how A "flows" outward from a given point. As discussed above, a B-field line never starts or ends at a point but instead forms a complete loop. This is mathematically equivalent to saying that the divergence of B is zero. (Such vector fields are called solenoidal vector fields.) This property is called Gauss's law for magnetism and is equivalent to the statement that there are no isolated magnetic poles or magnetic monopoles.
The second mathematical property is called the curl, such that ∇ × A represents how A curls or "circulates" around a given point. The result of the curl is called a "circulation source". The equations for the curl of B and of E are called the Ampère–Maxwell equation and Faraday's law respectively.
Gauss' law for magnetism
One important property of the B-field produced this way is that magnetic B-field lines neither start nor end (mathematically, B is a solenoidal vector field); a field line may only extend to infinity, or wrap around to form a closed curve, or follow a never-ending (possibly chaotic) path. Magnetic field lines exit a magnet near its north pole and enter near its south pole, but inside the magnet B-field lines continue through the magnet from the south pole back to the north. If a B-field line enters a magnet somewhere it has to leave somewhere else; it is not allowed to have an end point.
More formally, since all the magnetic field lines that enter any given region must also leave that region, subtracting the "number" of field lines that enter the region from the number that exit gives identically zero. Mathematically this is equivalent to Gauss's law for magnetism:
∮_S B ⋅ dA = 0
where the integral is a surface integral over the closed surface S (a closed surface is one that completely surrounds a region with no holes to let any field lines escape). Since dA points outward, the dot product in the integral is positive for B-field pointing out and negative for B-field pointing in.
Faraday's Law
A changing magnetic field, such as a magnet moving through a conducting coil, generates an electric field (and therefore tends to drive a current in such a coil). This is known as Faraday's law and forms the basis of many electrical generators and electric motors. Mathematically, Faraday's law is:
ℰ = −dΦ/dt
where ℰ is the electromotive force (or EMF, the voltage generated around a closed loop) and Φ is the magnetic flux, the product of the area times the magnetic field normal to that area. (This definition of magnetic flux is why B is often referred to as magnetic flux density.) The negative sign represents the fact that any current generated by a changing magnetic field in a coil produces a magnetic field that opposes the change in the magnetic field that induced it. This phenomenon is known as Lenz's law. This integral formulation of Faraday's law can be converted into a differential form, which applies under slightly different conditions.
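A sketch of Faraday's law applied to an N-turn coil (the extra factor of N for a coil is a standard extension of the single-loop law, and the coil geometry and field ramp below are illustrative assumptions); the EMF is estimated with a finite difference of the flux:

```python
N = 100                 # turns in the coil
A = 0.01                # coil area, m^2

def B(t):
    return 0.2 * t      # field ramping at 0.2 T/s

def emf(t, dt=1e-6):
    phi = lambda tt: B(tt) * A                       # flux through one turn, Wb
    return -N * (phi(t + dt) - phi(t - dt)) / (2 * dt)

print(f"{emf(1.0):.3f} V")   # -0.200 V; the sign reflects Lenz's law
```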
Ampère's Law and Maxwell's correction
Similar to the way that a changing magnetic field generates an electric field, a changing electric field generates a magnetic field. This fact is known as Maxwell's correction to Ampère's law and is applied as an additive term to Ampere's law as given above. This additional term is proportional to the time rate of change of the electric flux and is similar to Faraday's law above but with a different and positive constant out front. (The electric flux through an area is proportional to the area times the perpendicular part of the electric field.)
The full law including the correction term is known as the Maxwell–Ampère equation. It is not commonly given in integral form because the effect is so small that it can typically be ignored in most cases where the integral form is used.
The Maxwell term is critically important in the creation and propagation of electromagnetic waves. Maxwell's correction to Ampère's Law together with Faraday's law of induction describes how mutually changing electric and magnetic fields interact to sustain each other and thus to form electromagnetic waves, such as light: a changing electric field generates a changing magnetic field, which generates a changing electric field again. These, though, are usually described using the differential form of this equation given below.
∇ × B = μ0 J + μ0 ε0 ∂E/∂t
where J is the complete microscopic current density, and ε0 is the vacuum permittivity.
As discussed above, materials respond to an applied electric field E and an applied magnetic field B by producing their own internal "bound" charge and current distributions that contribute to E and B but are difficult to calculate. To circumvent this problem, H and D fields are used to re-factor Maxwell's equations in terms of the free current density J_f:
∇ × H = J_f + ∂D/∂t
These equations are not any more general than the original equations (if the "bound" charges and currents in the material are known). They also must be supplemented by the relationship between B and H as well as that between E and D. On the other hand, for simple relationships between these quantities this form of Maxwell's equations can circumvent the need to calculate the bound charges and currents.
Formulation in special relativity and quantum electrodynamics
Relativistic electrodynamics
As different aspects of the same phenomenon
According to the special theory of relativity, the partition of the electromagnetic force into separate electric and magnetic components is not fundamental, but varies with the observational frame of reference: An electric force perceived by one observer may be perceived by another (in a different frame of reference) as a magnetic force, or a mixture of electric and magnetic forces.
That a magnetic field in one frame appears as an electric field in another frame can be shown from the consistency of the equations obtained by Lorentz transformation of the four-force from Coulomb's law in the particle's rest frame with Maxwell's laws, taking the definition of the fields from the Lorentz force and assuming a non-accelerating source. The form of the magnetic field hence obtained by Lorentz transformation of the four-force from the form of Coulomb's law in the source's initial frame is given by
B = q (1 − β²) (v × r̂) / (4π ε0 c² r² (1 − β² sin²(θ))^(3/2))
where q is the charge of the point source, ε0 is the vacuum permittivity, r is the position vector from the point source to the point in space, v is the velocity vector of the charged particle, β is the ratio of the speed of the charged particle to the speed of light and θ is the angle between r and v. This form of the magnetic field can be shown to satisfy Maxwell's laws within the constraint of the particle being non-accelerating. The above reduces to the Biot–Savart law for a non-relativistic stream of current (β ≪ 1).
Formally, special relativity combines the electric and magnetic fields into a rank-2 tensor, called the electromagnetic tensor. Changing reference frames mixes these components. This is analogous to the way that special relativity mixes space and time into spacetime, and mass, momentum, and energy into four-momentum. Similarly, the energy stored in a magnetic field is mixed with the energy stored in an electric field in the electromagnetic stress–energy tensor.
Magnetic vector potential
In advanced topics such as quantum mechanics and relativity it is often easier to work with a potential formulation of electrodynamics rather than in terms of the electric and magnetic fields. In this representation, the magnetic vector potential A and the electric scalar potential φ are defined using gauge fixing such that:
B = ∇ × A,  E = −∇φ − ∂A/∂t
The vector potential A, given by this form, may be interpreted as a generalized potential momentum per unit charge just as φ is interpreted as a generalized potential energy per unit charge. There are multiple choices one can make for the potential fields that satisfy the above condition; each particular choice is specified by its respective gauge condition.
Maxwell's equations when expressed in terms of the potentials in the Lorenz gauge can be cast into a form that agrees with special relativity. In relativity, A together with φ forms a four-potential regardless of the gauge condition, analogous to the four-momentum that combines the momentum and energy of a particle. Using the four-potential instead of the electromagnetic tensor has the advantage of being much simpler, and it can be easily modified to work with quantum mechanics.
Propagation of Electric and Magnetic fields
The special theory of relativity imposes the condition that events related by cause and effect be time-like separated, that is, that causal efficacy propagates no faster than light. Maxwell's equations for electromagnetism are consistent with this, as electric and magnetic disturbances are found to travel at the speed of light in space. Electric and magnetic fields in classical electrodynamics obey the principle of locality and are expressed in terms of retarded time, the time at which the cause of a measured field originated, given that the influence of the field travels at the speed of light. The retarded time for a point particle is given as the solution of:
t_r = t − |r − r_s(t_r)| / c
where t_r is the retarded time, or the time at which the source's contribution to the field originated, r_s(t) is the position vector of the particle as a function of time, r is the point in space, t is the time at which the fields are measured and c is the speed of light. The equation subtracts the time taken for light to travel from the particle to the point in space from the time of measurement to find the time of origin of the fields. The uniqueness of the solution for t_r for given r, t and r_s(t) is valid for charged particles moving slower than the speed of light.
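The retarded-time condition can be solved numerically; the sketch below uses fixed-point iteration for a charge assumed to move uniformly along x at half the speed of light (the trajectory and field point are illustrative, and the iteration converges because the particle moves slower than light):

```python
c = 2.998e8   # speed of light, m/s

def r_s(t):
    # assumed particle trajectory: uniform motion along x at 0.5c
    return (0.5 * c * t, 0.0, 0.0)

def retarded_time(r, t, iterations=60):
    t_r = t
    for _ in range(iterations):
        dx = [r[i] - r_s(t_r)[i] for i in range(3)]
        dist = sum(d * d for d in dx) ** 0.5
        t_r = t - dist / c          # update with the current distance estimate
    return t_r

# field point 10 m from the origin on the y-axis, measured at t = 1 s
print(retarded_time((0.0, 10.0, 0.0), 1.0))   # about 2/3 s for this trajectory
```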
Magnetic field of arbitrary moving point charge
The solution of Maxwell's equations for the electric and magnetic field of a point charge is expressed in terms of retarded time or the time at which the particle in the past causes the field at the point, given that the influence travels across space at the speed of light.
Any arbitrary motion of a point charge causes electric and magnetic fields that are found by solving Maxwell's equations using the Green's function for retarded potentials, yielding the Liénard–Wiechert fields,
where φ and A are the electric scalar potential and magnetic vector potential in the Lorenz gauge, q is the charge of the point source, n̂ is a unit vector pointing from the charged particle to the point in space, β is the velocity of the particle divided by the speed of light and γ is the corresponding Lorentz factor. Hence, by the principle of superposition, the fields of a system of charges also obey the principle of locality.
Quantum electrodynamics
The classical electromagnetic field incorporated into quantum mechanics forms what is known as the semi-classical theory of radiation. However, it is not able to make experimentally observed predictions such as spontaneous emission process or Lamb shift implying the need for quantization of fields. In modern physics, the electromagnetic field is understood to be not a classical field, but rather a quantum field; it is represented not as a vector of three numbers at each point, but as a vector of three quantum operators at each point. The most accurate modern description of the electromagnetic interaction (and much else) is quantum electrodynamics (QED), which is incorporated into a more complete theory known as the Standard Model of particle physics.
In QED, the magnitude of the electromagnetic interactions between charged particles (and their antiparticles) is computed using perturbation theory. These rather complex formulas produce a remarkable pictorial representation as Feynman diagrams in which virtual photons are exchanged.
Predictions of QED agree with experiments to an extremely high degree of accuracy: currently about 10^−12 (and limited by experimental errors); for details see precision tests of QED. This makes QED one of the most accurate physical theories constructed thus far.
All equations in this article are in the classical approximation, which is less accurate than the quantum description mentioned here. However, under most everyday circumstances, the difference between the two theories is negligible.
Uses and examples
Earth's magnetic field
The Earth's magnetic field is produced by convection of a liquid iron alloy in the outer core. In a dynamo process, the movements drive a feedback process in which electric currents create electric and magnetic fields that in turn act on the currents.
The field at the surface of the Earth is approximately the same as if a giant bar magnet were positioned at the center of the Earth and tilted at an angle of about 11° off the rotational axis of the Earth (see the figure). The north pole of a magnetic compass needle points roughly north, toward the North Magnetic Pole. However, because a magnetic pole is attracted to its opposite, the North Magnetic Pole is actually the south pole of the geomagnetic field. This confusion in terminology arises because the pole of a magnet is defined by the geographical direction it points.
Earth's magnetic field is not constant—the strength of the field and the location of its poles vary. Moreover, the poles periodically reverse their orientation in a process called geomagnetic reversal. The most recent reversal occurred 780,000 years ago.
Rotating magnetic fields
The rotating magnetic field is a key principle in the operation of alternating-current motors. A permanent magnet in such a field rotates so as to maintain its alignment with the external field.
Magnetic torque is used to drive electric motors. In one simple motor design, a magnet is fixed to a freely rotating shaft and subjected to a magnetic field from an array of electromagnets. By continuously switching the electric current through each of the electromagnets, thereby flipping the polarity of their magnetic fields, like poles are kept next to the rotor; the resultant torque is transferred to the shaft.
A rotating magnetic field can be constructed using two orthogonal coils with 90 degrees phase difference in their AC currents. However, in practice such a system would be supplied through a three-wire arrangement with unequal currents.
This inequality would cause serious problems in standardization of the conductor size and so, to overcome it, three-phase systems are used where the three currents are equal in magnitude and have 120 degrees phase difference. Three similar coils having mutual geometrical angles of 120 degrees create the rotating magnetic field in this case. The ability of the three-phase system to create a rotating field, utilized in electric motors, is one of the main reasons why three-phase systems dominate the world's electrical power supply systems.
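A small sketch of why three coils at 120° geometric angles, fed with currents 120° apart in phase, give a rotating field: the resulting field vector keeps a constant magnitude while its direction advances steadily. The coil amplitudes and the 50 Hz frequency are illustrative assumptions:

```python
import numpy as np

omega = 2 * np.pi * 50                    # 50 Hz supply
axes = np.radians([0.0, 120.0, 240.0])    # coil axis directions

def field(t):
    total = np.zeros(2)
    for k, a in enumerate(axes):
        i_k = np.cos(omega * t - 2 * np.pi * k / 3)          # phase-shifted current
        total += i_k * np.array([np.cos(a), np.sin(a)])      # field along the coil axis
    return total

for t in (0.0, 0.005, 0.01):
    Bx, By = field(t)
    print(f"t={t:.3f} s  |B|={np.hypot(Bx, By):.3f}  angle={np.degrees(np.arctan2(By, Bx)):.1f} deg")
# the magnitude stays at 1.5 while the angle advances steadily: a rotating field
```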
Synchronous motors use DC-voltage-fed rotor windings, which lets the excitation of the machine be controlled—and induction motors use short-circuited rotors (instead of a magnet) following the rotating magnetic field of a multicoiled stator. The short-circuited turns of the rotor develop eddy currents in the rotating field of the stator, and these currents in turn move the rotor by the Lorentz force.
The Italian physicist Galileo Ferraris and the Serbian-American electrical engineer Nikola Tesla independently researched the use of rotating magnetic fields in electric motors. In 1888, Ferraris published his research in a paper to the Royal Academy of Sciences in Turin and Tesla gained a patent for his work.
Hall effect
The charge carriers of a current-carrying conductor placed in a transverse magnetic field experience a sideways Lorentz force; this results in a charge separation in a direction perpendicular to the current and to the magnetic field. The resultant voltage in that direction is proportional to the applied magnetic field. This is known as the Hall effect.
The Hall effect is often used to measure the magnitude of a magnetic field. It is used as well to find the sign of the dominant charge carriers in materials such as semiconductors (negative electrons or positive holes).
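The proportionality mentioned above is often written with the textbook Hall-voltage formula V_H = IB/(nqt) for a thin conducting strip; that formula, and the copper-like numbers below, are assumptions added here for illustration:

```python
I = 0.5          # current through the strip, A
B = 1.0          # transverse magnetic field, T
n = 8.5e28       # charge-carrier density (roughly copper), 1/m^3
q = 1.602e-19    # carrier charge, C
t = 1e-4         # strip thickness along the field, m

V_H = I * B / (n * q * t)
print(f"{V_H:.2e} V")   # ~3.7e-07 V: tiny in metals, much larger in semiconductors
```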
Magnetic circuits
An important use of H is in magnetic circuits where B = μH inside a linear material. Here, μ is the magnetic permeability of the material. This result is similar in form to Ohm's law J = σE, where J is the current density, σ is the conductance and E is the electric field. Extending this analogy, the counterpart to the macroscopic Ohm's law (I = V/R) is:
Φ = F / R_m
where Φ is the magnetic flux in the circuit, F is the magnetomotive force applied to the circuit, and R_m is the reluctance of the circuit. Here the reluctance R_m is a quantity similar in nature to resistance for the flux. Using this analogy it is straightforward to calculate the magnetic flux of complicated magnetic field geometries, by using all the available techniques of circuit theory.
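A sketch of the circuit analogy for a simple closed core, using the standard reluctance expression R_m = l/(μA) and magnetomotive force F = NI for an N-turn winding; the dimensions, permeability, and winding values are illustrative assumptions:

```python
import math

mu_0 = 4e-7 * math.pi
mu_r = 2000.0                  # relative permeability of the core material
l = 0.3                        # mean magnetic path length, m
A = 1e-4                       # core cross-sectional area, m^2

N, I = 500, 0.1                # winding turns and current
F = N * I                      # magnetomotive force, ampere-turns
R_m = l / (mu_0 * mu_r * A)    # reluctance, 1/H

Phi = F / R_m                  # magnetic flux, Wb
print(f"{Phi:.2e} Wb")
```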
Largest magnetic fields
The largest magnetic field produced over a macroscopic volume outside a laboratory setting is 2.8 kT (VNIIEF in Sarov, Russia, 1998). As of October 2018, the largest magnetic field produced in a laboratory over a macroscopic volume was 1.2 kT by researchers at the University of Tokyo in 2018.
The largest magnetic fields produced in a laboratory occur in particle accelerators, such as RHIC, inside the collisions of heavy ions, where microscopic fields reach 10^14 T. Magnetars have the strongest known magnetic fields of any naturally occurring object, ranging from 0.1 to 100 GT (10^8 to 10^11 T).
Common formulæ
Additional magnetic field values can be found through the magnetic field of a finite beam; for example, the magnetic field of an arc of angle θ and radius r at the center is B = μ0 θ I / (4πr), and the magnetic field at the center of an N-sided regular polygon of side a is B = μ0 N I sin(π/N) tan(π/N) / (πa), both out of the plane with the proper directions as inferred by the right-hand thumb rule.
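A numerical cross-check of the two expressions above, under the assumption that they take the standard finite-segment forms reconstructed here: a full-circle arc (θ = 2π) should reproduce the loop-center field μ0I/(2r), and a polygon with many sides of matching perimeter should approach the same value. The current and radius are illustrative:

```python
import math

mu_0 = 4e-7 * math.pi
I, r = 2.0, 0.05

arc_full = mu_0 * (2 * math.pi) * I / (4 * math.pi * r)   # arc formula with theta = 2*pi
loop = mu_0 * I / (2 * r)                                 # known field at the center of a circular loop
print(abs(arc_full - loop) < 1e-12)                       # True

N = 1000
a = 2 * math.pi * r / N                                   # side length giving a matching perimeter
polygon = mu_0 * N * I * math.sin(math.pi / N) * math.tan(math.pi / N) / (math.pi * a)
print(abs(polygon - loop) / loop < 1e-5)                  # approaches the circular-loop value
```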
History
Early developments
While magnets and some properties of magnetism were known to ancient societies, the research of magnetic fields began in 1269 when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles. Noting that the resulting field lines crossed at two points, he named those points "poles" in analogy to Earth's poles. He also articulated the principle that magnets always have both a north and south pole, no matter how finely one slices them.
Almost three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus' work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science.
Mathematical development
In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law. Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson (1781–1840) created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic H-field is produced by magnetic poles and magnetism is due to small pairs of north–south magnetic poles.
Three discoveries in 1820 challenged this foundation of magnetism. Hans Christian Ørsted demonstrated that a current-carrying wire is surrounded by a circular magnetic field. Then André-Marie Ampère showed that parallel wires with currents attract one another if the currents are in the same direction and repel if they are in opposite directions. Finally, Jean-Baptiste Biot and Félix Savart announced empirical results about the forces that a current-carrying long, straight wire exerted on a small magnet, determining the forces were inversely proportional to the perpendicular distance from the wire to the magnet. Laplace later deduced a law of force based on the differential action of a differential section of the wire, which became known as the Biot–Savart law, as Laplace did not publish his findings.
Extending these experiments, Ampère published his own successful model of magnetism in 1825. In it, he showed the equivalence of electrical currents to magnets and proposed that magnetism is due to perpetually flowing loops of current instead of the dipoles of magnetic charge in Poisson's model. Further, Ampère derived both Ampère's force law describing the force between two currents and Ampère's law, which, like the Biot–Savart law, correctly described the magnetic field generated by a steady current. Also in this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism.
In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field, formulating what is now known as Faraday's law of induction. Later, Franz Ernst Neumann proved that, for a moving conductor in a magnetic field, induction is a consequence of Ampère's force law. In the process, he introduced the magnetic vector potential, which was later shown to be equivalent to the underlying mechanism proposed by Faraday.
In 1850, Lord Kelvin, then known as William Thomson, distinguished between two magnetic fields now denoted H and B. The former applied to Poisson's model and the latter to Ampère's model and induction. Further, he derived how H and B relate to each other and coined the term permeability.
Between 1861 and 1865, James Clerk Maxwell developed and published Maxwell's equations, which explained and united all of classical electricity and magnetism. The first set of these equations was published in a paper entitled On Physical Lines of Force in 1861. These equations were valid but incomplete. Maxwell completed his set of equations in his later 1865 paper A Dynamical Theory of the Electromagnetic Field and demonstrated the fact that light is an electromagnetic wave. Heinrich Hertz published papers in 1887 and 1888 experimentally confirming this fact.
Modern developments
In 1887, Tesla developed an induction motor that ran on alternating current. The motor used polyphase current, which generated a rotating magnetic field to turn the motor (a principle that Tesla claimed to have conceived in 1882). Tesla received a patent for his electric motor in May 1888. In 1885, Galileo Ferraris independently researched rotating magnetic fields and subsequently published his research in a paper to the Royal Academy of Sciences in Turin, just two months before Tesla was awarded his patent, in March 1888.
The twentieth century showed that classical electrodynamics is already consistent with special relativity, and extended classical electrodynamics to work with quantum mechanics. Albert Einstein, in his paper of 1905 that established relativity, showed that both the electric and magnetic fields are part of the same phenomena viewed from different reference frames. Finally, the emergent field of quantum mechanics was merged with electrodynamics to form quantum electrodynamics, which first formalized the notion that electromagnetic field energy is quantized in the form of photons.
| Physical sciences | Magnetism | null |
36624 | https://en.wikipedia.org/wiki/Embryo | Embryo | An embryo is the initial stage of development for a multicellular organism. In organisms that reproduce sexually, embryonic development is the part of the life cycle that begins just after fertilization of the female egg cell by the male sperm cell. The resulting fusion of these two cells produces a single-celled zygote that undergoes many cell divisions that produce cells known as blastomeres. The blastomeres (4-cell stage) are arranged as a solid ball that, when it reaches a certain size, called a morula (16-cell stage), takes in fluid to create a cavity called a blastocoel. The structure is then termed a blastula, or a blastocyst in mammals.
The mammalian blastocyst hatches before implanting into the endometrial lining of the womb. Once implanted, the embryo will continue its development through the next stages of gastrulation, neurulation, and organogenesis. Gastrulation is the formation of the three germ layers that will form all of the different parts of the body. Neurulation forms the nervous system, and organogenesis is the development of all the various tissues and organs of the body.
A newly developing human is typically referred to as an embryo until the ninth week after conception, when it is then referred to as a fetus. In other multicellular organisms, the word "embryo" can be used more broadly to refer to any early developmental or life cycle stage prior to birth or hatching.
Etymology
First attested in English in the mid-14c., the word embryon derives from Medieval Latin embryo, itself from Greek (embruon), lit. "young one", which is the neuter of (embruos), lit. "growing in", from ἐν (en), "in" and βρύω (bruō), "swell, be full"; the proper Latinized form of the Greek term would be embryum.
Development
Animal embryos
In animals, fertilization begins the process of embryonic development with the creation of a zygote, a single cell resulting from the fusion of gametes (e.g. egg and sperm). The development of a zygote into a multicellular embryo proceeds through a series of recognizable stages, often divided into cleavage, blastula, gastrulation, and organogenesis.
Cleavage is the period of rapid mitotic cell divisions that occur after fertilization. During cleavage, the overall size of the embryo does not change, but the size of individual cells decreases rapidly as they divide to increase the total number of cells. Cleavage results in a blastula.
Depending on the species, a blastula or blastocyst stage embryo can appear as a ball of cells on top of yolk, or as a hollow sphere of cells surrounding a middle cavity. The embryo's cells continue to divide and increase in number, while molecules within the cells such as RNAs and proteins actively promote key developmental processes such as gene expression, cell fate specification, and polarity. Before implanting into the uterine wall, the embryo is sometimes known as the pre-implantation embryo or pre-implantation conceptus. Sometimes this is called the pre-embryo, a term employed to differentiate it from an embryo proper in embryonic stem cell discourses.
Gastrulation is the next phase of embryonic development, and involves the development of two or more layers of cells (germinal layers). Animals that form two layers (such as Cnidaria) are called diploblastic, and those that form three (most other animals, from flatworms to humans) are called triploblastic. During gastrulation of triploblastic animals, the three germinal layers that form are called the ectoderm, mesoderm, and endoderm. All tissues and organs of a mature animal can trace their origin back to one of these layers. For example, the ectoderm will give rise to the skin epidermis and the nervous system, the mesoderm will give rise to the vascular system, muscles, bone, and connective tissues, and the endoderm will give rise to organs of the digestive system and epithelium of the digestive system and respiratory system. Many visible changes in embryonic structure happen throughout gastrulation as the cells that make up the different germ layers migrate and cause the previously round embryo to fold or invaginate into a cup-like appearance.
Past gastrulation, an embryo continues to develop into a mature multicellular organism by forming structures necessary for life outside of the womb or egg. As the name suggests, organogenesis is the stage of embryonic development when organs form. During organogenesis, molecular and cellular interactions prompt certain populations of cells from the different germ layers to differentiate into organ-specific cell types. For example, in neurogenesis, a subpopulation of cells from the ectoderm segregate from other cells and further specialize to become the brain, spinal cord, or peripheral nerves.
The embryonic period varies from species to species. In human development, the term fetus is used instead of embryo after the ninth week after conception, whereas in zebrafish, embryonic development is considered finished when a bone called the cleithrum becomes visible. In animals that hatch from an egg, such as birds, a young animal is typically no longer referred to as an embryo once it has hatched. In viviparous animals (animals whose offspring spend at least some time developing within a parent's body), the offspring is typically referred to as an embryo while inside of the parent, and is no longer considered an embryo after birth or exit from the parent. However, the extent of development and growth accomplished while inside of an egg or parent varies significantly from species to species, so much so that the processes that take place after hatching or birth in one species may take place well before those events in another. Therefore, according to one textbook, it is common for scientists to interpret the scope of embryology broadly as the study of the development of animals.
Plant embryos
Flowering plants (angiosperms) create embryos after the fertilization of a haploid ovule by pollen. The DNA from the ovule and pollen combine to form a diploid, single-cell zygote that will develop into an embryo. The zygote, which will divide multiple times as it progresses throughout embryonic development, is one part of a seed. Other seed components include the endosperm, which is tissue rich in nutrients that will help support the growing plant embryo, and the seed coat, which is a protective outer covering. The first cell division of a zygote is asymmetric, resulting in an embryo with one small cell (the apical cell) and one large cell (the basal cell). The small, apical cell will eventually give rise to most of the structures of the mature plant, such as the stem, leaves, and roots. The larger basal cell will give rise to the suspensor, which connects the embryo to the endosperm so that nutrients can pass between them. The plant embryo cells continue to divide and progress through developmental stages named for their general appearance: globular, heart, and torpedo. In the globular stage, three basic tissue types (dermal, ground, and vascular) can be recognized. The dermal tissue will give rise to the epidermis or outer covering of a plant, ground tissue will give rise to inner plant material that functions in photosynthesis, resource storage, and physical support, and vascular tissue will give rise to connective tissue like the xylem and phloem that transport fluid, nutrients, and minerals throughout the plant. In heart stage, one or two cotyledons (embryonic leaves) will form. Meristems (centers of stem cell activity) develop during the torpedo stage, and will eventually produce many of the mature tissues of the adult plant throughout its life. At the end of embryonic growth, the seed will usually go dormant until germination. Once the embryo begins to germinate (grow out from the seed) and forms its first true leaf, it is called a seedling or plantlet.
Plants that produce spores instead of seeds, like bryophytes and ferns, also produce embryos. In these plants, the embryo begins its existence attached to the inside of the archegonium on a parental gametophyte from which the egg cell was generated. The inner wall of the archegonium lies in close contact with the "foot" of the developing embryo; this "foot" consists of a bulbous mass of cells at the base of the embryo which may receive nutrition from its parent gametophyte. The structure and development of the rest of the embryo varies by group of plants.
Since all land plants create embryos, they are collectively referred to as embryophytes (or by their scientific name, Embryophyta). This, along with other characteristics, distinguishes land plants from other types of plants, such as algae, which do not produce embryos.
Research and technology
Biological processes
Embryos from numerous plant and animal species are studied in biological research laboratories across the world to learn about topics such as stem cells, evolution and development, cell division, and gene expression. Examples of scientific discoveries made while studying embryos that were awarded the Nobel Prize in Physiology or Medicine include the Spemann-Mangold organizer, a group of cells originally discovered in amphibian embryos that induces the formation of neural tissue, and the genes controlling body segmentation, discovered in Drosophila fly embryos by Christiane Nüsslein-Volhard and Eric Wieschaus.
Assisted reproductive technology
Creating and manipulating embryos via assisted reproductive technology (ART) is used to address fertility concerns in humans and other animals, and for selective breeding in agricultural species. Between 1987 and 2015, ART techniques including in vitro fertilization (IVF) were responsible for an estimated one million human births in the United States alone. Other clinical technologies include preimplantation genetic diagnosis (PGD), which can identify certain serious genetic abnormalities, such as aneuploidy, prior to selecting embryos for use in IVF. Some have proposed (or even attempted; see the He Jiankui affair) genetic editing of human embryos via CRISPR-Cas9 as a potential avenue for preventing disease; however, this has been met with widespread condemnation from the scientific community.
ART techniques are also used to improve the profitability of agricultural animal species such as cows and pigs by enabling selective breeding for desired traits and/or to increase numbers of offspring. For example, when allowed to breed naturally, cows typically produce one calf per year, whereas IVF increases offspring yield to 9–12 calves per year. IVF and other ART techniques, including cloning via interspecies somatic cell nuclear transfer (iSCNT), are also used in attempts to increase the numbers of endangered or vulnerable species, such as Northern white rhinos, cheetahs, and sturgeons.
Cryoconservation of plant and animal biodiversity
Cryoconservation of genetic resources involves collecting and storing the reproductive materials, such as embryos, seeds, or gametes, from animal or plant species at low temperatures in order to preserve them for future use. Some large-scale animal species cryoconservation efforts include "frozen zoos" in various places around the world, including the UK's Frozen Ark, the Breeding Centre for Endangered Arabian Wildlife (BCEAW) in the United Arab Emirates, and the San Diego Zoo Institute for Conservation in the United States. As of 2018, there were approximately 1,700 seed banks used to store and protect plant biodiversity, particularly in the event of mass extinction or other global emergencies. The Svalbard Global Seed Vault in Norway maintains the largest collection of plant reproductive tissue, with more than a million samples held in cold storage.
Fossilized embryos
Fossilized animal embryos are known from the Precambrian, and are found in great numbers during the Cambrian period. Even fossilized dinosaur embryos have been discovered.
| Biology and health sciences | Animal ontogeny | null |
36790 | https://en.wikipedia.org/wiki/Artery | Artery | An artery () is a blood vessel in humans and most other animals that takes oxygenated blood away from the heart in the systemic circulation to one or more parts of the body. Exceptions that carry deoxygenated blood are the pulmonary arteries in the pulmonary circulation that carry blood to the lungs for oxygenation, and the umbilical arteries in the fetal circulation that carry deoxygenated blood to the placenta. An artery consists of a multi-layered wall enclosing a tube-shaped channel through which the blood flows.
Arteries contrast with veins, which carry deoxygenated blood back towards the heart; or in the pulmonary and fetal circulations carry oxygenated blood to the lungs and fetus respectively.
Structure
The anatomy of arteries can be separated into gross anatomy, at the macroscopic level, and microanatomy, which must be studied with a microscope. The arterial system of the human body is divided into systemic arteries, carrying blood from the heart to the whole body, and pulmonary arteries, carrying deoxygenated blood from the heart to the lungs.
Large arteries (such as the aorta) are composed of many different types of cells, namely endothelial, smooth muscle, fibroblast, and immune cells. As with veins, the arterial wall consists of three layers called tunics, namely the tunica intima, tunica media, and tunica externa, from innermost to outermost. The externa, alternatively known as the tunica adventitia, is composed of collagen fibers and elastic tissue, with the largest arteries containing vasa vasorum, small blood vessels that supply the walls of large blood vessels. Most of the layers have a clear boundary between them; however, the boundary of the tunica externa is ill-defined, and is usually taken to be the point where it meets the surrounding connective tissue. Inside this layer is the tunica media, which is made up of smooth muscle cells, elastic tissue (also called connective tissue proper) and collagen fibres; the elastic tissue allows the artery to bend and fit through places in the body. The innermost layer, which is in direct contact with the flow of blood, is the tunica intima. This layer is mainly made up of endothelial cells (and a supporting layer of elastin-rich collagen in elastic arteries). The hollow internal cavity in which the blood flows is called the lumen.
Development
Arterial formation begins when endothelial cells start to express arterial-specific genes, such as ephrin B2.
Function
Arteries form part of the circulatory system. They carry oxygenated blood after it has been pumped from the heart. Coronary arteries also aid the heart in pumping blood by supplying oxygenated blood to the heart muscle, allowing it to function. Arteries carry oxygenated blood away from the heart to the tissues, except for pulmonary arteries, which carry blood to the lungs for oxygenation (usually veins carry deoxygenated blood to the heart but the pulmonary veins carry oxygenated blood as well). There are two types of unique arteries. The pulmonary artery carries blood from the heart to the lungs, where it receives oxygen. It is unique because the blood in it is not "oxygenated", as it has not yet passed through the lungs. The other unique artery is the umbilical artery, which carries deoxygenated blood from the fetus to the placenta.
Arteries have a blood pressure higher than other parts of the circulatory system. The pressure in arteries varies during the cardiac cycle. It is highest when the heart contracts and lowest when the heart relaxes. The variation in pressure produces a pulse, which can be felt in different areas of the body, such as the radial pulse. Arterioles have the greatest collective influence on both local blood flow and overall blood pressure. They are the primary "adjustable nozzles" in the blood system, across which the greatest pressure drop occurs. Cardiac output and systemic vascular resistance, the latter referring to the collective resistance of all of the body's arterioles, are the principal determinants of arterial blood pressure at any given moment.
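This relationship is often summarized with a standard physiological approximation relating mean arterial pressure (MAP) to cardiac output (CO) and systemic vascular resistance (SVR), neglecting the small central venous pressure term; the numbers below are illustrative round values rather than figures from this article:
\[ \text{MAP} \approx \text{CO} \times \text{SVR}, \qquad \text{e.g. } 5\ \text{L/min} \times 18.6\ \text{mmHg·min/L} \approx 93\ \text{mmHg}. \]
All else being equal, a rise in either cardiac output or arteriolar resistance therefore raises mean arterial pressure.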
Arteries carry blood at the highest pressure in the circulatory system and have a narrower lumen than comparable veins.
Systemic arteries are the arteries (including the peripheral arteries), of the systemic circulation, which is the part of the cardiovascular system that carries oxygenated blood away from the heart, to the body, and returns deoxygenated blood back to the heart. Systemic arteries can be subdivided into two types—muscular and elastic—according to the relative compositions of elastic and muscle tissue in their tunica media as well as their size and the makeup of the internal and external elastic lamina. The larger arteries (>10 mm diameter) are generally elastic and the smaller ones (0.1–10 mm) tend to be muscular. Systemic arteries deliver blood to the arterioles, and then to the capillaries, where nutrients and gasses are exchanged.
From the aorta, blood travels through peripheral arteries into smaller arteries called arterioles, and eventually to capillaries. Arterioles help regulate blood pressure by the variable contraction of the smooth muscle of their walls, and deliver blood to the capillaries. This smooth muscle contraction is primarily influenced by activity of the sympathetic vasomotor nerves innervating the arterioles. Enhanced sympathetic activation prompts vasoconstriction, reducing the lumen diameter. A reduced lumen diameter consequently elevates the blood pressure within the arterioles. Conversely, decreased sympathetic activity within the vasomotor nerves causes vasodilation of the vessels, thereby decreasing blood pressure.
Aorta
The aorta is the root systemic artery (i.e., main artery). In humans, it receives blood directly from the left ventricle of the heart via the aortic valve. As the aorta branches and these arteries branch, in turn, they become successively smaller in diameter, down to the arterioles. The arterioles supply capillaries, which in turn empty into venules. The first branches off of the aorta are the coronary arteries, which supply blood to the heart muscle itself. These are followed by the branches of the aortic arch, namely the brachiocephalic artery, the left common carotid, and the left subclavian arteries.
Capillaries
The capillaries are the smallest of the blood vessels and are part of the microcirculation. These microvessels are only about one cell wide, which aids the fast and easy diffusion of gases, sugars and nutrients to surrounding tissues. Capillaries have no smooth muscle surrounding them and have a diameter less than that of red blood cells; a red blood cell is typically 7 micrometers in outside diameter, while capillaries are typically 5 micrometers in inside diameter. Red blood cells must therefore distort in order to pass through the capillaries.
The small diameter of the capillaries provides a relatively large surface area, relative to the volume of blood they contain, for the exchange of gases and nutrients.
Clinical significance
Systemic arterial pressures are generated by the forceful contractions of the heart's left ventricle. High blood pressure is a factor in causing arterial damage. Healthy resting arterial pressures are relatively low, mean systemic pressures typically being under above surrounding atmospheric pressure (about at sea level). To withstand and adapt to the pressures within, arteries are surrounded by varying thicknesses of smooth muscle supported by extensive elastic and inelastic connective tissues. The pulse pressure, being the difference between systolic and diastolic pressure, is determined primarily by the amount of blood ejected by each heartbeat (the stroke volume) relative to the volume and elasticity of the major arteries.
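As a worked illustration using standard textbook blood-pressure values (not figures taken from this article), the pulse pressure (PP) and the commonly used estimate of mean arterial pressure (MAP) follow directly from the systolic and diastolic readings:
\[ \text{PP} = P_{\text{systolic}} - P_{\text{diastolic}} = 120 - 80 = 40\ \text{mmHg}, \qquad \text{MAP} \approx P_{\text{diastolic}} + \tfrac{1}{3}\,\text{PP} \approx 93\ \text{mmHg}. \]
A larger stroke volume or stiffer major arteries widens this difference, in line with the determinants described above.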
A blood squirt, also known as an arterial gush, is the effect when an artery is cut, due to the higher arterial pressures. Blood is spurted out at a rapid, intermittent rate that coincides with the heartbeat. Blood loss can be copious, can occur very rapidly, and can be life-threatening.
Over time, factors such as elevated arterial blood sugar (particularly as seen in diabetes mellitus), lipoprotein, cholesterol, high blood pressure, stress and smoking are all implicated in damaging both the endothelium and walls of the arteries, resulting in atherosclerosis. Atherosclerosis is a disease marked by the hardening of arteries. It is caused by an atheroma or plaque in the artery wall: a build-up of cell debris containing lipids (cholesterol and fatty acids), calcium and a variable amount of fibrous connective tissue.
Accidental intra-arterial injection either iatrogenically or through recreational drug use can cause symptoms such as intense pain, paresthesia and necrosis. It usually causes permanent damage to the limb; often amputation is necessary.
History
Among the Ancient Greeks before Hippocrates, all blood vessels were called Φλέβες, phlebes. The word arteria then referred to the windpipe. Herophilos was the first to describe anatomical differences between the two types of blood vessel. While Empedocles believed that the blood moved to and fro through the blood vessels, there was no concept of the capillary vessels that join arteries and veins, and there was no notion of circulation. Diogenes of Apollonia developed the theory of pneuma, originally meaning just air but soon identified with the soul itself, and thought to co-exist with the blood in the blood vessels. The arteries were thought to be responsible for the transport of air to the tissues and to be connected to the trachea. This belief arose because the arteries of cadavers were found devoid of blood.
In medieval times, it was supposed that arteries carried a fluid, called "spiritual blood" or "vital spirits", considered to be different from the contents of the veins. This theory went back to Galen. In the late medieval period, the trachea and ligaments were also called "arteries".
William Harvey described and popularized the modern concept of the circulatory system and the roles of arteries and veins in the 17th century.
Alexis Carrel at the beginning of the 20th century first described the technique for vascular suturing and anastomosis and successfully performed many organ transplantations in animals; he thus actually opened the way to modern vascular surgery that was previously limited to vessels' permanent ligation.
| Biology and health sciences | Circulatory system | null |
36806 | https://en.wikipedia.org/wiki/Cotton | Cotton | Cotton () is a soft, fluffy staple fiber that grows in a boll, or protective case, around the seeds of the cotton plants of the genus Gossypium in the mallow family Malvaceae. The fiber is almost pure cellulose, and can contain minor percentages of waxes, fats, pectins, and water. Under natural conditions, the cotton bolls will increase the dispersal of the seeds.
The plant is a shrub native to tropical and subtropical regions around the world, including the Americas, Africa, Egypt and India. The greatest diversity of wild cotton species is found in Mexico, followed by Australia and Africa. Cotton was independently domesticated in the Old and New Worlds.
The fiber is most often spun into yarn or thread and used to make a soft, breathable, and durable textile. The use of cotton for fabric is known to date to prehistoric times; fragments of cotton fabric dated to the fifth millennium BC have been found in the Indus Valley civilization, as well as fabric remnants dating back to 4200 BC in Peru.
Although cotton has been cultivated since antiquity, it was the invention of the cotton gin, which lowered the cost of production, that led to its widespread use, and it is the most widely used natural fiber cloth in clothing today.
Current estimates for world production are about 25 million tonnes or 110 million bales annually, accounting for 2.5% of the world's arable land. India is the world's largest producer of cotton. The United States has been the largest exporter for many years.
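As a rough consistency check on these two units (using the article's rounded totals together with the standard statistical bale of 480 lb, about 218 kg, an outside figure not stated in this article):
\[ \frac{25 \times 10^{6}\ \text{t}}{110 \times 10^{6}\ \text{bales}} \approx 227\ \text{kg per bale}, \]
which is broadly consistent with the ~218 kg statistical bale once the rounding of both totals is taken into account.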
Types
There are four commercially grown species of cotton, all domesticated in antiquity:
Gossypium hirsutum – upland cotton, native to Central America, Mexico, the Caribbean and southern Florida (90% of world production)
Gossypium barbadense – known as extra-long staple cotton, native to tropical South America (over 5% of world production)
Gossypium arboreum – tree cotton, native to India and Pakistan (less than 2%)
Gossypium herbaceum – Levant cotton, native to southern Africa and the Arabian Peninsula (less than 2%)
Hybrid varieties are also cultivated. The two New World cotton species account for the vast majority of modern cotton production, but the two Old World species were widely used before the 1900s. While cotton fibers occur naturally in colors of white, brown, pink and green, fears of contaminating the genetics of white cotton have led many cotton-growing locations to ban the growing of colored cotton varieties.
Etymology
The word "cotton" has Arabic origins, derived from the Arabic word ( or ). This was the usual word for cotton in medieval Arabic. Marco Polo in chapter 2 in his book, describes a province he calls Khotan in Turkestan, today's Xinjiang, where cotton was grown in abundance. The word entered the Romance languages in the mid-12th century, and English a century later. Cotton fabric was known to the ancient Romans as an import, but cotton was rare in the Romance-speaking lands until imports from the Arabic-speaking lands in the later medieval era at transformatively lowered prices.
History
Early history
South Asia
The earliest evidence of the use of cotton in the Old World, dated to 5500 BC and preserved in copper beads, has been found at the Neolithic site of Mehrgarh, at the foot of the Bolan Pass in ancient India, today in Balochistan, Pakistan. Fragments of cotton textiles have been found at Mohenjo-daro and other sites of the Bronze Age Indus Valley civilization, and cotton may have been an important export from it.
Americas
Cotton bolls discovered in a cave near Tehuacán, Mexico, have been dated to as early as 5500 BC, but this date has been challenged. More securely dated is the domestication of Gossypium hirsutum in Mexico between around 3400 and 2300 BC. During this time, people between the Río Santiago and the Río Balsas grew, spun, wove, dyed, and sewed cotton. What they did not use themselves, they sent to their Aztec rulers as tribute, on the scale of ~ annually.
In Peru, cultivation of the indigenous cotton species Gossypium barbadense has been dated, from a find in Ancon, to , and was the backbone of the development of coastal cultures such as the Norte Chico, Moche, and Nazca. Cotton was grown upriver, made into nets, and traded with fishing villages along the coast for large supplies of fish. The Spanish who came to Mexico and Peru in the early 16th century found the people growing cotton and wearing clothing made of it.
Arabia
The Greeks and the Arabs were not familiar with cotton until the Wars of Alexander the Great, as his contemporary Megasthenes told Seleucus I Nicator of "there being trees on which wool grows" in "Indica." This may be a reference to "tree cotton", Gossypium arboreum, which is native to the Indian subcontinent.
Iran
In Iran (Persia), the history of cotton dates back to the Achaemenid era (5th century BC); however, there are few sources about the planting of cotton in pre-Islamic Iran. Cotton cultivation was common in Merv, Ray and Pars. In Persian poems, especially Ferdowsi's Shahname, there are references to cotton ("panbe" in Persian). Marco Polo (13th century) refers to the major products of Persia, including cotton. John Chardin, a French traveler of the 17th century who visited Safavid Persia, spoke approvingly of the vast cotton farms of Persia.
Kingdom of Kush
Cotton (Gossypium herbaceum Linnaeus) may have been domesticated around 5000 BC in eastern Sudan near the Middle Nile Basin region, where cotton cloth was being produced. Around the 4th century BC, the cultivation of cotton and the knowledge of its spinning and weaving in Meroë reached a high level. The export of textiles was one of the sources of wealth for Meroë. Ancient Nubia had a "culture of cotton" of sorts, as shown by physical evidence of cotton-processing tools and the presence of cattle in certain areas. Some researchers propose that cotton was important to the Nubian economy for its use in contact with the neighboring Egyptians. Aksumite King Ezana boasted in his inscription that he destroyed large cotton plantations in Meroë during his conquest of the region.
Many cotton textiles have been recovered from the Meroitic Period (beginning in the 3rd century BCE), preserved due to favorable arid conditions. Most of these fabric fragments come from Lower Nubia, and the cotton textiles account for 85% of the archaeological textiles from Classic/Late Meroitic sites. Because of these arid conditions, cotton, a plant that usually thrives with moderate rainfall and richer soils, requires extra irrigation and labor under Sudanese climate conditions. Therefore, a great deal of resources would have been required, likely restricting its cultivation to the elite. In the first to third centuries CE, recovered cotton fragments all began to mirror the same style and production method, as seen from the direction of the spun cotton and the technique of weaving. Cotton textiles also appear in places of high regard, such as on funerary stelae and statues.
China
During the Han dynasty (207 BC - 220 AD), cotton was grown by Chinese peoples in the southern Chinese province of Yunnan.
Middle Ages
Eastern world
Egyptians grew and spun cotton in the first seven centuries of the Christian era.
Handheld roller cotton gins had been used in India since the 6th century, and were then introduced to other countries from there. Between the 12th and 14th centuries, dual-roller gins appeared in India and China. The Indian version of the dual-roller gin was prevalent throughout the Mediterranean cotton trade by the 16th century. This mechanical device was, in some areas, driven by water power.
The earliest clear illustrations of the spinning wheel come from the Islamic world in the eleventh century. The earliest unambiguous reference to a spinning wheel in India is dated to 1350, suggesting that the spinning wheel was likely introduced from Iran to India during the Delhi Sultanate.
Europe
During the late medieval period, cotton became known as an imported fiber in northern Europe, without any knowledge of how it was derived, other than that it was a plant. Because Herodotus had written in his Histories, Book III, 106, that in India trees grew in the wild producing wool, it was assumed that the plant was a tree, rather than a shrub. This aspect is retained in the name for cotton in several Germanic languages, such as German Baumwolle, which translates as "tree wool" (Baum means "tree"; Wolle means "wool"). Noting its similarities to wool, people in the region could only imagine that cotton must be produced by plant-borne sheep. John Mandeville, writing in 1350, stated as fact that "There grew there [India] a wonderful tree which bore tiny lambs on the endes of its branches. These branches were so pliable that they bent down to allow the lambs to feed when they are hungry." (See Vegetable Lamb of Tartary.)
Cotton manufacture was introduced to Europe during the Muslim conquest of the Iberian Peninsula and Sicily. The knowledge of cotton weaving was spread to northern Italy in the 12th century, when Sicily was conquered by the Normans, and consequently to the rest of Europe. The spinning wheel, introduced to Europe circa 1350, improved the speed of cotton spinning. By the 15th century, Venice, Antwerp, and Haarlem were important ports for cotton trade, and the sale and transportation of cotton fabrics had become very profitable.
Early modern period
Mughal India
Under the Mughal Empire, which ruled in the Indian subcontinent from the early 16th century to the early 18th century, Indian cotton production increased, in terms of both raw cotton and cotton textiles. The Mughals introduced agrarian reforms such as a new revenue system that was biased in favour of higher value cash crops such as cotton and indigo, providing state incentives to grow cash crops, in addition to rising market demand.
The largest manufacturing industry in the Mughal Empire was cotton textile manufacturing, which included the production of piece goods, calicos, and muslins, available unbleached and in a variety of colours. The cotton textile industry was responsible for a large part of the empire's international trade. India had a 25% share of the global textile trade in the early 18th century. Indian cotton textiles were the most important manufactured goods in world trade in the 18th century, consumed across the world from the Americas to Japan. The most important center of cotton production was the Bengal Subah province, particularly around its capital city of Dhaka.
The worm gear roller cotton gin, which was invented in India during the early Delhi Sultanate era of the 13th–14th centuries, came into use in the Mughal Empire some time around the 16th century, and is still used in India through to the present day. Another innovation, the incorporation of the crank handle in the cotton gin, first appeared in India some time during the late Delhi Sultanate or the early Mughal Empire. The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India shortly before the Mughal era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production during the Mughal era.
It was reported that, with an Indian cotton gin, which is half machine and half tool, one man and one woman could clean of cotton per day. With a modified Forbes version, one man and a boy could produce per day. If oxen were used to power 16 of these machines, and a few people's labour was used to feed them, they could produce as much work as 750 people did formerly.
Egypt
In the early 19th century, a Frenchman named M. Jumel proposed to the great ruler of Egypt, Mohamed Ali Pasha, that he could earn a substantial income by growing an extra-long staple Maho (Gossypium barbadense) cotton, in Lower Egypt, for the French market. Mohamed Ali Pasha accepted the proposition and granted himself the monopoly on the sale and export of cotton in Egypt; and later dictated cotton should be grown in preference to other crops.
Egypt under Muhammad Ali in the early 19th century had the fifth most productive cotton industry in the world, in terms of the number of spindles per capita. The industry was initially driven by machinery that relied on traditional energy sources, such as animal power, water wheels, and windmills, which were also the principal energy sources in Western Europe up until around 1870. It was under Muhammad Ali in the early 19th century that steam engines were introduced to the Egyptian cotton industry.
By the time of the American Civil War, annual exports had reached $16 million (120,000 bales), which rose to $56 million by 1864, primarily due to the loss of the Confederate supply on the world market. Exports continued to grow even after the reintroduction of US cotton, produced now by a paid workforce, and Egyptian exports reached 1.2 million bales a year by 1903.
Britain
East India Company
The English East India Company (EIC) introduced the British to cheap calico and chintz cloth on the restoration of the monarchy in the 1660s. Initially imported as a novelty side line from its spice trading posts in Asia, the cheap, colourful cloth proved popular and overtook the EIC's spice trade by value in the late 17th century. The EIC embraced the demand, particularly for calico, by expanding its factories in Asia and producing and importing cloth in bulk, creating competition for domestic woollen and linen textile producers. The impacted weavers, spinners, dyers, shepherds and farmers objected, and the calico question became one of the major issues of national politics between the 1680s and the 1730s. Parliament began to see a decline in domestic textile sales, and an increase in imported textiles from places like China and India. Seeing the East India Company and its textile importation as a threat to domestic textile businesses, Parliament passed the 1700 Calico Act, blocking the importation of cotton cloth. As there was no punishment for continuing to sell cotton cloth, smuggling of the popular material became commonplace. In 1721, dissatisfied with the results of the first act, Parliament passed a stricter addition, this time prohibiting the sale of most cottons, imported and domestic (exempting only thread, fustian and raw cotton). The exemption of raw cotton from the prohibition initially saw two thousand bales of cotton imported annually, to become the basis of a new indigenous industry, initially producing fustian for the domestic market, though more importantly triggering the development of a series of mechanised spinning and weaving technologies to process the material. This mechanised production was concentrated in new cotton mills, which slowly expanded until, by the beginning of the 1770s, seven thousand bales of cotton were imported annually, and pressure was put on Parliament by the new mill owners to remove the prohibition on the production and sale of pure cotton cloth, as they could easily compete with anything the EIC could import.
The acts were repealed in 1774, triggering a wave of investment in mill-based cotton spinning and production, doubling the demand for raw cotton within a couple of years, and doubling it again every decade, into the 1840s.
Indian cotton textiles, particularly those from Bengal, continued to maintain a competitive advantage up until the 19th century. In order to compete with India, Britain invested in labour-saving technical progress, while implementing protectionist policies such as bans and tariffs to restrict Indian imports. At the same time, the East India Company's rule in India contributed to its deindustrialization, opening up a new market for British goods, while the capital amassed from Bengal after its 1757 conquest was used to invest in British industries such as textile manufacturing and greatly increase British wealth. British colonization also forced open the large Indian market to British goods, which could be sold in India without tariffs or duties, compared to local Indian producers who were heavily taxed, while raw cotton was imported from India without tariffs to British factories which manufactured textiles from Indian cotton, giving Britain a monopoly over India's large market and cotton resources. India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods. Britain eventually surpassed India as the world's leading cotton textile manufacturer in the 19th century.
India's cotton-processing sector changed during EIC expansion in India in the late 18th and early 19th centuries, shifting from supplying the British market to supplying East Asia with raw cotton, as artisan-produced textiles were no longer competitive with those produced industrially and Europe preferred the cheaper, slave-produced, long-staple American and Egyptian cottons for its own materials.
Industrial Revolution
The advent of the Industrial Revolution in Britain provided a great boost to cotton manufacture, as textiles emerged as Britain's leading export. In 1738, Lewis Paul and John Wyatt, of Birmingham, England, patented the roller spinning machine, as well as the flyer-and-bobbin system for drawing cotton to a more even thickness using two sets of rollers that traveled at different speeds. Later, the invention of the James Hargreaves' spinning jenny in 1764, Richard Arkwright's spinning frame in 1769 and Samuel Crompton's spinning mule in 1775 enabled British spinners to produce cotton yarn at much higher rates. From the late 18th century on, the British city of Manchester acquired the nickname "Cottonopolis" due to the cotton industry's omnipresence within the city, and Manchester's role as the heart of the global cotton trade.
Production capacity in Britain and the United States was improved by the invention of the modern cotton gin by the American Eli Whitney in 1793. Before the development of cotton gins, the cotton fibers had to be pulled from the seeds tediously by hand. By the late 1700s, a number of crude ginning machines had been developed. However, to produce a bale of cotton required over 600 hours of human labor, making large-scale production uneconomical in the United States, even with the use of humans as slave labor. The gin that Whitney manufactured (the Holmes design) reduced the hours down to just a dozen or so per bale. Although Whitney patented his own design for a cotton gin, he manufactured a prior design from Henry Odgen Holmes, for which Holmes filed a patent in 1796. Improving technology and increasing control of world markets allowed British traders to develop a commercial chain in which raw cotton fibers were (at first) purchased from colonial plantations, processed into cotton cloth in the mills of Lancashire, and then exported on British ships to captive colonial markets in West Africa, India, and China (via Shanghai and Hong Kong).
By the 1840s, India was no longer capable of supplying the vast quantities of cotton fibers needed by mechanized British factories, while shipping bulky, low-price cotton from India to Britain was time-consuming and expensive. This, coupled with the emergence of American cotton as a superior type (due to the longer, stronger fibers of the two domesticated native American species, Gossypium hirsutum and Gossypium barbadense), encouraged British traders to purchase cotton from plantations in the United States and in the Caribbean. By the mid-19th century, "King Cotton" had become the backbone of the southern American economy. In the United States, cultivating and harvesting cotton became the leading occupation of slaves.
During the American Civil War, American cotton exports slumped due to a Union blockade on Southern ports, and because of a strategic decision by the Confederate government to cut exports, hoping to force Britain to recognize the Confederacy or enter the war. The Lancashire Cotton Famine prompted the main purchasers of cotton, Britain and France, to turn to Egyptian cotton. British and French traders invested heavily in cotton plantations. The Egyptian government of Viceroy Isma'il took out substantial loans from European bankers and stock exchanges. After the American Civil War ended in 1865, British and French traders abandoned Egyptian cotton and returned to cheap American exports, sending Egypt into a deficit spiral that led to the country declaring bankruptcy in 1876, a key factor behind Egypt's occupation by the British Empire in 1882.
During this time, cotton cultivation in the British Empire, especially Australia and India, greatly increased to replace the lost production of the American South. Through tariffs and other restrictions, the British government discouraged the production of cotton cloth in India; rather, the raw fiber was sent to England for processing. The Indian Mahatma Gandhi described the process:
English people buy Indian cotton in the field, picked by Indian labor at seven cents a day, through an optional monopoly.
This cotton is shipped on British ships, a three-week journey across the Indian Ocean, down the Red Sea, across the Mediterranean, through Gibraltar, across the Bay of Biscay and the Atlantic Ocean to London. One hundred per cent profit on this freight is regarded as small.
The cotton is turned into cloth in Lancashire. You pay shilling wages instead of Indian pennies to your workers. The English worker not only has the advantage of better wages, but the steel companies of England get the profit of building the factories and machines. Wages; profits; all these are spent in England.
The finished product is sent back to India at European shipping rates, once again on British ships. The captains, officers, sailors of these ships, whose wages must be paid, are English. The only Indians who profit are a few lascars who do the dirty work on the boats for a few cents a day.
The cloth is finally sold back to the kings and landlords of India who got the money to buy this expensive cloth out of the poor peasants of India who worked at seven cents a day.
United States
In the United States, growing Southern cotton generated significant wealth and capital for the antebellum South, as well as raw material for Northern textile industries. Before 1865 the cotton was largely produced through the labor of enslaved African Americans. It enriched both the Southern landowners and the new textile industries of the Northeastern United States and northwestern Europe. In 1860 the slogan "Cotton is king" characterized the attitude of Southern leaders toward this monocrop, reflecting their belief that Europe would support an independent Confederate States of America in 1861 in order to protect the supply of cotton it needed for its very large textile industry.
Russell Griffin of California ran one of the biggest cotton-farming operations, producing over sixty thousand bales.
Cotton remained a key crop in the Southern economy after slavery ended in 1865. Across the South, sharecropping evolved, in which landless farmers worked land owned by others in return for a share of the profits. Some farmers rented the land and bore the production costs themselves. Until mechanical cotton pickers were developed, cotton farmers needed additional labor to hand-pick cotton. Picking cotton was a source of income for families across the South. Rural and small town school systems had split vacations so children could work in the fields during "cotton-picking."
During the middle 20th century, employment in cotton farming fell, as machines began to replace laborers and the South's rural labor force dwindled during the World Wars. Cotton remains a major export of the United States, with large farms in California, Arizona and the Deep South. To acknowledge cotton's place in the history and heritage of Texas, the Texas Legislature designated cotton the official "State Fiber and Fabric of Texas" in 1997.
The Moon
China's Chang'e 4 spacecraft took cotton seeds to the Moon's far side. On 15 January 2019, China announced that a cotton seed had sprouted, the first "truly otherworldly plant in history". The capsule and seeds sit inside the Chang'e 4 lander in the Von Kármán Crater.
Cultivation
Successful cultivation of cotton requires a long frost-free period, plenty of sunshine, and a moderate rainfall, usually from . Soils usually need to be fairly heavy, although the level of nutrients does not need to be exceptional. In general, these conditions are met within the seasonally dry tropics and subtropics in the Northern and Southern hemispheres, but a large proportion of the cotton grown today is cultivated in areas with less rainfall that obtain the water from irrigation. Production of the crop for a given year usually starts soon after harvesting the preceding autumn. Cotton is naturally a perennial but is grown as an annual to help control pests. Planting time in spring in the Northern hemisphere varies from the beginning of February to the beginning of June. The area of the United States known as the South Plains is the largest contiguous cotton-growing region in the world. While dryland (non-irrigated) cotton is successfully grown in this region, consistent yields are only produced with heavy reliance on irrigation water drawn from the Ogallala Aquifer. Since cotton is somewhat salt and drought tolerant, this makes it an attractive crop for arid and semiarid regions. As water resources get tighter around the world, economies that rely on it face difficulties and conflict, as well as potential environmental problems. For example, improper cropping and irrigation practices have led to desertification in areas of Uzbekistan, where cotton is a major export. In the days of the Soviet Union, the Aral Sea was tapped for agricultural irrigation, largely of cotton, and now salination is widespread.
Cotton can also be cultivated to have colors other than the yellowish off-white typical of modern commercial cotton fibers. Naturally colored cotton can come in red, green, and several shades of brown.
Water footprint
The water footprint of cotton fibers is substantially larger than that of most other plant fibers. Cotton is also known as a thirsty crop; on average, globally, it requires 8,000–10,000 liters of water per kilogram of cotton, and in dry areas it may require even more: in some areas of India, it may need 22,500 liters.
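To put those per-kilogram figures in perspective, a rough worked example (the 0.25 kg garment mass is an illustrative assumption, not a figure from this article):
\[ 0.25\ \text{kg} \times 8{,}000\text{–}10{,}000\ \text{L/kg} \approx 2{,}000\text{–}2{,}500\ \text{L}, \]
so a single quarter-kilogram cotton garment corresponds to roughly two thousand liters of water at the global average, and over 5,600 liters at the 22,500 L/kg figure cited for the driest areas.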
Genetic modification
Genetically modified (GM) cotton was developed to reduce the heavy reliance on pesticides. The bacterium Bacillus thuringiensis (Bt) naturally produces a chemical harmful only to a small fraction of insects, most notably the larvae of moths and butterflies, beetles, and flies, and harmless to other forms of life. The gene coding for Bt toxin has been inserted into cotton, causing cotton, called Bt cotton, to produce this natural insecticide in its tissues. In many regions, the main pests in commercial cotton are lepidopteran larvae, which are killed by the Bt protein in the transgenic cotton they eat. This eliminates the need to use large amounts of broad-spectrum insecticides to kill lepidopteran pests (some of which have developed pyrethroid resistance). This spares natural insect predators in the farm ecology and further contributes to noninsecticide pest management.
However, Bt cotton is ineffective against many cotton pests, such as plant bugs, stink bugs, and aphids; depending on circumstances it may still be desirable to use insecticides against these. A 2006 study done by Cornell researchers, the Center for Chinese Agricultural Policy and the Chinese Academy of Science on Bt cotton farming in China found that after seven years these secondary pests that were normally controlled by pesticide had increased, necessitating the use of pesticides at similar levels to non-Bt cotton and causing less profit for farmers because of the extra expense of GM seeds. However, a 2009 study by the Chinese Academy of Sciences, Stanford University and Rutgers University refuted this. They concluded that the GM cotton effectively controlled bollworm. The secondary pests were mostly miridae (plant bugs) whose increase was related to local temperature and rainfall and only continued to increase in half the villages studied. Moreover, the increase in insecticide use for the control of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton adoption. A 2012 Chinese study concluded that Bt cotton halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders. The International Service for the Acquisition of Agri-biotech Applications (ISAAA) said that, worldwide, GM cotton was planted on an area of 25 million hectares in 2011. This was 69% of the worldwide total area planted in cotton.
GM cotton acreage in India grew at a rapid rate, increasing from 50,000 hectares in 2002 to 10.6 million hectares in 2011. The total cotton area in India was 12.1 million hectares in 2011, so GM cotton was grown on 88% of the cotton area. This made India the country with the largest area of GM cotton in the world. A long-term study on the economic impacts of Bt cotton in India, published in the journal PNAS in 2012, showed that Bt cotton has increased yields, profits, and living standards of smallholder farmers. The U.S. GM cotton crop was 4.0 million hectares in 2011, the second largest area in the world; the Chinese GM cotton crop was the third largest, with 3.9 million hectares; and Pakistan had the fourth largest GM cotton crop area, with 2.6 million hectares in 2011. The initial introduction of GM cotton proved to be a success in Australia: the yields were equivalent to those of the non-transgenic varieties and the crop used much less pesticide to produce (an 85% reduction). The subsequent introduction of a second variety of GM cotton led to increases in GM cotton production until 95% of the Australian cotton crop was GM in 2009, making Australia the country with the fifth largest GM cotton crop in the world. Other GM cotton growing countries in 2011 were Argentina, Myanmar, Burkina Faso, Brazil, Mexico, Colombia, South Africa and Costa Rica.
Cotton has been genetically modified for resistance to glyphosate, a broad-spectrum herbicide discovered by Monsanto, which also sells some of the Bt cotton seeds to farmers. There are also a number of other cotton seed companies selling GM cotton around the world. About 62% of the GM cotton grown from 1996 to 2011 was insect resistant, 24% stacked product and 14% herbicide resistant.
Cotton contains gossypol, a toxin that makes it inedible. However, scientists have silenced the gene that produces the toxin, making it a potential food crop. On 17 October 2018, the USDA deregulated GE low-gossypol cotton.
Organic production
Organic cotton is generally understood as cotton from plants that are not genetically modified and that are certified to be grown without the use of any synthetic agricultural chemicals, such as fertilizers or pesticides. Its production also promotes and enhances biodiversity and biological cycles. In the United States, organic cotton plantations are required to comply with the National Organic Program (NOP). This program determines the allowed practices for pest control, growing, fertilizing, and handling of organic crops. As of 2007, 265,517 bales of organic cotton were produced in 24 countries, and worldwide production was growing at a rate of more than 50% per year. Organic cotton products are now available for purchase at limited locations. These are popular for baby clothes and diapers; natural cotton products are known to be both sustainable and hypoallergenic.
Pests and weeds
The cotton industry relies heavily on chemicals, such as fertilizers, insecticides and herbicides, although a very small number of farmers are moving toward an organic model of production. Under most definitions, organic products do not use transgenic Bt cotton which contains a bacterial gene that codes for a plant-produced protein that is toxic to a number of pests especially the bollworms. For most producers, Bt cotton has allowed a substantial reduction in the use of synthetic insecticides, although in the long term resistance may become problematic.
Global pest problems
Significant global pests of cotton include various species of bollworm, such as Pectinophora gossypiella. Sucking pests include cotton stainers; the chili thrips, Scirtothrips dorsalis; and the cotton seed bug, Oxycarenus hyalinipennis. Defoliators include the fall armyworm, Spodoptera frugiperda.
Cotton yield is threatened by the evolution of new biotypes of insects and of new pathogens. Maintaining good yield requires strategies to slow these adversaries' evolution.
North American insect pests
Historically, in North America, one of the most economically destructive pests in cotton production has been the boll weevil. Boll weevils are beetles that feed on cotton, and in the 1950s they drastically slowed the cotton industry's production: "This bone pile of short budgets, loss of market share, failing prices, abandoned farms, and the new immunity of boll weevils generated a feeling of helplessness". Boll weevils first appeared in Beeville, Texas, wiping out field after field of cotton in south Texas. The infestation swept through east Texas and spread to the eastern seaboard, leaving ruin and devastation in its path and causing many cotton farmers to go out of business.
Due to the US Department of Agriculture's highly successful Boll Weevil Eradication Program (BWEP), this pest has been eliminated from cotton in most of the United States. This program, along with the introduction of genetically engineered Bt cotton, has improved the management of a number of pests such as cotton bollworm and pink bollworm. Sucking pests include the cotton stainer, Dysdercus suturellus, and the tarnished plant bug, Lygus lineolaris. A significant cotton disease is caused by Xanthomonas citri subsp. malvacearum.
Harvesting
Most cotton in the United States, Europe and Australia is harvested mechanically, either by a cotton picker, a machine that removes the cotton from the boll without damaging the cotton plant, or by a cotton stripper, which strips the entire boll off the plant. Cotton strippers are used in regions where it is too windy to grow picker varieties of cotton, and usually after application of a chemical defoliant or the natural defoliation that occurs after a freeze. Cotton is a perennial crop in the tropics, and without defoliation or freezing, the plant will continue to grow.
Cotton continues to be picked by hand in developing countries and in Xinjiang, China, allegedly by forced labor. Xinjiang produces over 20% of the world's cotton.
Competition from synthetic fibers
The era of manufactured fibers began with the development of rayon in France in the 1890s. Rayon is derived from natural cellulose and cannot be considered synthetic, but it requires extensive processing and led to the less expensive replacement of more naturally derived materials. A succession of new synthetic fibers was introduced by the chemicals industry in the following decades. Acetate in fiber form was developed in 1924. Nylon, the first fiber synthesized entirely from petrochemicals, was introduced as a sewing thread by DuPont in 1936, followed by DuPont's acrylic in 1944. Some garments were created from fabrics based on these fibers, such as women's hosiery from nylon, but it was not until the introduction of polyester into the fiber marketplace in the early 1950s that the market for cotton came under threat. The rapid uptake of polyester garments in the 1960s caused economic hardship in cotton-exporting economies, especially in Central American countries, such as Nicaragua, where cotton production had boomed tenfold between 1950 and 1965 with the advent of cheap chemical pesticides. Cotton production recovered in the 1970s, but crashed to pre-1960 levels in the early 1990s.
Competition from natural fibers
High water and pesticide use in cotton cultivation has prompted sustainability concerns and created a market for natural fiber alternatives. Other cellulose fibers, such as hemp, are seen as more sustainable options because of higher yields per acre with less water and pesticide use than cotton. Cellulose fiber alternatives have similar characteristics but are not perfect substitutes for cotton textiles with differences in properties like tensile strength and thermal regulation.
Uses
Cotton is used to make a number of textile products. These include terrycloth for highly absorbent bath towels and robes; denim for blue jeans; cambric, popularly used in the manufacture of blue work shirts (from which the term "blue-collar" is derived); and corduroy, seersucker, and cotton twill. Socks, underwear, and most T-shirts are made from cotton. Bed sheets often are made from cotton. It is a preferred material for sheets as it is hypoallergenic, easy to maintain and non-irritant to the skin. Cotton is also used to make yarn for crochet and knitting. Fabric also can be made from recycled or recovered cotton that otherwise would be thrown away during the spinning, weaving, or cutting process. While many fabrics are made completely of cotton, some materials blend cotton with other fibers, including rayon and synthetic fibers such as polyester. It can be used in either knitted or woven fabrics, and it can be blended with elastane to make a stretchier thread for knitted fabrics and apparel such as stretch jeans. Cotton can also be blended with linen, producing fabrics with the benefits of both materials. Linen-cotton blends are wrinkle resistant and retain heat more effectively than linen alone, and are thinner, stronger and lighter than cotton alone.
In addition to the textile industry, cotton is used in fishing nets, coffee filters, tents, explosives manufacture (see nitrocellulose), cotton paper, and in bookbinding. Fire hoses were once made of cotton.
The cottonseed which remains after the cotton is ginned is used to produce cottonseed oil, which, after refining, can be consumed by humans like any other vegetable oil. The cottonseed meal that is left is generally fed to ruminant livestock; the gossypol remaining in the meal is toxic to monogastric animals. Cottonseed hulls can be added to dairy cattle rations for roughage. During the American slavery period, cotton root bark was used in folk remedies as an abortifacient, that is, to induce a miscarriage. Gossypol is one of many substances found in all parts of the cotton plant and has been described by scientists as a "poisonous pigment". It also appears to inhibit the development of sperm or even restrict sperm motility. It is also thought to interfere with the menstrual cycle by restricting the release of certain hormones.
Cotton linters are fine, silky fibers which adhere to the seeds of the cotton plant after ginning. These curly fibers typically are less than long. The term also may apply to the longer textile fiber staple lint as well as the shorter fuzzy fibers from some upland species. Linters are traditionally used in the manufacture of paper and as a raw material in the manufacture of cellulose. In the UK, linters are referred to as "cotton wool".
A less technical use of the term "cotton wool", in the UK and Ireland, is for the refined product known as "absorbent cotton" (or, often, just "cotton") in U.S. usage: fluffy cotton in sheets or balls used for medical, cosmetic, protective packaging, and many other practical purposes. The first medical use of cotton wool was by Sampson Gamgee at the Queen's Hospital (later the General Hospital) in Birmingham, England.
Long staple cotton (LS cotton) is cotton of a longer fibre length and therefore of higher quality, while extra-long staple cotton (ELS cotton) has a longer fibre length still and is of even higher quality. The name "Egyptian cotton" is broadly associated with high quality cottons and is often an LS or (less often) an ELS cotton. Nowadays the name "Egyptian cotton" refers more to the way the cotton is treated and the threads produced than to the location where it is grown. The American cotton variety Pima cotton is often compared to Egyptian cotton, as both are used in high quality bed sheets and other cotton products. While Pima cotton is often grown in the American southwest, the Pima name is now used by cotton-producing nations such as Peru, Australia and Israel. Not all products bearing the Pima name are made with the finest cotton: American-grown ELS Pima cotton is trademarked as Supima cotton. "Kasturi" cotton is a brand-building initiative for Indian long staple cotton by the Indian government, announced in a press release by the Press Information Bureau (PIB).
Cottons have been grown as ornamentals or novelties due to their showy flowers and snowball-like fruit. For example, Jumel's cotton, once an important source of fiber in Egypt, started as an ornamental. However, agricultural authorities such as the Boll Weevil Eradication Program in the United States discourage using cotton as an ornamental, due to concerns about these plants harboring pests injurious to crops.
International trade
The largest producers of cotton, as of 2017, are India and China, with annual production of about and , respectively; most of this production is consumed by their respective textile industries. The largest exporters of raw cotton are the United States, with sales of $4.9 billion, and Africa, with sales of $2.1 billion. The total international trade is estimated to be $12 billion. Africa's share of the cotton trade has doubled since 1980. Neither area has a significant domestic textile industry, textile manufacturing having moved to developing nations in Eastern and South Asia such as India and China. In Africa, cotton is grown by numerous small holders. Dunavant Enterprises, based in Memphis, Tennessee, is the leading cotton broker in Africa, with hundreds of purchasing agents. It operates cotton gins in Uganda, Mozambique, and Zambia. In Zambia, it often offers loans for seed and expenses to the 180,000 small farmers who grow cotton for it, as well as advice on farming methods. Cargill also purchases cotton in Africa for export.
The 25,000 cotton growers in the United States are heavily subsidized at the rate of $2 billion per year although China now provides the highest overall level of cotton sector support. The future of these subsidies is uncertain and has led to anticipatory expansion of cotton brokers' operations in Africa. Dunavant expanded in Africa by buying out local operations. This is only possible in former British colonies and Mozambique; former French colonies continue to maintain tight monopolies, inherited from their former colonialist masters, on cotton purchases at low fixed prices.
To encourage trade and organize discussion about cotton, World Cotton Day is celebrated every October 7.
Cotton is included within World Trade Organization (WTO) activities within two "complementary tracks":
trade aspects, around multilateral negotiations aiming to address distorting subsidies and trade barriers affecting cotton; and
development assistance provided within the cotton production industry and its value chain.
An agreement on trade in cotton formed part of the ministerial declaration concluding the World Trade Organization Ministerial Conference of 2005.
Production
In 2022, world production of cotton was 69.7 million tonnes, led by China with 26% of the total. Other major producers were India (22%) and the United States (12%).
The five leading exporters of cotton in 2019 were (1) India, (2) the United States, (3) China, (4) Brazil, and (5) Pakistan.
In India, Maharashtra (26.63%), Gujarat (17.96%), Andhra Pradesh (13.75%) and Madhya Pradesh are the leading cotton-producing states; these states have a predominantly tropical wet and dry climate.
In the United States, the state of Texas led in total production as of 2004, while the state of California had the highest yield per acre.
Fair trade
Cotton is an enormously important commodity throughout the world. It provides livelihoods for up to 1 billion people, including 100 million smallholder farmers who cultivate cotton. However, many farmers in developing countries receive a low price for their produce, or find it difficult to compete with developed countries.
This has led to an international dispute (see Brazil–United States cotton dispute):
On 27 September 2002, Brazil requested consultations with the US regarding prohibited and actionable subsidies provided to US producers, users and/or exporters of upland cotton, as well as legislation, regulations, statutory instruments and amendments thereto providing such subsidies (including export credits), grants, and any other assistance to the US producers, users and exporters of upland cotton.
On 8 September 2004, the Panel Report recommended that the United States "withdraw" export credit guarantees and payments to domestic users and exporters, and "take appropriate steps to remove the adverse effects or withdraw" the mandatory price-contingent subsidy measures.
While Brazil was fighting the US through the WTO's Dispute Settlement Mechanism against a heavily subsidized cotton industry, a group of four least-developed African countries – Benin, Burkina Faso, Chad, and Mali – also known as the "Cotton-4", have been the leading protagonists for the reduction of US cotton subsidies through negotiations. The four introduced a "Sectoral Initiative in Favour of Cotton", presented by Burkina Faso's President Blaise Compaoré during the Trade Negotiations Committee on 10 June 2003.
In addition to concerns over subsidies, the cotton industries of some countries are criticized for employing child labor and damaging workers' health by exposure to pesticides used in production. The Environmental Justice Foundation has campaigned against the prevalent use of forced child and adult labor in cotton production in Uzbekistan, the world's third largest cotton exporter.
The international production and trade situation has led to "fair trade" cotton clothing and footwear, joining a rapidly growing market for organic clothing, fair fashion or "ethical fashion". The fair trade system was initiated in 2005 with producers from Cameroon, Mali and Senegal, with the Association Max Havelaar France playing a lead role in the establishment of this segment of the fair trade system in conjunction with Fairtrade International and the French organisation Dagris (Développement des Agro-Industries du Sud).
Trading
Cotton is bought and sold by investors and price speculators as a tradable commodity on two different commodity exchanges in the United States of America.
Cotton No. 2 futures contracts are traded on the ICE Futures US Softs (NYI) under the ticker symbol CT. They are delivered every year in March, May, July, October, and December.
Cotton futures contracts are traded on the New York Mercantile Exchange (NYMEX) under the ticker symbol TT. They are delivered every year in March, May, July, October, and December.
Critical temperatures
Favorable travel temperature range: below
Optimum travel temperature:
Glow temperature:
Fire point:
Autoignition temperature:
Autoignition temperature (for oily cotton):
A temperature range of is the optimal range for mold development. At temperatures below , rotting of wet cotton stops. Damaged cotton is sometimes stored at these temperatures to prevent further deterioration.
Egypt has a unique climate in which the soil and the temperature provide an exceptional environment for cotton to grow rapidly.
British standard yarn measures
1 thread =
1 skein or rap = 80 threads ()
1 hank = 7 skeins ()
1 spindle = 18 hanks ()
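Since the measures above are defined purely by their ratios to one another (80 threads to a skein, 7 skeins to a hank, 18 hanks to a spindle), the conversions can be illustrated with a short sketch. The physical length of a single thread is not reproduced here, so the sketch works only in thread counts; the names used are illustrative, not part of any standard.

```python
# Hedged sketch: convert British standard cotton yarn measures into thread
# counts using only the ratios quoted above. Lengths are deliberately omitted.

THREADS_PER_SKEIN = 80      # 1 skein (or rap) = 80 threads
SKEINS_PER_HANK = 7         # 1 hank = 7 skeins
HANKS_PER_SPINDLE = 18      # 1 spindle = 18 hanks

def threads_in(unit: str) -> int:
    """Return the number of threads that make up the given yarn unit."""
    counts = {
        "thread": 1,
        "skein": THREADS_PER_SKEIN,
        "hank": THREADS_PER_SKEIN * SKEINS_PER_HANK,
        "spindle": THREADS_PER_SKEIN * SKEINS_PER_HANK * HANKS_PER_SPINDLE,
    }
    return counts[unit]

for unit in ("thread", "skein", "hank", "spindle"):
    print(f"1 {unit} = {threads_in(unit)} threads")
# 1 hank = 560 threads, 1 spindle = 10080 threads
```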
Fiber properties
Depending upon the origin, the chemical composition of cotton is as follows:
Cellulose 91.00%
Water 7.85%
Protoplasm, pectins 0.55%
Waxes, fatty substances 0.40%
Mineral salts 0.20%
Morphology
Cotton has a more complex fiber structure than most other crops. A mature cotton fiber is a single, elongated, completely dried multilayer cell that develops in the surface layer of the cottonseed. It has the following parts.
The cuticle is the outermost layer. It is a waxy layer that contains pectins and proteinaceous materials.
The primary wall is the original thin cell wall. It is mainly cellulose, made up of a network of fine fibrils (small strands of cellulose).
The winding layer, also called the S1 layer, is the first layer of secondary thickening. It differs in structure from both the primary wall and the remainder of the secondary wall, consisting of fibrils aligned at 40- to 70-degree angles to the fiber axis in an open netting type of pattern.
The secondary wall, also called the S2 layer, consists of concentric layers of cellulose that constitute the main portion of the cotton fiber. After the fiber has attained its maximum diameter, new layers of cellulose are added to form the secondary wall. The fibrils are deposited at 70- to 80-degree angles to the fiber axis, reversing angle at points along the length of the fiber.
The lumen is the hollow canal that runs the length of the fiber. It is filled with living protoplasm during the growth period. After the fiber matures and the boll opens, the protoplast dries up, and the lumen naturally collapses, leaving a central void, or pore space, in each fiber. The lumen wall, also called the S3 layer, separates the secondary wall from the lumen and appears to be more resistant to certain reagents than the secondary wall layers.
Dead cotton
Dead cotton is a term for unripe, immature cotton fibers that do not absorb dye; they have poor dye affinity and appear as white specks on a dyed fabric. When cotton fibers are analyzed and assessed through a microscope, dead fibers appear differently: they have thin cell walls, whereas mature fibers have more cellulose and a greater degree of cell wall thickening.
Genome
There is a public effort to sequence the genome of cotton. It was started in 2007 by a consortium of public researchers. Their aim is to sequence the genome of cultivated, tetraploid cotton. "Tetraploid" means that its nucleus has two separate genomes, called A and D. The consortium agreed to first sequence the D-genome wild relative of cultivated cotton (G. raimondii, a Central American species) because it is small and has few repetitive elements. It has nearly one-third of the bases of tetraploid cotton, and each chromosome occurs only once. Then, the A genome of G. arboreum would be sequenced. Its genome is roughly twice that of G. raimondii. Part of the difference in size is due to the amplification of retrotransposons (GORGE). After both diploid genomes are assembled, they would be used as models for sequencing the genomes of tetraploid cultivated species. Without knowing the diploid genomes, the euchromatic DNA sequences of AD genomes would co-assemble, and their repetitive elements would assemble independently into A and D sequences respectively. There would be no way to untangle the mess of AD sequences without comparing them to their diploid counterparts.
The public sector effort continues with the goal to create a high-quality, draft genome sequence from reads generated by all sources. The effort has generated Sanger reads of BACs, fosmids, and plasmids, as well as 454 reads. These later types of reads will be instrumental in assembling an initial draft of the D genome. In 2010, the companies Monsanto and Illumina completed enough Illumina sequencing to cover the D genome of G. raimondii about 50x. They announced that they would donate their raw reads to the public. This public relations effort gave them some recognition for sequencing the cotton genome. Once the D genome is assembled from all of this raw material, it will undoubtedly assist in the assembly of the AD genomes of cultivated varieties of cotton, but much work remains.
As of 2014, at least one assembled cotton genome had been reported.
| Technology | Materials | null |
36808 | https://en.wikipedia.org/wiki/Heart | Heart | The heart is a muscular organ found in humans and other animals. This organ pumps blood through the blood vessels. The heart and blood vessels together make up the circulatory system. The pumped blood carries oxygen and nutrients to the tissues, while carrying metabolic waste such as carbon dioxide to the lungs. In humans, the heart is approximately the size of a closed fist and is located between the lungs, in the middle compartment of the chest, called the mediastinum.
In humans, the heart is divided into four chambers: upper left and right atria and lower left and right ventricles. Commonly, the right atrium and ventricle are referred together as the right heart and their left counterparts as the left heart. In a healthy heart, blood flows one way through the heart due to heart valves, which prevent backflow. The heart is enclosed in a protective sac, the pericardium, which also contains a small amount of fluid. The wall of the heart is made up of three layers: epicardium, myocardium, and endocardium.
The heart pumps blood with a rhythm determined by a group of pacemaker cells in the sinoatrial node. These generate an electric current that causes the heart to contract, traveling through the atrioventricular node and along the conduction system of the heart. In humans, deoxygenated blood enters the heart through the right atrium from the superior and inferior venae cavae and passes to the right ventricle. From here, it is pumped into pulmonary circulation to the lungs, where it receives oxygen and gives off carbon dioxide. Oxygenated blood then returns to the left atrium, passes through the left ventricle and is pumped out through the aorta into systemic circulation, traveling through arteries, arterioles, and capillaries—where nutrients and other substances are exchanged between blood vessels and cells, losing oxygen and gaining carbon dioxide—before being returned to the heart through venules and veins. The adult heart beats at a resting rate close to 72 beats per minute. Exercise temporarily increases the rate, but lowers it in the long term, and is good for heart health.
Cardiovascular diseases are the most common cause of death globally as of 2008, accounting for 30% of all human deaths. Of these, more than three-quarters are a result of coronary artery disease and stroke. Risk factors include: smoking, being overweight, little exercise, high cholesterol, high blood pressure, and poorly controlled diabetes, among others. Cardiovascular diseases frequently have no symptoms but may cause chest pain or shortness of breath. Heart disease is often diagnosed by taking a medical history, listening to the heart sounds with a stethoscope, and with tests such as an ECG and an echocardiogram, which uses ultrasound. Specialists who focus on diseases of the heart are called cardiologists, although many specialties of medicine may be involved in treatment.
Structure
Location and shape
The human heart is situated in the mediastinum, at the level of thoracic vertebrae T5–T8. A double-membraned sac called the pericardium surrounds the heart and attaches to the mediastinum. The back surface of the heart lies near the vertebral column, and the front surface, known as the sternocostal surface, sits behind the sternum and rib cartilages. The upper part of the heart is the attachment point for several large blood vessels—the venae cavae, aorta and pulmonary trunk. The upper part of the heart is located at the level of the third costal cartilage. The lower tip of the heart, the apex, lies to the left of the sternum (8 to 9 cm from the midsternal line) between the junction of the fourth and fifth ribs near their articulation with the costal cartilages.
The largest part of the heart is usually slightly offset to the left side of the chest (levocardia), and the heartbeat is felt on the left because the left heart is stronger and larger, since it pumps to all body parts. In a rare congenital disorder (dextrocardia) the heart is instead offset to the right side of the chest. Because the heart is between the lungs, the left lung is smaller than the right lung and has a cardiac notch in its border to accommodate the heart.
The heart is cone-shaped, with its base positioned upwards and tapering down to the apex. An adult heart has a mass of 250–350 grams (9–12 oz). The heart is often described as the size of a fist: 12 cm (5 in) in length, 8 cm (3.5 in) wide, and 6 cm (2.5 in) in thickness, although this description is disputed, as the heart is likely to be slightly larger. Well-trained athletes can have much larger hearts due to the effects of exercise on the heart muscle, similar to the response of skeletal muscle.
Chambers
The heart has four chambers, two upper atria, the receiving chambers, and two lower ventricles, the discharging chambers. The atria open into the ventricles via the atrioventricular valves, present in the atrioventricular septum. This distinction is visible also on the surface of the heart as the coronary sulcus. There is an ear-shaped structure in the upper right atrium called the right atrial appendage, or auricle, and another in the upper left atrium, the left atrial appendage. The right atrium and the right ventricle together are sometimes referred to as the right heart. Similarly, the left atrium and the left ventricle together are sometimes referred to as the left heart. The ventricles are separated from each other by the interventricular septum, visible on the surface of the heart as the anterior longitudinal sulcus and the posterior interventricular sulcus.
The fibrous cardiac skeleton gives structure to the heart. It forms the atrioventricular septum, which separates the atria from the ventricles, and the fibrous rings, which serve as bases for the four heart valves. The cardiac skeleton also provides an important boundary in the heart's electrical conduction system since collagen cannot conduct electricity. The interatrial septum separates the atria, and the interventricular septum separates the ventricles. The interventricular septum is much thicker than the interatrial septum since the ventricles need to generate greater pressure when they contract.
Valves
The heart has four valves, which separate its chambers. One valve lies between each atrium and ventricle, and one valve rests at the exit of each ventricle.
The valves between the atria and ventricles are called the atrioventricular valves. Between the right atrium and the right ventricle is the tricuspid valve. The tricuspid valve has three cusps, which connect to chordae tendineae and three papillary muscles named the anterior, posterior, and septal muscles, after their relative positions. The mitral valve lies between the left atrium and left ventricle. It is also known as the bicuspid valve due to its having two cusps, an anterior and a posterior cusp. These cusps are also attached via chordae tendineae to two papillary muscles projecting from the ventricular wall.
The papillary muscles extend from the walls of the heart to the valves by tendinous cords called chordae tendineae. These muscles prevent the valves from falling too far back when they close. During the relaxation phase of the cardiac cycle, the papillary muscles are also relaxed and the tension on the chordae tendineae is slight. As the heart chambers contract, so do the papillary muscles. This creates tension on the chordae tendineae, helping to hold the cusps of the atrioventricular valves in place and preventing them from being blown back into the atria.
Two additional semilunar valves sit at the exit of each of the ventricles. The pulmonary valve is located at the base of the pulmonary artery. This has three cusps which are not attached to any papillary muscles. When the ventricle relaxes blood flows back into the ventricle from the artery and this flow of blood fills the pocket-like valve, pressing against the cusps which close to seal the valve. The semilunar aortic valve is at the base of the aorta and also is not attached to papillary muscles. This too has three cusps which close with the pressure of the blood flowing back from the aorta.
Right heart
The right heart consists of two chambers, the right atrium and the right ventricle, separated by a valve, the tricuspid valve.
The right atrium receives blood almost continuously from the body's two major veins, the superior and inferior venae cavae. A small amount of blood from the coronary circulation also drains into the right atrium via the coronary sinus, which is immediately above and to the middle of the opening of the inferior vena cava. In the wall of the right atrium is an oval-shaped depression known as the fossa ovalis, which is a remnant of an opening in the fetal heart known as the foramen ovale. Most of the internal surface of the right atrium is smooth, the depression of the fossa ovalis is medial, and the anterior surface has prominent ridges of pectinate muscles, which are also present in the right atrial appendage.
The right atrium is connected to the right ventricle by the tricuspid valve. The walls of the right ventricle are lined with trabeculae carneae, ridges of cardiac muscle covered by endocardium. In addition to these muscular ridges, a band of cardiac muscle, also covered by endocardium, known as the moderator band reinforces the thin walls of the right ventricle and plays a crucial role in cardiac conduction. It arises from the lower part of the interventricular septum and crosses the interior space of the right ventricle to connect with the inferior papillary muscle. The right ventricle tapers into the pulmonary trunk, into which it ejects blood when contracting. The pulmonary trunk branches into the left and right pulmonary arteries that carry the blood to each lung. The pulmonary valve lies between the right heart and the pulmonary trunk.
Left heart
The left heart has two chambers: the left atrium and the left ventricle, separated by the mitral valve.
The left atrium receives oxygenated blood back from the lungs via the four pulmonary veins. The left atrium has an outpouching called the left atrial appendage. Like the right atrium, the left atrium is lined by pectinate muscles. The left atrium is connected to the left ventricle by the mitral valve.
The left ventricle is much thicker as compared with the right, due to the greater force needed to pump blood to the entire body. Like the right ventricle, the left also has trabeculae carneae, but there is no moderator band. The left ventricle pumps blood to the body through the aortic valve and into the aorta. Two small openings above the aortic valve carry blood to the heart muscle; the left coronary artery is above the left cusp of the valve, and the right coronary artery is above the right cusp.
Wall
The heart wall is made up of three layers: the inner endocardium, middle myocardium and outer epicardium. These are surrounded by a double-membraned sac called the pericardium.
The innermost layer of the heart is called the endocardium. It is made up of a lining of simple squamous epithelium and covers heart chambers and valves. It is continuous with the endothelium of the veins and arteries of the heart, and is joined to the myocardium with a thin layer of connective tissue. The endocardium, by secreting endothelins, may also play a role in regulating the contraction of the myocardium.
The middle layer of the heart wall is the myocardium, which is the cardiac muscle—a layer of involuntary striated muscle tissue surrounded by a framework of collagen. The cardiac muscle pattern is elegant and complex, as the muscle cells swirl and spiral around the chambers of the heart, with the outer muscles forming a figure 8 pattern around the atria and around the bases of the great vessels and the inner muscles, forming a figure 8 around the two ventricles and proceeding toward the apex. This complex swirling pattern allows the heart to pump blood more effectively.
There are two types of cells in cardiac muscle: muscle cells which have the ability to contract easily, and pacemaker cells of the conducting system. The muscle cells make up the bulk (99%) of cells in the atria and ventricles. These contractile cells are connected by intercalated discs which allow a rapid response to impulses of action potential from the pacemaker cells. The intercalated discs allow the cells to act as a syncytium and enable the contractions that pump blood through the heart and into the major arteries. The pacemaker cells make up 1% of cells and form the conduction system of the heart. They are generally much smaller than the contractile cells and have few myofibrils which gives them limited contractibility. Their function is similar in many respects to neurons. Cardiac muscle tissue has autorhythmicity, the unique ability to initiate a cardiac action potential at a fixed rate—spreading the impulse rapidly from cell to cell to trigger the contraction of the entire heart.
There are specific proteins expressed in cardiac muscle cells. These are mostly associated with muscle contraction, and bind with actin, myosin, tropomyosin, and troponin. They include MYH6, ACTC1, TNNI3, CDH2 and PKP2. Other proteins expressed are MYH7 and LDB3 that are also expressed in skeletal muscle.
Pericardium
The pericardium is the sac that surrounds the heart. The tough outer surface of the pericardium is called the fibrous membrane. This is lined by a double inner membrane called the serous membrane that produces pericardial fluid to lubricate the surface of the heart. The part of the serous membrane attached to the fibrous membrane is called the parietal pericardium, while the part of the serous membrane attached to the heart is known as the visceral pericardium. The pericardium is present in order to lubricate its movement against other structures within the chest, to keep the heart's position stabilised within the chest, and to protect the heart from infection.
Coronary circulation
Heart tissue, like all cells in the body, needs to be supplied with oxygen, nutrients and a way of removing metabolic wastes. This is achieved by the coronary circulation, which includes arteries, veins, and lymphatic vessels. Blood flow through the coronary vessels occurs in peaks and troughs relating to the heart muscle's relaxation or contraction.
Heart tissue receives blood from two arteries which arise just above the aortic valve. These are the left main coronary artery and the right coronary artery. The left main coronary artery splits shortly after leaving the aorta into two vessels, the left anterior descending and the left circumflex artery. The left anterior descending artery supplies heart tissue and the front, outer side, and septum of the left ventricle. It does this by branching into smaller arteries—diagonal and septal branches. The left circumflex supplies the back and underneath of the left ventricle. The right coronary artery supplies the right atrium, right ventricle, and lower posterior sections of the left ventricle. The right coronary artery also supplies blood to the atrioventricular node (in about 90% of people) and the sinoatrial node (in about 60% of people). The right coronary artery runs in a groove at the back of the heart and the left anterior descending artery runs in a groove at the front. There is significant variation between people in the anatomy of the arteries that supply the heart. The arteries divide at their furthest reaches into smaller branches that join at the edges of each arterial distribution.
The coronary sinus is a large vein that drains into the right atrium, and receives most of the venous drainage of the heart. It receives blood from the great cardiac vein (receiving the left atrium and both ventricles), the posterior cardiac vein (draining the back of the left ventricle), the middle cardiac vein (draining the bottom of the left and right ventricles), and small cardiac veins. The anterior cardiac veins drain the front of the right ventricle and drain directly into the right atrium.
Small lymphatic networks called plexuses exist beneath each of the three layers of the heart. These networks collect into a main left and a main right trunk, which travel up the groove between the ventricles that exists on the heart's surface, receiving smaller vessels as they travel up. These vessels then travel into the atrioventricular groove, and receive a third vessel which drains the section of the left ventricle sitting on the diaphragm. The left vessel joins with this third vessel, and travels along the pulmonary artery and left atrium, ending in the inferior tracheobronchial node. The right vessel travels along the right atrium and the part of the right ventricle sitting on the diaphragm. It usually then travels in front of the ascending aorta and then ends in a brachiocephalic node.
Nerve supply
The heart receives nerve signals from the vagus nerve and from nerves arising from the sympathetic trunk. These nerves act to influence, but not control, the heart rate. Sympathetic nerves also influence the force of heart contraction. Signals that travel along these nerves arise from two paired cardiovascular centres in the medulla oblongata. The vagus nerve of the parasympathetic nervous system acts to decrease the heart rate, and nerves from the sympathetic trunk act to increase the heart rate. These nerves form a network of nerves that lies over the heart called the cardiac plexus.
The vagus nerve is a long, wandering nerve that emerges from the brainstem and provides parasympathetic stimulation to a large number of organs in the thorax and abdomen, including the heart. The nerves from the sympathetic trunk emerge through the T1–T4 thoracic ganglia and travel to both the sinoatrial and atrioventricular nodes, as well as to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. This shortens the repolarisation period, thus speeding the rate of depolarisation and contraction, which results in an increased heart rate. It opens chemical or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions. Norepinephrine binds to the beta–1 receptor.
Development
The heart is the first functional organ to develop and starts to beat and pump blood at about three weeks into embryogenesis. This early start is crucial for subsequent embryonic and prenatal development.
The heart derives from splanchnopleuric mesenchyme in the neural plate which forms the cardiogenic region. Two endocardial tubes form here that fuse to form a primitive heart tube known as the tubular heart. Between the third and fourth week, the heart tube lengthens, and begins to fold to form an S-shape within the pericardium. This places the chambers and major vessels into the correct alignment for the developed heart. Further development will include the formation of the septa and the valves and the remodeling of the heart chambers. By the end of the fifth week, the septa are complete, and by the ninth week, the heart valves are complete.
Before the fifth week, there is an opening in the fetal heart known as the foramen ovale. The foramen ovale allowed blood in the fetal heart to pass directly from the right atrium to the left atrium, allowing some blood to bypass the lungs. Within seconds after birth, a flap of tissue known as the septum primum that previously acted as a valve closes the foramen ovale and establishes the typical cardiac circulation pattern. A depression in the surface of the right atrium remains where the foramen ovale was, called the fossa ovalis.
The embryonic heart begins beating at around 22 days after conception (5 weeks after the last normal menstrual period, LMP). It starts to beat at a rate near to the mother's, which is about 75–80 beats per minute (bpm). The embryonic heart rate then accelerates and reaches a peak rate of 165–185 bpm early in the 7th week (early 9th week after the LMP). After 9 weeks (start of the fetal stage) it starts to decelerate, slowing to around 145 (±25) bpm at birth. There is no difference in female and male heart rates before birth.
Physiology
Blood flow
The heart functions as a pump in the circulatory system to provide a continuous flow of blood throughout the body. This circulation consists of the systemic circulation to and from the body and the pulmonary circulation to and from the lungs. Blood in the pulmonary circulation exchanges carbon dioxide for oxygen in the lungs through the process of respiration. The systemic circulation then transports oxygen to the body and returns carbon dioxide and relatively deoxygenated blood to the heart for transfer to the lungs.
The right heart collects deoxygenated blood from two large veins, the superior and inferior venae cavae. Blood collects in the right and left atrium continuously. The superior vena cava drains blood from above the diaphragm and empties into the upper back part of the right atrium. The inferior vena cava drains the blood from below the diaphragm and empties into the back part of the atrium below the opening for the superior vena cava. Immediately above and to the middle of the opening of the inferior vena cava is the opening of the thin-walled coronary sinus. Additionally, the coronary sinus returns deoxygenated blood from the myocardium to the right atrium. The blood collects in the right atrium. When the right atrium contracts, the blood is pumped through the tricuspid valve into the right ventricle. As the right ventricle contracts, the tricuspid valve closes and the blood is pumped into the pulmonary trunk through the pulmonary valve. The pulmonary trunk divides into pulmonary arteries and progressively smaller arteries throughout the lungs, until it reaches capillaries. As these pass by alveoli carbon dioxide is exchanged for oxygen. This happens through the passive process of diffusion.
In the left heart, oxygenated blood is returned to the left atrium via the pulmonary veins. It is then pumped into the left ventricle through the mitral valve and into the aorta through the aortic valve for systemic circulation. The aorta is a large artery that branches into many smaller arteries, arterioles, and ultimately capillaries. In the capillaries, oxygen and nutrients from blood are supplied to body cells for metabolism, and exchanged for carbon dioxide and waste products. Capillary blood, now deoxygenated, travels into venules and veins that ultimately collect in the superior and inferior vena cavae, and into the right heart.
Cardiac cycle
The cardiac cycle is the sequence of events in which the heart contracts and relaxes with every heartbeat. The period of time during which the ventricles contract, forcing blood out into the aorta and main pulmonary artery, is known as systole, while the period during which the ventricles relax and refill with blood is known as diastole. The atria and ventricles work in concert, so in systole when the ventricles are contracting, the atria are relaxed and collecting blood. When the ventricles are relaxed in diastole, the atria contract to pump blood to the ventricles. This coordination ensures blood is pumped efficiently to the body.
At the beginning of the cardiac cycle, the ventricles are relaxing. As they do so, they are filled by blood passing through the open mitral and tricuspid valves. After the ventricles have completed most of their filling, the atria contract, forcing further blood into the ventricles and priming the pump. Next, the ventricles start to contract. As the pressure rises within the cavities of the ventricles, the mitral and tricuspid valves are forced shut. As the pressure within the ventricles rises further, exceeding the pressure within the aorta and pulmonary arteries, the aortic and pulmonary valves open. Blood is ejected from the heart, causing the pressure within the ventricles to fall. Simultaneously, the atria refill as blood flows into the right atrium through the superior and inferior vena cavae, and into the left atrium through the pulmonary veins. Finally, when the pressure within the ventricles falls below the pressure within the aorta and pulmonary arteries, the aortic and pulmonary valves close. The ventricles start to relax, the mitral and tricuspid valves open, and the cycle begins again.
Cardiac output
Cardiac output (CO) is a measurement of the amount of blood pumped by each ventricle (stroke volume) in one minute. This is calculated by multiplying the stroke volume (SV) by the beats per minute of the heart rate (HR). So that: CO = SV x HR.
The cardiac output is normalized to body size through body surface area and is called the cardiac index.
The average cardiac output, using an average stroke volume of about 70mL, is 5.25 L/min, with a normal range of 4.0–8.0 L/min. The stroke volume is normally measured using an echocardiogram and can be influenced by the size of the heart, physical and mental condition of the individual, sex, contractility, duration of contraction, preload and afterload.
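As a worked illustration of the formula CO = SV x HR, the sketch below reproduces the average figures quoted above (a stroke volume of about 70 mL at a resting rate of 75 beats per minute gives 5.25 L/min) and also shows the cardiac index normalisation; the body surface area used is an assumed example value and the function names are illustrative only.

```python
# Hedged sketch of cardiac output (CO = SV x HR) and cardiac index.
# The stroke volume and heart rate match the averages quoted in the text;
# the body surface area below is an assumed example, not a reference value.

def cardiac_output_l_per_min(stroke_volume_ml: float, heart_rate_bpm: float) -> float:
    """Cardiac output in litres per minute."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def cardiac_index(co_l_per_min: float, body_surface_area_m2: float) -> float:
    """Cardiac output normalised to body surface area (L/min per m^2)."""
    return co_l_per_min / body_surface_area_m2

co = cardiac_output_l_per_min(stroke_volume_ml=70, heart_rate_bpm=75)
print(co)                       # 5.25 L/min, within the normal 4.0-8.0 L/min range
print(cardiac_index(co, 1.9))   # ~2.76 L/min/m^2 for an assumed BSA of 1.9 m^2
```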
Preload refers to the filling pressure of the atria at the end of diastole, when the ventricles are at their fullest. A main factor is how long it takes the ventricles to fill: if the ventricles contract more frequently, then there is less time to fill and the preload will be less. Preload can also be affected by a person's blood volume. The force of each contraction of the heart muscle is proportional to the preload, described as the Frank-Starling mechanism. This states that the force of contraction is directly proportional to the initial length of muscle fiber, meaning a ventricle will contract more forcefully, the more it is stretched.
Afterload, or how much pressure the heart must generate to eject blood at systole, is influenced by vascular resistance. It can be influenced by narrowing of the heart valves (stenosis) or contraction or relaxation of the peripheral blood vessels.
The strength of heart muscle contractions controls the stroke volume. This can be influenced positively or negatively by agents termed inotropes. These agents can be a result of changes within the body, or be given as drugs as part of treatment for a medical disorder, or as a form of life support, particularly in intensive care units. Inotropes that increase the force of contraction are "positive" inotropes, and include sympathetic agents such as adrenaline, noradrenaline and dopamine. "Negative" inotropes decrease the force of contraction and include calcium channel blockers.
Electrical conduction
The normal rhythmical heart beat, called sinus rhythm, is established by the heart's own pacemaker, the sinoatrial node (also known as the sinus node or the SA node). Here an electrical signal is created that travels through the heart, causing the heart muscle to contract. The sinoatrial node is found in the upper part of the right atrium near to the junction with the superior vena cava. The electrical signal generated by the sinoatrial node travels through the right atrium in a radial way that is not completely understood. It travels to the left atrium via Bachmann's bundle, such that the muscles of the left and right atria contract together. The signal then travels to the atrioventricular node. This is found at the bottom of the right atrium in the atrioventricular septum, the boundary between the right atrium and the left ventricle. The septum is part of the cardiac skeleton, tissue within the heart that the electrical signal cannot pass through, which forces the signal to pass through the atrioventricular node only. The signal then travels along the bundle of His to left and right bundle branches through to the ventricles of the heart. In the ventricles the signal is carried by specialized tissue called the Purkinje fibers which then transmit the electric charge to the heart muscle.
Heart rate
The normal resting heart rate is called the sinus rhythm, created and sustained by the sinoatrial node, a group of pacemaking cells found in the wall of the right atrium. Cells in the sinoatrial node do this by creating an action potential. The cardiac action potential is created by the movement of specific electrolytes into and out of the pacemaker cells. The action potential then spreads to nearby cells.
When the sinoatrial cells are resting, they have a negative charge on their membranes. A rapid influx of sodium ions causes the membrane's charge to become positive; this is called depolarisation and occurs spontaneously. Once the cell has a sufficiently high charge, the sodium channels close and calcium ions then begin to enter the cell, shortly after which potassium begins to leave it. All the ions travel through ion channels in the membrane of the sinoatrial cells. The potassium and calcium start to move out of and into the cell only once it has a sufficiently high charge, and so are called voltage-gated. Shortly after this, the calcium channels close and potassium channels open, allowing potassium to leave the cell. This causes the cell to have a negative resting charge and is called repolarisation. When the membrane potential reaches approximately −60 mV, the potassium channels close and the process may begin again.
The ions move from areas where they are concentrated to where they are not. For this reason sodium moves into the cell from outside, and potassium moves from within the cell to outside the cell. Calcium also plays a critical role. Their influx through slow channels means that the sinoatrial cells have a prolonged "plateau" phase when they have a positive charge. A part of this is called the absolute refractory period. Calcium ions also combine with the regulatory protein troponin C in the troponin complex to enable contraction of the cardiac muscle, and separate from the protein to allow relaxation.
The adult resting heart rate ranges from 60 to 100 bpm. The resting heart rate of a newborn can be 129 beats per minute (bpm) and this gradually decreases until maturity. An athlete's heart rate can be lower than 60 bpm. During exercise the rate can be 150 bpm with maximum rates reaching from 200 to 220 bpm.
Influences
The normal sinus rhythm of the heart, giving the resting heart rate, is influenced by a number of factors. The cardiovascular centres in the brainstem control the sympathetic and parasympathetic influences to the heart through the vagus nerve and sympathetic trunk. These cardiovascular centres receive input from a series of receptors including baroreceptors, sensing the stretching of blood vessels and chemoreceptors, sensing the amount of oxygen and carbon dioxide in the blood and its pH. Through a series of reflexes these help regulate and sustain blood flow.
Baroreceptors are stretch receptors located in the aortic sinus, carotid bodies, the venae cavae, and other locations, including pulmonary vessels and the right side of the heart itself. Baroreceptors fire at a rate determined by how much they are stretched, which is influenced by blood pressure, level of physical activity, and the relative distribution of blood. With increased pressure and stretch, the rate of baroreceptor firing increases, and the cardiac centers decrease sympathetic stimulation and increase parasympathetic stimulation. As pressure and stretch decrease, the rate of baroreceptor firing decreases, and the cardiac centers increase sympathetic stimulation and decrease parasympathetic stimulation. There is a similar reflex, called the atrial reflex or Bainbridge reflex, associated with varying rates of blood flow to the atria. Increased venous return stretches the walls of the atria where specialized baroreceptors are located. However, as the atrial baroreceptors increase their rate of firing and as they stretch due to the increased blood pressure, the cardiac center responds by increasing sympathetic stimulation and inhibiting parasympathetic stimulation to increase heart rate. The opposite is also true. Chemoreceptors present in the carotid body or adjacent to the aorta in an aortic body respond to the blood's oxygen, carbon dioxide levels. Low oxygen or high carbon dioxide will stimulate firing of the receptors.
Exercise and fitness levels, age, body temperature, basal metabolic rate, and even a person's emotional state can all affect the heart rate. High levels of the hormones epinephrine, norepinephrine, and thyroid hormones can increase the heart rate. The levels of electrolytes including calcium, potassium, and sodium can also influence the speed and regularity of the heart rate; low blood oxygen, low blood pressure and dehydration may increase it.
Clinical significance
Diseases
Cardiovascular diseases, which include diseases of the heart, are the leading cause of death worldwide. The majority of cardiovascular disease is noncommunicable and related to lifestyle and other factors, becoming more prevalent with ageing. Heart disease is a major cause of death, accounting for an average of 30% of all deaths in 2008, globally. This rate varies from a low of 28% to a high of 40% in high-income countries. Doctors that specialise in the heart are called cardiologists. Many other medical professionals are involved in treating diseases of the heart, including doctors, cardiothoracic surgeons, intensivists, and allied health practitioners including physiotherapists and dieticians.
Ischemic heart disease
Coronary artery disease, also known as ischemic heart disease, is caused by atherosclerosis—a build-up of fatty material along the inner walls of the arteries. These fatty deposits known as atherosclerotic plaques narrow the coronary arteries, and if severe may reduce blood flow to the heart. If a narrowing (or stenosis) is relatively minor then the patient may not experience any symptoms. Severe narrowings may cause chest pain (angina) or breathlessness during exercise or even at rest. The thin covering of an atherosclerotic plaque can rupture, exposing the fatty centre to the circulating blood. In this case a clot or thrombus can form, blocking the artery, and restricting blood flow to an area of heart muscle causing a myocardial infarction (a heart attack) or unstable angina. In the worst case this may cause cardiac arrest, a sudden and utter loss of output from the heart. Obesity, high blood pressure, uncontrolled diabetes, smoking and high cholesterol can all increase the risk of developing atherosclerosis and coronary artery disease.
Heart failure
Heart failure is defined as a condition in which the heart is unable to pump enough blood to meet the demands of the body. Patients with heart failure may experience breathlessness especially when lying flat, as well as ankle swelling, known as peripheral oedema. Heart failure is the result of many diseases affecting the heart, but is most commonly associated with ischemic heart disease, valvular heart disease, or high blood pressure. Less common causes include various cardiomyopathies. Heart failure is frequently associated with weakness of the heart muscle in the ventricles (systolic heart failure), but can also be seen in patients with heart muscle that is strong but stiff (diastolic heart failure). The condition may affect the left ventricle (causing predominantly breathlessness), the right ventricle (causing predominantly swelling of the legs and an elevated jugular venous pressure), or both ventricles. Patients with heart failure are at higher risk of developing dangerous heart rhythm disturbances or arrhythmias.
Cardiomyopathies
Cardiomyopathies are diseases affecting the muscle of the heart. Some cause abnormal thickening of the heart muscle (hypertrophic cardiomyopathy), some cause the heart to abnormally expand and weaken (dilated cardiomyopathy), some cause the heart muscle to become stiff and unable to fully relax between contractions (restrictive cardiomyopathy) and some make the heart prone to abnormal heart rhythms (arrhythmogenic cardiomyopathy). These conditions are often genetic and can be inherited, but some such as dilated cardiomyopathy may be caused by damage from toxins such as alcohol. Some cardiomyopathies such as hypertrophic cardiomyopathy are linked to a higher risk of sudden cardiac death, particularly in athletes. Many cardiomyopathies can lead to heart failure in the later stages of the disease.
Valvular heart disease
Healthy heart valves allow blood to flow easily in one direction, and prevent it from flowing in the other direction. A diseased heart valve may have a narrow opening (stenosis), that restricts the flow of blood in the forward direction. A valve may otherwise be leaky, allowing blood to leak in the reverse direction (regurgitation). Valvular heart disease may cause breathlessness, blackouts, or chest pain, but may be asymptomatic and only detected on a routine examination by hearing abnormal heart sounds or a heart murmur. In the developed world, valvular heart disease is most commonly caused by degeneration secondary to old age, but may also be caused by infection of the heart valves (endocarditis). In some parts of the world rheumatic heart disease is a major cause of valvular heart disease, typically leading to mitral or aortic stenosis and caused by the body's immune system reacting to a streptococcal throat infection.
Cardiac arrhythmias
While in the healthy heart, waves of electrical impulses originate in the sinus node before spreading to the rest of the atria, the atrioventricular node, and finally the ventricles (referred to as a normal sinus rhythm), this normal rhythm can be disrupted. Abnormal heart rhythms or arrhythmias may be asymptomatic or may cause palpitations, blackouts, or breathlessness. Some types of arrhythmia such as atrial fibrillation increase the long term risk of stroke.
Some arrhythmias cause the heart to beat abnormally slowly, referred to as a bradycardia or bradyarrhythmia. This may be caused by an abnormally slow sinus node or damage within the cardiac conduction system (heart block). In other arrhythmias the heart may beat abnormally rapidly, referred to as a tachycardia or tachyarrhythmia. These arrhythmias can take many forms and can originate from different structures within the heart—some arise from the atria (e.g. atrial flutter), some from the atrioventricular node (e.g. AV nodal re-entrant tachycardia) whilst others arise from the ventricles (e.g. ventricular tachycardia). Some tachyarrhythmias are caused by scarring within the heart (e.g. some forms of ventricular tachycardia), others by an irritable focus (e.g. focal atrial tachycardia), while others are caused by additional abnormal conduction tissue that has been present since birth (e.g. Wolff-Parkinson-White syndrome). The most dangerous form of heart racing is ventricular fibrillation, in which the ventricles quiver rather than contract, and which if untreated is rapidly fatal.
Pericardial disease
The sac which surrounds the heart, called the pericardium, can become inflamed in a condition known as pericarditis. This condition typically causes chest pain that may spread to the back, and is often caused by a viral infection (glandular fever, cytomegalovirus, or coxsackievirus). Fluid can build up within the pericardial sac, referred to as a pericardial effusion. Pericardial effusions often occur secondary to pericarditis, kidney failure, or tumours, and frequently do not cause any symptoms. However, large effusions or effusions which accumulate rapidly can compress the heart in a condition known as cardiac tamponade, causing breathlessness and potentially fatal low blood pressure. Fluid can be removed from the pericardial space for diagnosis or to relieve tamponade using a syringe in a procedure called pericardiocentesis.
Congenital heart disease
Some people are born with hearts that are abnormal and these abnormalities are known as congenital heart defects. They may range from the relatively minor (e.g. patent foramen ovale, arguably a variant of normal) to serious life-threatening abnormalities (e.g. hypoplastic left heart syndrome). Common abnormalities include those that affect the heart muscle that separates the two sides of the heart (a "hole in the heart", e.g. ventricular septal defect). Other defects include those affecting the heart valves (e.g. congenital aortic stenosis), or the main blood vessels that lead from the heart (e.g. coarctation of the aorta). More complex syndromes are seen that affect more than one part of the heart (e.g. Tetralogy of Fallot).
Some congenital heart defects allow blood that is low in oxygen that would normally be returned to the lungs to instead be pumped back to the rest of the body. These are known as cyanotic congenital heart defects and are often more serious. Major congenital heart defects are often picked up in childhood, shortly after birth, or even before a child is born (e.g. transposition of the great arteries), causing breathlessness and a lower rate of growth. More minor forms of congenital heart disease may remain undetected for many years and only reveal themselves in adult life (e.g., atrial septal defect).
Channelopathies
Channelopathies can be categorized based on the organ system they affect. In the cardiovascular system, the electrical impulse required for each heart beat is provided by the electrochemical gradient of each heart cell. Because the beating of the heart depends on the proper movement of ions across the surface membrane, cardiac ion channelopathies form a major group of heart diseases. Cardiac ion channelopathies may explain some of the cases of sudden death syndrome and sudden arrhythmic death syndrome. Long QT syndrome is the most common form of cardiac channelopathy.
Long QT syndrome (LQTS) – mostly hereditary; seen on the EKG as a prolonged corrected QT interval (QTc). It is characterized by fainting and by sudden, life-threatening heart rhythm disturbances – Torsades de pointes type ventricular tachycardia and ventricular fibrillation – with a risk of sudden cardiac death.
Short QT syndrome.
Catecholaminergic polymorphic ventricular tachycardia (CPVT).
Progressive cardiac conduction defect (PCCD).
Early repolarisation syndrome (benign early repolarisation, BER) – common in younger and active people, especially men, because it is affected by higher testosterone levels, which cause increased potassium currents, which in turn cause an elevation of the J-point on the EKG. In very rare cases, it can lead to ventricular fibrillation and death.
Brugada syndrome – a genetic disorder characterized by an abnormal EKG and is one of the most common causes of sudden cardiac death in young men.
Diagnosis
Heart disease is diagnosed by the taking of a medical history, a cardiac examination, and further investigations, including blood tests, echocardiograms, electrocardiograms, and imaging. Other invasive procedures such as cardiac catheterisation can also play a role.
Examination
The cardiac examination includes inspection, feeling the chest with the hands (palpation) and listening with a stethoscope (auscultation). It involves assessment of signs that may be visible on a person's hands (such as splinter haemorrhages), joints and other areas. A person's pulse is taken, usually at the radial artery near the wrist, in order to assess for the rhythm and strength of the pulse. The blood pressure is taken, using either a manual or automatic sphygmomanometer or using a more invasive measurement from within the artery. Any elevation of the jugular venous pulse is noted. A person's chest is felt for any transmitted vibrations from the heart, and then listened to with a stethoscope.
Heart sounds
Typically, healthy hearts have only two audible heart sounds, called S1 and S2. The first heart sound, S1, is the sound created by the closing of the atrioventricular valves during ventricular contraction and is normally described as "lub". The second heart sound, S2, is the sound of the semilunar valves closing during ventricular diastole and is described as "dub". Each sound consists of two components, reflecting the slight difference in time as the two valves close. S2 may split into two distinct sounds, either as a result of inspiration or different valvular or cardiac problems. Additional heart sounds may also be present and these give rise to gallop rhythms. A third heart sound, S3, usually indicates an increase in ventricular blood volume. A fourth heart sound, S4, is referred to as an atrial gallop and is produced by the sound of blood being forced into a stiff ventricle. The combined presence of S3 and S4 gives a quadruple gallop.
Heart murmurs are abnormal heart sounds which can be either related to disease or benign, and there are several kinds. There are normally two heart sounds, and abnormal heart sounds can either be extra sounds, or "murmurs" related to the flow of blood between the sounds. Murmurs are graded by volume, from 1 (the quietest), to 6 (the loudest), and evaluated by their relationship to the heart sounds, position in the cardiac cycle, and additional features such as their radiation to other sites, changes with a person's position, the frequency of the sound as determined by the side of the stethoscope by which they are heard, and site at which they are heard loudest. Murmurs may be caused by damaged heart valves or congenital heart disease such as ventricular septal defects, or may be heard in normal hearts. A different type of sound, a pericardial friction rub can be heard in cases of pericarditis where the inflamed membranes can rub together.
Blood tests
Blood tests play an important role in the diagnosis and treatment of many cardiovascular conditions.
Troponin is a sensitive biomarker for a heart with insufficient blood supply. It is released 4–6 hours after injury and usually peaks at about 12–24 hours. Two tests of troponin are often taken—one at the time of initial presentation and another within 3–6 hours, with either a high level or a significant rise being diagnostic. A test for brain natriuretic peptide (BNP) can be used to evaluate for the presence of heart failure, and rises when there is increased demand on the left ventricle. These tests are considered biomarkers because they are highly specific for cardiac disease. Testing for the MB form of creatine kinase provides information about the heart's blood supply, but is used less frequently because it is less specific and sensitive.
Other blood tests are often taken to help understand a person's general health and risk factors that may contribute to heart disease. These often include a full blood count investigating for anaemia, and basic metabolic panel that may reveal any disturbances in electrolytes. A coagulation screen is often required to ensure that the right level of anticoagulation is given. Fasting lipids and fasting blood glucose (or an HbA1c level) are often ordered to evaluate a person's cholesterol and diabetes status, respectively.
Electrocardiogram
Using surface electrodes on the body, it is possible to record the electrical activity of the heart. This tracing of the electrical signal is the electrocardiogram (ECG or EKG). An ECG is a bedside test and involves the placement of ten electrodes on the body. This produces a "12 lead" ECG (some of the leads are calculated mathematically rather than measured directly, and one electrode serves as an electrical ground, or earth).
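The point that some of the twelve leads are calculated rather than measured directly can be illustrated with the standard limb-lead relationships (Einthoven's law and the augmented limb leads). The sketch below assumes leads I and II have already been recorded and derives the remaining limb leads; it is a simplified illustration, not a description of any particular ECG machine, and the sample voltages are arbitrary.

```python
# Hedged sketch: derive the remaining limb leads of a 12-lead ECG from
# measured leads I and II, using Einthoven's law (II = I + III) and the
# standard definitions of the augmented leads. Values are in millivolts.

def derived_limb_leads(lead_i: float, lead_ii: float) -> dict:
    lead_iii = lead_ii - lead_i            # Einthoven's law: III = II - I
    return {
        "III": lead_iii,
        "aVR": -(lead_i + lead_ii) / 2,
        "aVL": lead_i - lead_ii / 2,       # equivalent to (I - III) / 2
        "aVF": lead_ii - lead_i / 2,       # equivalent to (II + III) / 2
    }

print(derived_limb_leads(lead_i=0.5, lead_ii=1.0))
# {'III': 0.5, 'aVR': -0.75, 'aVL': 0.0, 'aVF': 0.75}
```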
The prominent features on the ECG are the P wave (atrial depolarisation), the QRS complex (ventricular depolarisation) and the T wave (ventricular repolarisation). As the heart cells contract, they create a current that travels through the heart. An upward deflection on the ECG implies cells are becoming more positive in charge ("depolarising") in the direction of that lead, whereas a downward deflection implies cells are becoming more negative ("repolarising") in the direction of the lead. This depends on the position of the lead, so if a wave of depolarisation moved from left to right, a lead on the left would show a negative deflection, and a lead on the right would show a positive deflection. The ECG is a useful tool in detecting rhythm disturbances and in detecting insufficient blood supply to the heart. Sometimes abnormalities are suspected, but not immediately visible on the ECG. Testing when exercising can be used to provoke an abnormality, or an ECG can be worn for a longer period such as a 24-hour Holter monitor if a suspected rhythm abnormality is not present at the time of assessment.
Imaging
Several imaging methods can be used to assess the anatomy and function of the heart, including ultrasound (echocardiography), angiography, CT, MRI and PET scans. An echocardiogram is an ultrasound of the heart used to measure the heart's function, assess for valve disease, and look for any abnormalities. Echocardiography can be conducted by a probe on the chest (transthoracic) or by a probe in the esophagus (transesophageal). A typical echocardiography report will include information about the width of the valves, noting any stenosis, whether there is any backflow of blood (regurgitation), and information about the blood volumes at the end of systole and diastole, including an ejection fraction, which describes the proportion of blood ejected from the left and right ventricles with each contraction. Ejection fraction can be obtained by dividing the volume ejected by the heart (stroke volume) by the volume of the filled heart (end-diastolic volume). Echocardiograms can also be conducted under circumstances when the body is more stressed, in order to examine for signs of lack of blood supply. This cardiac stress test involves either direct exercise or, where this is not possible, injection of a drug such as dobutamine.
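A minimal worked example of that calculation, using illustrative textbook-style volumes rather than values from any particular report: with an end-diastolic volume (EDV) of 120 mL and an end-systolic volume (ESV) of 50 mL,

\[ SV = EDV - ESV = 120 - 50 = 70\ \text{mL}, \qquad EF = \frac{SV}{EDV} = \frac{70}{120} \approx 58\% \]

a value generally regarded as normal for the left ventricle.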
CT scans, chest X-rays and other forms of imaging can help evaluate the heart's size, evaluate for signs of pulmonary oedema, and indicate whether there is fluid around the heart. They are also useful for evaluating the aorta, the major blood vessel which leaves the heart.
Treatment
Diseases affecting the heart can be treated by a variety of methods including lifestyle modification, drug treatment, and surgery.
Ischemic heart disease
Narrowings of the coronary arteries (ischemic heart disease) are treated to relieve symptoms of chest pain caused by a partially narrowed artery (angina pectoris), to minimise heart muscle damage when an artery is completely occluded (myocardial infarction), or to prevent a myocardial infarction from occurring. Medications to improve angina symptoms include nitroglycerin, beta blockers, and calcium channel blockers, while preventative treatments include antiplatelets such as aspirin and statins, lifestyle measures such as stopping smoking and weight loss, and treatment of risk factors such as high blood pressure and diabetes.
In addition to using medications, narrowed heart arteries can be treated by expanding the narrowings or redirecting the flow of blood to bypass an obstruction. This may be performed using a percutaneous coronary intervention, during which narrowings can be expanded by passing small balloon-tipped wires into the coronary arteries, inflating the balloon to expand the narrowing, and sometimes leaving behind a metal scaffold known as a stent to keep the artery open.
If the narrowings in coronary arteries are unsuitable for treatment with a percutaneous coronary intervention, open surgery may be required. A coronary artery bypass graft can be performed, whereby a blood vessel from another part of the body (the saphenous vein, radial artery, or internal mammary artery) is used to redirect blood from a point before the narrowing (typically the aorta) to a point beyond the obstruction.
Valvular heart disease
Diseased heart valves that have become abnormally narrow or abnormally leaky may require surgery. This is traditionally performed as an open surgical procedure to replace the damaged heart valve with a tissue or metallic prosthetic valve. In some circumstances, the tricuspid or mitral valves can be repaired surgically, avoiding the need for a valve replacement. Heart valves can also be treated percutaneously, using techniques that share many similarities with percutaneous coronary intervention. Transcatheter aortic valve replacement is increasingly used for patients considered at very high risk for open valve replacement.
Cardiac arrhythmias
Abnormal heart rhythms (arrhythmias) can be treated using antiarrhythmic drugs. These may work by manipulating the flow of electrolytes across the cell membrane (such as calcium channel blockers, sodium channel blockers, amiodarone, or digoxin), or by modifying the autonomic nervous system's effect on the heart (beta blockers and atropine). In some arrhythmias, such as atrial fibrillation, which increase the risk of stroke, this risk can be reduced using anticoagulants such as warfarin or novel oral anticoagulants.
If medications fail to control an arrhythmia, another treatment option may be catheter ablation. In these procedures, wires are passed from a vein or artery in the leg to the heart to find the abnormal area of tissue that is causing the arrhythmia. The abnormal tissue can be intentionally damaged, or ablated, by heating or freezing to prevent further heart rhythm disturbances. Whilst the majority of arrhythmias can be treated using minimally invasive catheter techniques, some arrhythmias (particularly atrial fibrillation) can also be treated using open or thoracoscopic surgery, either at the time of other cardiac surgery or as a standalone procedure. A cardioversion, whereby an electric shock is used to stun the heart out of an abnormal rhythm, may also be used.
Cardiac devices in the form of pacemakers or implantable defibrillators may also be required to treat arrhythmias. Pacemakers, comprising a small battery powered generator implanted under the skin and one or more leads that extend to the heart, are most commonly used to treat abnormally slow heart rhythms. Implantable defibrillators are used to treat serious life-threatening rapid heart rhythms. These devices monitor the heart, and if dangerous heart racing is detected can automatically deliver a shock to restore the heart to a normal rhythm. Implantable defibrillators are most commonly used in patients with heart failure, cardiomyopathies, or inherited arrhythmia syndromes.
Heart failure
As well as addressing the underlying cause for a patient's heart failure (most commonly ischemic heart disease or hypertension), the mainstay of heart failure treatment is with medication. These include drugs to prevent fluid from accumulating in the lungs by increasing the amount of urine a patient produces (diuretics), and drugs that attempt to preserve the pumping function of the heart (beta blockers, ACE inhibitors and mineralocorticoid receptor antagonists).
In some patients with heart failure, a specialised pacemaker known as cardiac resynchronisation therapy can be used to improve the heart's pumping efficiency. These devices are frequently combined with a defibrillator. In very severe cases of heart failure, a small pump called a ventricular assist device may be implanted which supplements the heart's own pumping ability. In the most severe cases, a cardiac transplant may be considered.
History
Ancient
Humans have known about the heart since ancient times, although its precise function and anatomy were not clearly understood. Moving beyond the primarily religious views of earlier societies, the ancient Greeks are considered to have developed the first scientific understanding of the heart in the ancient world. Aristotle considered the heart to be the organ responsible for creating blood; Plato considered the heart as the source of circulating blood, and Hippocrates noted blood circulating cyclically from the body through the heart to the lungs. Erasistratos (304–250 BCE) described the heart as a pump, causing dilation of blood vessels, and noted that arteries and veins both radiate from the heart, becoming progressively smaller with distance, although he believed they were filled with air and not blood. He also discovered the heart valves.
The Greek physician Galen (2nd century CE) knew blood vessels carried blood and identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate functions. Galen, noting the heart as the hottest organ in the body, concluded that it provided heat to the body. In his view, the heart did not pump blood around; rather, the heart's motion sucked blood in during diastole, and the blood was moved by the pulsation of the arteries themselves. Galen believed the arterial blood was created from venous blood passing from the right ventricle to the left through 'pores' between the ventricles. Air from the lungs passed via the pulmonary vein to the left side of the heart and created arterial blood.
These ideas went unchallenged for almost a thousand years.
Pre-modern
The earliest descriptions of the coronary and pulmonary circulation systems can be found in the Commentary on Anatomy in Avicenna's Canon, published in 1242 by Ibn al-Nafis. In his manuscript, al-Nafis wrote that blood passes through the pulmonary circulation instead of moving from the right to the left ventricle as previously believed by Galen. His work was later translated into Latin by Andrea Alpago.
In Europe, the teachings of Galen continued to dominate the academic community and his doctrines were adopted as the official canon of the Church. Andreas Vesalius questioned some of Galen's beliefs of the heart in De humani corporis fabrica (1543), but his magnum opus was interpreted as a challenge to the authorities and he was subjected to a number of attacks. Michael Servetus wrote in Christianismi Restitutio (1553) that blood flows from one side of the heart to the other via the lungs.
Modern
A breakthrough in understanding the flow of blood through the heart and body came with the publication of De Motu Cordis (1628) by the English physician William Harvey. Harvey's book completely describes the systemic circulation and the mechanical force of the heart, leading to an overhaul of the Galenic doctrines. Otto Frank (1865–1944) was a German physiologist; among his many published works are detailed studies of the relationship between the filling of the heart and the strength of its contraction. Ernest Starling (1866–1927) was an important English physiologist who also studied the heart. Although they worked largely independently, their combined efforts and similar conclusions have been recognized in the name "Frank–Starling mechanism".
Although Purkinje fibers and the bundle of His were discovered as early as the 19th century, their specific role in the electrical conduction system of the heart remained unknown until Sunao Tawara published his monograph, titled Das Reizleitungssystem des Säugetierherzens, in 1906. Tawara's discovery of the atrioventricular node prompted Arthur Keith and Martin Flack to look for similar structures in the heart, leading to their discovery of the sinoatrial node several months later. These structures form the anatomical basis of the electrocardiogram, whose inventor, Willem Einthoven, was awarded the Nobel Prize in Physiology or Medicine in 1924.
The first heart transplant into a human was performed by James Hardy in 1964, using a chimpanzee heart, but the patient died within 2 hours. The first human-to-human heart transplantation was performed in 1967 by the South African surgeon Christiaan Barnard at Groote Schuur Hospital in Cape Town. This marked an important milestone in cardiac surgery, capturing the attention of both the medical profession and the world at large. However, long-term survival rates of patients were initially very low. Louis Washkansky, the first recipient of a donated heart, died 18 days after the operation, while other patients did not survive for more than a few weeks. The American surgeon Norman Shumway has been credited for his efforts to improve transplantation techniques, along with pioneers Richard Lower, Vladimir Demikhov and Adrian Kantrowitz. As of March 2000, more than 55,000 heart transplantations had been performed worldwide. The first successful transplant of a heart from a genetically modified pig to a human, in which the patient survived for an extended period, was performed on 7 January 2022 in Baltimore by the heart surgeon Bartley P. Griffith. The recipient, 57-year-old David Bennett, survived until 8 March 2022, one month and 30 days after the operation.
By the middle of the 20th century, heart disease had surpassed infectious disease as the leading cause of death in the United States, and it is currently the leading cause of deaths worldwide. Since 1948, the ongoing Framingham Heart Study has shed light on the effects of various influences on the heart, including diet, exercise, and common medications such as aspirin. Although the introduction of ACE inhibitors and beta blockers has improved the management of chronic heart failure, the disease continues to be an enormous medical and societal burden, with 30 to 40% of patients dying within a year of receiving the diagnosis.
Society and culture
Symbolism
As one of the vital organs, the heart was long identified as the center of the entire body, the seat of life, or emotion, or reason, will, intellect, purpose or the mind. The heart is an emblematic symbol in many religions, signifying "truth, conscience or moral courage in many religions—the temple or throne of God in Islamic and Judeo-Christian thought; the divine centre, or atman, and the third eye of transcendent wisdom in Hinduism; the diamond of purity and essence of the Buddha; the Taoist centre of understanding."
In the Hebrew Bible, the word for heart, lev, is used in these meanings, as the seat of emotion, the mind, and referring to the anatomical organ. It is also connected in function and symbolism to the stomach.
An important part of the concept of the soul in Ancient Egyptian religion was thought to be the heart, or ib. The ib or metaphysical heart was believed to be formed from one drop of blood from the child's mother's heart, taken at conception. To ancient Egyptians, the heart was the seat of emotion, thought, will, and intention. This is evidenced by Egyptian expressions which incorporate the word ib, such as Awi-ib for "happy" (literally, "long of heart") and Xak-ib for "estranged" (literally, "truncated of heart"). In Egyptian religion, the heart was the key to the afterlife. It was conceived as surviving death in the nether world, where it gave evidence for, or against, its possessor. The heart was therefore not removed from the body during mummification, and was believed to be the center of intelligence and feeling, needed in the afterlife. It was thought that the heart was examined by Anubis and a variety of deities during the Weighing of the Heart ceremony, in which it was weighed against the feather of Maat, which symbolized the ideal standard of behavior. If the scales balanced, it meant the heart's possessor had lived a just life and could enter the afterlife; if the heart was heavier, it would be devoured by the monster Ammit.
The Chinese character for "heart", 心, derives from a comparatively realistic depiction of a heart (indicating the heart chambers) in seal script. The Chinese word xīn also takes the metaphorical meanings of "mind", "intention", or "core", and is often translated as "heart-mind" as the ancient Chinese believed the heart was the center of human cognition. In Chinese medicine, the heart is seen as the center of 神 shén "spirit, consciousness". The heart is associated with the small intestine, tongue, governs the six organs and five viscera, and belongs to fire in the five elements.
The Sanskrit word for heart is hṛd or hṛdaya, found in the oldest surviving Sanskrit text, the Rigveda. In Sanskrit, it may mean both the anatomical object and "mind" or "soul", representing the seat of emotion. Hrd may be a cognate of the word for heart in Greek, Latin, and English.
Many classical philosophers and scientists, including Aristotle, considered the heart the seat of thought, reason, or emotion, often disregarding the brain as contributing to those functions. The identification of the heart as the seat of emotions in particular is due to the Roman physician Galen, who also located the seat of the passions in the liver, and the seat of reason in the brain.
The heart also played a role in the Aztec system of belief. The most common form of human sacrifice practiced by the Aztecs was heart-extraction. The Aztec believed that the heart (tona) was both the seat of the individual and a fragment of the Sun's heat (istli). To this day, the Nahua consider the Sun to be a heart-soul (tona-tiuh): "round, hot, pulsating".
Indigenous leaders from Alaska to Australia came together in 2020 to deliver a message to the world that humanity needs to shift from the mind to the heart, and let our heart be in charge of what we do. The message was made into a film, which highlighted that humanity must open their hearts to restore balance to the world. Kumu Sabra Kauka, a Hawaiian studies educator and tradition bearer summed up the message of the film saying "Listen to your heart. Follow your path. May it be clear, and for the good of all." The film was led by Illarion Merculieff from the Aleut (Unangan) tribe. Merculieff has written that Unangan Elders referred to the heart as a "source of wisdom", "a deeper portal of profound interconnectedness and awareness that exists between humans and all living things".
In Catholicism, there has been a long tradition of veneration of the heart, stemming from worship of the wounds of Jesus Christ which gained prominence from the mid sixteenth century. This tradition influenced the development of the medieval Christian devotion to the Sacred Heart of Jesus and the parallel veneration of the Immaculate Heart of Mary, made popular by John Eudes. There are also many references to the heart in the Christian Bible, including "Blessed are the pure in heart, for they will see God", "Above all else, guard your heart, for everything you do flows from it", "For where your treasure is, there your heart will be also", "For as a man thinks in his heart, so shall he be."
The expression of a broken heart is a cross-cultural reference to grief for a lost one or to unfulfilled romantic love.
The notion of "Cupid's arrows" is ancient, due to Ovid, but while Ovid describes Cupid as wounding his victims with his arrows, it is not made explicit that it is the heart that is wounded. The familiar iconography of Cupid shooting little heart symbols is a Renaissance theme that became tied to Valentine's Day.
In certain Trans-New Guinea languages, such as Foi and Momoona, the heart and seat of emotions are colexified, meaning they share the same word.
Food
Animal hearts are widely consumed as a type of offal. As they are almost entirely muscle, they are high in protein. They are often included in dishes with other internal organs, for example in the pan-Ottoman kokoretsi.
Chicken hearts are considered to be giblets, and are often grilled on skewers; examples of this are Japanese hāto yakitori, Brazilian churrasco de coração, and Indonesian chicken heart satay. They can also be pan-fried, as in Jerusalem mixed grill. In Egyptian cuisine, they can be used, finely chopped, as part of stuffing for chicken. Many recipes combine them with other giblets, such as the Mexican pollo en menudencias and the Russian ragu iz kurinyikh potrokhov.
The hearts of beef, pork, and mutton can generally be interchanged in recipes. As heart is a hard-working muscle, it makes for "firm and rather dry" meat, so is generally slow-cooked. Another way of dealing with toughness is to julienne the meat, as in Chinese stir-fried heart.
Beef heart is valued for its high meat quality and low price, being commonly disregarded in conventional meat pricing. It can be cut into steaks, comparable in quality to the more expensive cuts of meat from the same animal, though it is distinguished by a lack of a discernible grain. It was historically eaten in the United States as a cost-saving measure, but is today also eaten as an independently desirable ingredient. Beef heart may be grilled or braised. In the Peruvian anticuchos de corazón, barbecued beef hearts are grilled after being tenderized through long marination in a spice and vinegar mixture. An Australian recipe for "mock goose" is actually braised stuffed beef heart.
Pork heart can be stewed, poached, braised, or made into sausage. The Balinese oret is a sort of blood sausage made with pig heart and blood. A French recipe for cœur de porc à l'orange is made of braised heart with an orange sauce.
Other animals
Vertebrates
The size of the heart varies among the different animal groups, with hearts in vertebrates ranging from those of the smallest mice (12 mg) to the blue whale (600 kg). In vertebrates, the heart lies in the middle of the ventral part of the body, surrounded by a pericardium, which in some fish may be connected to the peritoneum. In all vertebrates, the heart has an asymmetric orientation, almost always on the left side. According to one theory, this is caused by a developmental axial twist in the early embryo.
The sinoatrial node is found in all amniotes but not in more primitive vertebrates. In these animals, the muscles of the heart are relatively continuous, and the sinus venosus coordinates the beat, which passes in a wave through the remaining chambers. Since the sinus venosus is incorporated into the right atrium in amniotes, it is likely homologous with the SA node. In teleosts, with their vestigial sinus venosus, the main centre of coordination is, instead, in the atrium. The rate of heartbeat varies enormously between different species, ranging from around 20 beats per minute in codfish to around 600 in hummingbirds and up to 1,200 bpm in the ruby-throated hummingbird.
Double circulatory systems
Adult amphibians and most reptiles have a double circulatory system, meaning a circulatory system divided into arterial and venous parts. However, the heart itself is not completely separated into two sides. Instead, it is separated into three chambers: two atria and one ventricle. Blood returning from the systemic circulation and from the lungs enters separate atria, and is pumped from the single ventricle simultaneously into the systemic circulation and the lungs. The double system allows blood to circulate to and from the lungs, which deliver oxygenated blood directly to the heart.
In reptiles other than snakes, the heart is usually situated around the middle of the thorax. In terrestrial and arboreal snakes, it is usually located nearer to the head; in aquatic species the heart is more centrally located. The heart has three chambers: two atria and one ventricle. The form and function of these hearts differ from those of mammalian hearts because snakes have an elongated body, and are thus affected by different environmental factors. In particular, the position of the heart within a snake's body has been influenced greatly by gravity, and larger snakes tend to have higher blood pressure to cope with the gravitational effects along their length. The ventricle is incompletely separated into two halves by a wall (septum), with a considerable gap near the pulmonary artery and aortic openings. In most reptilian species, there appears to be little, if any, mixing between the bloodstreams, so the aorta receives, essentially, only oxygenated blood. The exception to this rule is crocodiles, which have a four-chambered heart.
In the heart of lungfish, the septum extends partway into the ventricle. This allows for some degree of separation between the de-oxygenated bloodstream destined for the lungs and the oxygenated stream that is delivered to the rest of the body. The absence of such a division in living amphibian species may be partly due to the amount of respiration that occurs through the skin; thus, the blood returned to the heart through the venae cavae is already partially oxygenated. As a result, there may be less need for a finer division between the two bloodstreams than in lungfish or other tetrapods. Nonetheless, in at least some species of amphibian, the spongy nature of the ventricle does seem to maintain more of a separation between the bloodstreams. Also, the original valves of the conus arteriosus have been replaced by a spiral valve that divides it into two parallel parts, thereby helping to keep the two bloodstreams separate.
Full division
Archosaurs (crocodilians and birds) and mammals show complete separation of the heart into two pumps for a total of four heart chambers; it is thought that the four-chambered heart of archosaurs evolved independently from that of mammals. In crocodilians, there is a small opening, the foramen of Panizza, at the base of the arterial trunks and there is some degree of mixing between the blood in each side of the heart, during a dive underwater; thus, only in birds and mammals are the two streams of blood—those to the pulmonary and systemic circulations—permanently kept entirely separate by a physical barrier.
Fish
The heart evolved no less than 380 million years ago in fish. Fish have what is often described as a two-chambered heart, consisting of one atrium to receive blood and one ventricle to pump it. However, the fish heart has entry and exit compartments that may be called chambers, so it is also sometimes described as three-chambered or four-chambered, depending on what is counted as a chamber. The atrium and ventricle are sometimes considered "true chambers", while the others are considered "accessory chambers".
Primitive fish have a four-chambered heart, but the chambers are arranged sequentially so that this primitive heart is quite unlike the four-chambered hearts of mammals and birds. The first chamber is the sinus venosus, which collects deoxygenated blood from the body through the hepatic and cardinal veins. From here, blood flows into the atrium and then to the powerful muscular ventricle where the main pumping action will take place. The fourth and final chamber is the conus arteriosus, which contains several valves and sends blood to the ventral aorta. The ventral aorta delivers blood to the gills where it is oxygenated and flows, through the dorsal aorta, into the rest of the body. (In tetrapods, the ventral aorta has divided in two; one half forms the ascending aorta, while the other forms the pulmonary artery).
In the adult fish, the four chambers are not arranged in a straight row but instead form an S-shape, with the latter two chambers lying above the former two. This relatively simple pattern is found in cartilaginous fish and in the ray-finned fish. In teleosts, the conus arteriosus is very small and can more accurately be described as part of the aorta rather than of the heart proper. The conus arteriosus is not present in any amniotes, presumably having been absorbed into the ventricles over the course of evolution. Similarly, while the sinus venosus is present as a vestigial structure in some reptiles and birds, it is otherwise absorbed into the right atrium and is no longer distinguishable.
Invertebrates
Arthropods and most mollusks have an open circulatory system. In this system, deoxygenated blood collects around the heart in cavities (sinuses). This blood slowly permeates the heart through many small one-way channels. The heart then pumps the blood into the hemocoel, a cavity between the organs. The heart in arthropods is typically a muscular tube that runs the length of the body, just under the back, extending from the base of the head. Instead of blood, the circulatory fluid is haemolymph, whose most commonly used respiratory pigment and oxygen transporter is copper-based haemocyanin. Haemoglobin is used by only a few arthropods.
In some other invertebrates such as earthworms, the circulatory system is not used to transport oxygen and so is much reduced, having no veins or arteries and consisting of two connected tubes. Oxygen travels by diffusion, and near the front of the animal there are five small muscular vessels connecting the two tubes; these contract and can be thought of as "hearts".
Squids and other cephalopods have two "gill hearts" also known as branchial hearts, and one "systemic heart". The branchial hearts have two atria and one ventricle each, and pump to the gills, whereas the systemic heart pumps to the body.
Only the chordates (including vertebrates) and the hemichordates have a central "heart", a vesicle formed from a thickening of the aorta that contracts to pump blood. This suggests that such a heart was present in the last common ancestor of these groups, though it may have been lost in the echinoderms.
| Biology and health sciences | Biology | null |
36842 | https://en.wikipedia.org/wiki/Gossypium | Gossypium | Gossypium () is a genus of flowering plants in the tribe Gossypieae of the mallow family, Malvaceae, from which cotton is harvested. It is native to tropical and subtropical regions of the Old and New Worlds. There are about 50 Gossypium species, making it the largest genus in the tribe Gossypieae, and new species continue to be discovered. The name of the genus is derived from the Arabic word goz, which refers to a soft substance.
Cotton is the primary natural fibre used by humans today, amounting to about 80% of world natural fibre production. Where cotton is cultivated, it is a major oilseed crop and a main protein source for animal feed. Cotton is thus of great importance for agriculture, industry and trade, especially for tropical and subtropical countries in Africa, South America and Asia. Consequently, the genus Gossypium has long attracted the attention of scientists.
The origin of the genus Gossypium is dated to around 5–10 million years ago. Gossypium species are distributed in arid to semiarid regions of the tropics and subtropics. Generally shrubs or shrub-like plants, the species of this genus are extraordinarily diverse in morphology and adaptation, ranging from fire-adapted, herbaceous perennials in Australia to trees in Mexico. Most wild cottons are diploid, but a group of five species from America and Pacific islands are tetraploid, apparently due to a single hybridization event around 1.5 to 2 million years ago. The tetraploid species are G. hirsutum, G. tomentosum, G. mustelinum, G. barbadense, and G. darwinii.
Cultivated cottons are perennial shrubs, most often grown as annuals. Plants are 1–2 m high in modern cropping systems, sometimes higher in traditional, multiannual cropping systems, now largely disappearing. The leaves are broad and lobed, with three to five (or rarely seven) lobes. The seeds are contained in a capsule called a "boll", each seed surrounded by fibres of two types. These fibres are the more commercially interesting part of the plant and they are separated from the seed by a process called ginning. At the first ginning, the longer fibres, called staples, are removed and these are twisted together to form yarn for making thread and weaving into high quality textiles. At the second ginning, the shorter fibres, called "linters", are removed, and these are woven into lower quality textiles (which include the eponymous lint). Commercial species of cotton plant are G. hirsutum (97% of world production), G. barbadense (1–2%), G. arboreum and G. herbaceum (together, ~1%). Many varieties of cotton have been developed by selective breeding and hybridization of these species. Experiments are ongoing to cross-breed various desirable traits of wild cotton species into the principal commercial species, such as resistance to insects and diseases, and drought tolerance. Cotton fibres occur naturally in colours of white, brown, green, and some mixing of these.
Selected species
Subgenus Gossypium
Gossypium anomalum Wawra & Peyr.
Gossypium arboreum L. – tree cotton (India and Pakistan)
Gossypium herbaceum L. – Levant cotton (southern Africa and the Arabian Peninsula)
Subgenus Houzingenia
Gossypium raimondii Ulbr. – one of the putative progenitor species of tetraploid cotton, alongside G. arboreum
Gossypium thurberi Tod. – Arizona wild cotton (Arizona and northern Mexico)
Subgenus Karpas
Gossypium barbadense L. – Creole cotton/Sea Island Cotton (tropical South America)
Gossypium darwinii G.Watt – Darwin's cotton (Galápagos Islands)
Gossypium hirsutum L. – upland cotton (Central America, Mexico, the Caribbean and southern Florida)
Gossypium mustelinum Miers ex G.Watt
Gossypium tomentosum Nutt. ex Seem – Maʻo or Hawaiian cotton (Hawaii)
Subgenus Sturtia
Gossypium australe F.Muell (northwestern Australia)
Gossypium sturtianum J.H. Willis – Sturt's desert rose (Australia)
Formerly placed in genus Gossypium
Gossypioides brevilanatum (Hochr.) J.B.Hutch. (as G. brevilanatum Hochr.)
Gossypioides kirkii (Mast.) J.B.Hutch. (as Gossypium kirkii Mast.)
Kokia drynarioides (Seem.) Lewton (as G. drynarioides Seem.)
Gossypium genome
A public genome sequencing effort of cotton was initiated in 2007 by a consortium of public researchers. They agreed on a strategy to sequence the genome of cultivated, allotetraploid cotton. "Allotetraploid" means that the genomes of these cotton species comprise two distinct subgenomes, referred to as the At and Dt (the 't' for tetraploid, to distinguish them from the A and D genomes of the related diploid species). The strategy is to first sequence the D-genome relative of allotetraploid cottons, G. raimondii, a wild South American (Peru, Ecuador) cotton species, because of its smaller size, due essentially to less repetitive DNA (mainly retrotransposons). It has nearly one-third the number of bases of tetraploid cotton (AD), and each chromosome is only present once. The A genome of G. arboreum, the 'Old-World' cotton species (grown in India in particular), would be sequenced next. Its genome is roughly twice the size of G. raimondii's. Once both A and D genome sequences are assembled, research could begin on sequencing the actual genomes of tetraploid cultivated cotton varieties. This strategy is dictated by necessity: if one were to sequence the tetraploid genome without model diploid genomes, the euchromatic DNA sequences of the AD genomes would co-assemble, and the repetitive elements of the AD genomes would assemble independently into A and D sequences, respectively. There would then be no way to untangle the mess of AD sequences without comparing them to their diploid counterparts.
The public sector effort continues with the goal of creating a high-quality, draft genome sequence from reads generated by all sources. The public-sector effort has generated Sanger reads of BACs, fosmids, and plasmids, as well as 454 reads. These latter types of reads will be instrumental in assembling an initial draft of the D genome. In 2010, two companies (Monsanto and Illumina) completed enough Illumina sequencing to cover the D genome of G. raimondii about 50x. They announced they would donate their raw reads to the public. This public relations effort gave them some recognition for sequencing the cotton genome. Once the D genome is assembled from all of this raw material, it will undoubtedly assist in the assembly of the AD genomes of cultivated varieties of cotton, but a lot of hard work remains.
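For context on what "about 50x" means, sequencing depth is conventionally defined as the total number of sequenced bases divided by the genome size. The read length and genome size used below are assumed, illustrative figures, not the actual parameters of the Monsanto and Illumina data:

\[ \text{coverage} = \frac{N \times L}{G} \]

where N is the number of reads, L the read length and G the genome size. Assuming 100-base reads and a genome of roughly 750 Mb, about 375 million reads would give 50x coverage, since \( (3.75 \times 10^{8} \times 100) / (7.5 \times 10^{8}) = 50 \).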
Cotton pests and diseases
Pests
Boll weevil, Anthonomus grandis
Cotton aphid, Aphis gossypii
Cotton stainer, Dysdercus koenigii
Cotton bollworm, Helicoverpa zea, and native budworm, Helicoverpa punctigera, are caterpillars that damage cotton crops.
Some other Lepidoptera (butterfly and moth) larvae also feed on cotton – see list of Lepidoptera that feed on cotton plants.
Green mirid (Creontiades dilutus), a sucking insect
Spider mites, Tetranychus urticae, T. ludeni and T. lambi
Thrips, Thrips tabaci and Frankliniella schultzei
Diseases
Alternaria leaf spot, caused by Alternaria macrospora and Alternaria alternata
Anthracnose boll rot, caused by Colletotrichum gossypii
Black root rot, caused by the fungus Thielaviopsis basicola
Blight, caused by Xanthomonas campestris pv. malvacearum
Fusarium boll rot caused by Fusarium spp.
Phytophthora boll rot, caused by Phytophthora nicotianae var. parasitica
Sclerotinia boll rot, caused by the fungus Sclerotinia sclerotiorum
Stigmatomycosis, caused by the fungi Ashbya gossypii, Eremothecium coryli, (Nematospora coryli) and Aureobasidium pullulans
| Biology and health sciences | Malvales | Plants |
36856 | https://en.wikipedia.org/wiki/Vertebrate | Vertebrate | Vertebrates () are animals with a vertebral column (backbone or spine), and a cranium, or skull. The vertebral column surrounds and protects the spinal cord, while the cranium protects the brain.
The vertebrates make up the subphylum Vertebrata with some 65,000 species, by far the largest grouping in the phylum Chordata. The vertebrates include mammals, birds, amphibians, and various classes of fish and reptiles. The fish include the jawless Agnatha, and the jawed Gnathostomata. The jawed fish include both the cartilaginous fish and the bony fish. Bony fish include the lobe-finned fish, which gave rise to the tetrapods, the animals with four limbs. Vertebrates make up less than five percent of all described animal species.
The first vertebrates appeared in the Cambrian explosion some 518 million years ago. Jawed vertebrates evolved in the Ordovician, followed by bony fishes in the Devonian. The first amphibians appeared on land in the Carboniferous. During the Triassic, mammals and dinosaurs appeared, the latter giving rise to birds in the Jurassic. Extant species are roughly equally divided between fishes of all kinds, and tetrapods. Populations of many species have been in steep decline since 1970 because of land-use change, overexploitation of natural resources, climate change, pollution and the impact of invasive species.
Characteristics
Unique features
Vertebrates belong to the chordates, a phylum characterised by five synapomorphies (unique characteristics), namely a notochord, a hollow nerve cord along the back, an endostyle (often as a thyroid gland), pharyngeal gill slits arranged in pairs, and a muscular post-anal tail. Vertebrates share these characteristics with other chordates.
Vertebrates are distinguished from all other animals, including other chordates, by multiple synapomorphies, namely the vertebral column; a skull of bone or cartilage; a large brain divided into three or more sections; a muscular heart with multiple chambers; an inner ear with semicircular canals; sense organs including eyes, ears, and nose; and digestive organs including the intestine, liver, pancreas, and stomach.
Physical
Vertebrates (and other chordates) belong to the Bilateria, a group of animals with mirror-symmetrical bodies. They move, typically by swimming, using muscles along the back, supported by a strong but flexible skeletal structure, the spine or vertebral column. The name 'vertebrate' derives from the Latin vertebratus, 'jointed', from vertebra, 'joint', in turn from the Latin vertere, 'to turn'.
As embryos, vertebrates still have a notochord; as adults, all but the jawless fishes have a vertebral column, made of bone or cartilage, instead. Vertebrate embryos have pharyngeal arches; in adult fish, these support the gills, while in adult tetrapods they develop into other structures.
In the embryo, a layer of cells along the back folds and fuses into a hollow neural tube. This develops into the spinal cord, and at its front end, the brain. The brain receives information about the world through nerves which carry signals from sense organs in the skin and body. Because the ancestors of vertebrates usually moved forwards, the front of the body encountered stimuli before the rest of the body, favouring cephalisation, the evolution of a head containing sense organs and a brain to process the sensory information.
Vertebrates have a tubular gut that extends from the mouth to the anus. The vertebral column typically continues beyond the anus to form an elongated tail.
The ancestral vertebrates, and most extant species, are aquatic and carry out gas exchange in their gills. The gills are finely-branched structures which bring the blood close to the water. They are positioned just behind the head, supported by cartilaginous or bony branchial arches. In jawed vertebrates, the first gill arch pair evolved into the jaws. In amphibians and some primitive bony fishes, the larvae have external gills, branching off from the gill arches. Oxygen is carried from the gills to the body in the blood, and carbon dioxide is returned to the gills, in a closed circulatory system driven by a chambered heart. The tetrapods have lost the gills of their fish ancestors; they have adapted the swim bladder (that fish use for buoyancy) into lungs to breathe air, and the circulatory system is adapted accordingly. At the same time, they adapted the bony fins of the lobe-finned fishes into two pairs of walking legs, carrying the weight of the body via the shoulder and pelvic girdles.
Vertebrates vary enormously in size, from the smallest frog species, such as Brachycephalus pulex (measured by adult snout–vent length), to the blue whale, which weighs some 150 tonnes.
Molecular
Molecular markers known as conserved signature indels in protein sequences have been identified and provide distinguishing criteria for the vertebrate subphylum. Five molecular markers are exclusively shared by all vertebrates and reliably distinguish them from all other animals; these include protein synthesis elongation factor-2, eukaryotic translation initiation factor 3, adenosine kinase and a protein related to ubiquitin carboxyl-terminal hydrolase. A specific relationship between vertebrates and tunicates is supported by two molecular markers, the proteins Rrp44 (associated with the exosome complex) and serine C-palmitoyltransferase. These are exclusively shared by species from these two subphyla, but not by cephalochordates.
Evolutionary history
Cambrian explosion: first vertebrates
Vertebrates originated during the Cambrian explosion at the start of the Paleozoic, which saw a rise in animal diversity. The earliest known vertebrates belong to the Chengjiang biota and lived about 518 million years ago. These include Haikouichthys, Myllokunmingia, Zhongjianichthys, and probably Yunnanozoon. Unlike other Cambrian animals, these groups had the basic vertebrate body plan: a notochord, rudimentary vertebrae, and a well-defined head and tail, but lacked jaws. A vertebrate group of uncertain phylogeny, the small eel-like conodonts, is known from microfossils of their paired tooth segments from the late Cambrian to the end of the Triassic. Given the hard teeth of the otherwise soft-bodied conodonts, zoologists have debated whether teeth mineralized before bones or vice versa, but it seems that the mineralized skeleton came first.
Paleozoic: from fish to amphibians
The first jawed vertebrates may have appeared in the late Ordovician (~445 mya) and became common in the Devonian period, often known as the "Age of Fishes". The two groups of bony fishes, Actinopterygii and Sarcopterygii, evolved and became common. By the middle of the Devonian, a lineage of sarcopterygii with both gills and air-breathing lungs adapted to life in swampy pools used their muscular paired fins to propel themselves on land. The fins, already possessing bones and joints, evolved into two pairs of walking legs. These established themselves as amphibians, terrestrial tetrapods, in the next geological period, the Carboniferous. A group of vertebrates, the amniotes, with membranes around the embryo allowing it to survive on dry land, branched from amphibious tetrapods in the Carboniferous.
Mesozoic: from reptiles to mammals and birds
At the onset of the Mesozoic, all larger vertebrate groups were devastated after the largest mass extinction in earth history. The following recovery phase saw the emergence of many new vertebrate groups that are still around today, and this time has been described as the origin of modern ecosystems. On the continents, the ancestors of modern lissamphibians, turtles, crocodilians, lizards, and mammals appeared, as well as dinosaurs, which gave rise to birds later in the Mesozoic. In the seas, various groups of marine reptiles evolved, as did new groups of fish. At the end of the Mesozoic, another extinction event extirpated dinosaurs (other than birds) and many other vertebrate groups.
Cenozoic: Age of Mammals
The Cenozoic, the current era, is sometimes called the "Age of Mammals", because of the dominance of the terrestrial environment by that group. Placental mammals have predominantly occupied the Northern Hemisphere, with marsupial mammals in the Southern Hemisphere.
Approaches to classification
Taxonomic history
In 1811, Jean-Baptiste Lamarck defined the vertebrates as a taxonomic group, a phylum distinct from the invertebrates he was studying. He described them as consisting of four classes, namely fish, reptiles, birds, and mammals, but treated the cephalochordates and tunicates as molluscs. In 1866, Ernst Haeckel called both his "Craniata" (vertebrates) and his "Acrania" (cephalochordates) "Vertebrata". In 1877, Ray Lankester grouped the craniates, cephalochordates, and urochordates (tunicates) as "Vertebrata". In 1880–1881, Francis Maitland Balfour placed the Vertebrata as a subphylum within the Chordates. In 2018, Naoki Irie and colleagues proposed making Vertebrata a full phylum.
Traditional taxonomy
Conventional evolutionary taxonomy groups extant vertebrates into seven classes based on traditional interpretations of gross anatomical and physiological traits. The commonly held classification lists three classes of fish and four of tetrapods. This ignores some of the natural relationships between the groupings. For example, the birds derive from a group of reptiles, so "Reptilia" excluding "Aves" is not a natural grouping; it is described as paraphyletic.
Subphylum Vertebrata
Class Agnatha (jawless fishes)
Class Chondrichthyes (cartilaginous fishes)
Class Osteichthyes (bony fishes)
Class Amphibia (amphibians)
Class Reptilia (reptiles: paraphyletic)
Class Aves (birds)
Class Mammalia (mammals)
In addition to these, there are two classes of extinct armoured fishes, Placodermi and Acanthodii, both paraphyletic.
Other ways of classifying the vertebrates have been devised, particularly with emphasis on the phylogeny of early amphibians and reptiles. An example based on work by M.J. Benton in 2004 is given here († = extinct):
Subphylum Vertebrata
Palaeospondylus
Infraphylum Agnatha or Cephalaspidomorphi (lampreys and other jawless fishes)
Superclass Anaspidomorphi (anaspids and relatives)
Infraphylum Gnathostomata (vertebrates with jaws)
Class Placodermi (extinct armoured fishes)
Class Chondrichthyes (cartilaginous fishes)
Class Acanthodii (extinct spiny "sharks")
Superclass Osteichthyes (bony fishes)
Class Actinopterygii (ray-finned bony fishes)
Class Sarcopterygii (lobe-finned fishes, including the tetrapods)
Superclass Tetrapoda (four-limbed vertebrates)
Class Amphibia (amphibians, some ancestral to the amniotes)—now a paraphyletic group
Class Synapsida (mammals and their extinct relatives)
Class Sauropsida (reptiles and birds)
While this traditional taxonomy is orderly, most of the groups are paraphyletic, meaning that the structure does not accurately reflect the natural evolved grouping. For instance, descendants of the first reptiles include modern reptiles, mammals and birds; the agnathans have given rise to the jawed vertebrates; the bony fishes have given rise to the land vertebrates; a group of amphibians, the labyrinthodonts, have given rise to the reptiles (traditionally including the mammal-like synapsids), which in turn have given rise to the mammals and birds. Most scientists working with vertebrates use a classification based purely on phylogeny, organized by their known evolutionary history.
External phylogeny
It was once thought that the Cephalochordata was the sister taxon to Vertebrata. This group, Notochordata, was taken to be sister to the Tunicata (the Notochordata hypothesis). Since 2006, analysis has shown that the tunicates + vertebrates form a clade, the Olfactores, with Cephalochordata as its sister (the Olfactores hypothesis), as shown in the following phylogenetic tree.
Internal phylogeny
The internal phylogeny of the vertebrates is shown in the tree.
The placement of hagfishes on the vertebrate tree of life has been controversial. Their lack of proper vertebrae (among other characteristics of jawless lampreys and jawed vertebrates) led phylogenetic analyses based on morphology to place them outside Vertebrata. Molecular data, however, indicates they are vertebrates closely related to lampreys. An older view is that they are a sister group of vertebrates in the common taxon of Craniata. In 2019, Tetsuto Miyashita and colleagues reconciled the two types of analysis, supporting the Cyclostomata hypothesis using only morphological data.
Diversity
Species by group
Described and extant vertebrate species are split roughly evenly but non-phylogenetically between non-tetrapod "fish" and tetrapods. The following table lists the number of described extant species for each vertebrate class as estimated in the IUCN Red List of Threatened Species, 2014.3. Paraphyletic groups are shown in quotation marks.
The IUCN estimates that 1,305,075 extant invertebrate species have been described, which means that less than 5% of the described animal species in the world are vertebrates.
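The arithmetic behind that percentage follows directly from the figures given in this article (about 65,000 described vertebrate species, quoted earlier, against 1,305,075 described invertebrates):

\[ \frac{65{,}000}{65{,}000 + 1{,}305{,}075} = \frac{65{,}000}{1{,}370{,}075} \approx 4.7\% \]

which is indeed below 5%; the exact value depends on which species estimates are used.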
Population trends
The Living Planet Index, following 16,704 populations of 4,005 species of vertebrates, shows a decline of 60% between 1970 and 2014. Since 1970, freshwater species declined 83%, and tropical populations in South and Central America declined 89%. The authors note that "An average trend in population change is not an average of total numbers of animals lost." According to WWF, this could lead to a sixth major extinction event. The five main causes of biodiversity loss are land-use change, overexploitation of natural resources, climate change, pollution and invasive species.
| Biology and health sciences | Biology | null |
36858 | https://en.wikipedia.org/wiki/Wheat | Wheat | Wheat is a group of wild and domesticated grasses of the genus Triticum (). They are cultivated for their cereal grains, which are staple foods around the world. Well-known wheat species and hybrids include the most widely grown common wheat (T. aestivum), spelt, durum, emmer, einkorn, and Khorasan or Kamut. The archaeological record suggests that wheat was first cultivated in the regions of the Fertile Crescent around 9600 BC.
Wheat is grown on a larger area of land than any other food crop (as of 2021). World trade in wheat is greater than for all other crops combined. In 2021, wheat was the second most-produced cereal after maize (known as corn in North America and Australia; wheat is often called corn in countries including Britain). Since 1960, world production of wheat and other grain crops has tripled and is expected to grow further through the middle of the 21st century. Global demand for wheat is increasing because of the usefulness of gluten to the food industry.
Wheat is an important source of carbohydrates. Globally, it is the leading source of vegetable proteins in human food, having a protein content of about 13%, which is relatively high compared to other major cereals but relatively low in protein quality (supplying essential amino acids). When eaten as the whole grain, wheat is a source of multiple nutrients and dietary fiber. In a small part of the general population, gluten – which comprises most of the protein in wheat – can trigger coeliac disease, noncoeliac gluten sensitivity, gluten ataxia, and dermatitis herpetiformis.
Description
Wheat is a stout grass of medium to tall height. Its stem is jointed and usually hollow, forming a straw. There can be many stems on one plant. It has long narrow leaves, their bases sheathing the stem, one above each joint. At the top of the stem is the flower head, containing some 20 to 100 flowers. Each flower contains both male and female parts. The flowers are wind-pollinated, with over 99% of pollination events being self-pollinations and the rest cross-pollinations. The flower is housed in a pair of small leaflike glumes. The two (male) stamens and (female) stigmas protrude outside the glumes. The flowers are grouped into spikelets, each with between two and six flowers. Each fertilised carpel develops into a wheat grain or berry; botanically a caryopsis fruit, it is often called a seed. The grains ripen to a golden yellow; a head of grain is called an ear.
Leaves emerge from the shoot apical meristem in a telescoping fashion until the transition to reproduction, i.e. flowering. The last leaf produced by a wheat plant is known as the flag leaf. It is denser and has a higher photosynthetic rate than other leaves, to supply carbohydrate to the developing ear. In temperate countries the flag leaf, along with the second and third highest leaves on the plant, supplies the majority of carbohydrate in the grain, and their condition is paramount to yield formation. Wheat is unusual among plants in having more stomata on the upper (adaxial) side of the leaf than on the under (abaxial) side. It has been theorised that this might be an effect of it having been domesticated and cultivated longer than any other plant. Winter wheat generally produces up to 15 leaves per shoot and spring wheat up to 9, and winter crops may have up to 35 tillers (shoots) per plant, depending on cultivar.
Wheat roots are among the deepest of arable crops. While the roots of a wheat plant are growing, the plant also accumulates an energy store in its stem, in the form of fructans, which helps the plant to yield under drought and disease pressure, but it has been observed that there is a trade-off between root growth and stem non-structural carbohydrate reserves. Root growth is likely to be prioritised in drought-adapted crops, while stem non-structural carbohydrate is prioritised in varieties developed for countries where disease is a bigger issue.
Depending on variety, wheat may be awned or not awned. Producing awns incurs a cost in grain number, but wheat awns photosynthesise more efficiently than their leaves with regards to water usage, so awns are much more frequent in varieties of wheat grown in hot drought-prone countries than those generally seen in temperate countries. For this reason, awned varieties could become more widely grown due to climate change. In Europe, however, a decline in climate resilience of wheat has been observed.
History
Domestication
Hunter-gatherers in West Asia harvested wild wheats for thousands of years before they were domesticated, perhaps as early as 21,000 BC, but they formed a minor component of their diets. In this phase of pre-domestication cultivation, early cultivars were spread around the region and slowly developed the traits that came to characterise their domesticated forms.
Repeated harvesting and sowing of the grains of wild grasses led to the creation of domestic strains, as mutant forms ('sports') of wheat were more amenable to cultivation. In domesticated wheat, grains are larger, and the seeds (inside the spikelets) remain attached to the ear by a toughened rachis during harvesting. In wild strains, a more fragile rachis allows the ear to shatter easily, dispersing the spikelets. Selection for larger grains and non-shattering heads by farmers might not have been deliberately intended, but simply have occurred because these traits made gathering the seeds easier; nevertheless such 'incidental' selection was an important part of crop domestication. As the traits that improve wheat as a food source involve the loss of the plant's natural seed dispersal mechanisms, highly domesticated strains of wheat cannot survive in the wild.
Wild einkorn wheat (T. monococcum subsp. boeoticum) grows across Southwest Asia in open parkland and steppe environments. It comprises three distinct races, only one of which, native to Southeast Anatolia, was domesticated. The main feature that distinguishes domestic einkorn from wild is that its ears do not shatter without pressure, making it dependent on humans for dispersal and reproduction. It also tends to have wider grains. Wild einkorn was collected at sites such as Tell Abu Hureyra and Mureybet, but the earliest archaeological evidence for the domestic form comes later, from southern Turkey, at Çayönü, Cafer Höyük, and possibly Nevalı Çori. Genetic evidence indicates that it was domesticated in multiple places independently.
Wild emmer wheat (T. turgidum subsp. dicoccoides) is less widespread than einkorn, favouring the rocky basaltic and limestone soils found in the hilly flanks of the Fertile Crescent. It is more diverse, with domesticated varieties falling into two major groups: hulled or non-shattering, in which threshing separates the whole spikelet; and free-threshing, where the individual grains are separated. Both varieties probably existed in prehistory, but over time free-threshing cultivars became more common. Wild emmer was first cultivated in the southern Levant, as early as 9600 BC. Genetic studies have found that, like einkorn, it was domesticated in southeastern Anatolia, but only once. The earliest secure archaeological evidence for domestic emmer comes from Çayönü, where distinctive scars on the spikelets indicated that they came from a hulled domestic variety. Slightly earlier finds have been reported from Tell Aswad in Syria, but these were identified using a less reliable method based on grain size.
Early farming
Einkorn and emmer are considered two of the founder crops cultivated by the first farming societies in Neolithic West Asia. These communities also cultivated naked wheats (T. aestivum and T. durum) and a now-extinct domesticated form of Zanduri wheat (T. timopheevii), as well as a wide variety of other cereal and non-cereal crops. Wheat was relatively uncommon for the first thousand years of the Neolithic (when barley predominated), but became a staple after around 8500 BC. Early wheat cultivation did not demand much labour. Initially, farmers took advantage of wheat's ability to establish itself in annual grasslands by enclosing fields against grazing animals and re-sowing stands after they had been harvested, without the need to systematically remove vegetation or till the soil. They may also have exploited natural wetlands and floodplains to practice décrue farming, sowing seeds in the soil left behind by receding floodwater. It was harvested with stone-bladed sickles. The ease of storing wheat and other cereals led farming households to become gradually more reliant on it over time, especially after they developed individual storage facilities that were large enough to hold more than a year's supply.
Wheat grain was stored after threshing, with the chaff removed. It was then processed into flour using ground stone mortars. Bread was made from ground einkorn and the tubers of a form of club rush (Bolboschoenus glaucus) as early as 12,400 BC. At Çatalhöyük (), both wholegrain wheat and flour were used to prepare bread, porridge and gruel. Apart from food, wheat may also have been important to Neolithic societies as a source of straw, which could be used for fuel, wicker-making, or wattle and daub construction.
Spread
Domestic wheat was quickly spread to regions where its wild ancestors did not grow naturally. Emmer was introduced to Cyprus as early as 8600 BC and einkorn ; emmer reached Greece by 6500 BC, Egypt shortly after 6000 BC, and Germany and Spain by 5000 BC. "The early Egyptians were developers of bread and the use of the oven and developed baking into one of the first large-scale food production industries." By 4000 BC, wheat had reached the British Isles and Scandinavia. Wheat likely appeared in the lower Yellow River region of China around 2600 BC.
The oldest evidence for hexaploid wheat comes from DNA analysis of wheat seeds, dating to around 6400–6200 BC, recovered from Çatalhöyük. The earliest known wheat with sufficient gluten for yeasted breads was found in a granary at Assiros in Macedonia, dated to 1350 BC. From the Middle East, wheat continued to spread across Europe and to the Americas in the Columbian exchange. In the British Isles, wheat straw (thatch) was used for roofing in the Bronze Age, and remained in common use until the late 19th century. White wheat bread was historically a high-status food, but during the nineteenth century it became an item of mass consumption in Britain, displacing oats, barley and rye from diets in the north of the country. It became "a sign of a high degree of culture". After 1860, the enormous expansion of wheat production in the United States flooded the world market, lowering prices by 40%, and (along with the expansion of potato growing) made a major contribution to the nutritional welfare of the poor.
Evolution
Phylogeny
Some wheat species are diploid, with two sets of chromosomes, but many are stable polyploids, with four sets of chromosomes (tetraploid) or six (hexaploid). Einkorn wheat (Triticum monococcum) is diploid (AA, two complements of seven chromosomes, 2n=14). Most tetraploid wheats (e.g. emmer and durum wheat) are derived from wild emmer, T. dicoccoides. Wild emmer is itself the result of a hybridization between two diploid wild grasses, T. urartu and a wild goatgrass such as Ae. speltoides. The hybridization that formed wild emmer (AABB, four complements of seven chromosomes in two groups, 4n=28) occurred in the wild, long before domestication, and was driven by natural selection. Hexaploid wheats evolved in farmers' fields as wild emmer hybridized with another goatgrass, Ae. squarrosa or Ae. tauschii, to make the hexaploid wheats including bread wheat.
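As a worked check of the chromosome arithmetic implied above (a sketch; the base chromosome number x = 7 follows from the figures given in the text, while the hexaploid count of 42 is a standard value for bread wheat assumed here rather than stated explicitly):

\[
2x = 2 \times 7 = 14 \;(\text{einkorn, AA}), \qquad
4x = 4 \times 7 = 28 \;(\text{emmer and durum, AABB}), \qquad
6x = 6 \times 7 = 42 \;(\text{bread wheat, AABBDD}).
\]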
A 2007 molecular phylogeny of the wheats gives a not fully resolved cladogram of the major cultivated species; the large amount of hybridisation makes resolution difficult. In that cladogram, markings such as "6N" indicate the degree of polyploidy of each species.
Taxonomy
During 10,000 years of cultivation, numerous forms of wheat, many of them hybrids, have developed under a combination of artificial and natural selection. This complexity and diversity of status has led to much confusion in the naming of wheats.
Major species
Hexaploid species (6N)
Common wheat or bread wheat (T. aestivum) – The most widely cultivated species in the world.
Spelt (T. spelta) – A species largely replaced by bread wheat, but in the 21st century grown, often organically, for artisanal bread and pasta.
Tetraploid species (4N)
Durum (T. durum) – A wheat widely used today, and the second most widely cultivated wheat.
Emmer (T. turgidum subsp. dicoccum and T. t. conv. durum) – A species cultivated in ancient times, derived from wild emmer, T. dicoccoides, but no longer in widespread use.
Khorasan or Kamut (T. turgidum ssp. turanicum, also called T. turanicum) is an ancient grain type; Khorasan is a historical region in modern-day Afghanistan and the northeast of Iran. The grain is twice the size of modern wheat and has a rich nutty flavor.
Diploid species (2N)
Einkorn (T. monococcum) – Domesticated from wild einkorn, T. boeoticum, at the same time as emmer wheat.
Hulled versus free-threshing species
The wild species of wheat, along with the domesticated varieties einkorn, emmer and spelt, have hulls. This more primitive morphology (in evolutionary terms) consists of toughened glumes that tightly enclose the grains, and (in domesticated wheats) a semi-brittle rachis that breaks easily on threshing. The result is that when threshed, the wheat ear breaks up into spikelets. To obtain the grain, further processing, such as milling or pounding, is needed to remove the hulls or husks. Hulled wheats are often stored as spikelets because the toughened glumes give good protection against pests of stored grain. In free-threshing (or naked) forms, such as durum wheat and common wheat, the glumes are fragile and the rachis tough. On threshing, the chaff breaks up, releasing the grains.
As a food
Naming of grain classes
Wheat grain classes are named by colour, season, and hardness. The classes used in the United States are:
Durum – Hard, translucent, light-coloured grain used to make semolina flour for pasta and bulghur; high in protein, specifically, gluten protein.
Hard Red Spring – Hard, brownish, high-protein wheat used for bread and hard baked goods. Bread flour and high-gluten flours are commonly made from hard red spring wheat. It is primarily traded on the Minneapolis Grain Exchange.
Hard Red Winter – Hard, brownish, mellow high-protein wheat used for bread, hard baked goods and as an adjunct to increase protein in pastry flour for pie crusts. Some brands of unbleached all-purpose flours are made from hard red winter wheat alone. It is primarily traded on the Kansas City Board of Trade. Many varieties grown from Kansas south descend from a variety known as "Turkey red", which was brought to Kansas by Mennonite immigrants from Russia. Marquis wheat was developed to prosper in the shorter growing season in Canada, and is grown as far south as southern Nebraska.
Soft Red Winter – Soft, low-protein wheat used for cakes, pie crusts, biscuits, and muffins. Cake flour, pastry flour, and some self-rising flours with baking powder and salt added, for example, are made from soft red winter wheat. It is primarily traded on the Chicago Board of Trade.
Hard White – Hard, light-coloured, opaque, chalky, medium-protein wheat planted in dry, temperate areas. Used for bread and brewing.
Soft White – Soft, light-coloured, very low protein wheat grown in temperate moist areas. Used for pie crusts and pastry.
Food value and uses
Wheat is a staple cereal worldwide. Raw wheat berries can be ground into flour or, using hard durum wheat only, can be ground into semolina; germinated and dried creating malt; crushed or cut into cracked wheat; parboiled (or steamed), dried, crushed and de-branned into bulgur also known as groats. If the raw wheat is broken into parts at the mill, as is usually done, the outer husk or bran can be used in several ways. Wheat is a major ingredient in such foods as bread, porridge, crackers, biscuits, muesli, pancakes, pasta, pies, pastries, pizza, semolina, cakes, cookies, muffins, rolls, doughnuts, gravy, beer, vodka, boza (a fermented beverage), and breakfast cereals. In manufacturing wheat products, gluten is valuable to impart viscoelastic functional qualities in dough, enabling the preparation of diverse processed foods such as breads, noodles, and pasta that facilitate wheat consumption.
Nutrition
Raw red winter wheat is 13% water, 71% carbohydrates including 12% dietary fiber, 13% protein, and 2% fat (table). Some 75–80% of the protein content consists of gluten. In a reference amount of , wheat provides of food energy and is a rich source (20% or more of the Daily Value, DV) of multiple dietary minerals, such as manganese, phosphorus, magnesium, zinc, and iron (table). The B vitamins, niacin (36% DV), thiamine (33% DV), and vitamin B6 (23% DV), are present in significant amounts (table).
Wheat is a significant source of vegetable proteins in human food, having a relatively high protein content compared to other major cereals. However, wheat proteins have a low quality for human nutrition, according to the DIAAS protein quality evaluation method. Though they contain adequate amounts of the other essential amino acids, at least for adults, wheat proteins are deficient in the essential amino acid lysine. Because the proteins present in the wheat endosperm (gluten proteins) are particularly poor in lysine, white flours are more deficient in lysine compared with whole grains. Significant efforts in plant breeding are made to develop lysine-rich wheat varieties, without success, . Supplementation with proteins from other food sources (mainly legumes) is commonly used to compensate for this deficiency, since the limitation of a single essential amino acid causes the others to break down and become excreted, which is especially important during growth.
Health advisories
Consumed worldwide by billions of people, wheat is a significant food for human nutrition, particularly in the least developed countries where wheat products are primary foods. When eaten as the whole grain, wheat supplies multiple nutrients and dietary fiber recommended for children and adults.
In genetically susceptible people, wheat gluten can trigger coeliac disease. Coeliac disease affects about 1% of the general population in developed countries. The only known effective treatment is a strict lifelong gluten-free diet. While coeliac disease is caused by a reaction to wheat proteins, it is not the same as a wheat allergy. Other diseases triggered by eating wheat are non-coeliac gluten sensitivity (estimated to affect 0.5% to 13% of the general population), gluten ataxia, and dermatitis herpetiformis.
Certain short-chain carbohydrates present in wheat, known as FODMAPs (mainly fructose polymers), may be the cause of non-coeliac gluten sensitivity. , reviews have concluded that FODMAPs only explain certain gastrointestinal symptoms, such as bloating, but not the extra-digestive symptoms that people with non-coeliac gluten sensitivity may develop.
Other wheat proteins, amylase-trypsin inhibitors, have been identified as the possible activator of the innate immune system in coeliac disease and non-coeliac gluten sensitivity. These proteins are part of the plant's natural defense against insects and may cause intestinal inflammation in humans.
Production and consumption
Global
In 2022, world wheat production was 808.4 million tonnes, led by China, India, and Russia which collectively provided 43.22% of the world total. , the largest exporters were Russia (32 million tonnes), United States (27), Canada (23) and France (20), while the largest importers were Indonesia (11 million tonnes), Egypt (10.4) and Turkey (10.0).
In 2021, wheat was grown on worldwide, more than any other food crop.
World trade in wheat is greater than for all other crops combined.
Global demand for wheat is increasing due to the unique viscoelastic and adhesive properties of gluten proteins, which facilitate the production of processed foods, whose consumption is increasing as a result of the worldwide industrialization process and westernization of diets.
19th century
Wheat became a central agricultural endeavor in the worldwide British Empire in the 19th century, and remains of great importance in Australia, Canada and India. In Australia, with vast lands and a limited work force, expanded production depended on technological advances, especially regarding irrigation and machinery. By the 1840s there were 900 growers in South Australia. They used "Ridley's Stripper", a reaper-harvester perfected by John Ridley in 1843, to remove the heads of grain. In Canada, modern farm implements made large-scale wheat farming possible from the late 1840s. By 1879, Saskatchewan was the center, followed by Alberta, Manitoba and Ontario, as the spread of railway lines allowed easy exports to Britain. By 1910, wheat made up 22% of Canada's exports, rising to 25% in 1930 despite the sharp decline in prices during the worldwide Great Depression. Efforts to expand wheat production in South Africa, Kenya and India were stymied by low yields and disease. However, by 2000 India had become the second largest producer of wheat in the world. In the 19th century the American wheat frontier moved rapidly westward. By the 1880s, 70% of American exports went to British ports. The first successful grain elevator was built in Buffalo in 1842. The cost of transport fell rapidly. In 1869 it cost 37 cents to transport a bushel of wheat from Chicago to Liverpool. In 1905 it was 10 cents.
Late 20th century yields
In the 20th century, global wheat output expanded by about 5-fold, but until about 1955 most of this reflected increases in wheat crop area, with lesser (about 20%) increases in crop yields per unit area. After 1955 however, there was a ten-fold increase in the rate of wheat yield improvement per year, and this became the major factor allowing global wheat production to increase. Thus technological innovation and scientific crop management with synthetic nitrogen fertilizer, irrigation and wheat breeding were the main drivers of wheat output growth in the second half of the century. There were some significant decreases in wheat crop area, for instance in North America. Better seed storage and germination ability (and hence a smaller requirement to retain harvested crop for next year's seed) is another 20th-century technological innovation. In Medieval England, farmers saved one-quarter of their wheat harvest as seed for the next crop, leaving only three-quarters for food and feed consumption. By 1999, the global average seed use of wheat was about 6% of output.
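As a brief worked comparison of the two seed-retention figures just cited (simple arithmetic only, no additional data):

\[
1 - \tfrac{1}{4} = 75\% \;\text{of the medieval harvest left for food and feed}, \qquad
1 - 0.06 = 94\% \;\text{of output left in 1999}.
\]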
In the 21st century, rising temperatures associated with global warming are reducing wheat yield in several locations.
Agronomy
Growing wheat
Wheat is an annual crop. It can be planted in autumn and harvested in early summer as winter wheat in climates that are not too severe, or planted in spring and harvested in autumn as spring wheat. It is normally planted after tilling the soil by ploughing and then harrowing to kill weeds and create an even surface. The seeds are then scattered on the surface, or drilled into the soil in rows. Winter wheat lies dormant during a winter freeze. It needs to develop to a height of 10 to 15 cm before the cold intervenes, so as to be able to survive the winter; it requires a period with the temperature at or near freezing, its dormancy then being broken by the thaw or rise in temperature. Spring wheat does not undergo dormancy. Wheat requires a deep soil, preferably a loam with organic matter, and available minerals including soil nitrogen, phosphorus, and potassium. An acid and peaty soil is not suitable. Wheat needs some 30 to 38 cm of rain in the growing season to form a good crop of grain.
The farmer may intervene while the crop is growing to add fertilizer, water by irrigation, or pesticides such as herbicides to kill broad-leaved weeds or insecticides to kill insect pests. The farmer may assess soil minerals, soil water, weed growth, and the arrival of pests in order to decide on timely and cost-effective corrective actions, and may assess crop ripeness and water content to select the right moment to harvest. Harvesting involves reaping, cutting the stems to gather the crop, and threshing, breaking the ears to release the grain; both steps are carried out by a combine harvester. The grain is then dried so that it can be stored safe from mould fungi.
Crop development
Wheat normally needs between 110 and 130 days between sowing and harvest, depending upon climate, seed type, and soil conditions. Optimal crop management requires that the farmer have a detailed understanding of each stage of development in the growing plants. In particular, spring fertilizers, herbicides, fungicides, and growth regulators are typically applied only at specific stages of plant development. For example, it is currently recommended that the second application of nitrogen is best done when the ear (not visible at this stage) is about 1 cm in size (Z31 on Zadoks scale). Knowledge of stages is also important to identify periods of higher risk from the climate. Farmers benefit from knowing when the 'flag leaf' (last leaf) appears, as this leaf represents about 75% of photosynthesis reactions during the grain filling period, and so should be preserved from disease or insect attacks to ensure a good yield. Several systems exist to identify crop stages, with the Feekes and Zadoks scales being the most widely used. Each scale is a standard system which describes successive stages reached by the crop during the agricultural season. For example, the stage of pollen formation from the mother cell, and the stages between anthesis and maturity, are susceptible to high temperatures, and this adverse effect is made worse by water stress.
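To illustrate how a decimal growth-stage code can key management decisions of the kind described above, the following minimal Python sketch maps a few Zadoks codes to notes. Only the Z31 note and the flag-leaf figure come from this passage; the other stage descriptions, the chosen codes, and the lookup logic are illustrative assumptions, not recommendations from the text or any agronomic authority.

# Minimal sketch: look up an illustrative management note for a Zadoks growth-stage code.
# The Z31 note paraphrases the passage above; the Z39 and Z65 entries are assumptions
# added for illustration and should not be read as agronomic advice.
ZADOKS_NOTES = {
    31: ("First node detectable, ear about 1 cm: the passage suggests timing the "
         "second nitrogen application around this stage."),
    39: ("Flag leaf emerged (assumed code): protect the flag leaf, which accounts "
         "for about 75% of photosynthesis during grain filling."),
    65: ("Mid-anthesis (assumed code): pollen formation and flowering are sensitive "
         "to heat and water stress."),
}

def management_note(zadoks_code: int) -> str:
    """Return the note for the nearest annotated stage at or before the given code."""
    reached = [code for code in ZADOKS_NOTES if code <= zadoks_code]
    if not reached:
        return "No annotated stage reached yet in this sketch."
    return ZADOKS_NOTES[max(reached)]

for stage in (25, 31, 45, 70):
    print(stage, "->", management_note(stage))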
Farming techniques
Technological advances in soil preparation and seed placement at planting time, use of crop rotation and fertilizers to improve plant growth, and advances in harvesting methods have all combined to promote wheat as a viable crop. When the use of seed drills replaced broadcast sowing of seed in the 18th century, another great increase in productivity occurred. Yields of pure wheat per unit area increased as methods of crop rotation were applied to land that had long been in cultivation, and the use of fertilizers became widespread.
Improved agricultural husbandry has more recently included pervasive automation, starting with the use of threshing machines, and progressing to large and costly machines like the combine harvester which greatly increased productivity. At the same time, better varieties such as Norin 10 wheat, developed in Japan in the 1930s, or the dwarf wheat developed by Norman Borlaug in the Green Revolution, greatly increased yields.
In addition to gaps in farming system technology and knowledge, some large wheat-producing countries suffer significant post-harvest losses at the farm level and because of poor roads, inadequate storage technologies, inefficient supply chains and farmers' inability to bring the produce into retail markets dominated by small shopkeepers. Some 10% of total wheat production is lost at farm level, another 10% is lost because of poor storage and road networks, and additional amounts are lost at the retail level.
In the Punjab region of the Indian subcontinent, as well as North China, irrigation has been a major contributor to increased grain output. More widely over the last 40 years, a massive increase in fertilizer use together with the increased availability of semi-dwarf varieties in developing countries, has greatly increased yields per hectare. In developing countries, use of (mainly nitrogenous) fertilizer increased 25-fold in this period. However, farming systems rely on much more than fertilizer and breeding to improve productivity. A good illustration of this is Australian wheat growing in the southern winter cropping zone, where, despite low rainfall (300 mm), wheat cropping is successful even with relatively little use of nitrogenous fertilizer. This is achieved by crop rotation with leguminous pastures. The inclusion of a canola crop in the rotations has boosted wheat yields by a further 25%. In these low rainfall areas, better use of available soil-water (and better control of soil erosion) is achieved by retaining the stubble after harvesting and by minimizing tillage.
Pests and diseases
Pests and diseases consume 21.47% of the world's wheat crop annually.
Diseases
There are many wheat diseases, mainly caused by fungi, bacteria, and viruses. Plant breeding to develop new disease-resistant varieties, and sound crop management practices are important for preventing disease. Fungicides, used to prevent the significant crop losses from fungal disease, can be a significant variable cost in wheat production. Estimates of the amount of wheat production lost owing to plant diseases vary between 10 and 25% in Missouri. A wide range of organisms infect wheat, of which the most important are viruses and fungi.
The main wheat-disease categories are:
Seed-borne diseases: these include seed-borne scab, seed-borne Stagonospora (previously known as Septoria), common bunt (stinking smut), and loose smut. These are managed with fungicides.
Leaf- and head- blight diseases: Powdery mildew, leaf rust, Septoria tritici leaf blotch, Stagonospora (Septoria) nodorum leaf and glume blotch, and Fusarium head scab.
Crown and root rot diseases: Two of the more important of these are 'take-all' and Cephalosporium stripe. Both of these diseases are soil borne.
Stem rust diseases: Caused by Puccinia graminis f. sp. tritici (basidiomycete) fungi e.g. Ug99
Wheat blast: Caused by Magnaporthe oryzae Triticum.
Viral diseases: Wheat spindle streak mosaic (yellow mosaic) and barley yellow dwarf are the two most common viral diseases. Control can be achieved by using resistant varieties.
A historically significant disease of cereals including wheat, though commoner in rye, is ergot; it is unusual among plant diseases in also causing sickness in humans who ate grain contaminated with the fungus involved, Claviceps purpurea.
Animal pests
Among insect pests of wheat is the wheat stem sawfly, a chronic pest in the Northern Great Plains of the United States and in the Canadian Prairies.
Wheat is the food plant of the larvae of some Lepidoptera (butterfly and moth) species including the flame, rustic shoulder-knot, setaceous Hebrew character and turnip moth. Early in the season, many species of birds and rodents feed upon wheat crops. These animals can cause significant damage to a crop by digging up and eating newly planted seeds or young plants. They can also damage the crop late in the season by eating the grain from the mature spike. Recent post-harvest losses in cereals amount to billions of dollars per year in the United States alone, and damage to wheat by various borers, beetles and weevils is no exception. Rodents can also cause major losses during storage, and in major grain growing regions, field mice numbers can sometimes build up explosively to plague proportions because of the ready availability of food. To reduce the amount of wheat lost to post-harvest pests, Agricultural Research Service scientists have developed an "insect-o-graph", which can detect insects in wheat that are not visible to the naked eye. The device uses electrical signals to detect the insects as the wheat is being milled. The new technology is so precise that it can detect 5–10 infested seeds out of 30,000 good ones.
Breeding objectives
In traditional agricultural systems, wheat populations consist of landraces, informal farmer-maintained populations that often maintain high levels of morphological diversity. Although landraces of wheat are no longer extensively grown in Europe and North America, they continue to be important elsewhere. The origins of formal wheat breeding lie in the nineteenth century, when single line varieties were created through selection of seed from a single plant noted to have desired properties. Modern wheat breeding developed in the first years of the twentieth century and was closely linked to the development of Mendelian genetics. The standard method of breeding inbred wheat cultivars is by crossing two lines using hand emasculation, then selfing or inbreeding the progeny. Selections are identified (shown to have the genes responsible for the varietal differences) ten or more generations before release as a variety or cultivar.
Major breeding objectives include high grain yield, good quality, disease and insect resistance, and tolerance to abiotic stresses, including mineral, moisture and heat tolerance. Wheat has been the subject of mutation breeding, with the use of gamma rays, X-rays, ultraviolet light (collectively, radiation breeding), and sometimes harsh chemicals. Hundreds of wheat varieties have been created through these methods, going back as far as 1960, with more of them created in highly populated countries such as China. Bread wheat with high grain iron and zinc content has been developed through gamma radiation breeding, and through conventional selection breeding. International wheat breeding is led by the International Maize and Wheat Improvement Center in Mexico. ICARDA is another major public sector international wheat breeder, but it was forced to relocate from Syria to Lebanon during the Syrian Civil War.
Pathogens and wheat are in a constant process of coevolution. Spore-producing wheat rusts are substantially adapted towards successful spore propagation, which is essentially to say towards a high reproduction number. These pathogens tend towards high-reproduction evolutionary attractors.
For higher yields
The presence of certain versions of wheat genes has been important for crop yields. Genes for the 'dwarfing' trait, first used by Japanese wheat breeders to produce Norin 10 short-stalked wheat, have had a huge effect on wheat yields worldwide, and were major factors in the success of the Green Revolution in Mexico and Asia, an initiative led by Norman Borlaug. Dwarfing genes enable the carbon that is fixed in the plant during photosynthesis to be diverted towards seed production, and they also help prevent the problem of lodging. "Lodging" occurs when an ear stalk falls over in the wind and rots on the ground, and heavy nitrogenous fertilization of wheat makes the grass grow taller and become more susceptible to this problem. By 1997, 81% of the developing world's wheat area was planted to semi-dwarf wheats, giving both increased yields and better response to nitrogenous fertilizer.
T. turgidum subsp. polonicum, known for its longer glumes and grains, has been bred into main wheat lines for its grain size effect, and likely has contributed these traits to Triticum petropavlovskyi and the Portuguese landrace group Arrancada. As in many plants, MADS-box genes influence flower development and, more specifically, as in other agricultural Poaceae, influence yield. Despite that importance, little research has been done into MADS-box and other such spikelet and flower genetics in wheat specifically.
The world record wheat yield is about , reached in New Zealand in 2017. A project in the UK, led by Rothamsted Research has aimed to raise wheat yields in the country to by 2020, but in 2018 the UK record stood at , and the average yield was just .
For disease resistance
Wild grasses in the genus Triticum and related genera, and grasses such as rye, have been a source of many disease-resistance traits for cultivated wheat breeding since the 1930s. Some resistance genes have been identified against Pyrenophora tritici-repentis, especially races 1 and 5, those most problematic in Kazakhstan. The wild relative Aegilops tauschii is the source of several genes effective against TTKSK/Ug99 (Sr33, Sr45, Sr46, and SrTA1662), of which Sr33 and SrTA1662 were identified by Olson et al., 2013, who also briefly review Sr45 and Sr46.
Lr67 is an R gene, a dominant-negative allele conferring partial adult-plant resistance, discovered and molecularly characterized by Moore et al., 2015. It is effective against all races of leaf, stripe, and stem rusts, and powdery mildew (Blumeria graminis). The resistance is produced by a mutation of two amino acids in what is predicted to be a hexose transporter; the resistant protein then heterodimerizes with the product of the susceptible allele, with the downstream result of reducing glucose uptake.
Lr34 is widely deployed in cultivars due to its abnormally broad effectiveness, conferring resistance against leaf rust, stripe rust, and powdery mildew. This important quantitative resistance gene has been isolated and used intensively in wheat cultivation worldwide; it provides a novel resistance mechanism. Krattinger et al. 2009 found Lr34 to encode an ABC transporter, and concluded that this is the probable reason for its effectiveness and for its 'slow rusting'/adult resistance phenotype.
A widely used powdery mildew resistance has been introgressed from rye (Secale cereale). It comes from the rye 1R chromosome, a source of many resistances since the 1960s.
Resistance to Fusarium head blight (FHB, Fusarium ear blight) is also an important breeding target. Marker-assisted breeding panels involving kompetitive allele-specific PCR (KASP) can be used; Singh et al. 2019 identified a KASP genetic marker for a pore-forming toxin-like gene providing FHB resistance.
In 2003 the first resistance genes against fungal diseases in wheat were isolated. In 2021, novel resistance genes were identified in wheat against powdery mildew and wheat leaf rust.
Modified resistance genes have been tested in transgenic wheat and barley plants.
To create hybrid vigor
Because wheat self-pollinates, creating hybrid seed to provide the possible benefits of heterosis, or hybrid vigor (as in the familiar F1 hybrids of maize), is extremely labor-intensive; the high cost of hybrid wheat seed relative to its moderate benefits has kept farmers from adopting it widely despite nearly 90 years of effort. Commercial hybrid wheat seed has been produced using chemical hybridizing agents, plant growth regulators that selectively interfere with pollen development, or naturally occurring cytoplasmic male sterility systems. Hybrid wheat has been a limited commercial success in Europe (particularly France), the United States and South Africa.
Synthetic hexaploids made by crossing the wild goatgrass wheat ancestor Aegilops tauschii, and other Aegilops, and various durum wheats are now being deployed, and these increase the genetic diversity of cultivated wheats.
For gluten content
Modern bread wheat varieties have been cross-bred to contain greater amounts of gluten, which affords significant advantages for improving the quality of breads and pastas from a functional point of view. However, a 2020 study that grew and analyzed 60 wheat cultivars from between 1891 and 2010 found no changes in albumin/globulin and gluten contents over time. "Overall, the harvest year had a more significant effect on protein composition than the cultivar. At the protein level, we found no evidence to support an increased immunostimulatory potential of modern winter wheat."
For water efficiency
Stomata (or leaf pores) are involved both in the uptake of carbon dioxide gas from the atmosphere and in water vapor losses from the leaf due to transpiration. Basic physiological investigation of these gas exchange processes has yielded a carbon isotope-based method used for breeding wheat varieties with improved water-use efficiency. These varieties can improve crop productivity in rain-fed dry-land wheat farms.
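The article does not state the formula behind the carbon isotope method, but such breeding work is commonly based on a leaf carbon-isotope discrimination model of roughly the following form (a background sketch of the widely used Farquhar-type relation, not necessarily the exact formulation referenced here):

\[
\Delta \approx a + (b - a)\,\frac{c_i}{c_a},
\]

where \(c_i/c_a\) is the ratio of intercellular to ambient CO2 concentration, a is the fractionation due to diffusion (about 4.4 per mil) and b the fractionation due to carboxylation by Rubisco (about 27 per mil). Because transpiration efficiency falls as \(c_i/c_a\) rises, lines with lower discrimination tend to have higher water-use efficiency, which is the basis for selection.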
For insect resistance
The complex genome of wheat has made its improvement difficult. Comparison of hexaploid wheat genomes using a range of chromosome pseudomolecule and molecular scaffold assemblies in 2020 has enabled the resistance potential of its genes to be assessed. Findings include the identification of "a detailed multi-genome-derived nucleotide-binding leucine-rich repeat protein repertoire" which contributes to disease resistance, while the gene Sm1 provides a degree of insect resistance, for instance against the orange wheat blossom midge.
Genomics
Decoding the genome
In 2010, 95% of the genome of Chinese Spring line 42 wheat was decoded. This genome was released in a basic format for scientists and plant breeders to use but was not fully annotated. In 2012, an essentially complete gene set of bread wheat was published. Random shotgun libraries of total DNA and cDNA from the T. aestivum cv. Chinese Spring (CS42) were sequenced to generate 85 Gb of sequence (220 million reads) and identified between 94,000 and 96,000 genes. In 2018, a more complete Chinese Spring genome was released by a different team. In 2020, 15 genome sequences from various locations and varieties around the world were reported, with examples of their own use of the sequences to localize particular insect and disease resistance factors. Much of this resistance is controlled by R genes, which are highly race-specific.
Genetic engineering
For decades, the primary genetic modification technique has been non-homologous end joining (NHEJ). However, since its introduction, the CRISPR/Cas9 tool has been extensively adopted, for example:
To intentionally damage three homologs of TaNP1 (a glucose-methanol-choline oxidoreductase gene) to produce a novel male sterility trait, by Li et al. 2020
Blumeria graminis f.sp. tritici resistance has been produced by Shan et al. 2013 and Wang et al. 2014 by editing one of the mildew resistance locus o genes (more specifically one of the Triticum aestivum MLO (TaMLO) genes)
Triticum aestivum EDR1 (TaEDR1), the EDR1 gene, which inhibits resistance to powdery mildew (Blumeria graminis f.sp. tritici), has been knocked out by Zhang et al. 2017 to improve that resistance
Triticum aestivum HRC (TaHRC) has been disabled by Su et al. 2019 thus producing Gibberella zeae resistance.
Triticum aestivum Ms1 (TaMs1) has been knocked out by Okada et al. 2019 to produce another novel male sterility
and Triticum aestivum acetolactate synthase (TaALS) and Triticum aestivum acetyl-CoA-carboxylase (TaACC) were subjected to base changes by Zhang et al. 2019 (in two publications) to confer herbicide resistance to ALS inhibitors and ACCase inhibitors respectively
These examples illustrate the rapid deployment and results that CRISPR/Cas9 has shown in wheat disease resistance improvement.
In art
The Dutch artist Vincent van Gogh created the series Wheat Fields between 1885 and 1890, consisting of dozens of paintings made mostly in different parts of rural France. They depict wheat crops, sometimes with farm workers, in varied seasons and styles, sometimes green, sometimes at harvest. Wheatfield with Crows was one of his last paintings, and is considered to be among his greatest works.
In 1967, the American artist Thomas Hart Benton made his oil on wood painting Wheat, showing a row of uncut wheat plants, occupying almost the whole height of the painting, between rows of freshly-cut stubble. The painting is held by the Smithsonian American Art Museum.
In 1982, the American conceptual artist Agnes Denes grew a two-acre field of wheat at Battery Park, Manhattan. The ephemeral artwork has been described as an act of protest. The harvested wheat was divided and sent to 28 world cities for an exhibition entitled "The International Art Show for the End of World Hunger".
| Biology and health sciences | Food and drink | null |
36863 | https://en.wikipedia.org/wiki/Lung | Lung | The lungs are the primary organs of the respiratory system in many animals, including humans. In mammals and most other tetrapods, two lungs are located near the backbone on either side of the heart. Their function in the respiratory system is to extract oxygen from the atmosphere and transfer it into the bloodstream, and to release carbon dioxide from the bloodstream into the atmosphere, in a process of gas exchange. Respiration is driven by different muscular systems in different species. Mammals, reptiles and birds use their musculoskeletal systems to support and foster breathing. In early tetrapods, air was driven into the lungs by the pharyngeal muscles via buccal pumping, a mechanism still seen in amphibians. In humans, the primary muscle that drives breathing is the diaphragm. The lungs also provide airflow that makes vocalisation including speech possible.
Humans have two lungs, a right lung and a left lung. They are situated within the thoracic cavity of the chest. The right lung is bigger than the left, and the left lung shares space in the chest with the heart. The lungs together weigh approximately 1.3 kilograms (2.9 lb), and the right is heavier. The lungs are part of the lower respiratory tract that begins at the trachea and branches into the bronchi and bronchioles, which receive air breathed in via the conducting zone. These divide until air reaches microscopic alveoli, where gas exchange takes place. Together, the lungs contain approximately 2,400 kilometers (1,500 mi) of airways and 300 to 500 million alveoli. Each lung is enclosed within a pleural sac of two pleurae which allows the inner and outer walls to slide over each other whilst breathing takes place, without much friction. The inner visceral pleura divides each lung as fissures into sections called lobes. The right lung has three lobes and the left has two. The lobes are further divided into bronchopulmonary segments and lobules. The lungs have a unique blood supply, receiving deoxygenated blood sent from the heart for the purposes of receiving oxygen (the pulmonary circulation) and a separate supply of oxygenated blood (the bronchial circulation).
The tissue of the lungs can be affected by a number of respiratory diseases including pneumonia and lung cancer. Chronic diseases such as chronic obstructive pulmonary disease and emphysema can be related to smoking or exposure to harmful substances. Diseases such as bronchitis can also affect the respiratory tract. Medical terms related to the lung often begin with pulmo-, from the Latin pulmonarius (of the lungs) as in pulmonology, or with pneumo- (from Greek πνεύμων "lung") as in pneumonia.
In embryonic development, the lungs begin to develop as an outpouching of the foregut, a tube which goes on to form the upper part of the digestive system. When the lungs are formed the fetus is held in the fluid-filled amniotic sac and so they do not function to breathe. Blood is also diverted from the lungs through the ductus arteriosus. At birth however, air begins to pass through the lungs, and the diversionary duct closes so that the lungs can begin to respire. The lungs only fully develop in early childhood.
Structure
Anatomy
In humans the lungs are located in the chest on either side of the heart in the rib cage. They are conical in shape with a narrow rounded apex at the top, and a broad concave base that rests on the convex surface of the diaphragm. The apex of the lung extends into the root of the neck, reaching shortly above the level of the sternal end of the first rib. The lungs stretch from close to the backbone in the rib cage to the front of the chest and downwards from the lower part of the trachea to the diaphragm.
The left lung shares space with the heart, and has an indentation in its border called the cardiac notch of the left lung to accommodate this. The front and outer sides of the lungs face the ribs, which make light indentations on their surfaces. The medial surfaces of the lungs face towards the centre of the chest, and lie against the heart, great vessels, and the carina where the trachea divides into the two main bronchi. The cardiac impression is an indentation formed on the surfaces of the lungs where they rest against the heart.
Both lungs have a central recession called the hilum, where the blood vessels and airways pass into the lungs making up the root of the lung. There are also bronchopulmonary lymph nodes on the hilum.
The lungs are surrounded by the pulmonary pleurae. The pleurae are two serous membranes; the outer parietal pleura lines the inner wall of the rib cage and the inner visceral pleura directly lines the surface of the lungs. Between the pleurae is a potential space called the pleural cavity containing a thin layer of lubricating pleural fluid.
Lobes
Each lung is divided into sections called lobes by the infoldings of the visceral pleura as fissures. Lobes are divided into segments, and segments have further divisions as lobules. There are three lobes in the right lung and two lobes in the left lung.
Fissures
The fissures are formed in early prenatal development by invaginations of the visceral pleura that divide the lobar bronchi and section the lungs into lobes, which helps in their expansion. The right lung is divided into three lobes by a horizontal fissure and an oblique fissure. The left lung is divided into two lobes by an oblique fissure, which is closely aligned with the oblique fissure in the right lung. In the right lung the upper, horizontal fissure separates the upper (superior) lobe from the middle lobe, and the lower, oblique fissure separates the lower lobe from the middle and upper lobes.
Variations in the fissures are fairly common: a fissure may be incompletely formed, present as an extra fissure (as in the azygos fissure), or absent. Incomplete fissures are responsible for interlobar collateral ventilation, airflow between lobes, which is unwanted in some lung volume reduction procedures.
Segments
The main or primary bronchi enter the lungs at the hilum and initially branch into secondary bronchi also known as lobar bronchi that supply air to each lobe of the lung. The lobar bronchi branch into tertiary bronchi also known as segmental bronchi and these supply air to the further divisions of the lobes known as bronchopulmonary segments. Each bronchopulmonary segment has its own (segmental) bronchus and arterial supply. Segments for the left and right lung are shown in the table. The segmental anatomy is useful clinically for localising disease processes in the lungs. A segment is a discrete unit that can be surgically removed without seriously affecting surrounding tissue.
Right lung
The right lung has both more lobes and segments than the left. It is divided into three lobes, an upper, middle, and a lower lobe by two fissures, one oblique and one horizontal. The upper, horizontal fissure, separates the upper from the middle lobe. It begins in the lower oblique fissure near the posterior border of the lung, and, running horizontally forward, cuts the anterior border on a level with the sternal end of the fourth costal cartilage; on the mediastinal surface it may be traced back to the hilum. The lower, oblique fissure, separates the lower from the middle and upper lobes and is closely aligned with the oblique fissure in the left lung.
The mediastinal surface of the right lung is indented by a number of nearby structures. The heart sits in an impression called the cardiac impression. Above the hilum of the lung is an arched groove for the azygos vein, and above this is a wide groove for the superior vena cava and right brachiocephalic vein; behind this, and close to the top of the lung is a groove for the brachiocephalic artery. There is a groove for the oesophagus behind the hilum and the pulmonary ligament, and near the lower part of the oesophageal groove is a deeper groove for the inferior vena cava before it enters the heart.
The weight of the right lung varies between individuals, with a standard reference range in men of and in women of .
Left lung
The left lung is divided into two lobes, an upper and a lower lobe, by the oblique fissure, which extends from the costal to the mediastinal surface of the lung both above and below the hilum. The left lung, unlike the right, does not have a middle lobe, though it does have a homologous feature, a projection of the upper lobe termed the lingula. Its name means "little tongue". The lingula on the left lung serves as an anatomic parallel to the middle lobe on the right lung, with both areas being predisposed to similar infections and anatomic complications. There are two bronchopulmonary segments of the lingula: superior and inferior.
The mediastinal surface of the left lung has a large cardiac impression where the heart sits. This is deeper and larger than that on the right lung, at which level the heart projects to the left.
On the same surface, immediately above the hilum, is a well-marked curved groove for the aortic arch, and a groove below it for the descending aorta. The left subclavian artery, a branch off the aortic arch, sits in a groove from the arch to near the apex of the lung. A shallower groove in front of the artery and near the edge of the lung, lodges the left brachiocephalic vein. The oesophagus may sit in a wider shallow impression at the base of the lung.
By standard reference range, the weight of the left lung is in men and in women.
Microanatomy
The lungs are part of the lower respiratory tract, and accommodate the bronchial airways when they branch from the trachea. The bronchial airways terminate in alveoli, which make up the functional tissue (parenchyma) of the lung together with veins, arteries, nerves, and lymphatic vessels. The trachea and bronchi have plexuses of lymph capillaries in their mucosa and submucosa. The smaller bronchi have a single layer of lymph capillaries, and they are absent in the alveoli. The lungs are supplied with the largest lymphatic drainage system of any organ in the body. Each lung is surrounded by a serous membrane of visceral pleura, which has an underlying layer of loose connective tissue attached to the substance of the lung.
Connective tissue
The connective tissue of the lungs is made up of elastic and collagen fibres that are interspersed between the capillaries and the alveolar walls. Elastin is the key protein of the extracellular matrix and is the main component of the elastic fibres. Elastin gives the necessary elasticity and resilience required for the persistent stretching involved in breathing, known as lung compliance. It is also responsible for the elastic recoil needed. Elastin is more concentrated in areas of high stress such as the openings of the alveoli, and alveolar junctions. The connective tissue links all the alveoli to form the lung parenchyma which has a sponge-like appearance. The alveoli have interconnecting air passages in their walls known as the pores of Kohn.
Respiratory epithelium
All of the lower respiratory tract, including the trachea, bronchi, and bronchioles, is lined with respiratory epithelium. This is a ciliated epithelium interspersed with goblet cells, which produce mucin, the main component of mucus, as well as ciliated cells, basal cells, macrophages and, in the terminal bronchioles, club cells with actions similar to basal cells. The epithelial cells and the submucosal glands throughout the respiratory tract secrete airway surface liquid (ASL), the composition of which is tightly regulated and determines how well mucociliary clearance works.
Pulmonary neuroendocrine cells are found throughout the respiratory epithelium including the alveolar epithelium, though they only account for around 0.5 percent of the total epithelial population. PNECs are innervated airway epithelial cells that are particularly focused at airway junction points. These cells can produce serotonin, dopamine, and norepinephrine, as well as polypeptide products. Cytoplasmic processes from the pulmonary neuroendocrine cells extend into the airway lumen where they may sense the composition of inspired gas.
Bronchial airways
In the bronchi there are incomplete tracheal rings of cartilage and smaller plates of cartilage that keep them open. Bronchioles are too narrow to support cartilage and their walls are of smooth muscle, and this is largely absent in the narrower respiratory bronchioles which are mainly just of epithelium. The absence of cartilage in the terminal bronchioles gives them an alternative name of membranous bronchioles.
Respiratory zone
The conducting zone of the respiratory tract ends at the terminal bronchioles when they branch into the respiratory bronchioles. This marks the beginning of the terminal respiratory unit called the acinus which includes the respiratory bronchioles, the alveolar ducts, alveolar sacs, and alveoli. An acinus measures up to 10 mm in diameter. A primary pulmonary lobule is the part of the lung distal to the respiratory bronchiole. Thus, it includes the alveolar ducts, sacs, and alveoli but not the respiratory bronchioles.
The unit described as the secondary pulmonary lobule is the lobule most referred to as the pulmonary lobule or respiratory lobule. This lobule is a discrete unit that is the smallest component of the lung that can be seen without aid. The secondary pulmonary lobule is likely to be made up of between 30 and 50 primary lobules. The lobule is supplied by a terminal bronchiole that branches into respiratory bronchioles. The respiratory bronchioles supply the alveoli in each acinus and are accompanied by a branch of the pulmonary artery. Each lobule is enclosed by an interlobular septum. Each acinus is incompletely separated by an intralobular septum.
The respiratory bronchiole gives rise to the alveolar ducts that lead to the alveolar sacs, which contain two or more alveoli. The walls of the alveoli are extremely thin allowing a fast rate of diffusion. The alveoli have interconnecting small air passages in their walls known as the pores of Kohn.
Alveoli
Alveoli consist of two types of alveolar cell and an alveolar macrophage. The two types of cell are known as type I and type II cells (also known as pneumocytes). Types I and II make up the walls and alveolar septa. Type I cells provide 95% of the surface area of each alveolus and are flat ("squamous"), while type II cells generally cluster in the corners of the alveoli and have a cuboidal shape. Despite this, the two cell types occur in a roughly equal ratio of 1:1 or 6:4.
Type I are squamous epithelial cells that make up the alveolar wall structure. They have extremely thin walls that enable an easy gas exchange. These type I cells also make up the alveolar septa which separate each alveolus. The septa consist of an epithelial lining and associated basement membranes. Type I cells are not able to divide, and consequently rely on differentiation from Type II cells.
Type II are larger and they line the alveoli and produce and secrete epithelial lining fluid, and lung surfactant. Type II cells are able to divide and differentiate to Type I cells.
The alveolar macrophages have an important role in the immune system. They remove substances which deposit in the alveoli including loose red blood cells that have been forced out from blood vessels.
Microbiota
There is a large presence of microorganisms in the lungs, known as the lung microbiota, that interacts with the airway epithelial cells, an interaction of probable importance in maintaining homeostasis. The microbiota is complex and dynamic in healthy people, and altered in diseases such as asthma and COPD. For example, significant changes can take place in COPD following infection with rhinovirus. Fungal genera that are commonly found as mycobiota in the microbiota include Candida, Malassezia, Saccharomyces, and Aspergillus.
Respiratory tract
The lower respiratory tract is part of the respiratory system, and consists of the trachea and the structures below this including the lungs. The trachea receives air from the pharynx and travels down to a place where it splits (the carina) into a right and left primary bronchus. These supply air to the right and left lungs, splitting progressively into the secondary and tertiary bronchi for the lobes of the lungs, and into smaller and smaller bronchioles until they become the respiratory bronchioles. These in turn supply air through alveolar ducts into the alveoli, where the exchange of gases takes place. Oxygen breathed in diffuses through the walls of the alveoli into the enveloping capillaries and into the circulation, and carbon dioxide diffuses from the blood into the lungs to be breathed out.
Estimates of the total surface area of lungs vary from ; although this is often quoted in textbooks and the media as being "the size of a tennis court", it is actually less than half the size of a singles court.
The bronchi in the conducting zone are reinforced with hyaline cartilage in order to hold open the airways. The bronchioles have no cartilage and are surrounded instead by smooth muscle. Air is warmed to , humidified and cleansed by the conducting zone. Particles from the air are removed by the cilia on the respiratory epithelium lining the passageways, in a process called mucociliary clearance.
Pulmonary stretch receptors in the smooth muscle of the airways initiate a reflex known as the Hering–Breuer reflex that prevents the lungs from over-inflation, during forceful inspiration.
Blood supply
The lungs have a dual blood supply provided by a bronchial and a pulmonary circulation. The bronchial circulation supplies oxygenated blood to the airways of the lungs, through the bronchial arteries that leave the aorta. There are usually three arteries, two to the left lung and one to the right, and they branch alongside the bronchi and bronchioles. The pulmonary circulation carries deoxygenated blood from the heart to the lungs and returns the oxygenated blood to the heart to supply the rest of the body.
The blood volume of the lungs is about 450 millilitres on average, about 9% of the total blood volume of the entire circulatory system. This quantity can easily fluctuate between one-half and twice the normal volume. Also, in the event of blood loss through hemorrhage, blood from the lungs can partially compensate by automatically transferring to the systemic circulation.
Nerve supply
The lungs are supplied by nerves of the autonomic nervous system. Input from the parasympathetic nervous system occurs via the vagus nerve. When stimulated by acetylcholine, this causes constriction of the smooth muscle lining the bronchus and bronchioles, and increases the secretions from glands. The lungs also have a sympathetic tone from norepinephrine acting on the beta 2 adrenoceptors in the respiratory tract, which causes bronchodilation.
The action of breathing takes place because of nerve signals sent by the respiratory center in the brainstem, along the phrenic nerve from the cervical plexus to the diaphragm.
Variation
The lobes of the lung are subject to anatomical variations. A horizontal interlobar fissure was found to be incomplete in 25% of right lungs, or even absent in 11% of all cases. An accessory fissure was also found in 14% and 22% of left and right lungs, respectively. An oblique fissure was found to be incomplete in 21% to 47% of left lungs. In some cases a fissure is absent, or extra, resulting in a right lung with only two lobes, or a left lung with three lobes.
A variation in the airway branching structure has been found specifically in the central airway branching. This variation is associated with the development of COPD in adulthood.
Development
The human lungs arise from the laryngotracheal groove and develop to maturity over several weeks in the foetus and for several years following birth.
The larynx, trachea, bronchi and lungs that make up the respiratory tract, begin to form during the fourth week of embryogenesis from the lung bud which appears ventrally to the caudal portion of the foregut.
The respiratory tract has a branching structure, and is also known as the respiratory tree. In the embryo this structure is developed in the process of branching morphogenesis, and is generated by the repeated splitting of the tip of the branch. In the development of the lungs (as in some other organs) the epithelium forms branching tubes. The lung has a left-right symmetry and each bud, known as a bronchial bud, grows out as a tubular epithelium that becomes a bronchus. Each bronchus branches into bronchioles. The branching is a result of the tip of each tube bifurcating. The branching process forms the bronchi, bronchioles, and ultimately the alveoli. The four genes mostly associated with branching morphogenesis in the lung are the intercellular signalling protein sonic hedgehog (SHH), the fibroblast growth factor FGF10 and its receptor FGFR2b, and bone morphogenetic protein BMP4. FGF10 is seen to have the most prominent role. FGF10 is a paracrine signalling molecule needed for epithelial branching, and SHH inhibits FGF10. The development of the alveoli is influenced by a different mechanism whereby continued bifurcation is stopped and the distal tips become dilated to form the alveoli.
At the end of the fourth week, the lung bud divides into two, the right and left primary bronchial buds on each side of the trachea. During the fifth week, the right bud branches into three secondary bronchial buds and the left branches into two secondary bronchial buds. These give rise to the lobes of the lungs, three on the right and two on the left. Over the following week, the secondary buds branch into tertiary buds, about ten on each side. From the sixth week to the sixteenth week, the major elements of the lungs appear except the alveoli. From week 16 to week 26, the bronchi enlarge and lung tissue becomes highly vascularised. Bronchioles and alveolar ducts also develop. By week 26, the terminal bronchioles have formed which branch into two respiratory bronchioles. During the period covering the 26th week until birth the important blood–air barrier is established. Specialised type I alveolar cells where gas exchange will take place, together with the type II alveolar cells that secrete pulmonary surfactant, appear. The surfactant reduces the surface tension at the air-alveolar surface which allows expansion of the alveolar sacs. The alveolar sacs contain the primitive alveoli that form at the end of the alveolar ducts,
and their appearance around the seventh month marks the point at which limited respiration would be possible, and the premature baby could survive.
Vitamin A deficiency
The developing lung is particularly vulnerable to changes in the levels of vitamin A. Vitamin A deficiency has been linked to changes in the epithelial lining of the lung and in the lung parenchyma. This can disrupt the normal physiology of the lung and predispose to respiratory diseases. Severe nutritional deficiency in vitamin A results in a reduction in the formation of the alveolar walls (septa) and to notable changes in the respiratory epithelium; alterations are noted in the extracellular matrix and in the protein content of the basement membrane. The extracellular matrix maintains lung elasticity; the basement membrane is associated with alveolar epithelium and is important in the blood-air barrier. The deficiency is associated with functional defects and disease states. Vitamin A is crucial in the development of the alveoli which continues for several years after birth.
After birth
At birth, the baby's lungs are filled with fluid secreted by the lungs and are not inflated. After birth the infant's central nervous system reacts to the sudden change in temperature and environment. This triggers the first breath, within about ten seconds after delivery. After the first breath, the fetal lung fluid is quickly absorbed into the body or exhaled. The resistance in the lung's blood vessels decreases, giving an increased surface area for gas exchange, and the lungs begin to breathe spontaneously. This accompanies other changes which result in an increased amount of blood entering the lung tissues.
At birth, the lungs are very undeveloped with only around one sixth of the alveoli of the adult lung present. The alveoli continue to form into early adulthood, and their ability to form when necessary is seen in the regeneration of the lung. Alveolar septa have a double capillary network instead of the single network of the developed lung. Only after the maturation of the capillary network can the lung enter a normal phase of growth. Following the early growth in numbers of alveoli there is another stage of the alveoli being enlarged.
Function
Gas exchange
The major function of the lungs is gas exchange between the lungs and the blood. The alveolar and pulmonary capillary gases equilibrate across the thin blood–air barrier. This thin membrane (about 0.5–2 μm thick) is folded into about 300 million alveoli, providing an extremely large surface area (estimates varying between 70 and 145 m2) for gas exchange to occur.
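The figures quoted above can be combined in a rough calculation of the average membrane area per alveolus; the sketch below simply divides the quoted total surface area by the quoted alveolar count and is illustrative only.

```python
# Rough arithmetic from the figures quoted above: total alveolar surface area
# divided by the number of alveoli gives the average area per alveolus.

N_ALVEOLI = 300e6                     # about 300 million alveoli
AREA_LOW_M2, AREA_HIGH_M2 = 70, 145   # quoted range of total surface area

for total in (AREA_LOW_M2, AREA_HIGH_M2):
    per_alveolus_mm2 = total / N_ALVEOLI * 1e6   # m^2 -> mm^2
    print(f"Total {total} m^2  ->  about {per_alveolus_mm2:.2f} mm^2 per alveolus")
# Prints roughly 0.23-0.48 mm^2 per alveolus.
```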
The lungs are not capable of expanding to breathe on their own, and will only do so when there is an increase in the volume of the thoracic cavity. This is achieved by the muscles of respiration, through the contraction of the diaphragm and the intercostal muscles which pull the rib cage upwards. During breathing out the muscles relax, returning the lungs to their resting position. At this point the lungs contain the functional residual capacity (FRC) of air, which, in the adult human, has a volume of about 2.5–3.0 litres.
During heavy breathing as in exertion, a large number of accessory muscles in the neck and abdomen are recruited, which during exhalation pull the ribcage down, decreasing the volume of the thoracic cavity. The FRC is now decreased, but since the lungs cannot be emptied completely there is still about a litre of residual air left. Lung function testing is carried out to evaluate lung volumes and capacities.
Protection
The lungs possess several characteristics which protect against infection. The respiratory tract is lined by respiratory epithelium or respiratory mucosa, with hair-like projections called cilia that beat rhythmically and carry mucus. This mucociliary clearance is an important defence system against air-borne infection. The dust particles and bacteria in the inhaled air are caught in the mucosal surface of the airways, and are moved up towards the pharynx by the rhythmic upward beating action of the cilia. The lining of the lung also secretes immunoglobulin A, which protects against respiratory infections; goblet cells secrete mucus which also contains several antimicrobial compounds such as defensins, antiproteases, and antioxidants. A rare type of specialised cell, the pulmonary ionocyte, which has been suggested to regulate mucus viscosity, has also been described. In addition, the lining of the lung contains macrophages, immune cells which engulf and destroy debris and microbes that enter the lung in a process known as phagocytosis; and dendritic cells which present antigens to activate components of the adaptive immune system such as T cells and B cells.
The size of the respiratory tract and the flow of air also protect the lungs from larger particles. Smaller particles deposit in the mouth and behind the mouth in the oropharynx, and larger particles are trapped in nasal hair after inhalation.
Other
In addition to their function in respiration, the lungs have a number of other functions. They are involved in maintaining homeostasis, helping in the regulation of blood pressure as part of the renin–angiotensin system. The inner lining of the blood vessels secretes angiotensin-converting enzyme (ACE), an enzyme that catalyses the conversion of angiotensin I to angiotensin II. The lungs are involved in the blood's acid–base homeostasis by expelling carbon dioxide when breathing.
The lungs also serve a protective role. Several blood-borne substances, such as a few types of prostaglandins, leukotrienes, serotonin and bradykinin, are excreted through the lungs. Drugs and other substances can be absorbed, modified or excreted in the lungs. The lungs filter out small blood clots from veins and prevent them from entering arteries and causing strokes.
The lungs also play a pivotal role in speech by providing air and airflow for the creation of vocal sounds, and other paralanguage communications such as sighs and gasps.
Research suggests a role of the lungs in the production of blood platelets.
Gene and protein expression
About 20,000 protein coding genes are expressed in human cells and almost 75% of these genes are expressed in the normal lung. A little less than 200 of these genes are more specifically expressed in the lung, with less than 20 genes being highly lung specific. The lung-specific proteins with the highest expression are the surfactant proteins, such as SFTPA1, SFTPB and SFTPC, and napsin, expressed in type II pneumocytes. Other proteins with elevated expression in the lung are the dynein protein DNAH5 in ciliated cells, and the secreted SCGB1A1 protein in mucus-secreting goblet cells of the airway mucosa.
Clinical significance
Lungs can be affected by a number of diseases and disorders. Pulmonology is the medical speciality that deals with respiratory diseases involving the lungs and respiratory system. Cardiothoracic surgery deals with surgery of the lungs including lung volume reduction surgery, lobectomy, pneumonectomy and lung transplantation.
Inflammation and infection
Inflammatory conditions of the lung tissue are pneumonia, of the respiratory tract are bronchitis and bronchiolitis, and of the pleurae surrounding the lungs pleurisy. Inflammation is usually caused by infections due to bacteria or viruses. When the lung tissue is inflamed due to other causes it is called pneumonitis. One major cause of bacterial pneumonia is tuberculosis. Chronic infections often occur in those with immunodeficiency and can include a fungal infection by Aspergillus fumigatus that can lead to an aspergilloma forming in the lung. In the US certain species of rat can transmit a hantavirus to humans that can cause untreatable hantavirus pulmonary syndrome with a similar presentation to that of acute respiratory distress syndrome (ARDS).
Alcohol affects the lungs and can cause inflammatory alcoholic lung disease. Acute exposure to alcohol stimulates the beating of cilia in the respiratory epithelium. However, chronic exposure desensitises the ciliary response, which reduces mucociliary clearance (MCC). MCC is an innate defence system protecting against pollutants and pathogens, and when it is disrupted the numbers of alveolar macrophages are decreased. A subsequent inflammatory response is the release of cytokines, and another consequence is increased susceptibility to infection.
Blood-supply changes
A pulmonary embolism is a blood clot that becomes lodged in the pulmonary arteries. The majority of emboli arise because of deep vein thrombosis in the legs. Pulmonary emboli may be investigated using a ventilation/perfusion scan, a CT scan of the arteries of the lung, or blood tests such as the D-dimer. Pulmonary hypertension describes an increased pressure at the beginning of the pulmonary artery that has a large number of differing causes. Other rarer conditions may also affect the blood supply of the lung, such as granulomatosis with polyangiitis, which causes inflammation of the small blood vessels of the lungs and kidneys.
A lung contusion is a bruise caused by chest trauma. It results in hemorrhage of the alveoli causing a build-up of fluid which can impair breathing, and this can be either mild or severe.
The function of the lungs can also be affected by compression from fluid in the pleural cavity (pleural effusion), or other substances such as air (pneumothorax), blood (hemothorax), or rarer causes. These may be investigated using a chest X-ray or CT scan, and may require the insertion of a surgical drain until the underlying cause is identified and treated.
Obstructive lung diseases
Asthma, bronchiectasis, and chronic obstructive pulmonary disease (COPD) that includes chronic bronchitis, and emphysema, are all obstructive lung diseases characterised by airway obstruction. This limits the amount of air that is able to enter alveoli because of constriction of the bronchial tree, due to inflammation. Obstructive lung diseases are often identified because of symptoms and diagnosed with pulmonary function tests such as spirometry.
Many obstructive lung diseases are managed by avoiding triggers (such as dust mites or smoking), with symptom control such as bronchodilators, and with suppression of inflammation (such as through corticosteroids) in severe cases. A common cause of chronic bronchitis, and emphysema, is smoking; and common causes of bronchiectasis include severe infections and cystic fibrosis. The definitive cause of asthma is not yet known, but it has been linked to other atopic diseases.
The breakdown of alveolar tissue, often as a result of tobacco smoking, leads to emphysema, which can become severe enough to develop into COPD. Elastase breaks down the elastin in the lung's connective tissue, which can also result in emphysema. Elastase is inhibited by the acute-phase protein alpha-1 antitrypsin, and when there is a deficiency in this, emphysema can develop. With persistent stress from smoking, the airway basal cells become disarranged and lose their regenerative ability needed to repair the epithelial barrier. The disorganised basal cells are seen to be responsible for the major airway changes that are characteristic of COPD, and with continued stress can undergo a malignant transformation. Studies have shown that the initial development of emphysema is centred on the early changes in the airway epithelium of the small airways. Basal cells become further deranged in a smoker's transition to clinically defined COPD.
Restrictive lung diseases
Some types of chronic lung diseases are classified as restrictive lung disease, because of a restriction in the amount of lung tissue involved in respiration. These include pulmonary fibrosis which can occur when the lung is inflamed for a long period of time. Fibrosis in the lung replaces functioning lung tissue with fibrous connective tissue. This can be due to a large variety of occupational lung diseases such as Coalworker's pneumoconiosis, autoimmune diseases or more rarely to a reaction to medication. Severe respiratory disorders, where spontaneous breathing is not enough to maintain life, may need the use of mechanical ventilation to ensure an adequate supply of air.
Cancers
Lung cancer can either arise directly from lung tissue or as a result of metastasis from another part of the body. There are two main types of primary tumour, described as either small-cell or non-small-cell lung carcinomas. The major risk factor for lung cancer is smoking. Once a cancer is identified it is staged using scans such as a CT scan, and a sample of tissue is taken from a biopsy. Cancers may be treated by surgically removing the tumour, by radiotherapy or chemotherapy, by a combination of these, or with the aim of symptom control. Lung cancer screening is recommended in the United States for high-risk populations.
Congenital disorders
Congenital disorders include cystic fibrosis, pulmonary hypoplasia (an incomplete development of the lungs), congenital diaphragmatic hernia, and infant respiratory distress syndrome caused by a deficiency in lung surfactant. An azygos lobe is a congenital anatomical variation which, though usually without effect, can cause problems in thoracoscopic procedures.
Pleural space pressure
A pneumothorax (collapsed lung) is an abnormal collection of air in the pleural space that causes an uncoupling of the lung from the chest wall. The lung cannot expand against the air pressure inside the pleural space. An easy to understand example is a traumatic pneumothorax, where air enters the pleural space from outside the body, as occurs with puncture to the chest wall. Similarly, scuba divers ascending while holding their breath with their lungs fully inflated can cause air sacs (alveoli) to burst and leak high pressure air into the pleural space.
Examination
As part of a physical examination in response to respiratory symptoms of shortness of breath and cough, a lung examination may be carried out. This exam includes palpation and auscultation. The areas of the lungs that can be listened to using a stethoscope are called the lung fields, and these are the posterior, lateral, and anterior lung fields. The posterior fields can be listened to from the back and include the lower lobes (taking up three quarters of the posterior fields); the anterior fields take up the other quarter; and the lateral fields lie under the axillae, the left axilla for the lingula, the right axilla for the right middle lobe. The anterior fields can also be auscultated from the front. An area known as the triangle of auscultation is an area of thinner musculature on the back which allows improved listening. Abnormal breathing sounds heard during a lung exam can indicate the presence of a lung condition; wheezing for example is commonly associated with asthma and COPD.
Function testing
Lung function testing is carried out by evaluating a person's capacity to inhale and exhale in different circumstances. The volume of air inhaled and exhaled by a person at rest is the tidal volume (normally 500–750 mL); the inspiratory reserve volume and expiratory reserve volume are the additional amounts a person is able to forcibly inhale and exhale respectively. The summed total of forced inspiration and expiration is a person's vital capacity. Not all air is expelled from the lungs even after a forced breath out; the remainder of the air is called the residual volume. Together these terms are referred to as lung volumes.
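Because vital capacity and total lung capacity are simple sums of the component volumes, the relationships can be set out in a short sketch. The numerical values below are illustrative mid-range figures chosen to be consistent with the ranges quoted in this section; they are assumptions, not reference data.

```python
# Sketch of the volume relationships described above. The numbers are
# illustrative mid-range values (assumptions), not measurements.

tidal_volume = 0.5           # L, air moved at rest (quoted range 500-750 mL)
inspiratory_reserve = 3.0    # L, extra air that can be forcibly inhaled (assumed)
expiratory_reserve = 1.2     # L, extra air that can be forcibly exhaled (assumed)
residual_volume = 1.3        # L, air that cannot be exhaled (assumed)

vital_capacity = tidal_volume + inspiratory_reserve + expiratory_reserve
total_lung_capacity = vital_capacity + residual_volume
functional_residual_capacity = expiratory_reserve + residual_volume

print(f"Vital capacity: {vital_capacity:.1f} L")                               # 4.7 L
print(f"Total lung capacity: {total_lung_capacity:.1f} L")                     # 6.0 L (quoted range 4-6 L)
print(f"Functional residual capacity: {functional_residual_capacity:.1f} L")   # 2.5 L (quoted range 2.5-3.0 L)
```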
Pulmonary plethysmographs are used to measure functional residual capacity. Functional residual capacity cannot be measured by tests that rely on breathing out, as a person is only able to breathe out a maximum of about 80% of the total lung capacity. The total lung capacity depends on the person's age, height, weight, and sex, and normally ranges between four and six litres. Females tend to have a 20–25% lower capacity than males. Tall people tend to have a larger total lung capacity than shorter people. Smokers have a lower capacity than nonsmokers. Thinner persons tend to have a larger capacity. Lung capacity can be increased by physical training by as much as 40%, but the effect may be modified by exposure to air pollution.
Other lung function tests include spirometry, measuring the amount (volume) and flow of air that can be inhaled and exhaled. The maximum volume of breath that can be exhaled is called the vital capacity. In particular, how much a person is able to exhale in one second (the forced expiratory volume, FEV1) is compared with how much they are able to exhale in total (the forced vital capacity, FVC). This ratio, the FEV1/FVC ratio, is important in distinguishing whether a lung disease is restrictive or obstructive. Another test is that of the lung's diffusing capacity – this is a measure of the transfer of gas from air to the blood in the lung capillaries.
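A common way spirometry results are interpreted is to compare the FEV1/FVC ratio against a fixed cut-off. The sketch below uses the widely used 0.70 ratio threshold and an 80%-of-predicted FVC criterion as assumptions; it is a deliberate simplification of how the obstructive/restrictive distinction is drawn in practice, not a clinical rule.

```python
# Simplified spirometry interpretation. The 0.70 ratio cut-off and the 80% of
# predicted FVC criterion are assumptions (commonly used rules of thumb), not
# values given in the article; real interpretation is more nuanced.

def interpret_spirometry(fev1_l: float, fvc_l: float, predicted_fvc_l: float) -> str:
    ratio = fev1_l / fvc_l
    if ratio < 0.70:
        return f"FEV1/FVC = {ratio:.2f}: obstructive pattern"
    if fvc_l < 0.80 * predicted_fvc_l:
        return f"FEV1/FVC = {ratio:.2f} but FVC reduced: restrictive pattern"
    return f"FEV1/FVC = {ratio:.2f}: within normal limits"

print(interpret_spirometry(fev1_l=1.8, fvc_l=3.6, predicted_fvc_l=4.5))  # obstructive
print(interpret_spirometry(fev1_l=2.4, fvc_l=2.8, predicted_fvc_l=4.5))  # restrictive
print(interpret_spirometry(fev1_l=3.6, fvc_l=4.4, predicted_fvc_l=4.5))  # normal
```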
Culinary uses
Mammal lung is one of the main types of offal, or pluck, alongside the heart and trachea, and is consumed as a foodstuff around the world in dishes such as Scottish haggis. The United States Food and Drug Administration legally prohibits the sale of animal lungs due to concerns such as fungal spores or cross-contamination with other organs, although this has been criticised as unfounded.
Other animals
Birds
The lungs of birds are relatively small, but are connected to eight or nine air sacs that extend through much of the body, and are in turn connected to air spaces within the bones. On inhalation, air travels through the trachea of a bird into the air sacs. Air then travels continuously from the air sacs at the back, through the lungs, which are relatively fixed in size, to the air sacs at the front. From here, the air is exhaled. These fixed size lungs are called "circulatory lungs", as distinct from the "bellows-type lungs" found in most other animals.
The lungs of birds contain millions of tiny parallel passages called parabronchi. Small sacs called atria radiate from the walls of the tiny passages; these, like the alveoli in other lungs, are the site of gas exchange by simple diffusion. The blood flow around the parabronchi and their atria forms a cross-current process of gas exchange.
The air sacs, which hold air, do not contribute much to gas exchange, despite being thin-walled, as they are poorly vascularised. The air sacs expand and contract due to changes in the volume in the thorax and abdomen. This volume change is caused by the movement of the sternum and ribs and this movement is often synchronised with movement of the flight muscles.
Parabronchi in which the air flow is unidirectional are called paleopulmonic parabronchi and are found in all birds. Some birds, however, have, in addition, a lung structure where the air flow in the parabronchi is bidirectional. These are termed neopulmonic parabronchi.
Reptiles
The lungs of most reptiles have a single bronchus running down the centre, from which numerous branches reach out to individual pockets throughout the lungs. These pockets are similar to alveoli in mammals, but much larger and fewer in number. These give the lung a sponge-like texture. In tuataras, snakes, and some lizards, the lungs are simpler in structure, similar to that of typical amphibians.
Snakes and limbless lizards typically possess only the right lung as a major respiratory organ; the left lung is greatly reduced, or even absent. Amphisbaenians, however, have the opposite arrangement, with a major left lung, and a reduced or absent right lung.
Both crocodilians and monitor lizards have lungs similar to those of birds, providing a unidirectional airflow and even possessing air sacs. The now-extinct pterosaurs seemingly refined this type of lung even further, extending the airsacs into the wing membranes and, in the case of lonchodectids, Tupuxuara, and azhdarchoids, the hindlimbs.
Reptilian lungs typically receive air via expansion and contraction of the ribs driven by axial muscles and buccal pumping. Crocodilians also rely on the hepatic piston method, in which the liver is pulled back by a muscle anchored to the pubic bone (part of the pelvis) called the diaphragmaticus, which in turn creates negative pressure in the crocodile's thoracic cavity, allowing air to be moved into the lungs by Boyle's law. Turtles, which are unable to move their ribs, instead use their forelimbs and pectoral girdle to force air in and out of the lungs.
Amphibians
The lungs of most frogs and other amphibians are simple and balloon-like, with gas exchange limited to the outer surface of the lung. This is not very efficient, but amphibians have low metabolic demands and can also quickly dispose of carbon dioxide by diffusion across their skin in water, and supplement their oxygen supply by the same method. Amphibians employ a positive pressure system to get air to their lungs, forcing air down into the lungs by buccal pumping. This is distinct from most higher vertebrates, which use a breathing system driven by negative pressure in which the lungs are inflated by expanding the rib cage. In buccal pumping, the floor of the mouth is lowered, filling the mouth cavity with air. The throat muscles then press the throat against the underside of the skull, forcing the air into the lungs.
Due to the possibility of respiration across the skin combined with small size, all known lungless tetrapods are amphibians. The majority of salamander species are lungless salamanders, which respire through their skin and the tissues lining their mouths. This necessarily restricts their size: all are small and rather thread-like in appearance, maximising skin surface relative to body volume. Other known lungless tetrapods are the Bornean flat-headed frog and Atretochoana eiselti, a caecilian.
The lungs of amphibians typically have a few narrow internal walls (septa) of soft tissue around the outer walls, increasing the respiratory surface area and giving the lung a honeycomb appearance. In some salamanders, even these are lacking, and the lung has a smooth wall. In caecilians, as in snakes, only the right lung attains any size or development.
Fish
Lungs are found in three groups of fish: the coelacanths, the bichirs and the lungfish. As in tetrapods, but unlike in fish with a swim bladder, the opening is on the ventral side of the oesophagus. The coelacanth has a nonfunctional and unpaired vestigial lung surrounded by a fatty organ. Bichirs, the only group of ray-finned fish with lungs, have a pair which are hollow unchambered sacs, where the gas exchange occurs on very flat folds that increase their inner surface area. The lungs of lungfish show more resemblance to tetrapod lungs. There is an elaborate network of parenchymal septa, dividing them into numerous respiration chambers. In the Australian lungfish, there is only a single lung, albeit divided into two lobes. Other lungfish have traditionally been considered to have two lungs, but newer research defines paired lungs as bilateral lung buds that arise simultaneously and are both connected directly to the foregut, a condition otherwise seen only in tetrapods. In all lungfish, including the Australian, the lungs are located in the upper dorsal part of the body, with the connecting duct curving around and above the oesophagus. The blood supply also twists around the oesophagus, suggesting that the lungs originally evolved in the ventral part of the body, as in other vertebrates.
Invertebrates
A number of invertebrates have lung-like structures that serve a similar respiratory purpose to true vertebrate lungs, but are not evolutionarily related and only arise out of convergent evolution. Some arachnids, such as spiders and scorpions, have structures called book lungs used for atmospheric gas exchange. Some species of spider have four pairs of book lungs but most have two pairs. Scorpions have spiracles on their body for the entrance of air to the book lungs.
The coconut crab is terrestrial and uses structures called branchiostegal lungs to breathe air. Juveniles are released into the ocean, but adults cannot swim and possess only a rudimentary set of gills. The adult crabs can breathe on land and hold their breath underwater. The branchiostegal lungs are seen as a developmental adaptive stage between water-living and land-living, or from fish to amphibian.
Pulmonates are mostly land snails and slugs that have developed a simple lung from the mantle cavity. An externally located opening called the pneumostome allows air to be taken into the mantle cavity lung.
Evolutionary origins
The lungs of today's terrestrial vertebrates and the gas bladders of today's fish are believed to have evolved from simple sacs, as outpocketings of the oesophagus, that allowed early fish to gulp air under oxygen-poor conditions. These outpocketings first arose in the bony fish. In most of the ray-finned fish, the sacs evolved into closed off gas bladders, while a number of carp, trout, herring, catfish, and eels have retained the physostome condition with the sac being open to the oesophagus. In more basal bony fish, such as the gar, bichir, bowfin and the lobe-finned fish, the bladders have evolved to primarily function as lungs. The lobe-finned fish gave rise to the land-based tetrapods. Thus, the lungs of vertebrates are homologous to the gas bladders of fish (but not to their gills).
| Biology and health sciences | Biology | null |
36869 | https://en.wikipedia.org/wiki/Cell%20division | Cell division | Cell division is the process by which a parent cell divides into two daughter cells. Cell division usually occurs as part of a larger cell cycle in which the cell grows and replicates its chromosome(s) before dividing. In eukaryotes, there are two distinct types of cell division: a vegetative division (mitosis), producing daughter cells genetically identical to the parent cell, and a cell division that produces haploid gametes for sexual reproduction (meiosis), reducing the number of chromosomes from two of each type in the diploid parent cell to one of each type in the daughter cells. Mitosis is a part of the cell cycle in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA replication occurs) and is followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the M phase of an animal cell cycle: the division of the mother cell into two genetically identical daughter cells.
To ensure proper progression through the cell cycle, DNA damage is detected and repaired at various checkpoints throughout the cycle. These checkpoints can halt progression through the cell cycle by inhibiting certain cyclin-CDK complexes. Meiosis involves two divisions resulting in four haploid daughter cells. Homologous chromosomes are separated in the first division of meiosis, such that each daughter cell has one copy of each chromosome. These chromosomes have already been replicated and have two sister chromatids which are then separated during the second division of meiosis. Both of these cell division cycles are used at some point in the life cycle of sexually reproducing organisms. Both are believed to be present in the last eukaryotic common ancestor.
Prokaryotes (bacteria and archaea) usually undergo a vegetative cell division known as binary fission, where their genetic material is segregated equally into two daughter cells, but there are alternative manners of division, such as budding, that have been observed. All cell divisions, regardless of organism, are preceded by a single round of DNA replication.
For simple unicellular microorganisms such as the amoeba, one cell division is equivalent to reproduction – an entire new organism is created. On a larger scale, mitotic cell division can create progeny from multicellular organisms, such as plants that grow from cuttings. Mitotic cell division enables sexually reproducing organisms to develop from the one-celled zygote, which itself is produced by fusion of two gametes, each having been produced by meiotic cell division. After growth from the zygote to the adult, cell division by mitosis allows for continual construction and repair of the organism. The human body experiences about 10 quadrillion cell divisions in a lifetime.
The primary concern of cell division is the maintenance of the original cell's genome. Before division can occur, the genomic information that is stored in chromosomes must be replicated, and the duplicated genome must be cleanly divided between progeny cells. A great deal of cellular infrastructure is involved in ensuring consistency of genomic information among generations.
In bacteria
Bacterial cell division happens through binary fission or through budding. The divisome is a protein complex in bacteria that is responsible for cell division, constriction of inner and outer membranes during division, and remodeling of the peptidoglycan cell wall at the division site. A tubulin-like protein, FtsZ, plays a critical role in the formation of a contractile ring for cell division.
In eukaryotes
Cell division in eukaryotes is more complicated than in prokaryotes. If the chromosomal number is reduced, eukaryotic cell division is classified as meiosis (reductional division). If the chromosomal number is not reduced, eukaryotic cell division is classified as mitosis (equational division). A primitive form of cell division, called amitosis, also exists. The amitotic or mitotic cell divisions are more atypical and diverse among the various groups of organisms, such as protists (namely diatoms, dinoflagellates, etc.) and fungi.
In the mitotic metaphase (see below), typically the chromosomes (each containing 2 sister chromatids that developed during replication in the S phase of interphase) align themselves on the metaphase plate. Then, the sister chromatids split and are distributed between two daughter cells.
In meiosis I, the homologous chromosomes are paired before being separated and distributed between two daughter cells. Meiosis II, on the other hand, is similar to mitosis: the chromatids are separated and distributed in the same way. In humans, other higher animals, and many other organisms, the process of meiosis is called gametic meiosis, during which meiosis produces four gametes. In several other groups of organisms, especially in plants (observable during meiosis in lower plants, but during the vestigial stage in higher plants), meiosis instead gives rise to spores that germinate into the haploid vegetative phase (gametophyte). This kind of meiosis is called "sporic meiosis."
Phases of eukaryotic cell division
Interphase
Interphase is the process through which a cell must go before mitosis, meiosis, and cytokinesis. Interphase consists of three main phases: G1, S, and G2. G1 is a time of growth for the cell, during which specialized cellular functions occur in order to prepare the cell for DNA replication. There are checkpoints during interphase that allow the cell to either advance or halt further development. One checkpoint lies between G1 and S; its purpose is to check for appropriate cell size and any DNA damage. The second checkpoint is in the G2 phase, and checks for cell size and for complete DNA replication. The last checkpoint is located at metaphase, where it checks that the chromosomes are correctly connected to the mitotic spindle. In S phase, the chromosomes are replicated in order for the genetic content to be maintained. During G2, the cell undergoes the final stages of growth before it enters the M phase, where spindles are synthesized. The M phase can be either mitosis or meiosis depending on the type of cell: germ cells, or gametes, undergo meiosis, while somatic cells undergo mitosis. After the cell proceeds successfully through the M phase, it may then undergo cell division through cytokinesis. Each checkpoint is controlled by cyclins and cyclin-dependent kinases. The progression of interphase is driven by increasing amounts of cyclin: as the amount of cyclin rises, more and more cyclin-dependent kinases bind to cyclin, signaling the cell further into interphase, and at the peak of cyclin activity this system pushes the cell out of interphase and into the M phase, where mitosis, meiosis, and cytokinesis occur. There are three transition checkpoints the cell has to pass through before entering the M phase, the most important being the G1-S transition checkpoint. If the cell does not pass this checkpoint, it exits the cell cycle.
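One way to picture the checkpoint logic described above is as a series of guard conditions that must pass before the cell advances. The sketch below is a loose abstraction for illustration only; it does not model the actual molecular machinery, and the predicates are simplifications of the criteria described in the text.

```python
# Loose abstraction of the checkpoints described above: each phase transition
# is gated by a simple predicate. Purely illustrative, not a biological model.

from dataclasses import dataclass

@dataclass
class CellState:
    size_ok: bool
    dna_damage: bool
    dna_replicated: bool
    chromosomes_attached: bool

def advance(state: CellState) -> list[str]:
    log = ["G1"]
    # G1/S checkpoint: adequate size and no unrepaired DNA damage.
    if not state.size_ok or state.dna_damage:
        return log + ["arrested at G1/S checkpoint"]
    log += ["S", "G2"]
    # G2/M checkpoint: adequate size and completed DNA replication.
    if not state.size_ok or not state.dna_replicated:
        return log + ["arrested at G2/M checkpoint"]
    log.append("M (metaphase)")
    # Spindle checkpoint: chromosomes correctly attached to the spindle.
    if not state.chromosomes_attached:
        return log + ["arrested at spindle checkpoint"]
    return log + ["anaphase", "cytokinesis"]

print(advance(CellState(size_ok=True, dna_damage=False,
                        dna_replicated=True, chromosomes_attached=True)))
print(advance(CellState(size_ok=True, dna_damage=True,
                        dna_replicated=False, chromosomes_attached=False)))
```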
Prophase
Prophase is the first stage of division. The nuclear envelope begins to be broken down in this stage, long strands of chromatin condense to form shorter, more visible strands called chromosomes, the nucleolus disappears, and the mitotic spindle begins to assemble from the two centrosomes. Microtubules associated with the alignment and separation of chromosomes are referred to as the spindle and spindle fibers. Chromosomes will also be visible under a microscope and will be connected at the centromere. During this condensation and alignment period in meiosis, the homologous chromosomes undergo a break in their double-stranded DNA at the same locations, followed by a recombination of the now fragmented parental DNA strands into non-parental combinations, known as crossing over. This process is thought to be caused in large part by the highly conserved Spo11 protein, through a mechanism similar to that seen with topoisomerase in DNA replication and transcription.
Prometaphase
Prometaphase is the second stage of cell division. This stage begins with the complete breakdown of the nuclear envelope which exposes various structures to the cytoplasm. This breakdown then allows the spindle apparatus growing from the centrosome to attach to the kinetochores on the sister chromatids. Stable attachment of the spindle apparatus to the kinetochores on the sister chromatids will ensure error-free chromosome segregation during anaphase. Prometaphase follows prophase and precedes metaphase.
Metaphase
In metaphase, the centromeres of the chromosomes align themselves on the metaphase plate (or equatorial plate), an imaginary line that is at equal distances from the two centrosome poles, and are held together by complexes known as cohesins. Chromosomes line up in the middle of the cell by microtubule organizing centers (MTOCs) pushing and pulling on centromeres of both chromatids, thereby causing the chromosome to move to the center. At this point the chromosomes are still condensing and are currently one step away from being the most coiled and condensed they will be, and the spindle fibers have already connected to the kinetochores. During this phase all the microtubules, with the exception of the kinetochore microtubules, are in a state of instability promoting their progression toward anaphase. At this point, the chromosomes are ready to be pulled toward opposite poles of the cell by the spindle to which they are connected.
Anaphase
Anaphase is a very short stage of the cell cycle and it occurs after the chromosomes align at the mitotic plate. Kinetochores emit anaphase-inhibition signals until their attachment to the mitotic spindle. Once the final chromosome is properly aligned and attached, the final signal dissipates and triggers the abrupt shift to anaphase. This abrupt shift is caused by the activation of the anaphase-promoting complex and its function of tagging for degradation proteins important to the metaphase-anaphase transition. One of these proteins is securin, whose breakdown releases the enzyme separase, which cleaves the cohesin rings holding together the sister chromatids, thereby leading to the chromosomes separating. After the chromosomes line up in the middle of the cell, the spindle fibers pull them apart. The chromosomes are split apart while the sister chromatids move to opposite sides of the cell. As the sister chromatids are being pulled apart, the cell and plasma membrane are elongated by non-kinetochore microtubules. Additionally, in this phase, the activation of the anaphase-promoting complex through its association with Cdh1 begins the degradation of mitotic cyclins.
Telophase
Telophase is the last stage of mitosis, during which a cleavage furrow splits the cell's cytoplasm (cytokinesis) and the chromatin is repackaged. This occurs through the synthesis of a new nuclear envelope that forms around the chromatin gathered at each pole. The nucleolus reforms as the chromatin reverts back to the loose state it possessed during interphase. The division of the cellular contents is not always equal and can vary by cell type, as seen with oocyte formation, where one of the four daughter cells possesses the majority of the cytoplasm.
Cytokinesis
The last stage of the cell division process is cytokinesis. In this stage there is a cytoplasmic division that occurs at the end of either mitosis or meiosis. At this stage there is a resulting irreversible separation leading to two daughter cells. Cell division plays an important role in determining the fate of the cell, because the division may be asymmetric. As a result, cytokinesis can produce unequal daughter cells containing completely different amounts or concentrations of fate-determining molecules.
In animals, cytokinesis ends with the formation of a contractile ring and a subsequent cleavage. In plants it happens differently: first a cell plate is formed, and then a cell wall develops between the two daughter cells.
In fission yeast (S. pombe), cytokinesis happens in the G1 phase.
Variants
Cells are broadly classified into two main categories: simple non-nucleated prokaryotic cells and complex nucleated eukaryotic cells. Due to their structural differences, eukaryotic and prokaryotic cells do not divide in the same way. Also, the pattern of cell division that transforms eukaryotic stem cells into gametes (sperm cells in males or egg cells in females), termed meiosis, is different from that of the division of somatic cells in the body.
In 2022, scientists discovered a new type of cell division called asynthetic fission found in the squamous epithelial cells in the epidermis of juvenile zebrafish. When juvenile zebrafish are growing, skin cells must quickly cover the rapidly increasing surface area of the zebrafish. These skin cells divide without duplicating their DNA (skipping the S phase of the cell cycle), causing up to 50% of the cells to have a reduced genome size. These cells are later replaced by cells with a standard amount of DNA. Scientists expect to find this type of division in other vertebrates.
DNA Damage Repair in the Cell Cycle
DNA damage is detected and repaired at various points in the cell cycle. The G1/S checkpoint, G2/M checkpoint, and the checkpoint between metaphase and anaphase all monitor for DNA damage and halt cell division by inhibiting different cyclin-CDK complexes. The p53 tumor-suppressor protein plays a crucial role at the G1/S checkpoint and the G2/M checkpoint. Activated p53 proteins result in the expression of many proteins that are important in cell cycle arrest, repair, and apoptosis. At the G1/S checkpoint, p53 acts to ensure that the cell is ready for DNA replication, while at the G2/M checkpoint p53 acts to ensure that the cells have properly duplicated their content before entering mitosis.
Specifically, when DNA damage is present, ATM and ATR kinases are activated, which in turn activate various checkpoint kinases. These checkpoint kinases phosphorylate p53, which stimulates the production of different enzymes associated with DNA repair. Activated p53 also upregulates p21, which inhibits various cyclin-CDK complexes. These cyclin-CDK complexes normally phosphorylate the retinoblastoma (Rb) protein, a tumor suppressor bound to the E2F family of transcription factors. The binding of Rb to E2F ensures that cells do not enter the S phase prematurely; if Rb cannot be phosphorylated by these cyclin-CDK complexes, it remains bound and the cell is halted in the G1 phase of the cell cycle.
If DNA is damaged, the cell can also alter the Akt pathway in which BAD is phosphorylated and dissociated from Bcl2, thus inhibiting apoptosis. If this pathway is altered by a loss of function mutation in Akt or Bcl2, then the cell with damaged DNA will be forced to undergo apoptosis. If the DNA damage cannot be repaired, activated p53 can induce cell death by apoptosis. It can do so by activating the p53 upregulated modulator of apoptosis (PUMA). PUMA is a pro-apoptotic protein that rapidly induces apoptosis by inhibiting the anti-apoptotic Bcl-2 family members.
Degradation
Multicellular organisms replace worn-out cells through cell division. In some animals, however, cell division eventually halts. In humans this occurs, on average, after 52 divisions, known as the Hayflick limit. The cell is then referred to as senescent. With each division the cell's telomeres, protective sequences of DNA on the end of a chromosome that prevent degradation of the chromosomal DNA, shorten. This shortening has been correlated with negative effects such as age-related diseases and shortened lifespans in humans. Cancer cells, on the other hand, are not thought to degrade in this way, if at all. An enzyme complex called telomerase, present in large quantities in cancerous cells, rebuilds the telomeres through synthesis of telomeric DNA repeats, allowing division to continue indefinitely.
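The relationship between telomere shortening and the Hayflick limit can be caricatured as a simple counter. The starting telomere length, per-division loss and critical length in the sketch below are assumed round numbers, chosen only so that the count comes out near the ~52 divisions mentioned above; they are not measured values.

```python
# Caricature of telomere shortening: divisions continue until the telomere
# falls below a critical length. All three parameters are assumptions chosen
# so the result lands near the ~52 divisions quoted above.

telomere_bp = 8000        # assumed starting telomere length (base pairs)
loss_per_division = 100   # assumed base pairs lost per division
critical_bp = 2800        # assumed length below which the cell becomes senescent

divisions = 0
while telomere_bp - loss_per_division >= critical_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(f"Cell senesces after {divisions} divisions")   # 52 with these assumptions
```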
History
At the beginning of the 19th century, various hypotheses circulated about cell proliferation, which became observable in plant and animal organisms as a result of advances in microscopy. While the proliferation of cells on the inner side of old cells, the attachment of vesicles to existing cells, or crystallization in the intercellular space were postulated as mechanisms of cell proliferation, cell division itself had to fight for its acceptance for decades.
The Belgian botanist Barthélemy Charles Joseph Dumortier must be regarded as the first discoverer of cell division. In 1832, he described cell division in simple aquatic plants (French 'conferve') as follows (translated from French to English):
"The development of the conferve is as simple as its structure; it takes place by the attachment of new cells to the old, and this attachment always takes place from the end. The terminal cell elongates more than the deeper cells; then the production of a lateral bisector takes place in the inner fluid, which tends to divide the cell into two parts, of which the deeper one remains stationary, while the terminal part elongates again, forms a new inner partition, and so on. Is the production of the middle partition originally double or single? It is impossible to determine this, but it is always true that it later appears double when united, and that when two cells naturally separate, each of them is closed at both ends."
In 1835, the German botanist and physician Hugo von Mohl described plant cell division in much greater detail in his dissertation on freshwater and seawater algae for his PhD thesis in medicine and surgery:
"Among the most obscure phenomena of plant life is the manner in which the newly developing cells are formed. [...] and so there is no lack of manifold descriptions and explanations of this process. [...] and that gaps that were found in the observations were filled in by overly bold conclusions and assumptions." (translated from German to English)
In 1838, the German physician and botanist Franz Julius Ferdinand Meyen confirmed the mechanism of cell division at the root tips of plants.
The German-Polish physician Robert Remak suspected that he had already discovered animal cell division in the blood of chicken embryos in 1841, but it was not until 1852 that he was able to confirm animal cell division for the first time in bird embryos, frog larvae and mammals.
In 1943, cell division was filmed for the first time by Kurt Michel using a phase-contrast microscope.
| Biology and health sciences | Cellular division | Biology |
36891 | https://en.wikipedia.org/wiki/Silicate | Silicate | A silicate is any member of a family of polyatomic anions consisting of silicon and oxygen, usually with the general formula [SiO(4−x)](2x−4)−, where 0 ≤ x < 2. The family includes orthosilicate (SiO₄⁴⁻), metasilicate (SiO₃²⁻), and pyrosilicate (Si₂O₇⁶⁻). The name is also used for any salt of such anions, such as sodium metasilicate; or any ester containing the corresponding chemical group, such as tetramethyl orthosilicate. The name "silicate" is sometimes extended to any anions containing silicon, even if they do not fit the general formula or contain other atoms besides oxygen; such as hexafluorosilicate [SiF₆]²⁻. Most commonly, silicates are encountered as silicate minerals.
For diverse manufacturing, technological, and artistic needs, silicates are versatile materials, both natural (such as granite, gravel, and garnet) and artificial (such as Portland cement, ceramics, glass, and waterglass).
Structural principles
In most silicates, a silicon atom occupies the center of an idealized tetrahedron whose corners are four oxygen atoms, connected to it by single covalent bonds according to the octet rule. The oxygen atoms, which bear some negative charge, link to other cations (Mn+). This Si-O-M-O-Si linkage is strong and rigid, properties that are manifested in the rock-like character of silicates. The silicates can be classified according to the length and crosslinking of the silicate anions.
Isolated silicates
Isolated orthosilicate anions have the formula SiO₄⁴⁻. A common mineral in this group is olivine ((Mg,Fe)₂SiO₄).
Two or more silicon atoms can share oxygen atoms in various ways, to form more complex anions, such as pyrosilicate Si₂O₇⁶⁻.
Chains
With two shared oxides bound to each silicon, cyclic or polymeric structures can result. The cyclic metasilicate ring Si₆O₁₈¹²⁻ is a hexamer of SiO₃²⁻. Polymeric silicate anions (SiO₃²⁻)n can also exist as long chains.
In single-chain silicates, which are a type of inosilicate, tetrahedra link to form a chain by sharing two oxygen atoms each. A common mineral in this group is pyroxene.
Double-chain silicates, the other category of inosilicates, occur when tetrahedra form a double chain, mostly (but not always) by sharing two or three oxygen atoms each. Common minerals for this group are amphiboles.
Sheets
In this group, known as phyllosilicates, tetrahedra all share three oxygen atoms each and in turn link to form two-dimensional sheets. This structure leads to minerals in this group having one strong cleavage plane. Micas fall into this group. Both muscovite and biotite have weakly bound layers that can be peeled off in sheets.
Framework
In a framework silicate, known as a tectosilicate, each tetrahedron shares all 4 oxygen atoms with its neighbours, forming a 3D structure. Quartz and feldspars are in this group.
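The classes above differ only in how many of the four oxygen atoms each tetrahedron shares with its neighbours, so the per-silicon composition and formal charge can be derived mechanically by counting each bridging oxygen as half an oxygen per silicon. The sketch below works this out; it is a deliberately simplified view of silicate crystal chemistry, and the class labels are just illustrative examples.

```python
# Derives the per-silicon composition and formal charge for each structural
# class described above, counting every bridging (shared) oxygen as half an
# oxygen per silicon. Deliberately simplified; real minerals also contain
# charge-balancing cations.

def per_silicon(shared_oxygens: float) -> tuple[float, float]:
    """Return (oxygens per Si, formal charge per Si) for a given number of
    bridging oxygens on each SiO4 tetrahedron."""
    oxygens = 4 - shared_oxygens / 2   # non-bridging oxygens plus half of each bridge
    charge = 4 - 2 * oxygens           # Si counts as +4, each O as -2
    return oxygens, charge

classes = {
    "isolated (orthosilicate, e.g. olivine)": 0,
    "single chain / ring (metasilicate, e.g. pyroxene)": 2,
    "double chain (e.g. amphibole)": 2.5,
    "sheet (phyllosilicate, e.g. mica)": 3,
    "framework (tectosilicate, e.g. quartz)": 4,
}

for name, shared in classes.items():
    o, q = per_silicon(shared)
    print(f"{name}: SiO{o:g} per Si, charge {q:+g}")
# Reproduces SiO4 (-4), SiO3 (-2), SiO2.75 (-1.5), SiO2.5 (-1), SiO2 (0).
```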
Silicates with non-tetrahedral silicon
Although the tetrahedron is a common coordination geometry for silicon(IV) compounds, silicon may also occur with higher coordination numbers. For example, in the anion hexafluorosilicate SiF₆²⁻, the silicon atom is surrounded by six fluorine atoms in an octahedral arrangement. This structure is also seen in the hexahydroxysilicate anion Si(OH)₆²⁻ that occurs in thaumasite, a mineral found rarely in nature but sometimes observed among other calcium silicate hydrates artificially formed in cement and concrete structures subjected to a severe sulfate attack in argillaceous grounds containing oxidized pyrite.
At very high pressure, such as exists in the majority of the Earth's rock, even SiO2 adopts the six-coordinated octahedral geometry in the mineral stishovite, a dense polymorph of silica found in the lower mantle of the Earth and also formed by shock during meteorite impacts.
Chemical properties
Silicates with alkali cations and small or chain-like anions, such as sodium ortho- and metasilicate, are fairly soluble in water. They form several solid hydrates when crystallized from solution. Soluble sodium silicates and mixtures thereof, known as waterglass, are important industrial and household chemicals. Silicates of non-alkali cations, or with sheet and tridimensional polymeric anions, generally have negligible solubility in water at normal conditions.
Reactions
Silicates are generally inert chemically. Hence they are common minerals. Their resiliency also recommends their use as building materials.
When treated with calcium oxides and water, silicate minerals form Portland cement.
Equilibria involving hydrolysis of silicate minerals are difficult to study. The chief challenge is the very low solubility of SiO44- and its various protonated forms. Such equilibria are relevant to the processes occurring on geological time scales. Some plants excrete ligands that dissolve silicates, a step in biomineralization.
Catechols can depolymerize SiO₂, a component of silicates with ionic structures like orthosilicate (SiO₄⁴⁻), metasilicate (SiO₃²⁻), and pyrosilicate (Si₂O₇⁶⁻), by forming bis- and tris(catecholate)silicate dianions through coordination. These complexes can be further coated on various substrates for applications such as drug delivery, antibacterial coatings and antifouling.
Detection
Silicate anions in solution react with molybdate anions yielding yellow silicomolybdate complexes. In a typical preparation, monomeric orthosilicate was found to react completely in 75 seconds; dimeric pyrosilicate in 10 minutes; and higher oligomers in considerably longer time. In particular, the reaction is not observed with suspensions of colloidal silica.
Zeolite formation and geopolymers polymerisation
The nature of soluble silicates is relevant to understanding biomineralization and the synthesis of aluminosilicates, such as the industrially important catalysts called zeolites. Along with aluminate anions, soluble silicate anions also play a major role in the polymerization mechanism of geopolymers. Geopolymers are amorphous aluminosilicates whose production requires less energy than that of ordinary Portland cement. Geopolymer cements could therefore contribute to limiting carbon dioxide emissions to the Earth's atmosphere and the global warming caused by this greenhouse gas.
| Physical sciences | Salts | null |
36896 | https://en.wikipedia.org/wiki/Lion | Lion | The lion (Panthera leo) is a large cat of the genus Panthera, native to Africa and India. It has a muscular, broad-chested body; a short, rounded head; round ears; and a dark, hairy tuft at the tip of its tail. It is sexually dimorphic; adult male lions are larger than females and have a prominent mane. It is a social species, forming groups called prides. A lion's pride consists of a few adult males, related females, and cubs. Groups of female lions usually hunt together, preying mostly on medium-sized and large ungulates. The lion is an apex and keystone predator.
The lion inhabits grasslands, savannahs, and shrublands. It is usually more diurnal than other wild cats, but when persecuted, it adapts to being active at night and at twilight. During the Neolithic period, the lion ranged throughout Africa and Eurasia, from Southeast Europe to India, but it has been reduced to fragmented populations in sub-Saharan Africa and one population in western India. It has been listed as Vulnerable on the IUCN Red List since 1996 because populations in African countries have declined by about 43% since the early 1990s. Lion populations are untenable outside designated protected areas. Although the cause of the decline is not fully understood, habitat loss and conflicts with humans are the greatest causes for concern.
One of the most widely recognised animal symbols in human culture, the lion has been extensively depicted in sculptures and paintings, on national flags, and in literature and films. Lions have been kept in menageries since the time of the Roman Empire and have been a key species sought for exhibition in zoological gardens across the world since the late 18th century. Cultural depictions of lions were prominent in Ancient Egypt, and depictions have occurred in virtually all ancient and medieval cultures in the lion's historic and current range.
Etymology
The English word lion is derived via Anglo-Norman liun from Latin leōnem (nominative: leō), which in turn was a borrowing from Ancient Greek λέων (léōn). The Hebrew word לָבִיא (lavi) may also be related. The generic name Panthera is traceable to the classical Latin word 'panthēra' and the ancient Greek word πάνθηρ 'panther'.
Taxonomy
Felis leo was the scientific name used by Carl Linnaeus in 1758, who described the lion in his work Systema Naturae. The genus name Panthera was coined by Lorenz Oken in 1816. Between the mid-18th and mid-20th centuries, 26 lion specimens were described and proposed as subspecies, of which 11 were recognised as valid in 2005. They were distinguished mostly by the size and colour of their manes and skins.
Subspecies
In the 19th and 20th centuries, several lion type specimens were described and proposed as subspecies, with about a dozen recognised as valid taxa until 2017. Between 2008 and 2016, IUCN Red List assessors used only two subspecific names: P. l. leo for African lion populations, and P. l. persica for the Asiatic lion population. In 2017, the Cat Classification Task Force of the Cat Specialist Group revised lion taxonomy, and recognises two subspecies based on results of several phylogeographic studies on lion evolution, namely:
P. l. leo − the nominate lion subspecies includes the Asiatic lion, the regionally extinct Barbary lion, and lion populations in West and northern parts of Central Africa. Synonyms include P. l. persica, P. l. senegalensis, P. l. kamptzi, and P. l. azandica. Multiple authors referred to it as 'northern lion' and 'northern subspecies'.
P. l. melanochaita − includes the extinct Cape lion and lion populations in East and Southern African regions. Synonyms include P. l. somaliensis, P. l. massaica, P. l. sabakiensis, P. l. bleyenberghi, P. l. roosevelti, P. l. nyanzae, P. l. hollisteri, P. l. krugeri, P. l. vernayi, and P. l. webbiensis. It has been referred to as 'southern subspecies' and 'southern lion'.
However, there seems to be some degree of overlap between both groups in northern Central Africa. DNA analysis from a more recent study indicates that Central African lions are derived from both northern and southern lions, as they cluster with P. leo leo in mtDNA-based phylogenies whereas their genomic DNA indicates a closer relationship with P. leo melanochaita.
Lion samples from some parts of the Ethiopian Highlands cluster genetically with those from Cameroon and Chad, while lions from other areas of Ethiopia cluster with samples from East Africa. Researchers, therefore, assume Ethiopia is a contact zone between the two subspecies. Genome-wide data of a wild-born historical lion sample from Sudan showed that it clustered with P. l. leo in mtDNA-based phylogenies, but with a high affinity to P. l. melanochaita. This result suggested that the taxonomic position of lions in Central Africa may require revision.
Fossil records
Other lion subspecies or sister species to the modern lion existed in prehistoric times:
P. l. sinhaleyus was a fossil carnassial excavated in Sri Lanka, which was attributed to a lion. It is thought to have become extinct around 39,000 years ago.
P. fossilis was larger than the modern lion and lived in the Middle Pleistocene. Bone fragments were excavated in caves in the United Kingdom, Germany, Italy and Czech Republic.
P. spelaea, or the cave lion, lived in Eurasia and Beringia during the Late Pleistocene. It became extinct due to climate warming or human expansion latest by 11,900 years ago. Bone fragments excavated in European, North Asian, Canadian and Alaskan caves indicate that it ranged from Europe across Siberia into western Alaska. It likely derived from P. fossilis, and was genetically isolated and highly distinct from the modern lion in Africa and Eurasia. It is depicted in Paleolithic cave paintings, ivory carvings, and clay busts.
P. atrox, or the American lion, ranged in the Americas from Canada to possibly Patagonia during the Late Pleistocene. It diverged from the cave lion around 165,000 years ago. A fossil from Edmonton dates to 11,355 ± 55 years ago.
Evolution
The Panthera lineage is estimated to have genetically diverged from the common ancestor of the Felidae around to . Results of analyses differ in the phylogenetic relationship of the lion; it was thought to form a sister group with the jaguar that diverged , but also with the leopard that diverged to . Hybridisation between lion and snow leopard ancestors possibly continued until about 2.1 million years ago. The lion-leopard clade was distributed in the Asian and African Palearctic since at least the early Pliocene. The earliest fossils recognisable as lions were found at Olduvai Gorge in Tanzania and are estimated to be up to 2 million years old.
Estimates for the divergence time of the modern and cave lion lineages range from 529,000 to 392,000 years ago based on mutation rate per generation time of the modern lion. There is no evidence for gene flow between the two lineages, indicating that they did not share the same geographic area. The Eurasian and American cave lions became extinct at the end of the last glacial period without mitochondrial descendants on other continents. The modern lion was probably widely distributed in Africa during the Middle Pleistocene and started to diverge in sub-Saharan Africa during the Late Pleistocene. Lion populations in East and Southern Africa became separated from populations in West and North Africa when the equatorial rainforest expanded 183,500 to 81,800 years ago. They shared a common ancestor probably between 98,000 and 52,000 years ago. Due to the expansion of the Sahara between 83,100 and 26,600 years ago, lion populations in West and North Africa became separated. As the rainforest decreased and thus gave rise to more open habitats, lions moved from West to Central Africa. Lions from North Africa dispersed to southern Europe and Asia between 38,800 and 8,300 years ago.
Extinction of lions in southern Europe, North Africa and the Middle East interrupted gene flow between lion populations in Asia and Africa. Genetic evidence revealed numerous mutations in lion samples from East and Southern Africa, which indicates that this group has a longer evolutionary history than genetically less diverse lion samples from Asia and West and Central Africa. A whole genome-wide sequence of lion samples showed that samples from West Africa shared alleles with samples from Southern Africa, and samples from Central Africa shared alleles with samples from Asia. This phenomenon indicates that Central Africa was a melting pot of lion populations after they had become isolated, possibly migrating through corridors in the Nile Basin during the early Holocene.
Hybrids
In zoos, lions have been bred with tigers to create hybrids for the curiosity of visitors or for scientific purpose. The liger is bigger than a lion and a tiger, whereas most tigons are relatively small compared to their parents because of reciprocal gene effects. The leopon is a hybrid between a lion and leopard.
Description
The lion is a muscular, broad-chested cat with a short, rounded head, a reduced neck, and round ears; males have broader heads. The fur varies in colour from light buff to silvery grey, yellowish red, and dark brown. The colours of the underparts are generally lighter. A new-born lion has dark spots, which fade as the cub reaches adulthood, although faint spots may still be seen on the legs and underparts. The tail of all lions ends in a dark, hairy tuft that, in some lions, conceals an approximately -long, hard "spine" or "spur" composed of dermal papillae. The functions of the spur are unknown. The tuft is absent at birth and develops at around months of age. It is readily identifiable at the age of seven months.
Its skull is very similar to that of the tiger, although the frontal region is usually more depressed and flattened and has a slightly shorter postorbital region and broader nasal openings than those of the tiger. Due to the amount of skull variation in the two species, usually only the structure of the lower jaw can be used as a reliable indicator of species.
The skeletal muscles of the lion make up 58.8% of its body weight and represent the highest percentage of muscles among mammals. The lion has a high concentration of fast twitch muscle fibres, giving them quick bursts of speed but less stamina.
Size
Among felids, the lion is second only to the tiger in size. The size and weight of adult lions vary across its range and habitats. Accounts of a few individuals that were larger than average exist from Africa and India.
Mane
The male lion's mane is the most recognisable feature of the species. It may have evolved around 320,000–190,000 years ago. It grows downwards and backwards, covering most of the head, neck, shoulders, and chest. The mane is typically brownish and tinged with yellow, rust, and black hairs. Mutations in the genes microphthalmia-associated transcription factor and tyrosinase are possibly responsible for the colour of manes. It starts growing when lions enter adolescence, when testosterone levels increase, and reaches its full size at around four years old. Cool ambient temperatures in European and North American zoos may result in a heavier mane. On average, Asiatic lions have sparser manes than African lions.
This feature likely evolved to signal the fitness of males to females. Males with darker manes appear to have greater reproductive success and are more likely to remain in a pride for longer. They have longer and thicker hair and higher testosterone levels, but they are also more vulnerable to heat stress. Core body temperature apparently does not increase with mane length or colour, regardless of sex, season and feeding time; only surface temperature is affected. Unlike in other felid species, female lions consistently interact with multiple males at once. Another hypothesis suggests that the mane also serves to protect the neck in fights, but this is disputed. During fights, including those involving maneless females and adolescents, the neck is not targeted as much as the face, back, and hindquarters. Injured lions also begin to lose their manes.
Almost all male lions in Pendjari National Park are either maneless or have very short manes. Maneless lions have also been reported in Senegal, in Sudan's Dinder National Park and in Tsavo East National Park, Kenya. Castrated lions often have little to no mane because the removal of the gonads inhibits testosterone production. Rarely, both wild and captive lionesses have manes. Increased testosterone may be the cause of maned lionesses reported in northern Botswana.
Colour variation
The white lion is a rare morph with a genetic condition called leucism, which is caused by a double recessive allele. It is not albino; it has normal pigmentation in the eyes and skin. White lions have occasionally been encountered in and around Kruger National Park and the adjacent Timbavati Private Game Reserve in eastern South Africa. They were removed from the wild in the 1970s, thus decreasing the white lion gene pool. Nevertheless, 17 births have been recorded in five prides between 2007 and 2015. White lions are selected for breeding in captivity. They have reportedly been bred in camps in South Africa for use as trophies to be killed during canned hunts.
Distribution and habitat
African lions live in scattered populations across sub-Saharan Africa. The lion prefers grassy plains and savannahs, scrub bordering rivers, and open woodlands with bushes. It rarely enters closed forests. On Mount Elgon, the lion has been recorded up to an elevation of and close to the snow line on Mount Kenya. Savannahs with an annual rainfall of make up the majority of lion habitat in Africa, estimated at at most, but remnant populations are also present in tropical moist forests in West Africa and montane forests in East Africa. The Asiatic lion now survives only in and around Gir National Park in Gujarat, western India. Its habitat is a mixture of dry savannah forest and very dry, deciduous scrub forest.
Historical range
In Africa, the range of the lion originally spanned most of the continent, excluding the central African rainforest zone and the Sahara desert. In the 1960s, it became extinct in North Africa, except in the southern part of Sudan.
During the mid-Holocene, around 8,000-6,000 years ago, the range of lions expanded into Southeastern and Eastern Europe, partially re-occupying the range of the now extinct cave lion. In Hungary, the modern lion was present from about 4,500 to 3,200 years Before Present. In Ukraine, the modern lion was present from about 6,400 to 2,000 years Before Present. In Greece, it was common, as reported by Herodotus in 480 BC; it was considered rare by 300 BC and extirpated by AD 100.
In Asia the lion once ranged in regions where climatic conditions supported an abundance of prey. It was present in the Caucasus until the 10th century. It lived in Palestine until the Middle Ages and in Southwest Asia until the late 19th century. By the late 19th century, it had been extirpated in most of Turkey. The last live lion in Iran was sighted in 1942, about northwest of Dezful, although the corpse of a lioness was found on the banks of the Karun river in Khuzestan province in 1944. It once ranged from Sind and Punjab in Pakistan to Bengal and the Narmada River in central India.
Behaviour and ecology
Lions spend much of their time resting; they are inactive for about twenty hours per day. Although lions can be active at any time, their activity generally peaks after dusk with a period of socialising, grooming, and defecating. Intermittent bursts of activity continue until dawn, when hunting most often takes place. They spend an average of two hours a day walking and fifty minutes eating.
Group organisation
The lion is the most social of all wild felid species, living in groups of related individuals with their offspring. Such a group is called a "pride". Groups of male lions are called "coalitions". Females form the stable social unit in a pride and do not tolerate outside females. The majority of females remain in their birth prides while all males and some females will disperse. The average pride consists of around 15 lions, including several adult females and up to four males and their cubs of both sexes. Large prides, consisting of up to 30 individuals, have been observed. The sole exception to this pattern is the Tsavo lion pride that always has just one adult male. Prides act as fission–fusion societies, and members will split into subgroups that keep in contact with roars.
Nomadic lions range widely and move around sporadically, either in pairs or alone. Pairs are more frequent among related males. A lion may switch lifestyles; nomads can become residents and vice versa. Interactions between prides and nomads tend to be hostile, although pride females in estrus allow nomadic males to approach them. Males spend years in a nomadic phase before gaining residence in a pride. A study undertaken in the Serengeti National Park revealed that nomadic coalitions gain residency at between 3.5 and 7.3 years of age. In Kruger National Park, dispersing male lions move more than away from their natal pride in search of their own territory. Female lions stay closer to their natal pride. Therefore, female lions in an area are more closely related to each other than male lions in the same area.
The evolution of sociability in lions was likely driven both by high population density and the clumped resources of savannah habitats. The larger the pride, the more high-quality territory they can defend; "hotspots" being near river confluences, where the cats have better access to water, prey and shelter (via vegetation). The area occupied by a pride is called a "pride area" whereas that occupied by a nomad is a "range". Males associated with a pride patrol the fringes. Both males and females defend the pride against intruders, but the male lion is better-suited for this purpose due to its stockier, more powerful build. Some individuals consistently lead the defence against intruders, while others lag behind. Lions tend to assume specific roles in the pride; slower-moving individuals may provide other valuable services to the group. Alternatively, there may be rewards associated with being a leader that fends off intruders; the rank of lionesses in the pride is reflected in these responses. The male or males associated with the pride must defend their relationship with the pride from outside males who may attempt to usurp them. Dominance hierarchies do not appear to exist among individuals of either sex in a pride.
Asiatic lion prides differ in group composition. Male Asiatic lions are solitary or associate with up to three males, forming a loose pride while females associate with up to 12 other females, forming a stronger pride together with their cubs. Female and male lions associate only when mating. Coalitions of males hold territory for a longer time than single lions. Males in coalitions of three or four individuals exhibit a pronounced hierarchy, in which one male dominates the others and mates more frequently.
Hunting and diet
The lion is a generalist hypercarnivore and is considered to be both an apex and keystone predator due to its wide prey spectrum. Its prey consists mainly of medium-sized to large ungulates, particularly blue wildebeest, plains zebra, African buffalo, gemsbok and giraffe. It also frequently takes common warthog despite it being much smaller. In India, chital and sambar deer are the most common wild prey, while livestock contributes significantly to lion kills outside protected areas. It usually avoids fully grown adult elephants, rhinoceros and hippopotamus and small prey like dik-dik, hyraxes, hares and monkeys. Unusual prey include porcupines and small reptiles. Lions kill other predators but seldom consume them.
Young lions first display stalking behaviour at around three months of age, although they do not participate in hunting until they are almost a year old and begin to hunt effectively when nearing the age of two. Single lions are capable of bringing down zebra and wildebeest, while larger prey like buffalo and giraffe are riskier. In Chobe National Park, large prides have been observed hunting African bush elephants up to around 15 years old in exceptional cases, with the victims being calves, juveniles, and even subadults. In typical group hunts, each lioness has a favoured position in the group, either stalking prey on the "wing", then attacking, or moving a smaller distance in the centre of the group and capturing prey fleeing from other lionesses. Males attached to prides do not usually participate in group hunting.
Some evidence suggests, however, that males are just as successful as females; they are typically solo hunters who ambush prey in small bushland. They may join in the hunting of large, slower-moving prey like buffalo; and even hunt them on their own. Moderately-sized hunting groups generally have higher success rates than lone females and larger groups.
Lions are not particularly known for their stamina. For instance, a lioness's heart comprises only 0.57% of her body weight and a male's is about 0.45% of his body weight, whereas a hyena's heart comprises almost 1% of its body weight. Thus, lions run quickly only in short bursts at about and need to be close to their prey before starting the attack. They take advantage of factors that reduce visibility; many kills take place near some form of cover or at night. One study in 2018 recorded a lion running at a top speed of . The lion accelerates at the start of the chase by 9.5 m/s², whereas zebras, wildebeest and Thomson's gazelle accelerate by 5 m/s², 5.6 m/s² and 4.5 m/s², respectively; acceleration appears to be more important than steady displacement speed in lion hunts. The lion's attack is short and powerful; it attempts to catch prey with a fast rush and final leap, usually pulls it down by the rump, and kills with a clamping bite to the throat or muzzle. It can hold the prey's throat for up to 13 minutes, until the prey stops moving. It has a bite force from 1,593.8 to 1,768 newtons at the canine tip and up to 4,167.6 newtons at the carnassial notch.
Lions typically consume prey at the location of the hunt but sometimes drag large prey into cover. They tend to squabble over kills, particularly the males. Cubs suffer most when food is scarce but otherwise all pride members eat their fill, including old and crippled lions, which can live on leftovers. Large kills are shared more widely among pride members. An adult lioness requires an average of about of meat per day while males require about . Lions gorge themselves and eat up to in one session. If it is unable to consume all of the kill, it rests for a few hours before continuing to eat. On hot days, the pride retreats to shade with one or two males standing guard. Lions defend their kills from scavengers such as vultures and hyenas.
Lions scavenge on carrion when the opportunity arises, feeding on animals that died of natural causes such as disease or that were killed by other predators. Scavenging lions keep a constant lookout for circling vultures, which indicate the death or distress of an animal. Most carrion that both hyenas and lions feed on is killed by hyenas rather than by lions. Carrion is thought to provide a large part of the lion's diet.
Predatory competition
Lions and spotted hyenas occupy a similar ecological niche and compete for prey and carrion; a review of data across several studies indicates a dietary overlap of 58.6%. Lions typically ignore hyenas unless they are on a kill or are being harassed, while the latter tend to visibly react to the presence of lions with or without the presence of food. In the Ngorongoro Crater, lions subsist largely on kills stolen from hyenas, causing the hyenas to increase their kill rate. In Botswana's Chobe National Park, the situation is reversed as hyenas there frequently challenge lions and steal their kills, obtaining food from 63% of all lion kills. When confronted on a kill, hyenas may either leave or wait patiently at a distance of until the lions have finished. Hyenas may feed alongside lions and force them off a kill. The two species may attack one another for no apparent reason, even when there is no food involved. Lions can account for up to 71% of hyena deaths in Etosha National Park. Hyenas have adapted by frequently mobbing lions that enter their home ranges. When the lion population in Kenya's Masai Mara National Reserve declined, the spotted hyena population increased rapidly.
Lions tend to dominate cheetahs and leopards, steal their kills and kill their cubs and even adults when given the chance. Cheetahs often lose their kills to lions or other predators. A study in the Serengeti ecosystem revealed that lions killed at least 17 of 125 cheetah cubs born between 1987 and 1990. Cheetahs avoid their competitors by hunting at different times and habitats. Leopards, by contrast, do not appear to be motivated by an avoidance of lions, as they use heavy vegetation regardless of whether lions are present in an area and both cats are active around the same time of day. In addition, there is no evidence that lions affect leopard abundance. Leopards take refuge in trees, though lionesses occasionally attempt to climb up and retrieve their kills.
Lions similarly dominate African wild dogs, taking their kills and dispatching pups or adult dogs. Population densities of wild dogs are low in areas where lions are more abundant. However, there are a few reported cases of old and wounded lions falling prey to wild dogs.
Reproduction and life cycle
Most lionesses reproduce by the time they are four years of age. Lions do not mate at a specific time of year and the females are polyestrous. Like those of other cats, the male lion's penis has spines that point backward. During withdrawal of the penis, the spines rake the walls of the female's vagina, which may cause ovulation. A lioness may mate with more than one male when she is in heat. Lions of both sexes may be involved in group homosexual and courtship activities. Males will also head-rub and roll around with each other before mounting each other. Generation length of the lion is about seven years. The average gestation period is around 110 days; the female gives birth to a litter of between one and four cubs in a secluded den, which may be a thicket, a reed-bed, a cave, or some other sheltered area, usually away from the pride. She will often hunt alone while the cubs are still helpless, staying relatively close to the den. Lion cubs are born blind, their eyes opening around seven days after birth. They weigh at birth and are almost helpless, beginning to crawl a day or two after birth and walking around three weeks of age. To avoid a buildup of scent attracting the attention of predators, the lioness moves her cubs to a new den site several times a month, carrying them one-by-one by the nape of the neck.
Usually, the mother does not integrate herself and her cubs back into the pride until the cubs are six to eight weeks old. Sometimes the introduction to pride life occurs earlier, particularly if other lionesses have given birth at about the same time. When first introduced to the rest of the pride, lion cubs lack confidence when confronted with adults other than their mother. They soon begin to immerse themselves in the pride life, however, playing among themselves or attempting to initiate play with the adults. Lionesses with cubs of their own are more likely to be tolerant of another lioness's cubs than lionesses without cubs. Male tolerance of the cubs varies—one male could patiently let the cubs play with his tail or his mane, while another may snarl and bat the cubs away.
Pride lionesses often synchronise their reproductive cycles and communal rearing and suckling of the young, which suckle indiscriminately from any or all of the nursing females in the pride. The synchronisation of births is advantageous because the cubs grow to being roughly the same size and have an equal chance of survival, and sucklings are not dominated by older cubs. Weaning occurs after six or seven months. Male lions reach maturity at about three years of age and at four to five years are capable of challenging and displacing adult males associated with another pride. They begin to age and weaken at between 10 and 15 years of age at the latest.
When one or more new males oust the previous males associated with a pride, the victors often kill any existing young cubs, perhaps because females do not become fertile and receptive until their cubs mature or die. Females often fiercely defend their cubs from a usurping male but are rarely successful unless a group of three or four mothers within a pride join forces against the male. Cubs also die from starvation and abandonment, and predation by leopards, hyenas and wild dogs. Male cubs are excluded from their maternal pride when they reach maturity at around two or three years of age, while some females may leave when they reach the age of two. When a new male lion takes over a pride, adolescents both male and female may be evicted.
Health and mortality
Lions may live 12–17 years in the wild. Although adult lions have no natural predators, evidence suggests most die violently from attacks by humans or other lions. Lions often inflict serious injuries on members of other prides they encounter in territorial disputes or members of the home pride when fighting at a kill. Crippled lions and cubs may fall victim to hyenas and leopards or be trampled by buffalo or elephants. Careless lions may be maimed when hunting prey. Nile crocodiles may also kill and eat lions, evidenced by the occasional lion claw found in crocodile stomachs.
Ticks commonly infest the ears, neck and groin regions of the lions. Adult forms of several tapeworm species of the genus Taenia have been isolated from lion intestines, having been ingested as larvae in antelope meat. Lions in the Ngorongoro Crater were afflicted by an outbreak of stable fly (Stomoxys calcitrans) in 1962, resulting in lions becoming emaciated and covered in bloody, bare patches. Lions sought unsuccessfully to evade the biting flies by climbing trees or crawling into hyena burrows; many died or migrated and the local population dropped from 70 to 15 individuals. A more recent outbreak in 2001 killed six lions.
Captive lions have been infected with canine distemper virus (CDV) since at least the mid-1970s. CDV is spread by domestic dogs and other carnivores; a 1994 outbreak in Serengeti National Park resulted in many lions developing neurological symptoms such as seizures. During the outbreak, several lions died from pneumonia and encephalitis. Feline immunodeficiency virus and lentivirus also affect captive lions.
Communication
When resting, lion socialisation occurs through a number of behaviours; the animal's expressive movements are highly developed. The most common peaceful, tactile gestures are head rubbing and social licking, which have been compared with the role of allogrooming among primates. Head rubbing, nuzzling the forehead, face and neck against another lion appears to be a form of greeting and is seen often after an animal has been apart from others or after a fight or confrontation. Males tend to rub other males, while cubs and females rub females. Social licking often occurs in tandem with head rubbing; it is generally mutual and the recipient appears to express pleasure. The head and neck are the most common parts of the body licked; this behaviour may have arisen out of utility because lions cannot lick these areas themselves.
Lions have an array of facial expressions and body postures that serve as visual gestures. A common facial expression is the "grimace face" or flehmen response, which a lion makes when sniffing chemical signals and involves an open mouth with bared teeth, raised muzzle, wrinkled nose, closed eyes and relaxed ears. Lions also use chemical and visual marking; males spray urine and scrape plots of ground and objects within the territory.
The lion's repertoire of vocalisations is large; variations in intensity and pitch appear to be central to communication. Most lion vocalisations are variations of growling, snarling, meowing and roaring. Other sounds produced include puffing, bleating and humming. Roaring is used to advertise its presence. Lions most often roar at night, a sound that can be heard from a distance of . They tend to roar in a very characteristic manner starting with a few deep, long roars that subside into grunts.
Conservation
The lion is listed as Vulnerable on the IUCN Red List. The Indian population is listed on CITES Appendix I and the African population on CITES Appendix II.
In Africa
Several large and well-managed protected areas in Africa host large lion populations. Where an infrastructure for wildlife tourism has been developed, cash revenue for park management and local communities is a strong incentive for lion conservation. Most lions now live in East and Southern Africa; their numbers are rapidly decreasing, and fell by an estimated 30–50% in the latter half of the 20th century. Primary causes of the decline include disease and human interference. In 1975, it was estimated that since the 1950s, lion numbers had decreased by half to 200,000 or fewer. Estimates of the African lion population range between 16,500 and 47,000 living in the wild in 2002–2004.
In the Republic of the Congo, Odzala-Kokoua National Park was considered a lion stronghold in the 1990s. By 2014, no lions were recorded in the protected area so the population is considered locally extinct. The West African lion population is isolated from the one in Central Africa, with little or no exchange of breeding individuals. In 2015, it was estimated that this population consists of about 400 animals, including fewer than 250 mature individuals. They persist in three protected areas in the region, mostly in one population in the W-Arly-Pendjari (WAP) protected area complex, shared by Benin, Burkina Faso and Niger. This population is listed as Critically Endangered. Field surveys in the WAP ecosystem revealed that lion occupancy is lowest in the W National Park, and higher in areas with permanent staff and thus better protection.
A population occurs in Cameroon's Waza National Park, where between approximately 14 and 21 animals persisted as of 2009. In addition, 50 to 150 lions are estimated to be present in Burkina Faso's Arly-Singou ecosystem. In 2015, an adult male lion and a female lion were sighted in Ghana's Mole National Park. These were the first sightings of lions in the country in 39 years. In the same year, a population of up to 200 lions that was previously thought to have been extirpated was filmed in the Alatash National Park, Ethiopia, close to the Sudanese border.
In 2005, Lion Conservation Strategies were developed for West and Central Africa, and for East and Southern Africa. The strategies seek to maintain suitable habitat, ensure a sufficient wild prey base for lions, reduce factors that lead to further fragmentation of populations, and make lion–human coexistence sustainable. Lion depredation on livestock is significantly reduced in areas where herders keep livestock in improved enclosures. Such measures contribute to mitigating human–lion conflict.
In Asia
The last refuge of the Asiatic lion population is the Gir National Park and surrounding areas in the region of Saurashtra or Kathiawar Peninsula in Gujarat State, India. The population has risen from approximately 180 lions in 1974 to about 400 in 2010. It is geographically isolated, which can lead to inbreeding and reduced genetic diversity. Since 2008, the Asiatic lion has been listed as Endangered on the IUCN Red List. By 2015, the population had grown to 523 individuals inhabiting an area of in Saurashtra. In 2017, about 650 individuals were recorded during the Asiatic Lion Census.
The presence of numerous human settlements close to Gir National Park resulted in conflict between lions, local people and their livestock. Some consider the presence of lions a benefit, as they keep populations of crop damaging herbivores in check.
Captive breeding
Lions imported to Europe before the middle of the 19th century were possibly foremost Barbary lions from North Africa, or Cape lions from Southern Africa. Another 11 animals thought to be Barbary lions kept in Addis Ababa Zoo are descendants of animals owned by Emperor Haile Selassie. WildLink International in collaboration with Oxford University launched an ambitious International Barbary Lion Project with the aim of identifying and breeding Barbary lions in captivity for eventual reintroduction into a national park in the Atlas Mountains of Morocco. However, a genetic analysis showed that the captive lions at Addis Ababa Zoo were not Barbary lions, but rather closely related to wild lions in Chad and Cameroon.
In 1982, the Association of Zoos and Aquariums started a Species Survival Plan for the Asiatic lion to increase its chances of survival. In 1987, it was found that most lions in North American zoos were hybrids between African and Asiatic lions. Breeding programs need to note origins of the participating animals to avoid cross-breeding different subspecies and thus reducing their conservation value. Captive breeding of lions was halted to eliminate individuals of unknown origin and pedigree. Wild-born lions were imported to American zoos from Africa between 1989 and 1995. Breeding was continued in 1998 in the frame of an African lion Species Survival Plan.
About 77% of the captive lions registered in the International Species Information System in 2006 were of unknown origin; these animals might have carried genes that are extinct in the wild and may therefore be important to the maintenance of the overall genetic variability of the lion.
Interactions with humans
In zoos and circuses
Lions are part of a group of exotic animals that have been central to zoo exhibits since the late 18th century. Although many modern zoos are more selective about their exhibits, there are more than 1,000 African and 100 Asiatic lions in zoos and wildlife parks around the world. They are considered an ambassador species and are kept for tourism, education and conservation purposes. Lions can live over twenty years in captivity; for example, three sibling lions at the Honolulu Zoo lived to the age of 22 in 2007.
The first European "zoos" spread among noble and royal families in the 13th century, and until the 17th century were called seraglios. At that time, they came to be called menageries, an extension of the cabinet of curiosities. They spread from France and Italy during the Renaissance to the rest of Europe. In England, although the seraglio tradition was less developed, lions were kept at the Tower of London in a seraglio established by King John in the 13th century; this was probably stocked with animals from an earlier menagerie started in 1125 by Henry I at his hunting lodge in Woodstock, Oxfordshire, where according to William of Malmesbury lions had been stocked.
Lions were kept in cramped and squalid conditions at London Zoo until a larger lion house with roomier cages was built in the 1870s. Further changes took place in the early 20th century when Carl Hagenbeck designed enclosures with concrete "rocks", more open space and a moat instead of bars, more closely resembling a natural habitat. Hagenbeck designed lion enclosures for both Melbourne Zoo and Sydney's Taronga Zoo; although his designs were popular, the use of bars and caged enclosures prevailed in many zoos until the 1960s. In the late 20th century, larger, more natural enclosures and the use of wire mesh or laminated glass instead of lowered dens allowed visitors to come closer than ever to the animals; some attractions such as the Cat Forest/Lion Overlook of Oklahoma City Zoological Park placed the den on ground level, higher than visitors.
Lion taming has been part of both established circuses and individual acts such as Siegfried & Roy. The practice began in the early 19th century by Frenchman Henri Martin and American Isaac Van Amburgh, who both toured widely and whose techniques were copied by a number of followers. Martin composed a pantomime titled Les Lions de Mysore ("the lions of Mysore"), an idea Amburgh quickly borrowed. These acts eclipsed equestrianism acts as the central display of circus shows and entered public consciousness in the early 20th century with cinema. In demonstrating the superiority of human over animal, lion taming served a purpose similar to animal fights of previous centuries. The ultimate proof of a tamer's dominance and control over a lion is demonstrated by the placing of the tamer's head in the lion's mouth. The now-iconic lion tamer's chair was possibly first used by American Clyde Beatty (1903–1965).
Hunting and games
Lion hunting has occurred since ancient times and was often a royal tradition, intended to demonstrate the power of the king over nature. Such hunts took place in a reserved area in front of an audience. The monarch was accompanied by his men and controls were put in place to increase their safety and ease of killing. The earliest surviving record of lion hunting is an ancient Egyptian inscription dated circa 1380 BC that mentions Pharaoh Amenhotep III killing 102 lions in ten years "with his own arrows". The Assyrian emperor Ashurbanipal had one of his lion hunts depicted on a sequence of Assyrian palace reliefs, known as the Lion Hunt of Ashurbanipal. Lions were also hunted during the Mughal Empire, where Emperor Jahangir is said to have excelled at it. In Ancient Rome, lions were kept by emperors for hunts, gladiator fights and executions.
The Maasai people have traditionally viewed the killing of lions as a rite of passage. Historically, lions were hunted by individuals, however, due to reduced lion populations, elders discourage solo lion hunts. During the European colonisation of Africa in the 19th century, the hunting of lions was encouraged because they were considered pests and lion skins were sold for £1 each. The widely reproduced imagery of the heroic hunter chasing lions would dominate a large part of the century. Trophy hunting of lions in recent years has been met with controversy, notably with the killing of Cecil the lion in mid-2015.
Man-eating
Lions do not usually hunt humans but some (usually males) seem to seek them out. One well-publicised case is the Tsavo maneaters; in 1898, 28 officially recorded workers building the Uganda Railway were taken by lions over nine months during the construction of a bridge in Kenya. The hunter who killed the lions wrote a book detailing the animals' predatory behaviour; they were larger than normal and lacked manes, and one seemed to suffer from tooth decay. The infirmity theory, including tooth decay, is not favoured by all researchers; an analysis of teeth and jaws of man-eating lions in museum collections suggests that while tooth decay may explain some incidents, prey depletion in human-dominated areas is a more likely cause of lion predation on humans. Sick or injured animals may be more prone to man-eating but the behaviour is not unusual, nor necessarily aberrant.
Lions' proclivity for man-eating has been systematically examined. American and Tanzanian scientists report that man-eating behaviour in rural areas of Tanzania increased greatly from 1990 to 2005. At least 563 villagers were attacked and many eaten over this period. The incidents occurred near Selous Game Reserve in Rufiji River and in Lindi Region near the Mozambican border. While the expansion of villages into bush country is one concern, the authors argue conservation policy must mitigate the danger because in this case, conservation contributes directly to human deaths. Cases in Lindi in which lions seize humans from the centres of substantial villages have been documented. Another study of 1,000 people attacked by lions in southern Tanzania between 1988 and 2009 found that the weeks following the full moon, when there was less moonlight, were a strong indicator of increased night-time attacks on people.
According to Robert R. Frump, Mozambican refugees regularly crossing Kruger National Park, South Africa, at night are attacked and eaten by lions. Frump said thousands may have been killed in the decades after apartheid sealed the park and forced refugees to cross the park at night.
Cultural significance
The lion is one of the most widely recognised animal symbols in human culture. It has been extensively depicted in sculptures and paintings, on national flags, and in contemporary films and literature. It is considered to be the 'King of Beasts' and has symbolised power, royalty and protection. Several leaders have had "lion" in their name including Sundiata Keita of the Mali Empire, who was called "Lion of Mali", and Richard the Lionheart of England. The male's mane makes it a particularly recognisable feature and thus has been represented more than the female. Nevertheless, the lioness has also had importance as a guardian.
In sub-Saharan Africa, the lion has been a common character in stories, proverbs and dances, but rarely featured in visual arts. In the Swahili language, the lion is known as simba which also means "aggressive", "king" and "strong". In parts of West and East Africa, the lion is associated with healing and provides the connection between seers and the supernatural. In other East African traditions, the lion represents laziness. In much of African folklore, the lion is portrayed as having low intelligence and is easily tricked. In Nubia, the lion-god Apedemak was associated with the flooding of the Nile. In Ancient Egypt, lions were linked both with the sun and the waters of the Nile. Several gods were conceived as being part lion, including the war deities Sekhmet and Maahes, and Tefnut, the goddess of moisture. The lions mark where the sun rises and sets and symbolise yesterday and tomorrow.
The lion was a prominent symbol in ancient Mesopotamia from Sumer up to Assyrian and Babylonian times, where it was strongly associated with kingship. The big cat was a symbol and steed of fertility goddess Inanna. Lions decorate the Processional Way leading to the Ishtar Gate in Babylon which was built by Nebuchadnezzar II in the 6th century BCE. The Lion of Babylon symbolised the power of the king and protection of the land against enemies, but was also invoked for good luck. The constellation Leo the lion was first recognised by the Sumerians around 4,000 years ago and is the fifth sign of the zodiac. In ancient Israel, a lion represented the tribe of Judah. Lions are frequently mentioned in the Bible, notably in the Book of Daniel, in which the eponymous hero is forced to sleep in the lions' den.
Indo-Persian chroniclers regarded the lion as keeper of order in the realm of animals. The Sanskrit word mrigendra signifies a lion as king of animals. In India, the Lion Capital of Ashoka, erected by Emperor Ashoka in the 3rd century BCE, depicts four lions standing back to back. In Hindu mythology, the half-lion Narasimha, an avatar of the deity Vishnu, battles and slays the evil ruler Hiranyakashipu. In Buddhist art, lions are associated with both arhats and bodhisattvas and may be ridden by Manjushri. Though they were never native to the country, lions have played important roles in Chinese culture. Statues of the beast have guarded the entrances to the imperial palace and many religious shrines. The lion dance has been performed for over a thousand years.
In ancient Greece, the lion is featured in several of Aesop's fables, notably The Lion and the Mouse. In Greek mythology, the Nemean lion is slain by the hero Heracles who wears its skin. Lancelot and Gawain were also heroes slaying lions in medieval Europe. Lions continue to appear in modern literature such as the Cowardly Lion in L. Frank Baum's 1900 The Wonderful Wizard of Oz, and in C. S. Lewis's The Lion, the Witch and the Wardrobe. The lion was portrayed as the ruler of animals in the 1994 Disney animated feature film The Lion King.
| Biology and health sciences | Carnivora | null |
36941 | https://en.wikipedia.org/wiki/Laplace%27s%20equation | Laplace's equation | In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as
$$\nabla^2 f = 0$$
or
$$\Delta f = 0,$$
where $\Delta = \nabla \cdot \nabla = \nabla^2$ is the Laplace operator, $\nabla \cdot$ is the divergence operator (also symbolized "div"), $\nabla$ is the gradient operator (also symbolized "grad"), and $f$ is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
If the right-hand side is specified as a given function, $h$, we have
$$\Delta f = h.$$
This is called Poisson's equation, a generalization of Laplace's equation. Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. Laplace's equation is also a special case of the Helmholtz equation.
The general theory of solutions to Laplace's equation is known as potential theory. The twice continuously differentiable solutions of Laplace's equation are the harmonic functions, which are important in multiple branches of physics, notably electrostatics, gravitation, and fluid dynamics. In the study of heat conduction, the Laplace equation is the steady-state heat equation. In general, Laplace's equation describes situations of equilibrium, or those that do not depend explicitly on time.
Forms in different coordinate systems
In rectangular coordinates,
$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} = 0.$$
In cylindrical coordinates,
$$\nabla^2 f = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r \frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial \phi^2} + \frac{\partial^2 f}{\partial z^2} = 0.$$
In spherical coordinates, using the $(r, \theta, \varphi)$ convention,
$$\nabla^2 f = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \frac{\partial f}{\partial r}\right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\, \frac{\partial f}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2 f}{\partial \varphi^2} = 0.$$
More generally, in arbitrary curvilinear coordinates $(\xi^1, \xi^2, \xi^3)$,
$$\nabla^2 f = \frac{1}{\sqrt{|g|}}\frac{\partial}{\partial \xi^i}\!\left(\sqrt{|g|}\, g^{ij}\frac{\partial f}{\partial \xi^j}\right) = 0,$$
or
$$\nabla^2 f = g^{ij}\!\left(\frac{\partial^2 f}{\partial \xi^i \partial \xi^j} - \Gamma^k_{ij}\frac{\partial f}{\partial \xi^k}\right) = 0,$$
where $g^{ij}$ is the Euclidean metric tensor relative to the new coordinates and $\Gamma^k_{ij}$ denotes its Christoffel symbols.
Boundary conditions
The Dirichlet problem for Laplace's equation consists of finding a solution on some domain such that on the boundary of is equal to some given function. Since the Laplace operator appears in the heat equation, one physical interpretation of this problem is as follows: fix the temperature on the boundary of the domain according to the given specification of the boundary condition. Allow heat to flow until a stationary state is reached in which the temperature at each point on the domain does not change anymore. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem.
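This heat-flow picture translates directly into a simple numerical scheme. The sketch below is a minimal illustration rather than a production solver: it assumes a unit-square domain with arbitrarily chosen boundary temperatures and uses Jacobi relaxation, repeatedly replacing each interior grid value by the average of its four neighbours until a stationary state is reached.

```python
# Minimal sketch: Dirichlet problem for Laplace's equation on a square grid,
# solved by Jacobi relaxation ("let heat flow until it stops changing").
# The boundary values are arbitrary and only for illustration.
import numpy as np

n = 50                        # grid points per side
f = np.zeros((n, n))          # initial guess for the interior

# Fix the "temperature" on the boundary of the domain.
f[0, :] = 1.0                 # one edge held at 1
f[-1, :] = 0.0
f[:, 0] = 0.0
f[:, -1] = 0.0

for _ in range(20000):
    new = f.copy()
    # Discrete analogue of the mean value property of harmonic functions:
    # each interior point becomes the average of its four neighbours.
    new[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                              f[1:-1, :-2] + f[1:-1, 2:])
    if np.max(np.abs(new - f)) < 1e-6:    # stationary state reached
        f = new
        break
    f = new

print("value near the centre:", f[n // 2, n // 2])
```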
The Neumann boundary conditions for Laplace's equation specify not the function itself on the boundary of but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of alone. For the example of the heat equation it amounts to prescribing the heat flux through the boundary. In particular, at an adiabatic boundary, the normal derivative of is zero.
Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation (or any linear homogeneous differential equation), their sum (or any linear combination) is also a solution. This property, called the principle of superposition, is very useful. For example, solutions to complex problems can be constructed by summing simple solutions.
In two dimensions
Laplace's equation in two independent variables in rectangular coordinates has the form
$$\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} \equiv \psi_{xx} + \psi_{yy} = 0.$$
Analytic functions
The real and imaginary parts of a complex analytic function both satisfy the Laplace equation. That is, if $z = x + iy$, and if
$$f(z) = u(x, y) + i v(x, y),$$
then the necessary condition that $f(z)$ be analytic is that $u$ and $v$ be differentiable and that the Cauchy–Riemann equations be satisfied:
$$u_x = v_y, \qquad v_x = -u_y,$$
where $u_x$ is the first partial derivative of $u$ with respect to $x$.
It follows that
$$u_{yy} = (-v_x)_y = -(v_y)_x = -(u_x)_x = -u_{xx}.$$
Therefore $u$ satisfies the Laplace equation. A similar calculation shows that $v$ also satisfies the Laplace equation.
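As a quick check of this connection, the sketch below uses the sympy library to confirm symbolically that the real and imaginary parts of a few analytic functions have vanishing Laplacian; the particular functions are arbitrary illustrative choices, not taken from the text.

```python
# Sketch: real and imaginary parts of analytic functions are harmonic.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

for expr in [z**3, sp.exp(z), sp.sin(z)]:
    u, v = expr.as_real_imag()          # split into real and imaginary parts
    lap_u = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2))
    lap_v = sp.simplify(sp.diff(v, x, 2) + sp.diff(v, y, 2))
    print(expr, '->', lap_u, lap_v)     # both Laplacians simplify to 0
```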
Conversely, given a harmonic function, it is the real part of an analytic function, (at least locally). If a trial form is
then the Cauchy–Riemann equations will be satisfied if we set
This relation does not determine , but only its increments:
The Laplace equation for implies that the integrability condition for is satisfied:
and thus may be defined by a line integral. The integrability condition and Stokes' theorem imply that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions. This construction is only valid locally, or provided that the path does not loop around a singularity. For example, if and are polar coordinates and
then a corresponding analytic function is
However, the angle is single-valued only in a region that does not enclose the origin.
The close connection between the Laplace equation and analytic functions implies that any solution of the Laplace equation has derivatives of all orders, and can be expanded in a power series, at least inside a circle that does not enclose a singularity. This is in sharp contrast to solutions of the wave equation, which generally have less regularity.
There is an intimate connection between power series and Fourier series. If we expand a function in a power series inside a circle of radius , this means that
with suitably defined coefficients whose real and imaginary parts are given by
Therefore
which is a Fourier series for . These trigonometric functions can themselves be expanded, using multiple angle formulae.
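This connection also gives a practical recipe for the Dirichlet problem on the unit disk: compute the Fourier coefficients of the boundary data and attach a factor of $r^n$ to the $n$-th term. The sketch below assumes an arbitrary example boundary function and compares the truncated series at an interior point with the exact harmonic extension.

```python
# Sketch: solve the Dirichlet problem on the unit disk via a Fourier series.
# The boundary data g(theta) is an arbitrary example.
import numpy as np

def disk_solution(r, theta, g, n_terms=50, n_quad=2000):
    """Harmonic function on the unit disk with boundary values g, at (r, theta)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    gv = g(phi)
    total = gv.mean()                          # constant (n = 0) term
    for n in range(1, n_terms + 1):
        an = 2.0 * (gv * np.cos(n * phi)).mean()
        bn = 2.0 * (gv * np.sin(n * phi)).mean()
        total += r**n * (an * np.cos(n * theta) + bn * np.sin(n * theta))
    return total

g = lambda t: np.cos(3 * t) + 0.5 * np.sin(t)            # example boundary values
print(disk_solution(0.5, 1.0, g))                         # series value inside the disk
print(0.5**3 * np.cos(3.0) + 0.5 * 0.5 * np.sin(1.0))     # exact value for this g
```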
Fluid flow
Let the quantities $u$ and $v$ be the horizontal and vertical components of the velocity field of a steady incompressible, irrotational flow in two dimensions. The continuity condition for an incompressible flow is that
$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,$$
and the condition that the flow be irrotational is that
$$\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} = 0.$$
If we define the differential of a function by
then the continuity condition is the integrability condition for this differential: the resulting function is called the stream function because it is constant along flow lines. The first derivatives of are given by
and the irrotationality condition implies that satisfies the Laplace equation. The harmonic function that is conjugate to is called the velocity potential. The Cauchy–Riemann equations imply that
Thus every analytic function corresponds to a steady incompressible, irrotational, inviscid fluid flow in the plane. The real part is the velocity potential, and the imaginary part is the stream function.
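To make the correspondence concrete, the sketch below takes the analytic function $w(z) = z^2$, an arbitrary illustrative choice corresponding to a planar stagnation-point flow, and checks that its real part acts as a velocity potential: the resulting velocity field is divergence-free and irrotational, and both the potential and the stream function are harmonic.

```python
# Sketch: an analytic complex potential defines an incompressible,
# irrotational planar flow. w(z) = z**2 is an arbitrary example.
import sympy as sp

x, y = sp.symbols('x y', real=True)
w = (x + sp.I * y) ** 2
phi, psi = w.as_real_imag()         # velocity potential and stream function

u = sp.diff(phi, x)                 # horizontal velocity component
v = sp.diff(phi, y)                 # vertical velocity component

print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))              # continuity: 0
print(sp.simplify(sp.diff(v, x) - sp.diff(u, y)))              # irrotationality: 0
print(sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2)))    # Laplacian of phi: 0
print(sp.simplify(sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))    # Laplacian of psi: 0
```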
Electrostatics
According to Maxwell's equations, an electric field in two space dimensions that is independent of time satisfies
and
where is the charge density. The first Maxwell equation is the integrability condition for the differential
so the electric potential may be constructed to satisfy
The second of Maxwell's equations then implies that
which is the Poisson equation. The Laplace equation can be used in three-dimensional problems in electrostatics and fluid flow just as in two dimensions.
In three dimensions
Fundamental solution
A fundamental solution of Laplace's equation satisfies
where the Dirac delta function denotes a unit source concentrated at the point . No function has this property: in fact it is a distribution rather than a function; but it can be thought of as a limit of functions whose integrals over space are unity, and whose support (the region where the function is non-zero) shrinks to a point (see weak solution). It is common to take a different sign convention for this equation than one typically does when defining fundamental solutions. This choice of sign is often convenient to work with because −Δ is a positive operator. The definition of the fundamental solution thus implies that, if the Laplacian of is integrated over any volume that encloses the source point, then
The Laplace equation is unchanged under a rotation of coordinates, and hence we can expect that a fundamental solution may be obtained among solutions that only depend upon the distance from the source point. If we choose the volume to be a ball of radius around the source point, then Gauss's divergence theorem implies that
It follows that
on a sphere of radius that is centered on the source point, and hence
Note that, with the opposite sign convention (used in physics), this is the potential generated by a point particle, for an inverse-square law force, arising in the solution of Poisson equation. A similar argument shows that in two dimensions
where denotes the natural logarithm. Note that, with the opposite sign convention, this is the potential generated by a pointlike sink (see point particle), which is the solution of the Euler equations in two-dimensional incompressible flow.
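Away from the source point the delta function vanishes, so a fundamental solution must be harmonic there. The sketch below verifies this symbolically for $1/(4\pi r)$ in three dimensions and for a logarithmic potential in two; the overall signs and normalisations depend on the convention discussed above and are only illustrative here.

```python
# Sketch: candidate fundamental solutions are harmonic away from the origin.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)

r3 = sp.sqrt(x**2 + y**2 + z**2)
u3 = 1 / (4 * sp.pi * r3)                       # 3-D point-source potential
lap3 = sp.diff(u3, x, 2) + sp.diff(u3, y, 2) + sp.diff(u3, z, 2)
print(sp.simplify(lap3))                        # 0 for r > 0

r2 = sp.sqrt(x**2 + y**2)
u2 = -sp.log(r2) / (2 * sp.pi)                  # 2-D logarithmic potential
lap2 = sp.diff(u2, x, 2) + sp.diff(u2, y, 2)
print(sp.simplify(lap2))                        # 0 for r > 0
```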
Green's function
A Green's function is a fundamental solution that also satisfies a suitable condition on the boundary of a volume . For instance,
may satisfy
Now if is any solution of the Poisson equation in :
and assumes the boundary values on , then we may apply Green's identity (a consequence of the divergence theorem), which states that
The notations $u_n$ and $G_n$ denote normal derivatives on the boundary surface. In view of the conditions satisfied by $u$ and $G$, this result simplifies to
Thus the Green's function describes the influence at of the data and . For the case of the interior of a sphere of radius , the Green's function may be obtained by means of a reflection: the source point at distance from the center of the sphere is reflected along its radial line to a point P′ that is at a distance
Note that if is inside the sphere, then P′ will be outside the sphere. The Green's function is then given by
where denotes the distance to the source point and denotes the distance to the reflected point P′. A consequence of this expression for the Green's function is the Poisson integral formula. Let , , and be spherical coordinates for the source point . Here denotes the angle with the vertical axis, which is contrary to the usual American mathematical notation, but agrees with standard European and physical practice. Then the solution of the Laplace equation with Dirichlet boundary values inside the sphere is given by
where
is the cosine of the angle between and . A simple consequence of this formula is that if is a harmonic function, then the value of at the center of the sphere is the mean value of its values on the sphere. This mean value property immediately implies that a non-constant harmonic function cannot assume its maximum value at an interior point.
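The mean value property is straightforward to test numerically. The sketch below picks an arbitrary harmonic polynomial, $u(x, y, z) = xy + z$, and compares its Monte Carlo average over a sphere with its value at the centre; the centre and radius are illustrative choices.

```python
# Sketch: the average of a harmonic function over a sphere equals its value
# at the centre. u(x, y, z) = x*y + z is an arbitrary harmonic polynomial.
import numpy as np

rng = np.random.default_rng(0)

def u(x, y, z):
    return x * y + z

centre = np.array([0.3, -0.2, 0.5])
radius = 2.0

# Uniform sample of points on the sphere of the given radius about the centre.
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
points = centre + radius * v

sphere_average = u(points[:, 0], points[:, 1], points[:, 2]).mean()
print(sphere_average, u(*centre))    # the two agree up to sampling error
```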
Laplace's spherical harmonics
Laplace's equation in spherical coordinates is:
$$\nabla^2 f = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \frac{\partial f}{\partial r}\right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\!\left(\sin\theta\, \frac{\partial f}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2 f}{\partial \varphi^2} = 0.$$
Consider the problem of finding solutions of the form . By separation of variables, two differential equations result by imposing Laplace's equation:
The second equation can be simplified under the assumption that has the form . Applying separation of variables again to the second equation gives way to the pair of differential equations
for some number . A priori, is a complex constant, but because must be a periodic function whose period evenly divides , is necessarily an integer and is a linear combination of the complex exponentials . The solution function is regular at the poles of the sphere, where . Imposing this regularity in the solution of the second equation at the boundary points of the domain is a Sturm–Liouville problem that forces the parameter to be of the form for some non-negative integer with ; this is also explained below in terms of the orbital angular momentum. Furthermore, a change of variables transforms this equation into the Legendre equation, whose solution is a multiple of the associated Legendre polynomial . Finally, the equation for has solutions of the form ; requiring the solution to be regular throughout forces .
Here the solution was assumed to have the special form . For a given value of , there are independent solutions of this form, one for each integer with . These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
which fulfill
Here is called a spherical harmonic function of degree and order , is an associated Legendre polynomial, is a normalization constant, and and represent colatitude and longitude, respectively. In particular, the colatitude , or polar angle, ranges from at the North Pole, to at the Equator, to at the South Pole, and the longitude , or azimuth, may assume all values with . For a fixed integer , every solution of the eigenvalue problem
is a linear combination of . In fact, for any such solution, is the expression in spherical coordinates of a homogeneous polynomial that is harmonic (see below), and so counting dimensions shows that there are linearly independent such polynomials.
The general solution to Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor ,
where the are constants and the factors are known as solid harmonics. Such an expansion is valid in the ball
For , the solid harmonics with negative powers of are chosen instead. In that case, one needs to expand the solution of known regions in Laurent series (about ), instead of Taylor series (about ), to match the terms and find .
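Written in Cartesian coordinates, the solid harmonics of degree $\ell$ are harmonic homogeneous polynomials of degree $\ell$ in $x$, $y$ and $z$. The sketch below checks a few low-degree examples symbolically; the particular polynomials are standard real combinations chosen only for illustration.

```python
# Sketch: low-degree solid harmonics, written as homogeneous polynomials,
# satisfy Laplace's equation.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

examples = [
    sp.Integer(1),                          # degree 0
    z,                                      # degree 1
    x * y,                                  # degree 2
    2 * z**2 - x**2 - y**2,                 # degree 2
    z * (2 * z**2 - 3 * x**2 - 3 * y**2),   # degree 3
]

for p in examples:
    lap = sp.diff(p, x, 2) + sp.diff(p, y, 2) + sp.diff(p, z, 2)
    print(p, '->', sp.simplify(lap))        # each Laplacian is 0
```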
Electrostatics and magnetostatics
Let $\mathbf{E}$ be the electric field, $\rho$ be the electric charge density, and $\varepsilon_0$ be the permittivity of free space. Then Gauss's law for electricity (Maxwell's first equation) in differential form states
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.$$
Now, the electric field can be expressed as the negative gradient of the electric potential $V$,
$$\mathbf{E} = -\nabla V,$$
if the field is irrotational, $\nabla \times \mathbf{E} = \mathbf{0}$. The irrotationality of $\mathbf{E}$ is also known as the electrostatic condition.
Plugging this relation into Gauss's law, we obtain Poisson's equation for electricity,
$$\nabla^2 V = -\frac{\rho}{\varepsilon_0}.$$
In the particular case of a source-free region, $\rho = 0$, and Poisson's equation reduces to Laplace's equation for the electric potential,
$$\nabla^2 V = 0.$$
If the electrostatic potential is specified on the boundary of a region , then it is uniquely determined. If is surrounded by a conducting material with a specified charge density , and if the total charge is known, then is also unique.
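The Dirichlet problem just described can also be approximated numerically. The following is a minimal sketch (the square region, grid size, and boundary voltages are hypothetical, not from the source) that relaxes Laplace's equation for a source-free region with Jacobi iteration of the 5-point finite-difference stencil.

```python
# Minimal sketch: Jacobi relaxation for Laplace's equation on a square,
# source-free region with Dirichlet boundary values.
import numpy as np

n = 101
V = np.zeros((n, n))
V[0, :] = 1.0          # top edge held at 1 V; the other three edges grounded

for _ in range(20_000):
    V_new = V.copy()
    # Interior update: average of the four nearest neighbours.
    V_new[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                                V[1:-1, 2:] + V[1:-1, :-2])
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

print(V[n // 2, n // 2])   # potential at the centre; ~ 0.25 by symmetry
```

By superposing the four rotated problems (each with one edge at 1 V), the centre potential must be exactly one quarter of the boundary voltage, which gives a simple check on the relaxation.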
For the magnetic field, when there is no free current,
$$\nabla\times\mathbf{H} = \mathbf{0}.$$
We can thus define a magnetic scalar potential, $\psi$, as
$$\mathbf{H} = -\nabla\psi.$$
With the definition of $\mathbf{H}$:
$$\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M}),$$
it follows that, since $\nabla\cdot\mathbf{B} = 0$,
$$\nabla\cdot\mathbf{H} = -\nabla\cdot\mathbf{M}.$$
Similar to electrostatics, in a source-free region, $\nabla\cdot\mathbf{M} = 0$ and Poisson's equation reduces to Laplace's equation for the magnetic scalar potential $\psi$,
$$\nabla^{2}\psi = 0.$$
A potential that does not satisfy Laplace's equation together with the boundary condition is an invalid electrostatic or magnetic scalar potential.
Gravitation
Let $\mathbf{g}$ be the gravitational field, $\rho$ the mass density, and $G$ the gravitational constant. Then Gauss's law for gravitation in differential form is
$$\nabla\cdot\mathbf{g} = -4\pi G\rho.$$
The gravitational field is conservative and can therefore be expressed as the negative gradient of the gravitational potential:
$$\mathbf{g} = -\nabla V.$$
Using the differential form of Gauss's law of gravitation, we have
$$\nabla^{2} V = 4\pi G\rho,$$
which is Poisson's equation for gravitational fields.
In empty space, $\rho = 0$ and we have
$$\nabla^{2} V = 0,$$
which is Laplace's equation for gravitational fields.
In the Schwarzschild metric
S. Persides solved the Laplace equation in Schwarzschild spacetime on hypersurfaces of constant $t$. Using the canonical variables $r$, $\theta$, $\varphi$, the solution is
$$\psi = R(r)\,Y_l(\theta,\varphi),$$
where $Y_l(\theta,\varphi)$ is a spherical harmonic function, and the radial factor $R(r)$ is built from the Legendre functions $P_l$ and $Q_l$. Here $P_l$ and $Q_l$ are Legendre functions of the first and second kind, respectively, while $r_s$ is the Schwarzschild radius. The parameter $l$ is an arbitrary non-negative integer.
| Mathematics | Calculus and analysis | null |
36961 | https://en.wikipedia.org/wiki/Pion | Pion | In particle physics, a pion or pi meson, denoted with the Greek letter pi (π), is any of three subatomic particles: π⁰, π⁺, and π⁻. Each pion consists of a quark and an antiquark and is therefore a meson. Pions are the lightest mesons and, more generally, the lightest hadrons. They are unstable, with the charged pions π⁺ and π⁻ decaying after a mean lifetime of 26.033 nanoseconds (2.6033 × 10⁻⁸ seconds), and the neutral pion π⁰ decaying after a much shorter lifetime of 85 attoseconds (8.5 × 10⁻¹⁷ seconds). Charged pions most often decay into muons and muon neutrinos, while neutral pions generally decay into gamma rays.
The exchange of virtual pions, along with vector, rho and omega mesons, provides an explanation for the residual strong force between nucleons. Pions are not produced in radioactive decay, but commonly are in high-energy collisions between hadrons. Pions also result from some matter–antimatter annihilation events. All types of pions are also produced in natural processes when high-energy cosmic-ray protons and other hadronic cosmic-ray components interact with matter in Earth's atmosphere. In 2013, the detection of characteristic gamma rays originating from the decay of neutral pions in two supernova remnants has shown that pions are produced copiously after supernovas, most probably in conjunction with production of high-energy protons that are detected on Earth as cosmic rays.
The pion also plays a crucial role in cosmology, by imposing an upper limit on the energies of cosmic rays surviving collisions with the cosmic microwave background, through the Greisen–Zatsepin–Kuzmin limit.
History
Theoretical work by Hideki Yukawa in 1935 had predicted the existence of mesons as the carrier particles of the strong nuclear force. From the range of the strong nuclear force (inferred from the radius of the atomic nucleus), Yukawa predicted the existence of a particle having a mass of about 100 MeV/c². When it was discovered in 1936, the muon (initially called the "mu meson") was thought to be this particle, since it has a mass of about 106 MeV/c². However, later experiments showed that the muon did not participate in the strong nuclear interaction. In modern terminology, this makes the muon a lepton, and not a meson. However, some communities of astrophysicists continue to call the muon a "mu-meson". The pions, which turned out to be examples of Yukawa's proposed mesons, were discovered later: the charged pions in 1947, and the neutral pion in 1950.
In 1947, the first true mesons, the charged pions, were found by the collaboration led by Cecil Powell at the University of Bristol, in England. The discovery article had four authors: César Lattes, Giuseppe Occhialini, Hugh Muirhead and Powell. Since particle accelerators had not yet been developed, high-energy subatomic particles were only obtainable from atmospheric cosmic rays. Photographic emulsions based on the gelatin-silver process were placed for long periods of time in sites located at high-altitude mountains, first at Pic du Midi de Bigorre in the Pyrenees, and later at Chacaltaya in the Andes Mountains, where the plates were struck by cosmic rays.
After development, the photographic plates were inspected under a microscope by a team of about a dozen women. Marietta Kurz was the first person to detect the unusual "double meson" tracks, characteristic for a pion decaying into a muon, but they were too close to the edge of the photographic emulsion and deemed incomplete. A few days later, Irene Roberts observed the tracks left by pion decay that appeared in the discovery paper. Both women are credited in the figure captions in the article.
In 1948, Lattes, Eugene Gardner, and their team first artificially produced pions at the University of California's cyclotron in Berkeley, California, by bombarding carbon atoms with high-speed alpha particles. Further advanced theoretical work was carried out by Riazuddin, who in 1959 used the dispersion relation for Compton scattering of virtual photons on pions to analyze their charge radius.
Since the neutral pion is not electrically charged, it is more difficult to detect and observe than the charged pions are. Neutral pions do not leave tracks in photographic emulsions or Wilson cloud chambers. The existence of the neutral pion was inferred from observing its decay products from cosmic rays, a so-called "soft component" of slow electrons with photons. The π⁰ was identified definitively at the University of California's cyclotron in 1949 by observing its decay into two photons. Later in the same year, they were also observed in cosmic-ray balloon experiments at Bristol University.
Possible applications
The use of pions in medical radiation therapy, such as for cancer, was explored at a number of research institutions, including the Los Alamos National Laboratory's Meson Physics Facility, which treated 228 patients between 1974 and 1981 in New Mexico, and the TRIUMF laboratory in Vancouver, British Columbia.
Theoretical overview
In the standard understanding of the strong force interaction as defined by quantum chromodynamics, pions are loosely portrayed as Goldstone bosons of spontaneously broken chiral symmetry. That explains why the masses of the three kinds of pions are considerably less than that of the other mesons, such as the scalar or vector mesons. If the current quarks were massless particles, chiral symmetry would be exact, and the Goldstone theorem would then dictate that all pions have zero mass.
In fact, it was shown by Gell-Mann, Oakes and Renner (GMOR) that the square of the pion mass is proportional to the sum of the quark masses times the quark condensate:
$$m_\pi^{2} = \frac{(m_u + m_d)\,|\langle \bar{q}q \rangle|}{f_\pi^{2}},$$
with the quark condensate:
$$\langle \bar{q}q \rangle = \langle 0|\,\bar{u}u + \bar{d}d\,|0\rangle.$$
This is often known as the GMOR relation and it explicitly shows that $m_\pi = 0$ in the massless quark limit. The same result also follows from Light-front holography.
Empirically, since the light quarks actually have minuscule nonzero masses, the pions also have nonzero rest masses. However, those masses are almost an order of magnitude smaller than that of the nucleons, roughly $m_\pi \approx \sqrt{m_q}\times 45\ \text{MeV}$, where $m_q$ are the relevant current-quark masses in MeV, around 5−10 MeV.
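As a rough numerical illustration of the GMOR relation, the following sketch uses commonly quoted ballpark values for the quark masses, condensate, and decay constant; none of these figures are taken from this article, and conventions for f_π (≈92 MeV vs ≈130 MeV) and for the condensate vary between references.

```python
# Rough GMOR estimate:  m_pi^2 ~ (m_u + m_d) * |<qbar q>| / f_pi^2
m_u, m_d = 2.2, 4.7          # current-quark masses in MeV (approximate)
condensate = 250.0**3        # |<qbar q>| ~ (250 MeV)^3, a typical estimate
f_pi = 92.0                  # decay constant in the ~92 MeV convention

m_pi = ((m_u + m_d) * condensate / f_pi**2) ** 0.5
print(f"m_pi ~ {m_pi:.0f} MeV")   # ~ 113 MeV, same order as the observed 135-140 MeV
```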
The pion is one of the particles that mediate the residual strong interaction between a pair of nucleons. This interaction is attractive: it pulls the nucleons together. Written in a non-relativistic form, it is called the Yukawa potential. The pion, being spinless, has kinematics described by the Klein–Gordon equation. In the terms of quantum field theory, the effective field theory Lagrangian describing the pion-nucleon interaction is called the Yukawa interaction.
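For reference, the Yukawa potential takes the standard textbook form (with g the pion–nucleon coupling constant, an assumption of notation rather than a quantity defined in this article):

```latex
V(r) = -\frac{g^{2}}{4\pi}\,\frac{e^{-\mu r}}{r},
\qquad \mu = \frac{m_{\pi} c}{\hbar} \approx (1.4\ \mathrm{fm})^{-1},
```

so the range of the residual force is set by the inverse pion mass, roughly 1.4 fm.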
The nearly identical masses of π± and π⁰ indicate that there must be a symmetry at play: this symmetry is called the SU(2) flavour symmetry or isospin. The reason that there are three pions, π⁺, π⁻ and π⁰, is that these are understood to belong to the triplet representation or the adjoint representation 3 of SU(2). By contrast, the up and down quarks transform according to the fundamental representation 2 of SU(2), whereas the anti-quarks transform according to the conjugate representation 2*.
With the addition of the strange quark, the pions participate in a larger, SU(3), flavour symmetry, in the adjoint representation, 8, of SU(3). The other members of this octet are the four kaons and the eta meson.
Pions are pseudoscalars under a parity transformation. Pion currents thus couple to the axial vector current and so participate in the chiral anomaly.
Basic properties
Pions, which are mesons with zero spin, are composed of first-generation quarks. In the quark model, an up quark and an anti-down quark make up a π⁺, whereas a down quark and an anti-up quark make up the π⁻, and these are the antiparticles of one another. The neutral pion π⁰ is a combination of an up quark with an anti-up quark, or a down quark with an anti-down quark. The two combinations have identical quantum numbers, and hence they are only found in superpositions. The lowest-energy superposition of these is the π⁰, which is its own antiparticle. Together, the pions form a triplet of isospin. Each pion has overall isospin I = 1 and third-component isospin equal to its charge (I₃ = +1, 0, or −1).
Charged pion decays
The π± mesons have a mass of 139.6 MeV/c² and a mean lifetime of 2.6033 × 10⁻⁸ s. They decay due to the weak interaction. The primary decay mode of a pion, with a branching fraction of 0.999877, is a leptonic decay into a muon and a muon neutrino:
π⁺ → μ⁺ + ν_μ
π⁻ → μ⁻ + ν̄_μ
The second most common decay mode of a pion, with a branching fraction of 0.000123, is also a leptonic decay into an electron and the corresponding electron antineutrino. This "electronic mode" was discovered at CERN in 1958:
π⁺ → e⁺ + ν_e
π⁻ → e⁻ + ν̄_e
The suppression of the electronic decay mode with respect to the muonic one is given approximately (up to a few percent effect of the radiative corrections) by the ratio of the half-widths of the pion–electron and the pion–muon decay reactions,
$$R_{e/\mu} = \left(\frac{m_e}{m_\mu}\right)^{2}\left(\frac{m_\pi^{2} - m_e^{2}}{m_\pi^{2} - m_\mu^{2}}\right)^{2} \approx 1.28\times 10^{-4},$$
and is a spin effect known as helicity suppression.
Its mechanism is as follows: the negative pion has spin zero; therefore the lepton and the antineutrino must be emitted with opposite spins (and opposite linear momenta) to preserve net zero spin (and conserve linear momentum). However, because the weak interaction is sensitive only to the left-chirality component of fields, the antineutrino always has left chirality, which means it is right-handed, since for massless antiparticles the helicity is opposite to the chirality. This implies that the lepton must be emitted with spin in the direction of its linear momentum (i.e., also right-handed). If, however, leptons were massless, they would only interact with the pion in the left-handed form (because for massless particles helicity is the same as chirality) and this decay mode would be prohibited. Therefore, the suppression of the electron decay channel comes from the fact that the electron's mass is much smaller than the muon's: relative to the muon, the electron is nearly massless, so the electronic mode is greatly suppressed relative to the muonic one, to the point of being virtually prohibited.
Although this explanation suggests that parity violation is causing the helicity suppression, the fundamental reason lies in the vector-nature of the interaction which dictates a different handedness for the neutrino and the charged lepton. Thus, even a parity conserving interaction would yield the same suppression.
Measurements of the above ratio have been considered for decades to be a test of lepton universality. Experimentally, this ratio is about 1.23 × 10⁻⁴.
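The tree-level suppression ratio can be evaluated from the masses alone; the sketch below (not from the source) uses approximate PDG-style mass values in MeV.

```python
# Minimal sketch: tree-level helicity-suppression ratio for charged-pion decay,
#   R = (m_e/m_mu)^2 * ((m_pi^2 - m_e^2)/(m_pi^2 - m_mu^2))^2
m_pi, m_mu, m_e = 139.570, 105.658, 0.511   # masses in MeV (approximate)

R = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
print(f"R ~ {R:.3e}")   # ~ 1.28e-4 before radiative corrections,
                        # close to the measured ~1.23e-4
```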
Beyond the purely leptonic decays of pions, some structure-dependent radiative leptonic decays (that is, decay to the usual leptons plus a gamma ray) have also been observed.
Also observed, for charged pions only, is the very rare "pion beta decay" (with branching fraction of about 10⁻⁸) into a neutral pion, an electron and an electron antineutrino (or for positive pions, a neutral pion, a positron, and electron neutrino).
The rate at which pions decay is a prominent quantity in many sub-fields of particle physics, such as chiral perturbation theory. This rate is parametrized by the pion decay constant (f_π), related to the wave function overlap of the quark and antiquark, which is about 130 MeV.
Neutral pion decays
The π⁰ meson has a mass of 135.0 MeV/c² and a mean lifetime of 8.5 × 10⁻¹⁷ s. It decays via the electromagnetic force, which explains why its mean lifetime is much smaller than that of the charged pion (which can only decay via the weak force).
The dominant decay mode, with a branching ratio of about 0.988, is into two photons:
π⁰ → 2γ
The decay π⁰ → 3γ (as well as decays into any odd number of photons) is forbidden by the C-symmetry of the electromagnetic interaction: the intrinsic C-parity of the π⁰ is +1, while the C-parity of a system of n photons is (−1)ⁿ.
The second largest decay mode (branching ratio about 0.012) is the Dalitz decay (named after Richard Dalitz), which is a two-photon decay with an internal photon conversion resulting in a photon and an electron–positron pair in the final state:
π⁰ → γ + e⁻ + e⁺
The third largest established decay mode (branching ratio about 3 × 10⁻⁵) is the double-Dalitz decay, with both photons undergoing internal conversion, which leads to further suppression of the rate:
π⁰ → e⁻ + e⁺ + e⁻ + e⁺
The fourth largest established decay mode is the loop-induced and therefore suppressed (and additionally helicity-suppressed) leptonic decay mode (branching ratio about 6 × 10⁻⁸):
π⁰ → e⁻ + e⁺
The neutral pion has also been observed to decay into positronium with a branching fraction on the order of 10⁻⁹. No other decay modes have been established experimentally. The branching fractions given above are approximate; the PDG central values and their uncertainties are available in the cited publication.
[a] Make-up inexact due to non-zero quark masses.
| Physical sciences | Bosons | Physics |
36979 | https://en.wikipedia.org/wiki/Rice | Rice | Rice is a cereal grain and in its domesticated form is the staple food of over half of the world's population, particularly in Asia and Africa. Rice is the seed of the grass species Oryza sativa (Asian rice)—or, much less commonly, Oryza glaberrima (African rice). Asian rice was domesticated in China some 13,500 to 8,200 years ago; African rice was domesticated in Africa about 3,000 years ago. Rice has become commonplace in many cultures worldwide; in 2021, 787 million tons were produced, placing it fourth after sugarcane, maize, and wheat. Only some 8% of rice is traded internationally. China, India, and Indonesia are the largest consumers of rice. A substantial amount of the rice produced in developing nations is lost after harvest through factors such as poor transport and storage. Rice yields can be reduced by pests including insects, rodents, and birds, as well as by weeds, and by diseases such as rice blast. Traditional rice polycultures such as rice-duck farming, and modern integrated pest management seek to control damage from pests in a sustainable way.
Many varieties of rice have been bred to improve crop quality and productivity. Biotechnology has created Green Revolution rice able to produce high yields when supplied with nitrogen fertiliser and managed intensively. Other products are rice able to express human proteins for medicinal use; flood-tolerant or deepwater rice; and drought-tolerant and salt-tolerant varieties. Rice is used as a model organism in biology.
Dry rice grain is milled to remove the outer layers; depending on how much is removed, products range from brown rice to rice with germ and white rice. Some is parboiled to make it easy to cook. Rice contains no gluten; it provides protein but not all the essential amino acids needed for good health. Rice of different types is eaten around the world. Long-grain rice tends to stay intact on cooking; medium-grain rice is stickier, and is used for sweet dishes, and in Italy for risotto; and sticky short-grain rice is used in Japanese sushi as it keeps its shape when cooked. White rice when cooked contains 29% carbohydrate and 2% protein, with some manganese. Golden rice is a variety produced by genetic engineering to contain vitamin A.
Production of rice is estimated to have caused over 1% of global greenhouse gas emissions in 2022. Predictions of how rice yields will be affected by climate change vary across geographies and socioeconomic contexts. In human culture, rice plays a role in various religions and traditions, such as in weddings.
Description
The rice plant can grow to over tall; if in deep water, it can reach a length of . A single plant may have several leafy stems or tillers. The upright stem is jointed with nodes along its length; a long slender leaf arises from each node. The self-fertile flowers are produced in a panicle, a branched inflorescence which arises from the last internode on the stem. There can be up to 350 spikelets in a panicle, each containing male and female flower parts (anthers and ovule). A fertilised ovule develops into the edible grain or caryopsis.
Rice is a cereal belonging to the family Poaceae. As a tropical crop, it can be grown during the two distinct seasons (dry and wet) of the year provided that sufficient water is made available. It is normally an annual, but in the tropics it can survive as a perennial, producing a ratoon crop.
Agronomy
Growing
Like all crops, rice depends for its growth on both biotic and abiotic environmental factors. The principal biotic factors are crop variety, pests, and plant diseases. Abiotic factors include the soil type, whether lowland or upland, amount of rain or irrigation water, temperature, day length, and intensity of sunlight.
Rice grains can be planted directly into the field where they will grow, or seedlings can be grown in a seedbed and transplanted into the field. Direct seeding needs some 60 to 80 kg of grain per hectare, while transplanting needs less, around 40 kg per hectare, but requires far more labour. Most rice in Asia is transplanted by hand. Mechanical transplanting takes less time but requires a carefully-prepared field and seedlings raised on mats or in trays to fit the machine. Rice does not thrive if continuously submerged. Rice can be grown in different environments, depending upon water availability. The usual arrangement is for lowland fields to be surrounded by bunds and flooded to a depth of a few centimetres until around a week before harvest time; this requires a large amount of water. The "alternate wetting and drying" technique uses less water. One form of this is to flood the field to a depth of 5 cm (2 in), then to let the water level drop to 15 cm (6 in) below surface level, as measured by looking into a perforated field water tube sunk into the soil, and then repeating the cycle. Deepwater rice varieties tolerate flooding to a depth of over 50 centimetres for at least a month. Upland rice is grown without flooding, in hilly or mountainous regions; it is rainfed like wheat or maize.
Harvesting
Across Asia, unmilled rice or "paddy" (Indonesian and Malay ), was traditionally the product of smallholder agriculture, with manual harvesting. Larger farms make use of machines such as combine harvesters to reduce the input of labour. The grain is ready to harvest when the moisture content is 20–25%. Harvesting involves reaping, stacking the cut stalks, threshing to separate the grain, and cleaning by winnowing or screening. The rice grain is dried as soon as possible to bring the moisture content down to a level that is safe from mould fungi. Traditional drying relies on the heat of the sun, with the grain spread out on mats or on pavements.
Evolution
Phylogeny
The edible rice species are members of the BOP clade within the grass family, the Poaceae. The rice subfamily, Oryzoideae, is sister to the bamboos, Bambusoideae, and the cereal subfamily Pooideae. The rice genus Oryza is one of eleven in the Oryzeae; it is sister to the Phyllorachideae. The edible rice species O. sativa and O. glaberrima are among some 300 species or subspecies in the genus.
History
Oryza sativa rice was first domesticated in China 9,000 years ago, by people of Neolithic cultures in the Upper and Lower Yangtze, associated with Hmong-Mien-speakers and pre-Austronesians, respectively. The functional allele for nonshattering, the critical indicator of domestication in grains, as well as five other single-nucleotide polymorphisms, is identical in both indica and japonica. This implies a single domestication event for O. sativa. Both indica and japonica forms of Asian rice sprang from a single domestication event in China from the wild rice Oryza rufipogon. Despite this evidence, it appears that indica rice arose when japonica arrived in India about 4,500 years ago and hybridised with another rice, whether an undomesticated proto-indica or wild O. nivara.
Rice was introduced early into Sino-Tibetan cultures in northern China by around 6000 to 5600 years ago, and to the Korean peninsula and Japan by around 5500 to 3200 years ago. It was also carried into Taiwan by the Dapenkeng culture by 5500 to 4000 years ago, before spreading southwards via the Austronesian migrations to Island Southeast Asia, Madagascar, and Guam, but did not survive the voyage to the rest of the Pacific. It reached Austroasiatic and Kra-Dai-speakers in Mainland Southeast Asia and southern China by 5000 years ago.
Rice spread around the rest of the world through cultivation, migration and trade, eventually to the Americas as part of the Columbian exchange after 1492. The now less common Oryza glaberrima (African rice) was independently domesticated in Africa around 3,000 years ago, and introduced to the Americas by the Spanish. In British North America by the time of the start of the American War of Independence, rice had become the fourth most valuable export commodity behind only tobacco, wheat, and fish.
Commerce
Production
In 2021, world production of rice was 787 million tonnes, led by China and India with a combined 52% of the total. This placed rice fourth in the list of crops by production, after sugarcane, maize, and wheat. Other major producers were Bangladesh, Indonesia and Vietnam. 90% of world production is from Asia.
Yield records
The average world yield for rice was , in 2022. Yuan Longping of China's National Hybrid Rice Research and Development Center set a world record for rice yield in 1999 at on a demonstration plot. This employed specially developed hybrid rice and the System of Rice Intensification (SRI), an innovation in rice farming.
Food security
Rice is a major food staple in Asia, Latin America, and some parts of Africa, feeding over half the world's population. However, a substantial part of the crop can be lost post-harvest through inefficient transportation, storage, and milling. A quarter of the crop in Nigeria is lost after harvest. Storage losses include damage by mould fungi if the rice is not dried sufficiently. In China, losses in modern metal silos were just 0.2%, compared to 7–13% when rice was stored by rural households.
Processing
The dry grain is milled to remove the outer layers, namely the husk and bran. These can be removed in a single step, in two steps, or as in commercial milling in a multi-step process of cleaning, dehusking, separation, polishing, grading, and weighing. Brown rice only has the inedible husk removed. Further milling removes bran and the germ to create successively whiter products. Parboiled rice is subjected to a steaming process before it is milled. This makes the grain harder, and moves some of the grain's vitamins and minerals into the white part of the rice so these are retained after milling. Rice does not contain gluten, so is suitable for people on a gluten-free diet. Rice is a good source of protein and a staple food in many parts of the world, but it is not a complete protein as it does not contain all of the essential amino acids in sufficient amounts for good health.
Trade
World trade figures are much smaller than those for production, as less than 8% of rice produced is traded internationally. China, an exporter of rice in the early 2000s, had become the world's largest importer of rice by 2013. Developing countries are the main players in the world rice trade; by 2012, India was the largest exporter of rice, with Thailand and Vietnam the other largest exporters.
Worldwide consumption
As of 2016, the countries that consumed the most rice were China (29% of total), India, and Indonesia. By 2020, Bangladesh had taken third place from Indonesia. On an annual average from 2020 to 2023, China consumed 154 million tonnes of rice, India consumed 109 million tonnes, and Bangladesh and Indonesia consumed about 36 million tonnes each. Across the world, rice consumption per capita fell in the 21st century as people in Asia and elsewhere ate less grain and more meat. An exception is Sub-Saharan Africa, where both per capita consumption of rice and population are increasing.
Food
Eating qualities
Rice is a commonly-eaten food around the world. The varieties of rice are typically classified as short-, medium-, and long-grained. Oryza sativa indica varieties are usually long-grained; Oryza sativa japonica varieties are usually short- or medium-grained. Short-grain rice, with the exception of Spanish Bomba, is usually sticky when cooked, and is suitable for puddings. Thai Jasmine rice is aromatic, and unusually for a long-grain rice has some stickiness, with a soft texture. Indian Basmati rice is very long-grained and aromatic. Italian Arborio rice, used for risotto, is of medium length, oval, and quite sticky. Japanese sushi rice is a sticky short-grain variety.
Nutrition
Cooked white rice is 69% water, 29% carbohydrates, 2% protein, and contains negligible fat. In a reference serving of 100 grams, cooked white rice provides 130 calories of food energy, and contains moderate levels of manganese (18% DV), with no other micronutrients in significant content (all less than 10% of the Daily Value).
In 2018, the World Health Organization strongly recommended fortifying rice with iron, and conditionally recommended fortifying it with vitamin A and with folic acid.
Golden rice
Golden rice is a variety produced through genetic engineering to synthesize beta-carotene, a precursor of vitamin A, in the endosperm of the rice grain. It is intended to be grown and eaten in parts of the world where Vitamin A deficiency is prevalent. Golden rice has been opposed by activists, such as in the Philippines. In 2016 more than 100 Nobel laureates encouraged the use of genetically modified organisms, such as golden rice, for the benefits these could bring.
Rice and climate change
Greenhouse gases from rice production
In 2022, greenhouse gas emissions from rice cultivation were estimated at 0.57 billion tonnes CO2eq, representing 1.2% of total emissions. Within the agriculture sector, rice produces almost half the greenhouse gas emissions from croplands, some 30% of agricultural methane emissions, and 11% of agricultural nitrous oxide emissions. Methane is released from rice fields subject to long-term flooding, as this inhibits the soil from absorbing atmospheric oxygen, resulting in anaerobic fermentation of organic matter in the soil. Emissions can be limited by planting new varieties, not flooding continuously, and removing straw.
It is possible to cut methane emissions in rice cultivation by improved water management, combining dry seeding and one drawdown, or executing a sequence of wetting and drying. This results in emission reductions of up to 90% compared to full flooding and even increased yields.
Effects of climate change on rice production
Predictions of climate change's effects on rice cultivation vary. Global rice yield has been projected to decrease by around 3.2% with each 1 °C increase in global average temperature while another study predicts global rice cultivation will increase initially, plateauing at about 3 °C warming (2091–2100 relative to 1850–1900).
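As a purely illustrative calculation (a sketch that simply compounds the cited ~3.2% per °C sensitivity against the 2021 production figure; it is not the methodology of the underlying study):

```python
# Minimal sketch: apply a ~3.2% yield decline per 1 degree C of warming,
# compounded over a given temperature rise.
def projected_yield(baseline_tonnes, warming_deg_c, sensitivity=0.032):
    return baseline_tonnes * (1 - sensitivity) ** warming_deg_c

print(projected_yield(787e6, 2.0))   # ~ 737 million tonnes at +2 degrees C
```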
The impacts of climate change on rice cultivation vary across geographic location and socioeconomic context. For example, rising temperatures and decreasing solar radiation during the later years of the 20th century decreased rice yield by between 10% and 20% across 200 farms in seven Asian countries. This may have been caused by increased night-time respiration. IRRI has predicted that Asian rice yields will fall by some 20% per 1 °C rise in global mean temperature. Further, rice is unable to yield grain if the flowers experience a temperature of 35 °C or more for over one hour, so the crop would be lost under these conditions.
In the Po Valley in Italy, the arborio and carnaroli risotto rice varieties have suffered poor harvests through drought in the 21st century. The is developing drought-resistant varieties; its nuovo prometeo variety has deep roots that enable it to tolerate drought, but is not suitable for risotto.
Pests, weeds, and diseases
Pests and weeds
Rice yield can be reduced by weed growth, and a wide variety of pests including insects, nematodes, rodents such as rats, snails, and birds. Major rice insect pests include armyworms, rice bugs, black bugs, cutworms, field crickets, grasshoppers, leafhoppers, mealybugs, and planthoppers. High rates of nitrogen fertiliser application may worsen aphid outbreaks.
Weather conditions can contribute to pest outbreaks: rice gall midge outbreaks are worsened by high rainfall in the wet season, while thrips outbreaks are associated with drought.
Diseases
Rice blast, caused by the fungus Magnaporthe grisea, is the most serious disease of growing rice.
It and bacterial leaf streak (caused by Xanthomonas oryzae pv. oryzae) are perennially the two worst rice diseases worldwide; they are both among the ten most important diseases of all crop plants. Other major rice diseases include sheath blight (caused by Rhizoctonia solani), false smut (Ustilaginoidea virens), and bacterial panicle blight (Burkholderia glumae). Viral diseases include rice bunchy stunt, rice dwarf, rice tungro, and rice yellow mottle.
Pest management
Crop protection scientists are developing sustainable techniques for managing rice pests. Sustainable pest management is based on four principles: biodiversity, host plant resistance, landscape ecology, and hierarchies in a landscape—from biological to social. Farmers' pesticide applications are often unnecessary. Pesticides may actually induce resurgence of populations of rice pests such as the brown planthopper, both by destroying beneficial insects and by enhancing the pest's reproduction. The International Rice Research Institute (IRRI) demonstrated in 1993 that an 87.5% reduction in pesticide use can lead to an overall drop in pest numbers.
Farmers in China, Indonesia and the Philippines have traditionally managed weeds and pests by the polycultural practice of raising ducks and sometimes fish in their rice paddies. These produce valuable additional crops, eat small pest animals, manure the rice, and in the case of ducks also control weeds.
Rice plants produce their own chemical defences to protect themselves from pest attacks. Some synthetic chemicals, such as the herbicide 2,4-D, cause the plant to increase the production of certain defensive chemicals and thereby increase the plant's resistance to some types of pests. Conversely, other chemicals, such as the insecticide imidacloprid, appear to induce changes in the gene expression of the rice that make the plant more susceptible to certain pests.
Plant breeders have created rice cultivars incorporating resistance to various insect pests. Conventional plant breeding of resistant varieties has been limited by challenges such as rearing insect pests for testing, and the great diversity and continuous evolution of pests. Resistance genes are being sought from wild species of rice, and genetic engineering techniques are being applied.
Ecotypes and cultivars
The International Rice Research Institute maintains the International Rice Genebank, which holds over 100,000 rice varieties. Much of southeast Asia grows sticky or glutinous rice varieties. High-yield cultivars of rice suitable for cultivation in Africa, called the New Rice for Africa (NERICA), have been developed to improve food security and alleviate poverty in Sub-Saharan Africa.
The complete genome of rice was sequenced in 2005, making it the first crop plant to reach this status.
Since then, the genomes of hundreds of types of rice, both wild and cultivated, and including both Asian and African rice species, have been sequenced.
Biotechnology
High-yielding varieties
The high-yielding varieties are a group of crops created during the Green Revolution to increase global food production radically. The first Green Revolution rice variety, IR8, was produced in 1966 at the International Rice Research Institute through a cross between an Indonesian variety named "Peta" and a Chinese variety named "Dee Geo Woo Gen". Green Revolution varieties were bred to have short strong stems so that the rice would not lodge or fall over. This enabled them to stay upright and productive even with heavy applications of fertiliser.
Expression of human proteins
Ventria Bioscience has genetically modified rice to express lactoferrin and lysozyme which are proteins usually found in breast milk, and human serum albumin. These proteins have antiviral, antibacterial, and antifungal effects. Rice containing these added proteins can be used as a component in oral rehydration solutions to treat diarrheal diseases, thereby shortening their duration and reducing recurrence. Such supplements may also help reverse anemia.
Flood-tolerance
In areas subject to flooding, farmers have long planted flood tolerant varieties known as deepwater rice. In South and South East Asia, flooding affects some each year.
Flooding has historically led to massive losses in yields, such as in the Philippines, where in 2006, rice crops worth $65 million were lost to flooding.
Standard rice varieties cannot withstand stagnant flooding for more than about a week, since it disallows the plant access to necessary requirements such as sunlight and gas exchange. The Swarna Sub1 cultivar can tolerate week-long submergence, consuming carbohydrates efficiently and continuing to grow. So-called "scuba rice" with the Sub1A transgene is robustly tolerant of submergence for as long as two weeks, offering much improved flood survival for farmers' crops. IRRI has created Sub1A varieties and distributed them to Bangladesh, India, Indonesia, Nepal, and the Philippines.
Drought-tolerance
Drought represents a significant environmental stress for rice production, with of rainfed rice production in South and South East Asia often at risk. Under drought conditions, without sufficient water to afford them the ability to obtain the required levels of nutrients from the soil, conventional commercial rice varieties can be severely affected—as happened for example in India early in the 21st century.
The International Rice Research Institute conducts research into developing drought-tolerant rice varieties, including the varieties Sahbhagi Dhan, Sahod Ulan, and Sookha dhan, currently being employed by farmers in India, the Philippines, and Nepal respectively. In addition, in 2013 the Japanese National Institute for Agrobiological Sciences led a team which successfully inserted the DEEPER ROOTING 1 (DRO1) gene, from the Philippine upland rice variety Kinandang Patong, into the popular commercial rice variety IR64, giving rise to a far deeper root system in the resulting plants. This facilitates an improved ability for the rice plant to derive its required nutrients in times of drought via accessing deeper layers of soil, a feature demonstrated by trials which saw the IR64 + DRO1 rice yields drop by 10% under moderate drought conditions, compared to 60% for the unmodified IR64 variety.
Salt-tolerance
Soil salinity poses a major threat to rice crop productivity, particularly along low-lying coastal areas during the dry season. For example, roughly of the coastal areas of Bangladesh are affected by saline soils. These high concentrations of salt can severely affect rice plants' physiology, especially during early stages of growth, and as such farmers are often forced to abandon these areas.
Progress has been made in developing rice varieties capable of tolerating such conditions; the hybrid created from the cross between the commercial rice variety IR56 and the wild rice species Oryza coarctata is one example. O. coarctata can grow in soils with double the limit of salinity of normal varieties, but does not produce edible rice. Developed by the International Rice Research Institute, the hybrid variety utilises specialised leaf glands that remove salt into the atmosphere. It was produced from one successful embryo out of 34,000 crosses between the two species; this was then backcrossed to IR56 with the aim of preserving the genes responsible for salt tolerance that were inherited from O. coarctata.
Cold tolerance
Rice is sensitive to temperatures below 12 °C. Sowing takes place once the daily average temperature is reliably above this limit. Average temperatures below that reduce growth; if sustained for over four days, germination and seedling growth are harmed and seedlings may die. In larger plants subjected to cold, rice blast is encouraged, seriously reducing yield. As of 2022, researchers continue to study the mechanisms of chilling tolerance in rice and its genetic basis.
Reducing methane emissions
Producing rice in paddies is harmful for the environment due to the release of methane by methanogenic bacteria. These bacteria live in the anaerobic waterlogged soil, consuming nutrients released by rice roots. Putting the barley gene SUSIBA2 into rice creates a shift in biomass production from root to shoot, decreasing the methanogen population, and resulting in a reduction of methane emissions of up to 97%. Further, the modification increases the amount of rice grains.
Model organism
Rice is used as a model organism for investigating the mechanisms of meiosis and DNA repair in higher plants. For example, study using rice has shown that the gene OsRAD51C is necessary for the accurate repair of DNA double-strand breaks during meiosis.
In human culture
Rice plays an important role in certain religions and popular beliefs. In Hindu wedding ceremonies, rice, denoting fertility, prosperity, and purity, is thrown into the sacred fire, a custom modified in Western weddings, where people throw rice. In Malay weddings, rice features in multiple special wedding foods such as sweet glutinous rice. In Japan and the Philippines, rice wine is used for weddings and other celebrations. Dewi Sri is a goddess of the Indo-Malaysian archipelago, who in myth is transformed into rice or other crops. The start of the rice planting season is marked in Asian countries including Nepal and Cambodia with a Royal Ploughing Ceremony.
| Biology and health sciences | Food and drink | null |
36980 | https://en.wikipedia.org/wiki/Clay | Clay | Clay is a type of fine-grained natural soil material containing clay minerals (hydrous aluminium phyllosilicates, e.g. kaolinite, Al2Si2O5(OH)4). Most pure clay minerals are white or light-coloured, but natural clays show a variety of colours from impurities, such as a reddish or brownish colour from small amounts of iron oxide.
Clays develop plasticity when wet but can be hardened through firing. Clay is the longest-known ceramic material. Prehistoric humans discovered the useful properties of clay and used it for making pottery. Some of the earliest pottery shards have been dated to around 14,000 BCE, and clay tablets were the first known writing medium. Clay is used in many modern industrial processes, such as paper making, cement production, and chemical filtering. Between one-half and two-thirds of the world's population live or work in buildings made with clay, often baked into brick, as an essential part of its load-bearing structure. In agriculture, clay content is a major factor in determining land arability. Clay soils are generally less suitable for crops due to poor natural drainage; however, they are more fertile due to their higher cation-exchange capacity.
Clay is a very common substance. Shale, formed largely from clay, is the most common sedimentary rock. Although many naturally occurring deposits include both silts and clay, clays are distinguished from other fine-grained soils by differences in size and mineralogy. Silts, which are fine-grained soils that do not include clay minerals, tend to have larger particle sizes than clays. Mixtures of sand, silt and less than 40% clay are called loam. Soils high in swelling clays (expansive clay), which are clay minerals that readily expand in volume when they absorb water, are a major challenge in civil engineering.
Properties
The defining mechanical property of clay is its plasticity when wet and its ability to harden when dried or fired. Clays show a broad range of water content within which they are highly plastic, from a minimum water content (called the plastic limit) where the clay is just moist enough to mould, to a maximum water content (called the liquid limit) where the moulded clay is just dry enough to hold its shape. The plastic limit of kaolinite clay ranges from about 36% to 40% and its liquid limit ranges from about 58% to 72%. High-quality clay is also tough, as measured by the amount of mechanical work required to roll a sample of clay flat. Its toughness reflects a high degree of internal cohesion.
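The width of that plastic range is conventionally summarised by the plasticity index (liquid limit minus plastic limit). A minimal sketch using the kaolinite figures quoted above; the pairing of the two ends of each quoted range is only for illustration:

```python
# Minimal sketch: plasticity index PI = liquid limit - plastic limit,
# the water-content range over which a clay remains mouldable.
def plasticity_index(liquid_limit_pct, plastic_limit_pct):
    return liquid_limit_pct - plastic_limit_pct

print(plasticity_index(58, 36))   # lower-end kaolinite figures: PI = 22
print(plasticity_index(72, 40))   # upper-end kaolinite figures: PI = 32
```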
Clay has a high content of clay minerals that give it its plasticity. Clay minerals are hydrous aluminium phyllosilicate minerals, composed of aluminium and silicon ions bonded into tiny, thin plates by interconnecting oxygen and hydroxide ions. These plates are tough but flexible, and in moist clay, they adhere to each other. The resulting aggregates give clay the cohesion that makes it plastic. In kaolinite clay, the bonding between plates is provided by a film of water molecules that hydrogen bond the plates together. The bonds are weak enough to allow the plates to slip past each other when the clay is being moulded, but strong enough to hold the plates in place and allow the moulded clay to retain its shape after it is moulded. When the clay is dried, most of the water molecules are removed, and the plates hydrogen bond directly to each other, so that the dried clay is rigid but still fragile. If the clay is moistened again, it will once more become plastic. When the clay is fired to the earthenware stage, a dehydration reaction removes additional water from the clay, causing clay plates to irreversibly adhere to each other via stronger covalent bonding, which strengthens the material. The clay mineral kaolinite is transformed into a non-clay material, metakaolin, which remains rigid and hard if moistened again. Further firing through the stoneware and porcelain stages further recrystallizes the metakaolin into yet stronger minerals such as mullite.
The tiny size and plate form of clay particles gives clay minerals a high surface area. In some clay minerals, the plates carry a negative electrical charge that is balanced by a surrounding layer of positive ions (cations), such as sodium, potassium, or calcium. If the clay is mixed with a solution containing other cations, these can swap places with the cations in the layer around the clay particles, which gives clays a high capacity for ion exchange. The chemistry of clay minerals, including their capacity to retain nutrient cations such as potassium and ammonium, is important to soil fertility.
Clay is a common component of sedimentary rock. Shale is formed largely from clay and is the most common of sedimentary rocks. However, most clay deposits are impure. Many naturally occurring deposits include both silts and clay. Clays are distinguished from other fine-grained soils by differences in size and mineralogy. Silts, which are fine-grained soils that do not include clay minerals, tend to have larger particle sizes than clays. There is, however, some overlap in particle size and other physical properties. The distinction between silt and clay varies by discipline. Geologists and soil scientists usually consider the separation to occur at a particle size of 2 μm (clays being finer than silts), sedimentologists often use 4–5 μm, and colloid chemists use 1 μm. Clay-size particles and clay minerals are not the same, despite a degree of overlap in their respective definitions. Geotechnical engineers distinguish between silts and clays based on the plasticity properties of the soil, as measured by the soils' Atterberg limits. ISO 14688 grades clay particles as being smaller than 2 μm and silt particles as being larger. Mixtures of sand, silt and less than 40% clay are called loam.
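A minimal sketch of the size-based distinction described above, following ISO 14688; the 63 μm silt–sand boundary is taken from the standard itself and is not stated in the text:

```python
# Minimal sketch: classify a soil particle by diameter (micrometres).
def classify_particle(diameter_um):
    if diameter_um < 2:      # ISO 14688: clay finer than 2 um
        return "clay"
    if diameter_um < 63:     # silt between 2 um and 63 um (assumed boundary)
        return "silt"
    return "sand"

for d in (0.5, 10, 100):
    print(d, classify_particle(d))
```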
Some clay minerals (such as smectite) are described as swelling clay minerals, because they have a great capacity to take up water, and they increase greatly in volume when they do so. When dried, they shrink back to their original volume. This produces distinctive textures, such as mudcracks or "popcorn" texture, in clay deposits. Soils containing swelling clay minerals (such as bentonite) pose a considerable challenge for civil engineering, because swelling clay can break foundations of buildings and ruin road beds.
Agriculture
Clay is generally considered undesirable for agriculture, although some amount of clay is a necessary component of good soil. Compared to other soils, clay soils are less suitable for crops due to their tendency to retain water, and require artificial drainage and tillage to make them suitable for planting. However, clay soils are often more fertile and can hold onto nutrients better due to their higher cation-exchange capacity, allowing more land to remain in production rather than being left fallow. As clay tends to retain nutrients for longer before leaching them, plants may also require less added fertilizer in clay soils.
Formation
Clay minerals most commonly form by prolonged chemical weathering of silicate-bearing rocks. They can also form locally from hydrothermal activity. Chemical weathering takes place largely by acid hydrolysis due to low concentrations of carbonic acid, dissolved in rainwater or released by plant roots. The acid breaks bonds between aluminium and oxygen, releasing other metal ions and silica (as a gel of orthosilicic acid).
The clay minerals formed depend on the composition of the source rock and the climate. Acid weathering of feldspar-rich rock, such as granite, in warm climates tends to produce kaolin. Weathering of the same kind of rock under alkaline conditions produces illite. Smectite forms by weathering of igneous rock under alkaline conditions, while gibbsite forms by intense weathering of other clay minerals.
There are two types of clay deposits: primary and secondary. Primary clays form as residual deposits in soil and remain at the site of formation. Secondary clays are clays that have been transported from their original location by water erosion and deposited in a new sedimentary deposit. Secondary clay deposits are typically associated with very low energy depositional environments such as large lakes and marine basins.
Varieties
The main groups of clays include kaolinite, montmorillonite-smectite, and illite. Chlorite, vermiculite, talc, and pyrophyllite are sometimes also classified as clay minerals. There are approximately 30 different types of "pure" clays in these categories, but most "natural" clay deposits are mixtures of these different types, along with other weathered minerals. Clay minerals in clays are most easily identified using X-ray diffraction rather than chemical or physical tests.
Varve (or varved clay) is clay with visible annual layers that are formed by seasonal deposition of those layers and are marked by differences in erosion and organic content. This type of deposit is common in former glacial lakes. When fine sediments are delivered into the calm waters of these glacial lake basins away from the shoreline, they settle to the lake bed. The resulting seasonal layering is preserved in an even distribution of clay sediment banding.
Quick clay is a unique type of marine clay indigenous to the glaciated terrains of Norway, North America, Northern Ireland, and Sweden. It is a highly sensitive clay, prone to liquefaction, and has been involved in several deadly landslides.
Uses
Modelling clay is used in art and handicraft for sculpting.
Clays are used for making pottery, both utilitarian and decorative, and construction products, such as bricks, walls, and floor tiles. Different types of clay, when used with different minerals and firing conditions, are used to produce earthenware, stoneware, and porcelain. Prehistoric humans discovered the useful properties of clay. Some of the earliest pottery shards recovered are from central Honshu, Japan. They are associated with the Jōmon culture, and recovered deposits have been dated to around 14,000 BCE. Cooking pots, art objects, dishware, smoking pipes, and even musical instruments such as the ocarina can all be shaped from clay before being fired.
Ancient peoples in Mesopotamia adopted clay tablets as the first known writing medium. Clay was chosen due to the local material being easy to work with and widely available. Scribes wrote on the tablets by inscribing them with a script known as cuneiform, using a blunt reed called a stylus, which effectively produced the wedge shaped markings of their writing. After being written on, clay tablets could be reworked into fresh tablets and reused if needed, or fired to make them permanent records. Purpose-made clay balls were used as sling ammunition. Clay is used in many industrial processes, such as paper making, cement production, and chemical filtering. Bentonite clay is widely used as a mold binder in the manufacture of sand castings.
Materials
Clay is a common filler used in polymer nanocomposites. It can reduce the cost of the composite, as well as impart modified behavior: increased stiffness, decreased permeability, decreased electrical conductivity, etc.
Medicine
Traditional uses of clay as medicine go back to prehistoric times. An example is Armenian bole, which is used to soothe an upset stomach. Some animals such as parrots and pigs ingest clay for similar reasons. Kaolin clay and attapulgite have been used as anti-diarrheal medicines.
Construction
Clay as the defining ingredient of loam is one of the oldest building materials on Earth, among other ancient, naturally occurring geologic materials such as stone and organic materials like wood. Also a primary ingredient in many natural building techniques, clay is used to create adobe, cob, cordwood, and structures and building elements such as wattle and daub, clay plaster, clay render case, clay floors and clay paints and ceramic building material. Clay was used as a mortar in brick chimneys and stone walls where protected from water.
Clay, relatively impermeable to water, is also used where natural seals are needed, such as in pond linings, the cores of dams, or as a barrier in landfills against toxic seepage (lining the landfill, preferably in combination with geotextiles). Studies in the early 21st century have investigated clay's absorption capacities in various applications, such as the removal of heavy metals from waste water and air purification.
| Physical sciences | Petrology | null |
36984 | https://en.wikipedia.org/wiki/Salmon | Salmon | Salmon (; : salmon) is the common name for several commercially important species of euryhaline ray-finned fish from the genera Salmo and Oncorhynchus of the family Salmonidae, native to tributaries of the North Atlantic (Salmo) and North Pacific (Oncorhynchus) basins. Other closely related fish in the same family include trout, char, grayling, whitefish, lenok and taimen, all coldwater fish of the subarctic and cooler temperate regions with some sporadic endorheic populations in Central Asia.
Salmon are typically anadromous: they hatch in the shallow gravel beds of freshwater headstreams and spend their juvenile years in rivers, lakes and freshwater wetlands, migrate to the ocean as adults and live like sea fish, then return to their freshwater birthplace to reproduce. However, populations of several species are restricted to fresh waters (i.e. landlocked) throughout their lives. Folklore has it that the fish return to the exact stream where they themselves hatched to spawn, and tracking studies have shown this to be mostly true. A portion of a returning salmon run may stray and spawn in different freshwater systems; the percent of straying depends on the species of salmon. Homing behavior has been shown to depend on olfactory memory.
Salmon are important food fish and are intensively farmed in many parts of the world, with Norway being the world's largest producer of farmed salmon, followed by Chile. They are also highly prized game fish for recreational fishing, by both freshwater and saltwater anglers. Many species of salmon have since been introduced and naturalized into non-native environments such as the Great Lakes of North America, Patagonia in South America and South Island of New Zealand.
Name and etymology
The Modern English term salmon is derived from Middle English samoun and saumon, which in turn are from Anglo-Norman saumon, from Old French saumon, and from Latin salmō (which in turn might have originated from salire, meaning "to leap"). The unpronounced "l" absent from Middle English was later added as a Latinisation to make the word closer to its Latin root. The term salmon has mostly displaced its now dialectal synonym lax, in turn from Old English leax, from Proto-Germanic *lahsaz, from Proto-Indo-European *lakso-.
Species
The seven commercially important species of salmon occur in two genera of the subfamily Salmoninae. The genus Salmo contains the Atlantic salmon, found in both sides of the North Atlantic, as well as more than 40 other species commonly named as trout. The genus Oncorhynchus contains 12 recognised species which occur naturally only in the North Pacific, six of which are known as Pacific salmon while the remainder are considered trout. Outside their native habitats, Chinook salmon have been successfully introduced in New Zealand and Patagonia, while coho, sockeye and Atlantic salmon have been established in Patagonia, as well.
† Both the Salmo and Oncorhynchus genera also contain a number of trout species informally referred to as salmon. Within Salmo, the Adriatic salmon (Salmo obtusirostris) and Black Sea salmon (Salmo labrax) have both been named as salmon in English, although they fall outside the generally recognized seven salmon species. The masu salmon (Oncorhynchus masou) is considered a trout ("cherry trout") in Japan, with masu actually being the Japanese word for trout. On the other hand, the steelhead and sea trout, the anadromous forms of rainbow trout and brown trout respectively, are from the same genera as salmon and live identical migratory lives, but neither is termed "salmon".
The extinct Eosalmo driftwoodensis, the oldest known Salmoninae fish in the fossil record, helps scientists figure how the different species of salmon diverged from a common ancestor. The Eocene salmon's fossil from British Columbia provides evidence that the divergence between Pacific and Atlantic salmon had not yet occurred 40 million years ago. Both the fossil record and analysis of mitochondrial DNA suggest the divergence occurred 10 to 20 million years ago during the Miocene. This independent evidence from DNA analysis and the fossil record indicate that salmon divergence occurred long before the Quaternary glaciation began the cycle of glacial advance and retreat.
Non-salmon species of "salmon"
There are several other species of fish which are colloquially called "salmon" but are not true salmon. Of those listed below, the Danube salmon or huchen is a large freshwater salmonid closely related (from the same subfamily) to the seven species of salmon above, but others are fishes of unrelated orders, given the common name "salmon" simply due to similar shapes, behaviors and niches occupied:
Distribution
Atlantic salmon (Salmo salar) reproduce in northern rivers on both coasts of the Atlantic Ocean.
Landlocked Atlantic salmon (Salmo salar m. sebago) is a potamodromous (migratory only between fresh waters) subspecies/morph that lives in a number of lakes in eastern North America and in Northern Europe, for instance in lakes Sebago, Onega, Ladoga, Saimaa, Vänern and Winnipesaukee. They are not a different species from the sea-run Atlantic salmon but have independently evolved a freshwater-only life cycle, which they maintain even when they could access the ocean.
Chinook salmon (Oncorhynchus tshawytscha) are also known in the United States as king salmon or "blackmouth salmon", and as "spring salmon" in British Columbia, Canada. The Chinook is the largest of all Pacific salmon. The name tyee is also used in British Columbia to refer to Chinook salmon over 30 pounds, and in the Columbia River watershed especially large Chinooks were once referred to as June hogs. Chinook salmon are known to range as far north as the Mackenzie River and Kugluktuk in the central Canadian Arctic, and as far south as the central California coast.
Chum salmon (Oncorhynchus keta) is known as dog salmon or calico salmon in some parts of the US, and as keta in the Russian Far East. This species has the widest geographic range of the Pacific species: in the eastern Pacific from north of the Mackenzie River in Canada to south of the Sacramento River in California and in the western Pacific from Lena River in Siberia to the island of Kyūshū in the Sea of Japan.
Coho salmon (Oncorhynchus kisutch) are also known in the US as silver salmon. This species is found throughout the coastal waters of Alaska and British Columbia and as far south as Central California (Monterey Bay). It is also now known to occur, albeit infrequently, in the Mackenzie River.
Masu salmon (Oncorhynchus masou) are found only in the western Pacific Ocean, in Japan, Korea, and the Russian Far East. A landlocked subspecies known as the Taiwanese salmon or Formosan salmon (Oncorhynchus masou formosanus) is found in central Taiwan's Chi Chia Wan Stream.
Pink salmon (Oncorhynchus gorbuscha), known as humpback salmon or "humpies" in southeast and southwest Alaska, are found throughout the northern Pacific: in the western Pacific from the Lena River in Siberia to Korea, and in the eastern Pacific from the Mackenzie River in Canada to northern California, usually in shorter coastal streams. It is the smallest of the Pacific species.
Sockeye salmon (Oncorhynchus nerka) is also known as red salmon in the US (especially Alaska). This lake-rearing species is found in the eastern Pacific from Bathurst Inlet in the Canadian Arctic to Klamath River in California, and in the western Pacific from the Anadyr River in Siberia to northern Hokkaidō island in Japan. Although most adult Pacific salmon feed on small fish, shrimp, and squid, sockeye feed on plankton they filter through gill rakers. Kokanee salmon are the landlocked form of sockeye salmon.
Danube salmon, or huchen (Hucho hucho), are the largest permanent freshwater salmonid species.
Life cycle
Salmon eggs are laid in freshwater streams typically at high latitudes. The eggs hatch into alevin or sac fry. The fry quickly develop into parr with camouflaging vertical stripes. The parr stay for six months to three years in their natal stream before becoming smolts, which are distinguished by their bright, silvery colour with scales that are easily rubbed off. Only 10% of all salmon eggs are estimated to survive to this stage.
The smolt body chemistry changes, allowing them to live in salt water. While a few species of salmon remain in fresh water throughout their life cycle, the majority are anadromous and migrate to the ocean for maturation: in these species, smolts spend a portion of their out-migration time in brackish water, where their body chemistry becomes accustomed to osmoregulation in the ocean. This body chemistry change is hormone-driven, causing physiological adjustments in the function of osmoregulatory organs such as the gills, which leads to large increases in their ability to secrete salt. Hormones involved in increasing salinity tolerance include insulin-like growth factor I, cortisol, and thyroid hormones, which permit the fish to endure the transition from a freshwater environment to the ocean.
The salmon spend about one to five years (depending on the species) in the open ocean, where they gradually become sexually mature. The adult salmon then return primarily to their natal streams to spawn. Atlantic salmon spend between one and four years at sea. When a fish returns after just one year's sea feeding, it is called a grilse in Canada, Britain, and Ireland. Grilse may be present at spawning, and go unnoticed by large males, releasing their own sperm on the eggs.
Prior to spawning, depending on the species, salmon undergo changes. They may grow a hump, develop canine-like teeth, or develop a kype (a pronounced curvature of the jaws in male salmon). All change from the silvery blue of a fresh-run fish from the sea to a darker colour. Salmon can make remarkable journeys, sometimes moving hundreds of miles upstream against strong currents and rapids to reproduce. Chinook and sockeye salmon from central Idaho, for example, must travel a great distance and climb substantially in elevation from the Pacific Ocean as they return to spawn. Condition tends to deteriorate the longer the fish remain in fresh water, and they then deteriorate further after they spawn, when they are known as kelts. In all species of Pacific salmon, the mature individuals die within a few days or weeks of spawning, a trait known as semelparity. Between 2 and 4% of Atlantic salmon kelts survive to spawn again, all females. However, even in those species of salmon that may survive to spawn more than once (iteroparity), postspawning mortality is quite high (perhaps as high as 40 to 50%).
To lay her roe, the female salmon uses her tail (caudal fin) to create a low-pressure zone, lifting gravel to be swept downstream and excavating a shallow depression called a redd. The redd may sometimes contain 5,000 eggs. The eggs usually range from orange to red. One or more males approach the female in her redd, depositing sperm, or milt, over the roe. The female then covers the eggs by disturbing the gravel at the upstream edge of the depression before moving on to make another redd. The female may make as many as seven redds before her supply of eggs is exhausted.
Each year, the fish experiences a period of rapid growth, often in summer, and one of slower growth, normally in winter. This results in ring formation around an earbone called the otolith (annuli), analogous to the growth rings visible in a tree trunk. Freshwater growth shows as densely crowded rings, sea growth as widely spaced rings; spawning is marked by significant erosion as body mass is converted into eggs and milt.
Freshwater streams and estuaries provide important habitat for many salmon species. They feed on terrestrial and aquatic insects, amphipods, and other crustaceans while young, and primarily on other fish when older. Eggs are laid in deeper water with larger gravel and need cool water and good water flow (to supply oxygen) to the developing embryos. Mortality of salmon in the early life stages is usually high due to natural predation and human-induced changes in habitat, such as siltation, high water temperatures, low oxygen concentration, loss of stream cover, and reductions in river flow. Estuaries and their associated wetlands provide vital nursery areas for the salmon prior to their departure to the open ocean. Wetlands not only help buffer the estuary from silt and pollutants, but also provide important feeding and hiding areas.
Salmon not killed by other means show greatly accelerated deterioration (phenoptosis, or "programmed aging") at the end of their lives. Their bodies rapidly deteriorate right after they spawn as a result of the release of massive amounts of corticosteroids.
Diet
Salmon are mid-level carnivores whose diets change according to their life stage. Salmon fry predominantly feed upon zooplankton until they reach fingerling size, when they start to consume more aquatic invertebrates such as insect larvae, microcrustaceans and worms. As juveniles (parr), they become more predatory and actively prey upon aquatic insects, small crustaceans, tadpoles and small baitfish. They are also known to breach the water to attack terrestrial insects such as grasshoppers and dragonflies, as well as consuming fish eggs (even those of other salmon).
As adults, salmon behave like other mid-sized pelagic fish, eating a variety of sea creatures including smaller forage fish such as lanternfish, herrings, sand lances, mackerels and barracudina. They also eat krill, squid and polychaete worms.
Ecology
In the Pacific Northwest and Alaska, salmon are keystone species. The migration of salmon represents a massive retrograde transfer of nutrients, rich in nitrogen, sulfur, carbon and phosphorus, from the ocean to inland freshwater ecosystems. Predation by piscivorous land animals (such as ospreys, bears and otters) along the journey serves to transfer these nutrients from the water to the land, and the decomposition of salmon carcasses benefits the forest ecosystem.
In the case of Pacific salmon, most (if not all) of the salmon that survive to reach the headwater spawning grounds will die after laying eggs and their dead bodies sink to cover the gravel beds, with the nutrients released from the biodegradation of their corpses providing a significant boost to these otherwise biomass-poor shallow streams.
Bears
Grizzly bears function as ecosystem engineers, capturing salmon and carrying them onto adjacent dry land to eat. There they deposit nutrient-rich urine and feces and partially eaten carcasses. Bears preparing for hibernation tend to preferentially consume the more nutrient- and energy-rich salmon roe and brains over the flesh, and are estimated to discard up to half the salmon they have harvested uneaten on the forest floor, providing as much as 24% of the total nitrogen available to the riparian woodlands. The foliage of spruce trees growing near streams where grizzlies fish for salmon has been found to contain nitrogen originating from the fished salmon.
Beavers
Beavers also function as ecosystem engineers; in the process of tree-cutting and damming, beavers alter the local ecosystems extensively. Beaver ponds can provide critical habitat for juvenile salmon.
An example of this was seen in the years following 1818 in the Columbia River Basin. In 1818, the British government made an agreement with the U.S. government to allow U.S. citizens access to the Columbia catchment (see Treaty of 1818). At the time, the Hudson's Bay Company sent word to trappers to extirpate all furbearers from the area in an effort to make the area less attractive to U.S. fur traders. In response to the elimination of beavers from large parts of the river system, salmon runs plummeted, even in the absence of many of the factors usually associated with the demise of salmon runs. Salmon recruitment can be affected by beavers' dams because dams can:
Slow the rate at which nutrients are flushed from the water system; nutrients provided by adult salmon dying throughout the fall and winter remain available in the spring to newly hatched juveniles
Provide deeper salmon pools where young salmon can avoid avian predators
Increase productivity through algal photosynthesis and by enhancing the conversion efficiency of the cellulose-powered detritus cycle
Create slow-water environments where juvenile salmon put the food they ingest into growth rather than into fighting currents
Increase structural complexity with many physical niches where salmon can avoid predators
Beaver dams are able to nurture salmon juveniles in estuarine tidal marshes where the salinity is less than 10 ppm. Beavers build small, low dams in channels in the myrtle zone. These dams can be overtopped at high tide and hold water at low tide. This provides refuges for juvenile salmon so they do not have to swim into large channels where they are subject to predation by larger fish.
Lampreys
In rivers which have seen a decline or disappearance of anadromous lampreys, the loss of the lampreys has also been found to affect the salmon negatively. Like salmon, anadromous lampreys stop feeding and die after spawning, and their decomposing bodies release nutrients into the stream. Also, along with species like rainbow trout and Sacramento sucker, lampreys clean the gravel in the rivers during spawning. Their larvae, called ammocoetes, are filter feeders which contribute to the health of the waters. They are also a food source for the young salmon, and being fattier and oilier, it is assumed predators prefer them over salmon offspring, relieving some of the predation pressure on smolts. Adult lampreys are also the preferred prey of seals and sea lions, which can eat 30 lampreys for every salmon, allowing more adult salmon to enter the rivers to spawn without being eaten by the marine mammals.
Parasites
According to Canadian biologist Dorothy Kieser, the myxozoan parasite Henneguya salminicola is commonly found in the flesh of salmonids. It has been recorded in the field samples of salmon returning to the Haida Gwaii Islands. The fish responds by walling off the parasitic infection into a number of cysts that contain milky fluid. This fluid is an accumulation of a large number of parasites.
Henneguya and other parasites in the myxosporean group have complex life cycles, where the salmon is one of two hosts. The fish releases the spores after spawning. In the Henneguya case, the spores enter a second host, most likely an invertebrate, in the spawning stream. When juvenile salmon migrate to the Pacific Ocean, the second host releases a stage infective to salmon. The parasite is then carried in the salmon until the next spawning cycle. The myxosporean parasite that causes whirling disease in trout has a similar life cycle. However, as opposed to whirling disease, the Henneguya infestation does not appear to cause disease in the host salmon—even heavily infected fish tend to return to spawn successfully.
According to Dr. Kieser, a lot of work on Henneguya salminicola was done by scientists at the Pacific Biological Station in Nanaimo in the mid-1980s, in particular an overview report which states, "the fish that have the longest fresh water residence time as juveniles have the most noticeable infections. Hence in order of prevalence, coho are most infected followed by sockeye, chinook, chum and pink." The report also notes that, at the time the studies were conducted, stocks from the middle and upper reaches of large river systems in British Columbia such as the Fraser, Skeena and Nass, and from mainland coastal streams in the southern half of B.C., "are more likely to have a low prevalence of infection." The report further states, "It should be stressed that Henneguya, economically deleterious though it is, is harmless from the view of public health. It is strictly a fish parasite that cannot live in or affect warm blooded animals, including man".
According to Klaus Schallie, Molluscan Shellfish Program Specialist with the Canadian Food Inspection Agency, "Henneguya salminicola is found in southern B.C. also and in all species of salmon. I have previously examined smoked chum salmon sides that were riddled with cysts and some sockeye runs in Barkley Sound (southern B.C., west coast of Vancouver Island) are noted for their high incidence of infestation."
Sea lice, particularly Lepeophtheirus salmonis and various Caligus species, including C. clemensi and C. rogercresseyi, can cause deadly infestations of both farm-grown and wild salmon. Sea lice are ectoparasites which feed on mucus, blood, and skin, and migrate and latch onto the skin of wild salmon during free-swimming, planktonic nauplii and copepodid larval stages, which can persist for several days.
Large numbers of highly populated, open-net salmon farms can create exceptionally large concentrations of sea lice; when exposed in river estuaries containing large numbers of open-net farms, many young wild salmon are infected, and do not survive as a result. Adult salmon may survive otherwise critical numbers of sea lice, but small, thin-skinned juvenile salmon migrating to sea are highly vulnerable. On the Pacific coast of Canada, the louse-induced mortality of pink salmon in some regions is commonly over 80%.
Effect of pile driving
The risk of injury caused by underwater pile driving has been studied by Dr. Halvorsen and her co-workers. The study concluded that the fish are at risk of injury if the cumulative sound exposure level exceeds 210 dB relative to 1 μPa²·s.
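A minimal sketch of how such a cumulative criterion is typically applied, assuming the standard rule that exposure from N identical strikes adds as 10·log10(N); the per-strike level used here is purely illustrative and is not taken from the study:

    import math

    def cumulative_sel(single_strike_sel_db: float, n_strikes: int) -> float:
        # Cumulative sound exposure level (dB re 1 uPa^2*s) for n identical strikes,
        # assuming acoustic energy adds linearly so levels add as 10*log10(n).
        return single_strike_sel_db + 10.0 * math.log10(n_strikes)

    threshold_db = 210.0      # cumulative criterion cited above
    per_strike_db = 180.0     # illustrative per-strike value only
    strikes = 10 ** ((threshold_db - per_strike_db) / 10.0)
    print(round(strikes))                          # -> 1000 strikes to reach the criterion
    print(cumulative_sel(per_strike_db, 1000))     # -> 210.0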
Wild fisheries
Commercial
The global capture of commercial wild salmon, as reported to the FAO by different countries, has remained fairly steady since 1990 at about one million tonnes per year. This is in contrast to farmed salmon (below), which has increased in the same period from about 0.6 million tonnes to well over two million tonnes.
Nearly all captured wild salmon are Pacific salmon. The capture of wild Atlantic salmon has always been relatively small, and has declined steadily since 1990. In 2011 only 2,500 tonnes were reported. In contrast, about half of all farmed salmon are Atlantic salmon.
Recreational
Recreational salmon fishing can be a technically demanding kind of sport fishing, not necessarily intuitive for beginning fishermen. A conflict exists between commercial fishermen and recreational fishermen for the right to salmon stock resources. Commercial fishing in estuaries and coastal areas is often restricted so enough salmon can return to their natal rivers where they can spawn and be available for sport fishing. On parts of the North American West Coast salmon sport fishing has completely replaced inshore commercial salmon fishing. In most cases, the commercial value of a salmon sold as seafood can be several times less than the value attributed to the same fish caught by a sport fisherman. This is "a powerful economic argument for allocating stock resources preferentially to sport fishing".
Farms
Salmon aquaculture is a major contributor to the world production of farmed finfish, representing about US$10 billion annually. Other commonly cultured fish species include tilapia, catfish, sea bass, carp and bream. Salmon farming is significant in Chile, Norway, Scotland, Canada and the Faroe Islands; it is the source for most salmon consumed in the United States and Europe. Atlantic salmon are also, in very small volumes, farmed in Russia and Tasmania, Australia.
Salmon are carnivorous, and need to be fed meals produced from catching other wild forage fish and other marine organisms. Salmon farming leads to a high demand for wild forage fish. As a predator, salmon require large nutritional intakes of protein, and farmed salmon consume more fish than they generate as a final product. On a dry weight basis, 2–4 kg of wild-caught fish are needed to produce one kilogram of salmon. As the salmon farming industry expands, it requires more forage fish for feed, at a time when 75% of the world's monitored fisheries are already near to or have exceeded their maximum sustainable yield. The industrial-scale extraction of wild forage fish for salmon farming affects the survivability of other wild predatory fish which rely on them for food. Research is ongoing into sustainable and plant-based salmon feeds.
Intensive salmon farming uses open-net cages, which have low production costs. It has the drawback of allowing disease and sea lice to spread to local wild salmon stocks.
Another form of salmon production, which is safer but less controllable, is to raise salmon in hatcheries until they are old enough to become independent. They are released into rivers in an attempt to increase the salmon population. This system is referred to as ranching. It was very common in countries such as Sweden, before the Norwegians developed salmon farming, but is seldom done by private companies. As anyone may catch the salmon when they return to spawn, a company is limited in benefiting financially from their investment.
Because of this, the ranching method has mainly been used by various public authorities and non-profit groups, such as the Cook Inlet Aquaculture Association, as a way to increase salmon populations in situations where they have declined due to overharvesting, construction of dams and habitat destruction or fragmentation. Negative consequences to this sort of population manipulation include genetic "dilution" of the wild stocks. Many jurisdictions are now beginning to discourage supplemental fish planting in favour of harvest controls, and habitat improvement and protection.
A variant method of fish stocking, called ocean ranching, is under development in Alaska. There, the young salmon are released into the ocean far from any wild salmon streams. When it is time for them to spawn, they return to where they were released, where fishermen can catch them.
An alternative method to hatcheries is to use spawning channels. These are artificial streams, usually parallel to an existing stream, with concrete or rip-rap sides and gravel bottoms. Water from the adjacent stream is piped into the top of the channel, sometimes via a header pond, to settle out sediment. Spawning success is often much better in channels than in adjacent streams due to the control of floods, which in some years can wash out the natural redds. Because of the lack of floods, spawning channels must sometimes be cleaned out to remove accumulated sediment. The same floods that destroy natural redds also clean the regular streams. Spawning channels preserve the natural selection of natural streams, as there is no benefit, as in hatcheries, to use prophylactic chemicals to control diseases.
Farm-raised salmon are fed the carotenoids astaxanthin and canthaxanthin to match their flesh colour to wild salmon to improve their marketability. Wild salmon get these carotenoids, primarily astaxanthin, from eating shellfish and krill.
One proposed alternative to the use of wild-caught fish as feed for the salmon, is the use of soy-based products. This should be better for the local environment of the fish farm, but producing soy beans has a high environmental cost for the producing region. The fish omega-3 fatty acid content would be reduced compared to fish-fed salmon.
Another possible alternative is a yeast-based coproduct of bioethanol production, proteinaceous fermentation biomass. Substituting such products for engineered feed can result in equal (sometimes enhanced) growth in fish. With its increasing availability, this would address the problems of rising costs for buying hatchery fish feed.
Yet another attractive alternative is the increased use of seaweed. Seaweed provides essential minerals and vitamins for growing organisms. It offers the advantage of providing natural amounts of dietary fiber and having a lower glycemic load than grain-based fish meal. In the best-case scenario, widespread use of seaweed could yield a future in aquaculture that eliminates the need for land, freshwater, or fertilizer to raise fish.
Management
Salmon population levels are of concern in the Atlantic and in some parts of the Pacific. The population of wild salmon declined markedly in recent decades, especially North Atlantic populations, which spawn in the waters of western Europe and eastern Canada, and wild salmon in the Snake and Columbia River systems in northwestern United States.
Alaska fishery stocks are still abundant, and catches have been on the rise in recent decades, after the state initiated limitations in 1972. Some of the most important Alaskan salmon sustainable wild fisheries are located near the Kenai River, Copper River, and in Bristol Bay. Fish farming of Pacific salmon is outlawed in the United States Exclusive Economic Zone; however, there is a substantial network of publicly funded hatcheries, and the State of Alaska's fisheries management system is viewed as a leader in the management of wild fish stocks.
In Canada, returning Skeena River wild salmon support commercial, subsistence and recreational fisheries, as well as the area's diverse wildlife on the coast and around communities hundreds of miles inland in the watershed. The status of wild salmon in Washington is mixed. Of 435 wild stocks of salmon and steelhead, only 187 of them were classified as healthy; 113 had an unknown status, one was extinct, 12 were in critical condition and 122 were experiencing depressed populations.
The commercial salmon fisheries in California have been either severely curtailed or closed completely in recent years, due to critically low returns on the Klamath and Sacramento rivers, causing millions of dollars in losses to commercial fishermen. Both Atlantic and Pacific salmon are popular sportfish.
Salmon populations have been established in all the Great Lakes. Coho stocks were planted by the state of Michigan in the late 1960s to control the growing population of non-native alewife. Now Chinook (king), Atlantic, and coho (silver) salmon are annually stocked in all Great Lakes by most bordering states and provinces. These populations are not self-sustaining and do not provide much in the way of a commercial fishery, but have led to the development of a thriving sport fishery.
Wild, self-sustaining Pacific salmon populations have been established in New Zealand, Chile, and Argentina. They are highly prized by sport fishers, but others worry that the introduced salmon displace native fish species. In Chile especially (see aquaculture in Chile), both Atlantic and Pacific salmon are also used in net pen farming.
In 2020 researchers reported widespread declines in the sizes of four species of wild Pacific salmon: Chinook, chum, coho, and sockeye. These declines have been occurring for 30 years, and are thought to be associated with climate change and competition with growing numbers of pink and hatchery salmon.
As food
Salmon is a popular food fish. Classified as an oily fish, salmon is considered to be healthy due to the fish's high protein, high omega-3 fatty acid, and high vitamin D content. Salmon is also a source of cholesterol, with levels varying by species. According to reports in the journal Science, farmed salmon may contain high levels of dioxins. PCB (polychlorinated biphenyl) levels may be up to eight times higher in farmed salmon than in wild salmon, but still well below levels considered dangerous. Nonetheless, according to a 2006 study published in the Journal of the American Medical Association, the benefits of eating even farmed salmon still outweigh any risks imposed by contaminants. Farmed salmon has a high omega-3 fatty acid content comparable to wild salmon. The type of omega-3 present may not be a factor for other important health functions.
Salmon flesh is generally orange to red, although white-fleshed wild salmon with white-black skin colour occurs. The natural colour of salmon results from carotenoid pigments, largely astaxanthin, but also canthaxanthin, in the flesh. Wild salmon get these carotenoids from eating krill and other tiny shellfish.
The vast majority of Atlantic salmon available in markets around the world are farmed (almost 99%), whereas the majority of Pacific salmon are wild-caught (greater than 80%). Canned salmon in the U.S. is usually wild Pacific catch, though some farmed salmon is available in canned form. Smoked salmon is another popular preparation method, and can be either hot- or cold-smoked. Lox can refer to either cold-smoked salmon or salmon cured in a brine solution (also called gravlax). Traditional canned salmon includes some skin (which is harmless) and bone (which adds calcium). Skinless and boneless canned salmon is also available.
Raw salmon flesh may contain Anisakis nematodes, marine parasites that cause anisakiasis. Before the availability of refrigeration, the Japanese did not consume raw salmon. Salmon and salmon roe have only recently come into use in making sashimi (raw fish) and sushi.
To the Indigenous peoples of the Pacific Northwest Coast, salmon is considered a vital part of the diet. In particular, the indigenous peoples of Haida Gwaii, formerly known as the Queen Charlotte Islands, in British Columbia rely on salmon as one of their main sources of food, although many other bands have fished Pacific waters for centuries. Salmon is not only ancient and unique to the region, but is also culturally important, being expressed in art forms and ceremonial feasts. Salmon spawn annually in the rivers of Haida Gwaii, sustaining a wide range of other creatures on their way upstream and down. Within the Haida nation, salmon is referred to as "tsiin", and is prepared in several ways including smoking, baking, frying, and making soup.
Historically, there has always been enough salmon, as traditional subsistence fishing methods did not result in overfishing, and people only took what they needed. In 2003, a report on First Nations participation in commercial fisheries, including salmon, commissioned by BC's Ministry of Agriculture and Food found that there were 595 First Nation-owned and operated commercial vessels in the province. Of those vessels, First Nations' members owned 564. However, employment within the industry has decreased overall by 50% in the last decade, with 8,142 registered commercial fishermen in 2003. This has affected employment for many fishermen, who rely on salmon as a source of income.
Black bears also rely on salmon as food. The leftovers the bears leave behind are considered important nutrients for the Canadian forest, enriching the soil, trees and plants. In this sense, the salmon feed the forest and in return receive clean water and gravel in which to hatch and grow, sheltered from extremes of temperature and water flow in times of high and low rainfall. However, the condition of the salmon in Haida Gwaii has deteriorated in recent decades. Due to logging and development, much of the salmon's habitat (i.e., the Ain River) has been destroyed, leaving the fish close to endangered. For residents, this has resulted in limits on catches, which in turn has affected families' diets and cultural events such as feasts. Some of the salmon systems in danger include the Davidon, Naden, Mamim, and Mathers.
Fishing
History
The salmon has long been at the heart of the culture and livelihood of coastal dwellers, a relationship archeologists have traced back some 5,000 years through remnants of the Nisqually tribe. The original distribution of the genus Oncorhynchus covered the Pacific Rim coastline. History shows salmon used tributaries, rivers and estuaries without regard to jurisdiction for 18–22 million years. Baseline data are nearly impossible to reconstruct from the inconsistent historical record, but there has been massive depletion since the 1900s. The Pacific Northwest was once home to many native inhabitants whose practices caused little degradation to salmon habitats. As animists, the indigenous people relied on salmon not only for food but also for spiritual guidance. The role of the salmon spirit guided the people to respect ecological systems such as the rivers and tributaries the salmon used for spawning. Natives often used the entire fish and left little waste, turning the bladder into glue and using the bones for toys and the skin for clothing and shoes. The original salmon ceremony, introduced by indigenous tribes on the Pacific coast, consisted of three major parts: the welcoming of the first catch, the cooking of it, and finally the return of the bones to the sea to induce hospitality, so that other salmon would give their lives to the people of that village.
Many tribes, such as the Yurok, had a taboo against harvesting the first fish that swam upriver in summer, but once they confirmed that the salmon run had returned in abundance they would begin to catch them in plenty. The indigenous practices were guided by deep ecological wisdom, which was eradicated as Euro-American settlements developed. Salmon have a much grander history than is evident today. The salmon that once dominated the Pacific Ocean are now just a fraction of their former population and size. The Pacific salmon population is now less than 1–3% of what it was when Lewis and Clark arrived at the region. In his 1908 State of the Union address, U.S. President Theodore Roosevelt observed that the fisheries were in significant decline:
The salmon fisheries of the Columbia River are now but a fraction of what they were twenty-five years ago, and what they would be now if the United States Government had taken complete charge of them by intervening between Oregon and Washington. During these twenty-five years the fishermen of each State have naturally tried to take all they could get, and the two legislatures have never been able to agree on joint action of any kind adequate in degree for the protection of the fisheries. At the moment the fishing on the Oregon side is practically closed, while there is no limit on the Washington side of any kind, and no one can tell what the courts will decide as to the very statutes under which this action and non-action result. Meanwhile very few salmon reach the spawning grounds, and probably four years hence the fisheries will amount to nothing; and this comes from a struggle between the associated, or gill-net, fishermen on the one hand, and the owners of the fishing wheels up the river.
On the Columbia River, the Chief Joseph Dam completed in 1955 completely blocks salmon migration to the upper Columbia River system.
The Fraser River salmon population was affected by the 1914 slide caused by the Canadian Pacific Railway at Hells Gate. The 1917 catch was one quarter of the 1913 catch.
The origin of the word for "salmon" was one of the arguments used in debates over the location of the original homeland of the Indo-European languages.
Commercial fishing
Recreational fishing
Mythology
The salmon is an important creature in several strands of Celtic mythology and poetry, which often associated them with wisdom and venerability. In Irish folklore, fishermen associated salmon with fairies and thought it was unlucky to refer to them by name. In Irish mythology, a creature called the Salmon of Knowledge plays a key role in the tale The Boyhood Deeds of Fionn. In the tale, the Salmon will grant powers of knowledge to whoever eats it, and is sought by the poet Finn Eces for seven years. Finally Finn Eces catches the fish and gives it to his young pupil, Fionn mac Cumhaill, to prepare for him. However, Fionn burns his thumb on the salmon's juices, and he instinctively puts it in his mouth. In so doing, he inadvertently gains the Salmon's wisdom. Elsewhere in Irish mythology, the salmon is also one of the incarnations of both Tuan mac Cairill and Fintan mac Bóchra.
Salmon also feature in Welsh mythology. In the prose tale Culhwch and Olwen, the Salmon of Llyn Llyw is the oldest animal in Britain, and the only creature who knows the location of Mabon ap Modron. After speaking to a string of other ancient animals who do not know his whereabouts, King Arthur's men Cai and Bedwyr are led to the Salmon of Llyn Llyw, who lets them ride its back to the walls of Mabon's prison in Gloucester.
In Norse mythology, after Loki tricked the blind god Höðr into killing his brother Baldr, Loki jumped into a river and transformed himself into a salmon to escape punishment from the other gods. When they held out a net to trap him he attempted to leap over it but was caught by Thor who grabbed him by the tail with his hand, and this is why the salmon's tail is tapered.
Salmon are central spiritually and culturally to Native American mythology on the Pacific coast, from the Haida and Coast Salish peoples, to the Nuu-chah-nulth peoples in British Columbia.
| Biology and health sciences | Salmoniformes | null |
37021 | https://en.wikipedia.org/wiki/Dirac%20delta%20function | Dirac delta function | In mathematical analysis, the Dirac delta function (or distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
\delta(x) = \begin{cases} +\infty, & x = 0, \\ 0, & x \neq 0, \end{cases}
such that
\int_{-\infty}^{\infty} \delta(x)\,dx = 1.
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
Motivation and overview
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time t = 0 it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s−1.
To model this situation more rigorously, suppose that the force is instead uniformly distributed over a small time interval Δt. That is,
F_{\Delta t}(t) = \begin{cases} P/\Delta t, & 0 < t \le \Delta t, \\ 0, & \text{otherwise.} \end{cases}
Then the momentum at any time t is found by integration:
p(t) = \int_0^t F_{\Delta t}(\tau)\,d\tau = \begin{cases} P, & t \ge \Delta t, \\ P\,t/\Delta t, & 0 \le t \le \Delta t, \\ 0, & t < 0. \end{cases}
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:
p(t) = \begin{cases} P, & t > 0, \\ 0, & t < 0. \end{cases}
Here the functions F_{\Delta t} are thought of as useful approximations to the idea of instantaneous transfer of momentum.
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
\int_{-\infty}^{\infty} F_{\Delta t}(t)\,dt = P,
which holds for all Δt > 0, should continue to hold in the limit. So, in the equation F(t) = P\,\delta(t) = \lim_{\Delta t \to 0} F_{\Delta t}(t), it is understood that the limit is always taken outside the integral.
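A minimal numerical sketch of the rectangular-force model above, assuming an illustrative unit impulse P = 1 kg·m/s and an arbitrary time grid; it checks that the total momentum delivered is P for every Δt, while the rise becomes steeper as Δt shrinks:

    import numpy as np

    P = 1.0                              # total impulse, kg*m/s (illustrative)
    t = np.linspace(-1.0, 1.0, 200001)   # time grid, s (illustrative)
    step = t[1] - t[0]

    def momentum(delta_t):
        # Rectangular force: F = P/delta_t on (0, delta_t], zero elsewhere.
        F = np.where((t > 0) & (t <= delta_t), P / delta_t, 0.0)
        # Momentum is the running integral of the force.
        return np.cumsum(F) * step

    for delta_t in (0.5, 0.1, 0.01):
        p = momentum(delta_t)
        print(delta_t, round(p[-1], 3))  # final momentum is P in every case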
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
The Dirac delta is not truly a function, at least not a usual one with domain and range in the real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0, yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
History
In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.
Mathematicians refer to the same concept as a distribution rather than a function.
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d\alpha\, f(\alpha)\int_{-\infty}^{\infty} dp\,\cos\bigl(px - p\alpha\bigr),
which is tantamount to the introduction of the δ-function in the form:
\delta(x - \alpha) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dp\,\cos\bigl(px - p\alpha\bigr).
Later, Augustin Cauchy expressed the theorem using exponentials:
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ipx}\left(\int_{-\infty}^{\infty} e^{-ip\alpha}\,f(\alpha)\,d\alpha\right) dp.
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} e^{ip(x-\alpha)}\,dp\right) f(\alpha)\,d\alpha,
where the δ-function is expressed as
\delta(x - \alpha) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ip(x-\alpha)}\,dp.
A rigorous interpretation of the exponential form, and the various limitations upon the function f necessary for its application, extended over more than a century. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
Definitions
The Dirac delta function can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
\delta(x) = \begin{cases} +\infty, & x = 0, \\ 0, & x \neq 0, \end{cases}
and which is also constrained to satisfy the identity
\int_{-\infty}^{\infty} \delta(x)\,dx = 1.
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
As a measure
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called the Dirac measure, which accepts a subset A of the real line as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies
\int_{-\infty}^{\infty} f(x)\,\delta(dx) = f(0)
for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function:
H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases}
This means that H(x) is the integral of the cumulative indicator function 1_{(−∞, x]} with respect to the measure δ; to wit,
H(x) = \int_{\mathbb{R}} \mathbf{1}_{(-\infty, x]}(t)\,\delta(dt) = \delta\bigl((-\infty, x]\bigr),
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
\int_{-\infty}^{\infty} f(x)\,\delta(dx) = \int_{-\infty}^{\infty} f(x)\,dH(x).
All higher moments of δ are zero. In particular, its characteristic function and moment generating function are both equal to one.
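A minimal numerical sketch of the Riemann–Stieltjes view above: summing a continuous test function against the increments of the unit step H places all of the "mass" at 0 and recovers f(0); the grid and the test function are illustrative choices:

    import numpy as np

    def H(x):
        # Unit step function: the cumulative distribution function of the delta measure.
        return np.where(x >= 0, 1.0, 0.0)

    def stieltjes_sum(f, a=-1.0, b=1.0, n=100000):
        x = np.linspace(a, b, n + 1)
        # Riemann-Stieltjes sum: sum of f(x_i) * (H(x_{i+1}) - H(x_i)).
        return np.sum(f(x[:-1]) * np.diff(H(x)))

    print(stieltjes_sum(np.cos))   # ~ 1.0, i.e. cos(0) = f(0)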
As a distribution
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function . Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
\delta[\varphi] = \varphi(0)
for every test function φ.
For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N, there is an integer M_N and a constant C_N such that for every test function φ supported in [−N, N], one has the inequality
\bigl|S[\varphi]\bigr| \le C_N \sum_{k=0}^{M_N} \sup_{x \in [-N, N]} \bigl|\varphi^{(k)}(x)\bigr|,
where sup represents the supremum. With the δ distribution, one has such an inequality with C_N = 1 and M_N = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has
\delta[\varphi] = -\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx.
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
\int_{-\infty}^{\infty} \varphi(x)\,H'(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,\delta(x)\,dx,
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
-\int_{-\infty}^{\infty} \varphi'(x)\,H(x)\,dx = \int_{-\infty}^{\infty} \varphi(x)\,dH(x).
In the context of measure theory, the Dirac measure gives rise to the delta distribution by integration. Conversely, the defining equation δ[φ] = φ(0) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
Generalizations
The delta function can be defined in n-dimensional Euclidean space R^n as the measure such that
\int_{\mathbb{R}^n} f(x)\,\delta(dx) = f(0)
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x_1, x_2, \ldots, x_n), one has
\delta(x) = \delta(x_1)\,\delta(x_2)\cdots\delta(x_n).
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, this product formula should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if X is a set, x_0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by
\delta_{x_0}(A) = \begin{cases} 1, & x_0 \in A, \\ 0, & x_0 \notin A \end{cases}
is the delta measure or unit mass concentrated at x_0.
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x_0 ∈ M is defined as the following distribution:
\delta_{x_0}[\varphi] = \varphi(x_0)
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is that in which M is an open set in the Euclidean space R^n.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral φ ↦ φ(x) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping x ↦ δ_x is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.
Properties
Scaling and symmetry
The delta function satisfies the following scaling property for a non-zero scalar α:
\int_{-\infty}^{\infty} \delta(\alpha x)\,dx = \int_{-\infty}^{\infty} \delta(u)\,\frac{du}{|\alpha|} = \frac{1}{|\alpha|},
and so
\delta(\alpha x) = \frac{\delta(x)}{|\alpha|}.
Scaling property proof: the change of variable u = αx gives dx = du/α; if α is positive the limits of integration are unchanged, whereas if α is negative, i.e. α < 0, the limits are reversed, contributing a second sign change, so that in both cases the result carries the factor 1/|α|.
In particular, the delta function is an even distribution (symmetric), in the sense that
\delta(-x) = \delta(x),
which is homogeneous of degree −1.
Algebraic properties
The distributional product of δ with x is equal to zero:
x\,\delta(x) = 0.
More generally, x^{n}\,\delta(x) = 0 for all positive integers n.
Conversely, if x\,f(x) = x\,g(x), where f and g are distributions, then
f(x) = g(x) + c\,\delta(x)
for some constant c.
Translation
The integral of any function multiplied by the time-delayed Dirac delta δ(t − T) is
\int_{-\infty}^{\infty} f(t)\,\delta(t - T)\,dt = f(T).
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
\bigl(f * \delta(\cdot - T)\bigr)(t) = f(t - T).
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
\int_{-\infty}^{\infty} \delta(\xi - x)\,\delta(x - \eta)\,dx = \delta(\xi - \eta).
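A small symbolic sketch of the sifting property using SymPy's DiracDelta; the undefined function f and the shift values used are arbitrary illustrative choices:

    import sympy as sp

    t = sp.symbols('t', real=True)
    f = sp.Function('f')

    # Sifting property: integrating f(t) against delta(t - 2) picks out f(2).
    print(sp.integrate(f(t) * sp.DiracDelta(t - 2), (t, -sp.oo, sp.oo)))   # f(2)

    # A concrete instance with f(t) = exp(-t**2):
    print(sp.integrate(sp.exp(-t**2) * sp.DiracDelta(t - 1), (t, -sp.oo, sp.oo)))   # exp(-1)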
Composition with a function
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where u = g(x)), that
\int_{\mathbb{R}} \delta\bigl(g(x)\bigr)\,f\bigl(g(x)\bigr)\,\bigl|g'(x)\bigr|\,dx = \int_{g(\mathbb{R})} \delta(u)\,f(u)\,du,
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution δ ∘ g so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude any point where g′ vanishes. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x_0, then
\delta\bigl(g(x)\bigr) = \frac{\delta(x - x_0)}{\bigl|g'(x_0)\bigr|}.
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
\delta\bigl(g(x)\bigr) = \sum_{i} \frac{\delta(x - x_i)}{\bigl|g'(x_i)\bigr|},
where the sum extends over all roots x_i of g(x), which are assumed to be simple. Thus, for example
\delta\bigl(x^2 - \alpha^2\bigr) = \frac{1}{2|\alpha|}\Bigl[\delta(x + \alpha) + \delta(x - \alpha)\Bigr].
In the integral form, the generalized scaling property may be written as
\int_{-\infty}^{\infty} f(x)\,\delta\bigl(g(x)\bigr)\,dx = \sum_{i} \frac{f(x_i)}{\bigl|g'(x_i)\bigr|}.
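A short worked instance of the integral form, taking g(x) = x² − α² with α ≠ 0, whose simple roots are ±α with |g′(±α)| = 2|α|:
\int_{-\infty}^{\infty} f(x)\,\delta\bigl(x^2 - \alpha^2\bigr)\,dx
  = \frac{f(\alpha)}{|g'(\alpha)|} + \frac{f(-\alpha)}{|g'(-\alpha)|}
  = \frac{f(\alpha) + f(-\alpha)}{2|\alpha|}.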
Indefinite integral
For a constant a and a "well-behaved" arbitrary real-valued function y(x),
\int y(x)\,\delta(x - a)\,dx = y(a)\,H(x - a) + c,
where H(x) is the Heaviside step function and c is an integration constant.
Properties in n dimensions
The delta distribution in an n-dimensional space satisfies the following scaling property instead,
\delta(\alpha x) = |\alpha|^{-n}\,\delta(x),
so that δ is a homogeneous distribution of degree −n.
Under any reflection or rotation ρ, the delta function is invariant,
\delta(\rho x) = \delta(x).
As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: R^n → R^n uniquely so that the following holds
\int_{\mathbb{R}^n} \delta\bigl(g(x)\bigr)\,f\bigl(g(x)\bigr)\,\bigl|\det g'(x)\bigr|\,dx = \int_{g(\mathbb{R}^n)} \delta(u)\,f(u)\,du
for all compactly supported functions f.
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g: R^n → R such that the gradient of g is nowhere zero, the following identity holds
\int_{\mathbb{R}^n} f(x)\,\delta\bigl(g(x)\bigr)\,dx = \int_{g^{-1}(0)} \frac{f(x)}{\bigl|\nabla g(x)\bigr|}\,d\sigma(x),
where the integral on the right is over g^{-1}(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if S is a smooth hypersurface of R^n, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:
\delta_S[g] = \int_S g(s)\,d\sigma(s),
where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in R^n with smooth boundary S, then δ_S is equal to the normal derivative of the indicator function of D in the distribution sense, taken with respect to the outward normal n. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
\delta(\mathbf{r} - \mathbf{r}_0) = \frac{1}{r^{2}\sin\theta}\,\delta(r - r_0)\,\delta(\theta - \theta_0)\,\delta(\varphi - \varphi_0).
Derivatives
The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by
\delta'[\varphi] = -\delta[\varphi'] = -\varphi'(0).
The first equality here is a kind of integration by parts, for if δ were a true function then
\int_{-\infty}^{\infty} \delta'(x)\,\varphi(x)\,dx = -\int_{-\infty}^{\infty} \delta(x)\,\varphi'(x)\,dx = -\varphi'(0).
By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by
\delta^{(k)}[\varphi] = (-1)^{k}\,\varphi^{(k)}(0).
In particular, δ is an infinitely differentiable distribution.
The first derivative of the delta function is the distributional limit of the difference quotients:
\delta'(x) = \lim_{h \to 0} \frac{\delta(x + h) - \delta(x)}{h}.
More properly, one has
\delta' = \lim_{h \to 0} \frac{1}{h}\bigl(\tau_h \delta - \delta\bigr),
where τ_h is the translation operator, defined on functions by τ_h φ(x) = φ(x + h), and on a distribution S by
(\tau_h S)[\varphi] = S[\tau_{-h}\varphi].
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
\delta'(-x) = -\delta'(x), \qquad x\,\delta'(x) = -\delta(x),
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the distributional derivative definition, Leibniz's theorem and the linearity of the inner product:
\langle x\,\delta', \varphi\rangle = \langle \delta', x\varphi\rangle = -\bigl(x\varphi\bigr)'(0) = -\bigl(\varphi(0) + 0\cdot\varphi'(0)\bigr) = -\varphi(0) = -\langle \delta, \varphi\rangle.
Furthermore, the convolution of δ′ with a compactly supported, smooth function f is
\delta' * f = \delta * f' = f',
which follows from the properties of the distributional derivative of a convolution.
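A minimal numerical sketch of the defining property δ′[φ] = −φ′(0), approximating δ by a narrow Gaussian η_ε and δ′ by its derivative; the widths ε and the test function are illustrative:

    import numpy as np

    def eta(x, eps):
        # Narrow Gaussian approximation to the delta function.
        return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    def eta_prime(x, eps):
        # Its derivative approximates the delta-prime distribution.
        return -x / eps**2 * eta(x, eps)

    x = np.linspace(-1.0, 1.0, 2000001)
    dx = x[1] - x[0]
    phi = np.sin(3 * x) + 2                  # smooth test function, phi'(0) = 3

    for eps in (0.1, 0.01, 0.001):
        val = np.sum(eta_prime(x, eps) * phi) * dx
        print(eps, val)                      # tends to -phi'(0) = -3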
Higher dimensions
More generally, on an open set U in the n-dimensional Euclidean space R^n, the Dirac delta distribution centered at a point a ∈ U is defined by
\delta_a[\varphi] = \varphi(a)
for all φ ∈ C_c^∞(U), the space of all smooth functions with compact support on U. If α = (α_1, …, α_n) is any multi-index with |α| = α_1 + ⋯ + α_n and ∂^α denotes the associated mixed partial derivative operator, then the α-th derivative ∂^α δ_a of δ_a is given by
\bigl(\partial^{\alpha} \delta_a\bigr)[\varphi] = (-1)^{|\alpha|}\,\partial^{\alpha}\varphi(a) \qquad \text{for all } \varphi \in C_c^{\infty}(U).
That is, the α-th derivative of δ_a is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients c_α such that
S = \sum_{|\alpha| \le m} c_\alpha\,\partial^{\alpha}\delta_a.
Representations
Nascent delta function
The delta function can be viewed as the limit of a sequence of functions
\delta(x) = \lim_{\varepsilon \to 0^{+}} \eta_\varepsilon(x),
where η_ε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
\lim_{\varepsilon \to 0^{+}} \int_{-\infty}^{\infty} \eta_\varepsilon(x)\,f(x)\,dx = f(0)
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
Approximations to the identity
Typically a nascent delta function η_ε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define
\eta_\varepsilon(x) = \varepsilon^{-1}\,\eta\!\left(\frac{x}{\varepsilon}\right).
In n dimensions, one uses instead the scaling
\eta_\varepsilon(x) = \varepsilon^{-n}\,\eta\!\left(\frac{x}{\varepsilon}\right).
Then a simple change of variables shows that η_ε also has integral 1. One may show that the weak limit above holds for all continuous compactly supported functions f, and so η_ε converges weakly to δ in the sense of measures.
The η_ε constructed in this way are known as an approximation to the identity. This terminology is because the space L^1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L^1(R) whenever f and g are in L^1(R). However, there is no identity in L^1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence η_ε does approximate such an identity in the sense that
\lim_{\varepsilon \to 0^{+}} \bigl\| f * \eta_\varepsilon - f \bigr\|_{L^1} = 0.
This limit holds in the sense of mean convergence (convergence in L^1). Further conditions on the η_ε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
If the initial is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing to be a suitably normalized bump function, for instance
( ensuring that the total integral is 1).
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking to be a hat function. With this choice of , one has
which are all continuous and compactly supported, although not smooth and so not a mollifier.
Probabilistic considerations
In the context of probability theory, it is natural to impose the additional condition that the initial in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking to be any probability distribution at all, and letting as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, has mean and has small higher moments. For instance, if is the uniform distribution on also known as the rectangular function, then:
Another example is with the Wigner semicircle distribution
This is continuous and compactly supported, but not a mollifier because it is not smooth.
Semigroups
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of with must satisfy
for all . Convolution semigroups in that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
in which the limit is as usual understood in the weak sense. Setting gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
The heat kernel
The heat kernel, defined by
$\eta_\varepsilon(x) = \frac{1}{\sqrt{2\pi\varepsilon}}\, e^{-x^2/(2\varepsilon)},$
represents the temperature in an infinite wire at time $t = \varepsilon$, if a unit of heat energy is stored at the origin of the wire at time $t = 0$. This semigroup evolves according to the one-dimensional heat equation:
$\frac{\partial u}{\partial t} = \frac{1}{2}\frac{\partial^2 u}{\partial x^2}.$
In probability theory, $\eta_\varepsilon(x)$ is a normal distribution of variance $\varepsilon$ and mean $0$. It represents the probability density at time $t = \varepsilon$ of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
In higher-dimensional Euclidean space $\mathbf{R}^n$, the heat kernel is
$\eta_\varepsilon(x) = \frac{1}{(2\pi\varepsilon)^{n/2}}\, e^{-|x|^2/(2\varepsilon)},$
and has the same physical interpretation. It also represents a nascent delta function in the sense that $\eta_\varepsilon \to \delta$ in the distribution sense as $\varepsilon \to 0^+$.
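A small numerical sketch (an editor's illustration; the parameters s and t and the grid resolution are arbitrary) of the semigroup property for the heat kernel, namely that convolving the kernels with parameters s and t reproduces the kernel with parameter s + t:
import numpy as np

def heat_kernel(x, t):
    # one-dimensional heat kernel: normal density with mean 0 and variance t
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

x, dx = np.linspace(-20, 20, 8001, retstep=True)
s, t = 0.5, 1.5

# discrete convolution (scaled by dx) approximates the convolution integral
conv = np.convolve(heat_kernel(x, s), heat_kernel(x, t), mode="same") * dx
exact = heat_kernel(x, s + t)
print("max |(eta_s * eta_t) - eta_(s+t)| =", np.max(np.abs(conv - exact)))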
The Poisson kernel
The Poisson kernel
$\eta_\varepsilon(x) = \frac{1}{\pi}\frac{\varepsilon}{\varepsilon^2 + x^2}$
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
$\frac{\partial u}{\partial t} = -\left(-\frac{\partial^2}{\partial x^2}\right)^{1/2} u(t, x),$
where the operator is rigorously defined as the Fourier multiplier
Oscillatory integrals
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in :
The solution represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
and the Bessel function
Plane wave decomposition
One approach to the study of a linear partial differential equation
where is a differential operator on , is to seek first a fundamental solution, which is a solution of the equation
When is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
where is a plane wave function, meaning that it has the form
for some vector . Such an equation can be resolved (if the coefficients of are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose so that is an even integer, and for a real number , put
Then is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure of for in the unit sphere :
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function ,
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of from its integrals over hyperplanes. For instance, if is odd and , then the integral on the right hand side is
where is the Radon transform of :
An alternative equivalent expression of the plane wave decomposition is:
Fourier transform
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
$\hat{\delta}(\xi) = \int_{-\infty}^{\infty} e^{-2\pi i x \xi}\, \delta(x)\, dx = 1.$
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing of tempered distributions with Schwartz functions. Thus is defined as the unique tempered distribution satisfying
for all Schwartz functions . And indeed it follows from this that
As a result of this identity, the convolution of the delta function with any other tempered distribution is simply :
That is to say that $\delta$ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for $\delta$, and once it is known, it characterizes the system completely.
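The discrete-time analogue of this identity property is easy to check directly; the sketch below (illustrative only, with a made-up 3-tap moving-average filter standing in for a generic LTI system) shows that convolving a signal with a unit impulse returns the signal, and that feeding a unit impulse to a filter yields its impulse response:
import numpy as np

signal = np.array([1.0, -2.0, 3.0, 0.5])
impulse = np.array([1.0])                 # discrete unit impulse delta[n]
print(np.convolve(signal, impulse))       # returns the original signal unchanged

def moving_average(x):
    # a hypothetical 3-tap LTI system standing in for a generic filter
    return np.convolve(x, np.array([1/3, 1/3, 1/3]))

unit_impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(moving_average(unit_impulse))       # the nonzero samples are the impulse response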
The inverse Fourier transform of the tempered distribution is the delta function. Formally, this is expressed as
and more rigorously, it follows since
for all Schwartz functions .
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on . Formally, one has
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
is
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
$\int_0^{\infty} \delta(t - a)\, e^{-st}\, dt = e^{-sa}.$
Fourier kernels
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The -th partial sum of the Fourier series of a function of period is defined by convolution (on the interval ) with the Dirichlet kernel:
Thus,
where
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval $[-\pi, \pi]$ tends to a multiple of the delta function as $N \to \infty$. This is interpreted in the distribution sense, that
for every compactly supported function . Thus, formally one has
on the interval .
Despite this, the result does not hold for all compactly supported functions: that is does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
The Fejér kernels tend to the delta function in the stronger sense that
for every compactly supported function . The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
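The following sketch (an editor's illustration; the smooth periodic test function exp(cos x) and the grid resolution are arbitrary choices) compares the pairings of the Dirichlet and Fejér kernels with a test function; once the 2π normalisation is divided out, both approach the value of the function at 0:
import numpy as np

x, dx = np.linspace(-np.pi, np.pi, 20000, endpoint=False, retstep=True)
f = np.exp(np.cos(x))                     # smooth 2*pi-periodic test function, f(0) = e

def dirichlet(N):
    # D_N(x) = sum_{n=-N..N} exp(i n x) = 1 + 2 * sum_{n=1..N} cos(n x)
    return 1 + 2 * sum(np.cos(n * x) for n in range(1, N + 1))

def fejer(N):
    # F_N(x) = average of D_0, ..., D_N (Cesaro mean of the Dirichlet kernels)
    return sum(dirichlet(k) for k in range(N + 1)) / (N + 1)

for N in (5, 20, 80):
    d = np.sum(dirichlet(N) * f) * dx / (2 * np.pi)   # dividing by 2*pi removes the multiple of delta
    c = np.sum(fejer(N) * f) * dx / (2 * np.pi)
    print(f"N = {N:3d}: Dirichlet pairing = {d:.6f}, Fejer pairing = {c:.6f}")
print("both approach f(0) =", np.exp(1.0))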
Hilbert space theory
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in , and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of and to give a stronger topology on which the delta function defines a bounded linear functional.
Sobolev spaces
The Sobolev embedding theorem for Sobolev spaces on the real line implies that any square-integrable function such that
is automatically continuous, and satisfies in particular
Thus is a bounded linear functional on the Sobolev space . Equivalently is an element of the continuous dual space of . More generally, in dimensions, one has provided .
Spaces of holomorphic functions
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if is a domain in the complex plane with smooth boundary, then
for all holomorphic functions in that are continuous on the closure of . As a result, the delta function is represented in this class of holomorphic functions by the Cauchy integral:
Moreover, let be the Hardy space consisting of the closure in of all holomorphic functions in continuous up to the boundary of . Then functions in uniquely extend to holomorphic functions in , and the Cauchy integral formula continues to hold. In particular for , the delta function is a continuous linear functional on . This is a special case of the situation in several complex variables in which, for smooth domains , the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space of square-integrable holomorphic functions in an open set . This is a closed subspace of , and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in at a point of is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel , the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
Resolutions of the identity
Given a complete orthonormal basis set of functions in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector can be expressed as
The coefficients {αn} are found as
which may be represented by the notation:
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of takes the dyadic form:
Letting denote the identity operator on the Hilbert space, the expression
is called a resolution of the identity. When the Hilbert space is the space of square-integrable functions on a domain , the quantity:
is an integral operator, and the expression for can be rewritten
The right-hand side converges to in the sense. It need not hold in a pointwise sense, even when is a continuous function. Nevertheless, it is common to abuse notation and write
resulting in the representation of the delta function:
With a suitable rigged Hilbert space where contains all compactly supported smooth functions, this summation may converge in , depending on the properties of the basis . In most cases of practical interest, the orthonormal basis comes from an integral or differential operator, in which case the series converges in the distribution sense.
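A truncated version of such a resolution of the identity can be tried numerically; the sketch below (an editor's illustration using the complex exponential basis on (0, 2π) and an arbitrary continuous test function) shows the L² reconstruction error shrinking as more basis functions are included:
import numpy as np

x, dx = np.linspace(0, 2 * np.pi, 4096, endpoint=False, retstep=True)
f = np.abs(x - np.pi)                     # a continuous test function on (0, 2*pi)

def reconstruct(f, N):
    # sum_{n=-N..N} <f, phi_n> phi_n  with  phi_n(x) = exp(i n x) / sqrt(2*pi)
    recon = np.zeros_like(x, dtype=complex)
    for n in range(-N, N + 1):
        phi = np.exp(1j * n * x) / np.sqrt(2 * np.pi)
        alpha = np.sum(f * np.conj(phi)) * dx        # coefficient <f, phi_n>
        recon += alpha * phi
    return recon.real

for N in (2, 8, 32):
    err = np.sqrt(np.sum((reconstruct(f, N) - f) ** 2) * dx)   # L2 error
    print(f"N = {N:2d}: L2 error = {err:.5f}")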
Infinitesimal delta functions
Cauchy used an infinitesimal $\alpha$ to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function satisfying $\int F(x)\,\delta_\alpha(x)\,dx = F(0)$, in a number of articles in 1827. Cauchy defined an infinitesimal in his Cours d'Analyse (1821) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to treat infinitesimals rigorously, and there is a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function $F$ one has $\int F(x)\,\delta(x)\,dx = F(0)$, as anticipated by Fourier and Cauchy.
Dirac comb
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,
$\operatorname{Ш}(x) = \sum_{n=-\infty}^{\infty} \delta(x - n),$
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if is any Schwartz function, then the periodization of is given by the convolution
In particular,
is precisely the Poisson summation formula.
More generally, this formula remains true if $f$ is a tempered distribution of rapid descent or, equivalently, if $\hat{f}$ is a slowly growing, ordinary function within the space of tempered distributions.
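The Poisson summation formula can be checked numerically for a rapidly decaying function; the sketch below (illustrative, with an arbitrary Gaussian width a) compares the two sides for f(x) = exp(−πax²), whose Fourier transform in the convention used here is a^(−1/2) exp(−πξ²/a):
import numpy as np

# check sum_n f(n) = sum_k fhat(k) for f(x) = exp(-pi * a * x**2), whose Fourier
# transform (convention fhat(xi) = integral f(x) exp(-2*pi*i*x*xi) dx) is
# fhat(xi) = a**-0.5 * exp(-pi * xi**2 / a)
a = 2.0
n = np.arange(-50, 51)
lhs = np.sum(np.exp(-np.pi * a * n**2))
rhs = np.sum(np.exp(-np.pi * n**2 / a)) / np.sqrt(a)
print(lhs, rhs)                           # the two sums agree to machine precision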
Sokhotski–Plemelj theorem
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution , the Cauchy principal value of the function , defined by
Sokhotsky's formula states that
$\lim_{\varepsilon \to 0^+} \frac{1}{x \pm i\varepsilon} = \operatorname{p.v.}\frac{1}{x} \mp i\pi\delta(x).$
Here the limit is understood in the distribution sense, that for all compactly supported smooth functions ,
Relationship to the Kronecker delta
The Kronecker delta is the quantity defined by
$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$
for all integers $i$, $j$. This function then satisfies the following analog of the sifting property: if $(a_i)$ (for $i$ in the set of all integers) is any doubly infinite sequence, then
$\sum_{i=-\infty}^{\infty} a_i\, \delta_{ik} = a_k.$
Similarly, for any real or complex valued continuous function $f$ on $\mathbf{R}$, the Dirac delta satisfies the sifting property
$\int_{-\infty}^{\infty} f(x)\, \delta(x - x_0)\, dx = f(x_0).$
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
Applications
Probability theory
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function of a discrete distribution consisting of points $x_1, \dots, x_n$, with corresponding probabilities $p_1, \dots, p_n$, can be written as
$f(x) = \sum_{i=1}^{n} p_i\, \delta(x - x_i).$
As another example, consider a distribution in which 6/10 of the time returns a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
$f(x) = 0.6\, \frac{1}{\sqrt{2\pi}} e^{-x^2/2} + 0.4\, \delta(x - 3.5).$
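As an illustrative sketch (not from the original article; the sample size and random seed are arbitrary), the mixture just described can be sampled directly, and the delta-function component shows up as a jump of size about 0.4 in the empirical distribution function at 3.5:
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# with probability 0.6 draw from the standard normal, with probability 0.4
# return exactly 3.5 (the delta-function component of the density)
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

# the atom shows up as a jump of size about 0.4 in the empirical CDF at 3.5
eps = 1e-9
jump = np.mean(samples <= 3.5 + eps) - np.mean(samples <= 3.5 - eps)
print("estimated mass at 3.5:", jump)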
The delta function is also used to represent the resulting probability density function of a random variable $X$ that is transformed by a continuously differentiable function. If $Y = g(X)$ with $g$ continuously differentiable, then the density of $Y$ can be written as
$f_Y(y) = \int_{-\infty}^{\infty} f_X(x)\, \delta(y - g(x))\, dx.$
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process is given by
and represents the amount of time that the process spends at the point in the range of the process. More precisely, in one dimension this integral can be written
where is the indicator function of the interval
Quantum mechanics
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set of wave functions is orthonormal if
where is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function can be expressed as a linear combination of the with complex coefficients:
where . Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of the Hamiltonian (of a bound system) in quantum mechanics that measures the energy levels, which are called the eigenvalues. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, . The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points of the real line, given by
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by .
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator . In that case, there is a set of real numbers (the spectrum) and a collection of distributions with such that
That is, are the generalized eigenvectors of . If they form an "orthonormal basis" in the distribution sense, that is:
then for any test function ,
where . That is, there is a resolution of the identity
where the operator-valued integral is again understood in the weak sense. If the spectrum of has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
Structural mechanics
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse $F\delta(t)$ at time $t = 0$ can be written
$m \frac{d^2 \xi}{dt^2} + k \xi = F \delta(t),$
where $m$ is the mass, $\xi$ is the deflection, and $k$ is the spring constant.
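Since the impulse F δ(t) applied to a mass at rest is equivalent to the initial conditions ξ(0) = 0, ξ′(0) = F/m, the response can be checked numerically; the sketch below (illustrative, with arbitrary parameter values and a simple semi-implicit Euler integrator) compares the simulated motion with the closed-form impulse response (F/(mω)) sin(ωt), ω = √(k/m):
import numpy as np

# parameters are arbitrary; the impulse F*delta(t) is equivalent to starting
# the mass at rest with velocity F/m
m, k, F = 2.0, 8.0, 1.0
w = np.sqrt(k / m)

dt, steps = 1e-4, 50_000
xi, v = 0.0, F / m                        # state just after the impulse
history = np.empty(steps)
for i in range(steps):
    v += -(k / m) * xi * dt               # semi-implicit Euler step for xi'' = -(k/m) xi
    xi += v * dt
    history[i] = xi

t = dt * np.arange(1, steps + 1)
exact = F / (m * w) * np.sin(w * t)       # closed-form impulse response
print("max deviation from closed form:", np.max(np.abs(history - exact)))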
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
$EI \frac{d^4 w}{dx^4} = q(x),$
where $EI$ is the bending stiffness of the beam, $w$ is the deflection, $x$ is the spatial coordinate, and $q(x)$ is the load distribution. If a beam is loaded by a point force $F$ at $x = x_0$, the load distribution is written
$q(x) = F\, \delta(x - x_0).$
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces at a distance apart. They then produce a moment acting on the beam. Now, let the distance approach the limit zero, while is kept constant. The load distribution, assuming a clockwise moment acting at , is written
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
| Mathematics | Specific functions | null |
37035 | https://en.wikipedia.org/wiki/Conway%27s%20Game%20of%20Life | Conway's Game of Life | The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
Rules
The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed, live or dead; births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick. Each generation is a pure function of the preceding one. The rules continue to be applied repeatedly to create further generations.
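A minimal sketch of one generation of these rules in Python (an editor's illustration, not a reference implementation; it treats cells outside a finite grid as dead, one of the boundary conventions discussed under Algorithms below):
def life_step(grid):
    # one generation on a finite list-of-lists grid; cells outside are treated as dead
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(grid[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0)
                       and 0 <= r + dr < rows and 0 <= c + dc < cols)
            # birth on exactly three neighbours; survival on two or three
            new[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return new

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))                 # the period-2 blinker flips to a vertical column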
Origins
Stanisław Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant. Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948. Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication. Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbours' behaviours. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighbourhood (only those cells that touch are neighbours; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor.
Motivated by questions in mathematical logic and in part by work on simulation games by Ulam, among others, John Conway began doing experiments in 1968 with a variety of different two-dimensional cellular automaton rules. Conway's initial goal was to define an interesting and unpredictable cellular automaton. According to Martin Gardner, Conway experimented with different rules, aiming for rules that would allow for patterns to "apparently" grow without limit, while keeping it difficult to prove that any given pattern would do so. Moreover, some "simple initial patterns" should "grow and change for a considerable period of time" before settling into a static configuration or a repeating loop. Conway later wrote that the basic motivation for Life was to create a "universal" cellular automaton.
The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner's "Mathematical Games" column, which was based on personal conversations with Conway. Theoretically, the Game of Life has the power of a universal Turing machine: anything that can be computed algorithmically can be computed within the Game of Life. Gardner wrote, "Because of Life's analogies with the rise, fall, and alterations of a society of living organisms, it belongs to a growing class of what are called 'simulation games' (games that resemble real-life processes)."
Since its publication, the Game of Life has attracted much interest because of the surprising ways in which the patterns can evolve. It provides an example of emergence and self-organization. A version of Life that incorporates random fluctuations has been used in physics to study phase transitions and nonequilibrium dynamics. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, philosopher Daniel Dennett has used the analogy of the Game of Life "universe" extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws which might govern our universe.
The popularity of the Game of Life was helped by its coming into being at the same time as increasingly inexpensive computer access. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, the Game of Life was simply a programming challenge: a fun way to use otherwise wasted CPU cycles. For some, however, the Game of Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Game of Life board.
Examples of patterns
Many different types of patterns occur in the Game of Life, which are classified according to their behaviour. Common pattern types include: still lifes, which do not change from one generation to the next; oscillators, which return to their initial state after a finite number of generations; and spaceships, which translate themselves across the grid.
The earliest interesting patterns in the Game of Life were discovered without the use of computers. The simplest still lifes and oscillators were discovered while tracking the fates of various small starting configurations using graph paper, blackboards, and physical game boards, such as those used in Go. During this early research, Conway discovered that the R-pentomino failed to stabilize in a small number of generations. In fact, it takes 1103 generations to stabilize, by which time it has a population of 116 and has generated six escaping gliders; these were the first spaceships ever discovered.
Frequently occurring examples (in that they emerge frequently from a random starting configuration of cells) of the three aforementioned pattern types are shown below, with live cells shown in black and dead cells in white. Period refers to the number of ticks a pattern must iterate through before returning to its initial configuration.
The pulsar is the most common period-3 oscillator. The great majority of naturally occurring oscillators have a period of 2, like the blinker and the toad, but oscillators of all periods are known to exist, and oscillators of periods 4, 8, 14, 15, 30, and a few others have been seen to arise from random initial conditions. Patterns which evolve for long periods before stabilizing are called Methuselahs, the first-discovered of which was the R-pentomino. Diehard is a pattern that disappears after 130 generations. Starting patterns of eight or more cells can be made to die after an arbitrarily long time. Acorn takes 5,206 generations to generate 633 cells, including 13 escaped gliders.
Conway originally conjectured that no pattern can grow indefinitely—i.e. that for any initial configuration with a finite number of living cells, the population cannot grow beyond some finite upper limit. In the game's original appearance in "Mathematical Games", Conway offered a prize of fifty dollars () to the first person who could prove or disprove the conjecture before the end of 1970. The prize was won in November by a team from the Massachusetts Institute of Technology, led by Bill Gosper; the "Gosper glider gun" produces its first glider on the 15th generation, and another glider every 30th generation from then on. For many years, this glider gun was the smallest one known. In 2015, a gun called the "Simkin glider gun", which releases a glider every 120th generation, was discovered that has fewer live cells but which is spread out across a larger bounding box at its extremities.
Smaller patterns were later found that also exhibit infinite growth. All three of the patterns shown below grow indefinitely. The first two create a single block-laying switch engine: a configuration that leaves behind two-by-two still life blocks as it translates itself across the game's universe. The third configuration creates two such patterns. The first has only ten live cells, which has been proven to be minimal. The second fits in a five-by-five square, and the third is only one cell high.
Later discoveries included other guns, which are stationary, and which produce gliders or other spaceships; puffer trains, which move along leaving behind a trail of debris; and rakes, which move and emit spaceships. Gosper also constructed the first pattern with an asymptotically optimal quadratic growth rate, called a breeder or lobster, which worked by leaving behind a trail of guns.
It is possible for gliders to interact with other objects in interesting ways. For example, if two gliders are shot at a block in a specific position, the block will move closer to the source of the gliders. If three gliders are shot in just the right way, the block will move farther away. This sliding block memory can be used to simulate a counter. It is possible to construct logic gates such as AND, OR, and NOT using gliders. It is possible to build a pattern that acts like a finite-state machine connected to two counters. This has the same computational power as a universal Turing machine, so the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints; it is Turing complete. In fact, several different programmable computer architectures have been implemented in the Game of Life, including a pattern that simulates Tetris.
Oblique spaceships
Until the 2010s, all known spaceships could only move orthogonally or diagonally. Spaceships which move neither orthogonally nor diagonally are commonly referred to as oblique spaceships. On May 18, 2010, Andrew J. Wade announced the first oblique spaceship, dubbed "Gemini", which creates a copy of itself one step further along a (5,1) slope while destroying its parent. This pattern replicates in 34 million generations, and uses an instruction tape made of gliders oscillating between two stable configurations made of Chapman–Greene construction arms. These, in turn, create new copies of the pattern, and destroy the previous copy. In December 2015, diagonal versions of the Gemini were built.
A more specific case is a knightship, a spaceship that moves two squares left for every one square it moves down (like a knight in chess), whose existence had been predicted by Elwyn Berlekamp since 1982. The first elementary knightship, Sir Robin, was discovered in 2018 by Adam P. Goucher. This is the first new spaceship movement pattern for an elementary spaceship found in forty-eight years. "Elementary" means that it cannot be decomposed into smaller interacting patterns such as gliders and still lifes.
Self-replication
A pattern can contain a collection of guns that fire gliders in such a way as to construct new objects, including copies of the original pattern. A universal constructor can be built which contains a Turing complete computer, and which can build many types of complex objects, including more copies of itself. On November 23, 2013, Dave Greene built the first replicator in the Game of Life that creates a complete copy of itself, including the instruction tape. In October 2018, Adam P. Goucher finished his construction of the 0E0P metacell, a metacell capable of self-replication. This differed from previous metacells, such as the OTCA metapixel by Brice Due, which only worked with already constructed copies near them. The 0E0P metacell works by using construction arms to create copies that simulate the programmed rule. The actual simulation of the Game of Life or other Moore neighbourhood rules is done by simulating an equivalent rule using the von Neumann neighbourhood with more states. The name 0E0P is short for "Zero Encoded by Zero Population", which indicates that instead of a metacell being in an "off" state simulating empty space, the 0E0P metacell removes itself when the cell enters that state, leaving a blank space.
Undecidability
Many patterns in the Game of Life eventually become a combination of still lifes, oscillators, and spaceships; other patterns may be called chaotic. A pattern may stay chaotic for a very long time until it eventually settles to such a combination.
The Game of Life is undecidable, which means that given an initial pattern and a later pattern, no algorithm exists that can tell whether the later pattern is ever going to appear. Given that the Game of Life is Turing-complete, this is a corollary of the halting problem: the problem of determining whether a given program will finish running or continue to run forever from an initial input.
Iteration
From most random initial patterns of living cells on the grid, observers will find the population constantly changing as the generations tick by. The patterns that emerge from the simple rules may be considered a form of mathematical beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens, the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases, the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations. Most initial patterns eventually burn out, producing either stable figures or patterns that oscillate forever between two or more states; many also produce one or more gliders or spaceships that travel indefinitely away from the initial location. Because of the nearest-neighbour based rules, no information can travel through the grid at a greater rate than one cell per unit time, so this velocity is said to be the cellular automaton speed of light and denoted c.
Algorithms
Early patterns with unknown futures, such as the R-pentomino, led computer programmers to write programs to track the evolution of patterns in the Game of Life. Most of the early algorithms were similar: they represented the patterns as two-dimensional arrays in computer memory. Typically, two arrays are used: one to hold the current generation, and one to calculate its successor. Often 0 and 1 represent dead and live cells, respectively. A nested for loop considers each element of the current array in turn, counting the live neighbours of each cell to decide whether the corresponding element of the successor array should be 0 or 1. The successor array is displayed. For the next iteration, the arrays may swap roles so that the successor array in the last iteration becomes the current array in the next iteration, or one may copy the values of the second array into the first array then update the second array from the first array again.
A variety of minor enhancements to this basic scheme are possible, and there are many ways to save unnecessary computation. A cell that did not change at the last time step, and none of whose neighbours changed, is guaranteed not to change at the current time step as well, so a program that keeps track of which areas are active can save time by not updating inactive zones.
To avoid decisions and branches in the counting loop, the rules can be rearranged from an egocentric approach of the inner field regarding its neighbours to a scientific observer's viewpoint: if the sum of all nine fields in a given neighbourhood is three, the inner field state for the next generation will be life; if the all-field sum is four, the inner field retains its current state; and every other sum sets the inner field to death.
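A sketch of this reformulation (illustrative only; it uses a NumPy array with toroidal, wrap-around edges, one of the boundary strategies mentioned below, and a glider as the test pattern):
import numpy as np

def life_step_sum9(grid):
    # branch-free update from the nine-cell sum: 3 -> alive, 4 -> keep state, else dead;
    # np.roll gives toroidal (wrap-around) edges
    total = sum(np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return np.where(total == 3, 1, np.where(total == 4, grid, 0))

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
state = glider
for _ in range(4):                        # a glider advances one cell diagonally every 4 generations
    state = life_step_sum9(state)
print(np.array_equal(state, np.roll(np.roll(glider, 1, axis=0), 1, axis=1)))   # True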
To save memory, the storage can be reduced to one array plus two line buffers. One line buffer is used to calculate the successor state for a line, then the second line buffer is used to calculate the successor state for the next line. The first buffer is then written to its line and freed to hold the successor state for the third line. If a toroidal array is used, a third buffer is needed so that the original state of the first line in the array can be saved until the last line is computed.
In principle, the Game of Life field is infinite, but computers have finite memory. This leads to problems when the active area encroaches on the border of the array. Programmers have used several strategies to address these problems. The simplest strategy is to assume that every cell outside the array is dead. This is easy to program but leads to inaccurate results when the active area crosses the boundary. A more sophisticated trick is to consider the left and right edges of the field to be stitched together, and the top and bottom edges also, yielding a toroidal array. The result is that active areas that move across a field edge reappear at the opposite edge. Inaccuracy can still result if the pattern grows too large, but there are no pathological edge effects. Techniques of dynamic storage allocation may also be used, creating ever-larger arrays to hold growing patterns. The Game of Life on a finite field is sometimes explicitly studied; some implementations, such as Golly, support a choice of the standard infinite field, a field infinite only in one dimension, or a finite field, with a choice of topologies such as a cylinder, a torus, or a Möbius strip.
Alternatively, programmers may abandon the notion of representing the Game of Life field with a two-dimensional array, and use a different data structure, such as a vector of coordinate pairs representing live cells. This allows the pattern to move about the field unhindered, as long as the population does not exceed the size of the live-coordinate array. The drawback is that counting live neighbours becomes a hash-table lookup or search operation, slowing down simulation speed. With more sophisticated data structures this problem can also be largely solved.
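A sketch of this representation (illustrative; the glider test at the end simply confirms that the pattern drifts one cell diagonally every four generations):
from collections import Counter

def life_step_sparse(live):
    # one generation with live cells stored as a set of (row, col) pairs,
    # so a pattern can wander over an unbounded grid
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}
state = glider
for _ in range(4):
    state = life_step_sparse(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})    # True: moved one cell diagonally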
For exploring large patterns at great time depths, sophisticated algorithms such as Hashlife may be useful. There is also a method for implementation of the Game of Life and other cellular automata using arbitrary asynchronous updates while still exactly emulating the behaviour of the synchronous game. Source code examples that implement the basic Game of Life scenario in various programming languages, including C, C++, Java and Python can be found at Rosetta Code.
Variations
Since the Game of Life's inception, new, similar cellular automata have been developed. The standard Game of Life is symbolized in rule-string notation as B3/S23. A cell is born if it has exactly three neighbours, survives if it has two or three living neighbours, and dies otherwise. The first number, or list of numbers, is what is required for a dead cell to be born. The second set is the requirement for a live cell to survive to the next generation. Hence B6/S16 means "a cell is born if there are six neighbours, and lives on if there are either one or six neighbours". Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata. Another common automaton, HighLife, is described by the rule B36/S23, because having six neighbours, in addition to the original game's B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators.
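A small sketch of how such a rulestring might be interpreted in code (an editor's illustration; the function names parse_rulestring and next_state are invented for this example):
def parse_rulestring(rule):
    # split a rulestring such as "B3/S23" into birth and survival neighbour counts
    birth, survival = rule.upper().split("/")
    return {int(d) for d in birth[1:]}, {int(d) for d in survival[1:]}

def next_state(alive, neighbours, rule="B3/S23"):
    # apply the rule to one cell, given its state and its count of live neighbours
    birth, survival = parse_rulestring(rule)
    return neighbours in (survival if alive else birth)

print(parse_rulestring("B36/S23"))        # ({3, 6}, {2, 3}), the HighLife rule
print(next_state(False, 6, "B36/S23"))    # True: birth on six neighbours in HighLife
print(next_state(False, 6, "B3/S23"))     # False in the standard Game of Life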
Additional Life-like cellular automata exist. The vast majority of these 2^18 (262,144) different rules produce universes that are either too chaotic or too desolate to be of interest, but a large subset do display interesting behaviour. A further generalization produces the isotropic rulespace, with 2^102 possible cellular automaton rules (the Game of Life again being one of them). These are rules that use the same square grid as the Life-like rules and the same eight-cell neighbourhood, and are likewise invariant under rotation and reflection. However, in isotropic rules, the positions of neighbour cells relative to each other may be taken into account in determining a cell's future state, not just the total number of those neighbours.
Some variations on the Game of Life modify the geometry of the universe as well as the rules. The above variations can be thought of as a two-dimensional square, because the world is two-dimensional and laid out in a square grid. One-dimensional square variations, known as elementary cellular automata, and three-dimensional square variations have been developed, as have two-dimensional hexagonal and triangular variations. A variant using aperiodic tiling grids has also been made.
Conway's rules may also be generalized such that instead of two states, live and dead, there are three or more. State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek's Cellebration's multi-coloured Rules Table and Weighted Life rule families each include sample rules equivalent to the Game of Life.
Patterns relating to fractals and fractal systems may also be observed in certain variations. For example, the automaton B1/S12 generates four very close approximations to the Sierpinski triangle when applied to a single live cell. The Sierpinski triangle can also be observed in the Game of Life by examining the long-term growth of an infinitely long single-cell-thick line of live cells, as well as in Highlife, Seeds (B2/S), and Wolfram's Rule 90.
Immigration is a variation that is very similar to the Game of Life, except that there are two on states, often expressed as two different colours. Whenever a new cell is born, it takes on the on state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions between spaceships and other objects within the game. Another similar variation, called QuadLife, involves four different on states. When a new cell is born from three different on neighbours, it takes the fourth value, and otherwise, like Immigration, it takes the majority value. Except for the variation among on cells, both of these variations act identically to the Game of Life.
Music
Various musical composition techniques use the Game of Life, especially in MIDI sequencing. A variety of programs exist for creating sound from patterns generated in the Game of Life.
Notable programs
Computers have been used to follow and simulate the Game of Life since it was first publicized. When John Conway was first investigating how various starting configurations developed, he tracked them by hand using a go board with its black and white stones. This was tedious and prone to errors. The first interactive Game of Life program was written in an early version of ALGOL 68C for the PDP-7 by M. J. T. Guy and S. R. Bourne. The results were published in the October 1970 issue of Scientific American, along with the statement: "Without its help, some discoveries about the game would have been difficult to make."
A color version of the Game of Life was written by Ed Hall in 1976 for Cromemco microcomputers, and a display from that program filled the cover of the June 1976 issue of Byte. The advent of microcomputer-based color graphics from Cromemco has been credited with a revival of interest in the game.
Two early implementations of the Game of Life on home computers were by Malcolm Banthorpe written in BBC BASIC. The first was in the January 1984 issue of Acorn User magazine, and Banthorpe followed this with a three-dimensional version in the May 1984 issue. Susan Stepney, Professor of Computer Science at the University of York, followed this up in 1988 with Life on the Line, a program that generated one-dimensional cellular automata.
There are now thousands of Game of Life programs online, so a full list will not be provided here. The following is a small selection of programs with some special claim to notability, such as popularity or unusual features. Most of these programs incorporate a graphical user interface for pattern editing and simulation, the capability for simulating multiple rules including the Game of Life, and a large library of interesting patterns in the Game of Life and other cellular automaton rules.
Golly is a cross-platform (Windows, Macintosh, Linux, iOS, and Android) open-source simulation system for the Game of Life and other cellular automata (including all Life-like cellular automata, the Generations family of cellular automata from Mirek's Cellebration, and John von Neumann's 29-state cellular automaton) by Andrew Trevorrow and Tomas Rokicki. It includes the Hashlife algorithm for extremely fast generation, and Lua or Python scriptability for both editing and simulation.
Mirek's Cellebration is a freeware one- and two-dimensional cellular automata viewer, explorer, and editor for Windows. It includes powerful facilities for simulating and viewing a wide variety of cellular automaton rules, including the Game of Life, and a scriptable editor.
Xlife is a cellular-automaton laboratory by Jon Bennett. The standard UNIX X11 Game of Life simulation application for a long time, it has also been ported to Windows. It can handle cellular automaton rules with the same neighbourhood as the Game of Life, and up to eight possible states per cell.
Dr. Blob's Organism is a Shoot 'em up based on Conway's Life. In the game, Life continually generates on a group of cells within a "petri dish". The patterns formed are smoothed and rounded to look like a growing amoeba spewing smaller ones (actually gliders). Special "probes" zap the "blob" to keep it from overflowing the dish while destroying its nucleus.
Google implemented an easter egg of the Game of Life in 2012. Users who search for the term are shown an implementation of the game in the search results page.
The visual novel Anonymous;Code includes a basic implementation of the Game of Life in it, which is connected to the plot of the novel. Near the end of Anonymous;Code, a certain pattern that appears throughout the game as a tattoo on the heroine Momo Aizaki has to be entered into the Game of Life to complete the game (Kok's galaxy, the same pattern used as the logo for the open-source Game of Life program Golly).
| Mathematics | Automata theory | null |
37085 | https://en.wikipedia.org/wiki/Software%20bug | Software bug | A software bug is a design defect (bug) in computer software. A computer program with many or serious bugs may be described as buggy.
The effects of a software bug range from minor (such as a misspelled word in the user interface) to severe (such as frequent crashing).
In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product".
Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations.
History
Terminology
Mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect: the transformation of a mistake committed by an analyst in the early stages of the software development lifecycle into a defect in the final stage of the cycle.
Different stages of a mistake in the development cycle may be described as mistake,
anomaly,
fault,
failure,
error,
exception,
crash,
glitch,
bug,
defect,
incident,
or side effect.
Examples
Software bugs have been linked to disasters.
Software bugs in the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s.
In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch due to a bug in the on-board guidance computer program.
In 1994, an RAF Chinook helicopter crashed, killing 29. The crash was initially blamed on pilot error, but was later thought to have been caused by a software bug in the engine-control computer.
Buggy software caused the early 21st century British Post Office scandal.
Controversy
Sometimes the use of bug to describe the behavior of software is contentious due to perception. Some suggest that the term should be abandoned and replaced with defect or error, contending that bug implies the defect arose on its own, whereas defect more clearly connotes something caused by a human.
Some contend that bug may be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files,
Apple called the behavior a bug. However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."
Prevention
Preventing bugs as early as possible in the software development process is a target of investment and innovation.
Language support
Newer programming languages tend to be designed to prevent common bugs based on vulnerabilities of existing languages. Lessons learned from older languages such as BASIC and C are used to inform the design of later languages such as C# and Rust.
Languages may include features such as a static type system, restricted namespaces and modular programming. For example, for a typed, compiled language (like C):
float num = "3";
is syntactically correct, but fails type checking since the right side, a string, cannot be assigned to a float variable. Compilation fails, forcing this defect to be fixed before development progress can resume. With an interpreted language, a failure would not occur until later, at runtime.
Some languages exclude features that easily lead to bugs, at the expense of slower performance, the principle being that it is usually better to write simpler, slower correct code than complicated, buggy code. For example, Java does not support pointer arithmetic, which is generally fast but is considered dangerous, making it relatively easy to cause a major bug.
Some languages include features that add runtime overhead in order to prevent some bugs. For example, many languages include runtime bounds checking and a way to handle out-of-bounds conditions instead of crashing.
A compiled language allows for detecting some typos (such as a misspelled identifier) before runtime which is earlier in the software development process than for an interpreted language.
Techniques
Programming techniques such as programming style and defensive programming are intended to prevent typos.
For example, a bug may be caused by a relatively minor typographical error (typo) in the code. The following code executes the function foo only if condition is true.
if (condition) foo();
But this code always executes foo, because the stray semicolon ends the if statement:
if (condition); foo();
A convention that tends to prevent this particular issue is to require braces for a block even if it has just one line.
if (condition) {
foo();
}
Enforcement of conventions may be manual (i.e. via code review) or via automated tools.
Specification
Some contend that writing a program specification, which states the behavior of a program, can prevent bugs.
Some contend that formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy.
Software testing
One goal of software testing is to find bugs.
Measurements during testing can provide an estimate of the number of likely bugs remaining. This becomes more reliable the longer a product is tested and developed.
Agile practices
Agile software development may involve frequent software releases with relatively small changes. Defects are revealed by user feedback.
With test-driven development (TDD), unit tests are written while writing the production code, and the production code is not considered complete until all tests complete successfully.
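A minimal sketch of the test-first workflow using Python's standard unittest module (the leap_year function and its test cases are invented for illustration; in TDD the tests are written first and fail until the implementation makes them pass):
import unittest

def leap_year(year):
    # production code written to make the tests below pass
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # in TDD these tests are written before leap_year exists and fail first
    def test_ordinary_years(self):
        self.assertFalse(leap_year(2023))
        self.assertFalse(leap_year(1900))     # century years are not leap years...

    def test_leap_years(self):
        self.assertTrue(leap_year(2024))
        self.assertTrue(leap_year(2000))      # ...unless divisible by 400

if __name__ == "__main__":
    unittest.main()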
Static analysis
Tools for static code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software.
Instrumentation
Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.
Open source
Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow". This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because, "even if people are reviewing the code, that doesn't mean they're qualified to do so." An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian.
Debugging
Debugging can be a significant part of the software development lifecycle. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that
“a good part of the remainder of my life was going to be spent in finding errors in my own programs”.
A program known as a debugger can help a programmer find faulty code by examining the inner workings of a program such as executing code line-by-line and viewing variable values.
As an alternative to using a debugger, code may be instrumented with logic to output debug information to trace program execution and view values. Output is typically to a console, window, log file, or a hardware output (e.g. an LED).
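A minimal sketch of such debug output in C follows; the DEBUG_TRACE macro name is illustrative rather than a standard facility.
/* Illustrative debug-trace macro that prints the source location and a value
   to the standard error stream. */
#include <stdio.h>

#define DEBUG_TRACE(fmt, ...) \
    fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)

int main(void)
{
    int balance = 42;
    DEBUG_TRACE("balance = %d", balance); /* e.g. "trace.c:11: balance = 42" */
    return 0;
}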
Some contend that locating a bug is something of an art.
It is not uncommon for a bug in one section of a program to cause failures in an apparently unrelated part of the system, which makes it difficult to track down; for example, an error in a graphics rendering routine may cause a file I/O routine to fail.
Sometimes, the most difficult part of debugging is finding the cause of the bug. Once found, correcting the problem is sometimes easy if not trivial.
Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmers. Often, such a logic error requires a section of the program to be overhauled or rewritten.
Some contend that as a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such.
Typically, the first step in locating a bug is to reproduce it reliably. If a programmer cannot reproduce the issue, they cannot find the cause of the bug and therefore cannot fix it.
Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug (specifically, a race condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs (humorously named after the Heisenberg uncertainty principle).
Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation.
Often, bugs come about during coding, but faulty design documentation may cause a bug.
In some cases, changes to the code may eliminate the problem even though the code then no longer matches the documentation.
In an embedded system, the software is often modified to work around a hardware bug, since this is cheaper than modifying the hardware.
Management
Bugs are managed via activities like documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code.
Tools are often used to track bugs and other issues with software. Typically, the software development team uses different tools to track its workload than customer service uses to track user feedback.
A tracked item is often called a bug, defect, ticket, issue, feature, or, in agile software development, a story or epic. Items are often categorized by aspects such as severity, priority and version number.
In a process sometimes called triage, choices are made for each bug about whether and when to fix it, based on information such as the bug's severity and priority and external factors such as development schedules. Triage generally does not include investigation into cause. Triage may occur regularly and generally consists of reviewing new bugs since the previous triage, and sometimes all open bugs. Attendees may include the project manager, development manager, test manager, build manager, and technical experts.
Severity
Severity is a measure of the impact a bug has. This impact may include data loss, financial cost, loss of goodwill, and wasted effort. Severity levels are not standardized, but differ by context such as industry and tracking tool. For example, a crash in a video game has a different impact than a crash in a bank server. Severity levels might be crash or hang, no workaround (the user cannot accomplish a task), has workaround (the user can still accomplish the task), visual defect (a misspelling, for example), or documentation error. Another example set of severities is critical, high, low, blocker, and trivial. The severity of a bug may be a separate category from its priority for fixing, or the two may be quantified and managed separately.
A bug severe enough to delay the release of the product is called a show stopper.
Priority
Priority describes the importance of resolving the bug in relation to other bugs. Priorities might be numerical, such as 1 through 5, or named, such as critical, high, low, and deferred. The values might be similar or identical to severity ratings, even though priority is a different aspect.
Priority may be a combination of the bug's severity with the level of effort to fix. A bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires significantly more effort to fix.
Patch
Bugs of sufficiently high priority may warrant a special release which is sometimes called a patch.
Maintenance release
A software release that emphasizes bug fixes may be called a maintenance release to differentiate it from a release that emphasizes new features or other changes.
Known issue
It is common practice to release software with known, low-priority bugs or other issues. Possible reasons include but are not limited to:
A deadline must be met and resources are insufficient to fix all bugs by the deadline
The bug is already fixed in an upcoming release, and it is not of high priority
The changes required to fix the bug are too costly or affect too many other components, requiring a major testing activity
It may be suspected, or known, that some users are relying on the existing buggy behavior; a proposed fix may introduce a breaking change
The problem is in an area that will be obsolete with an upcoming release; fixing it is unnecessary
"It's not a bug, it's a feature" A misunderstanding exists between expected and actual behavior or undocumented feature
Implications
The amount and type of damage a software bug may cause affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, a photo editing application.
Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median project invests 17 percent of its development effort in bug fixing. In 2020, research on GitHub repositories showed the median to be 20 percent.
Cost
In 1994, NASA's Goddard Space Flight Center managed to reduce their average number of errors from 4.5 per 1000 source lines of code (SLOC) down to 1 per 1000 SLOC.
Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1000 SLOC. This figure is iterated in literature such as Code Complete by Steve McConnell, and the NASA study on Flight Software Complexity.
Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter, which consists of 63,000 SLOC, and the Space Shuttle software with 500,000 SLOC.
Benchmark
To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs:
the Siemens benchmark
ManyBugs is a benchmark of 185 C bugs in nine open-source programs.
Defects4J is a benchmark of 341 Java bugs from 5 open-source projects. It contains the corresponding patches, which cover a variety of patch types.
Types
Some notable types of bugs:
Design error
A bug can be caused by insufficient or incorrect design based on the specification. For example, if the specification says to alphabetize a list of words, a design bug might occur if the design does not account for symbols, resulting in incorrect alphabetization of words containing symbols.
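The sketch below is a hypothetical illustration of such a design bug: the specification asks for alphabetized words, but the design simply compares raw character codes, so a word beginning with a symbol sorts before every letter instead of being handled as the specification intended.
/* Hypothetical design bug: sorting by raw byte comparison puts "#hashtag"
   before all alphabetic words, which may violate the specification. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int compare_words(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
    const char *words[] = { "apple", "#hashtag", "banana" };
    qsort(words, 3, sizeof words[0], compare_words);
    for (int i = 0; i < 3; i++)
        printf("%s\n", words[i]); /* prints "#hashtag" first */
    return 0;
}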
Arithmetic
Numerical operations can result in unexpected output, slow processing, or crashing.
Such a bug can stem from a lack of awareness of the qualities of the data storage, such as loss of precision due to rounding, numerically unstable algorithms, or arithmetic overflow and underflow, or from a lack of awareness of how calculations are handled by different programming languages; for example, division by zero may throw an exception in some languages and return a special value such as NaN or infinity in others.
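The short sketch below illustrates two of these effects on a typical IEEE 754 system; it is illustrative only and the exact output can vary by platform.
/* Illustrative arithmetic pitfalls: floating-point division by zero yields a
   special value, and rounding limits precision. */
#include <stdio.h>

int main(void)
{
    double x = 1.0, zero = 0.0;
    printf("1.0 / 0.0 = %f\n", x / zero);    /* prints "inf" on IEEE 754 systems */

    float a = 0.1f + 0.2f;
    printf("0.1f + 0.2f = %.9f\n", a);       /* not exactly 0.3 because of rounding */

    /* Integer division by zero, by contrast, is undefined behavior in C and
       typically crashes the program, so it is not attempted here. */
    return 0;
}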
Control flow
A control flow bug, also known as a logic error, is characterized by code that does not fail with an error but does not behave as expected; examples include infinite looping, infinite recursion, an incorrect comparison in a conditional (such as using the wrong comparison operator), and the off-by-one error.
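A minimal illustration of the off-by-one error follows; the array and loop are hypothetical, and the faulty bound is shown only in a comment so the example itself runs correctly.
/* Off-by-one illustration: writing the loop condition as i <= 3 would read one
   element past the end of the array; the correct bound is i < 3. */
#include <stdio.h>

int main(void)
{
    int values[3] = { 10, 20, 30 };
    int sum = 0;
    for (int i = 0; i < 3; i++)   /* the off-by-one version would be i <= 3 */
        sum += values[i];
    printf("sum = %d\n", sum);    /* prints 60 */
    return 0;
}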
Interfacing
Incorrect API usage.
Incorrect protocol implementation.
Incorrect hardware handling.
Incorrect assumptions of a particular platform.
Incompatible systems. A new API or communications protocol may seem to work when two systems use different versions, but errors may occur when a function or feature implemented in one version is changed or missing in another. In production systems which must run continually, shutting down the entire system for a major update may not be possible, such as in the telecommunication industry or the internet. In this case, smaller segments of a large system are upgraded individually, to minimize disruption to a large network. However, some sections could be overlooked and not upgraded, and cause compatibility errors which may be difficult to find and repair.
Incorrect code annotations.
Concurrency
Deadlock: a task cannot continue until a second finishes, while at the same time the second cannot continue until the first finishes.
Race condition: multiple simultaneous tasks compete for the same resource, so the result depends on timing (see the sketch after this list).
Errors in critical sections, mutual exclusions and other features of concurrent processing. Time-of-check-to-time-of-use (TOCTOU) is a form of unprotected critical section.
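The sketch below illustrates a race condition using POSIX threads; it is deliberately unsynchronized to show the bug, and the exact amount of loss depends on timing and compiler optimization.
/* Race condition sketch: two threads increment a shared counter without
   synchronization, so increments can be lost and the final total can be less
   than the expected 200000. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;   /* unprotected read-modify-write: this is the race */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}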
Resourcing
Null pointer dereference.
Using an uninitialized variable.
Using an otherwise valid instruction on the wrong data type (see packed decimal/binary-coded decimal).
Access violations.
Resource leaks, where a finite system resource (such as memory or file handles) becomes exhausted by repeated allocation without release.
Buffer overflow, in which a program tries to store data past the end of allocated storage. This may or may not lead to an access violation or storage violation. These are frequently security bugs.
Excessive recursion which—though logically valid—causes stack overflow.
Use-after-free error, where a pointer is used after the system has freed the memory it references (see the sketch after this list).
Double free error.
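The sketch below illustrates the use-after-free case; the dangerous access is left commented out so the example itself is well defined, with the bug described in the comments.
/* Use-after-free illustration: after free(), the pointer dangles and any
   further dereference is undefined behavior and a frequent security flaw. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buffer = malloc(16);
    if (buffer == NULL)
        return 1;
    strcpy(buffer, "hello");
    free(buffer);
    /* BUG (not executed here): printf("%s\n", buffer); would read freed memory. */
    buffer = NULL; /* defensive step that prevents accidental reuse */
    return 0;
}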
Syntax
Use of the wrong token, such as performing assignment instead of an equality test. For example, in some languages x=5 will set the value of x to 5 while x==5 will check whether x is currently 5 or some other number. Interpreted languages allow such code to fail only at run time, whereas compiled languages can often catch such errors before testing begins.
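A short C illustration of this typo follows; many compilers can warn about it (for example GCC with -Wall), but the code still compiles and runs.
/* Assignment-instead-of-comparison typo: the first condition assigns 5 to x
   and is therefore always true; the second is the intended equality test. */
#include <stdio.h>

int main(void)
{
    int x = 3;

    if (x = 5)               /* bug: assignment, the condition is always true */
        printf("always taken; x is now %d\n", x);

    if (x == 5)              /* intended: equality test */
        printf("x equals 5\n");

    return 0;
}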
Teamwork
Unpropagated updates; e.g. programmer changes "myAdd" but forgets to change "mySubtract", which uses the same algorithm. These errors are mitigated by the Don't Repeat Yourself philosophy.
Comments out of date or incorrect: many programmers assume the comments accurately describe the code.
Differences between documentation and product.
In politics
"Bugs in the System" report
The Open Technology Institute, run by the group New America, released a report, "Bugs in the System", in August 2016, stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure." One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security.
Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws.
In popular culture
In video gaming, the term "glitch" is sometimes used to refer to a software bug. An example is the glitch and unofficial Pokémon species MissingNo.
In both the 1968 novel 2001: A Space Odyssey and the corresponding film of the same name, the spaceship's onboard computer, HAL 9000, attempts to kill all its crew members. In the follow-up 1982 novel, 2010: Odyssey Two, and the accompanying 1984 film, 2010: The Year We Make Contact, it is revealed that this action was caused by the computer having been programmed with two conflicting objectives: to fully disclose all its information, and to keep the true purpose of the flight secret from the crew; this conflict caused HAL to become paranoid and eventually homicidal.
In the English version of Nena's 1983 song "99 Luftballons" ("99 Red Balloons"), as a result of "bugs in the software", a group of 99 red balloons released into the sky is mistaken for an enemy nuclear missile launch, requiring an equivalent launch response and resulting in catastrophe.
In the 1999 American comedy Office Space, three employees attempt (unsuccessfully) to exploit their company's preoccupation with the Y2K computer bug using a computer virus that sends rounded-off fractions of a penny to their bank account—a long-known technique described as salami slicing.
The 2004 novel The Bug, by Ellen Ullman, is about a programmer's attempt to find an elusive bug in a database application.
The 2008 Canadian film Control Alt Delete is about a computer programmer at the end of 1999 struggling to fix bugs at his company related to the year 2000 problem.
| Technology | Software development: General | null |
37096 | https://en.wikipedia.org/wiki/Floating-point%20unit | Floating-point unit | A floating-point unit (FPU), numeric processing unit (NPU), colloquially math coprocessor, is a part of a computer system specially designed to carry out operations on floating-point numbers. Typical operations are addition, subtraction, multiplication, division, and square root. Some FPUs can also perform various transcendental functions such as exponential or trigonometric calculations, but the accuracy can be low, so some systems prefer to compute these functions in software.
In general-purpose computer architectures, one or more FPUs may be integrated as execution units within the central processing unit; however, many embedded processors do not have hardware support for floating-point operations, although such support is increasingly standard.
When a CPU is executing a program that calls for a floating-point operation, there are three ways to carry it out:
A floating-point unit emulator (a floating-point library in software)
Add-on FPU hardware
Integrated FPU (in hardware)
History
In 1954, the IBM 704 had floating-point arithmetic as a standard feature, one of its major improvements over its predecessor the IBM 701. This was carried forward to its successors the 709, 7090, and 7094.
In 1963, Digital announced the PDP-6, which had floating point as a standard feature.
In 1963, the GE-235 featured an "Auxiliary Arithmetic Unit" for floating point and double-precision calculations.
Historically, some systems implemented floating point with a coprocessor rather than as an integrated unit; such a coprocessor could be a single integrated circuit, an entire circuit board or a cabinet. Today, coprocessors that are not always built into the CPU, such as GPUs, have FPUs as a rule, although the first generations of GPUs did not. Where floating-point calculation hardware has not been provided, floating-point calculations are done in software, which takes more processor time but avoids the cost of the extra hardware. For a particular computer architecture, the floating-point unit instructions may be emulated by a library of software functions; this may permit the same object code to run on systems with or without floating-point hardware. Emulation can be implemented on any of several levels: in the CPU as microcode, as an operating system function, or in user-space code. When only integer functionality is available, the CORDIC methods are most commonly used for transcendental function evaluation.
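The sketch below shows the structure of a CORDIC-style iteration for sine and cosine. It is illustrative only: it uses double-precision arithmetic and library calls for the constants, whereas real integer-only implementations use precomputed arctangent tables and fixed-point shifts and adds.
/* Illustrative CORDIC rotation mode for sine and cosine. Real hardware or
   integer-only code replaces ldexp/atan/sqrt with shifts and lookup tables. */
#include <math.h>
#include <stdio.h>

#define ITERATIONS 16

static void cordic_sincos(double angle, double *s, double *c)
{
    double x = 1.0, y = 0.0, z = angle;
    double k = 1.0;                        /* accumulated CORDIC gain */
    for (int i = 0; i < ITERATIONS; i++) {
        double t = ldexp(1.0, -i);         /* 2^-i, the "shift" amount */
        double d = (z >= 0.0) ? 1.0 : -1.0;
        double nx = x - d * y * t;
        double ny = y + d * x * t;
        x = nx;
        y = ny;
        z -= d * atan(t);                  /* a table entry in real implementations */
        k *= sqrt(1.0 + t * t);
    }
    *c = x / k;
    *s = y / k;
}

int main(void)
{
    double s, c;
    cordic_sincos(0.5, &s, &c);
    printf("sin(0.5) ~ %f (libm: %f)\n", s, sin(0.5));
    printf("cos(0.5) ~ %f (libm: %f)\n", c, cos(0.5));
    return 0;
}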
In most modern computer architectures, there is some division of floating-point operations from integer operations. This division varies significantly by architecture; some have dedicated floating-point registers, while some, like Intel x86, go as far as independent clocking schemes.
CORDIC routines have been implemented in Intel x87 coprocessors (8087, 80287, 80387) up to the 80486 microprocessor series, as well as in the Motorola 68881 and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU subsystem.
Floating-point operations are often pipelined. In earlier superscalar architectures without general out-of-order execution, floating-point operations were sometimes pipelined separately from integer operations.
The modular Bulldozer microarchitecture from AMD uses a special FPU named FlexFPU, which uses simultaneous multithreading. Each physical integer core, two per module, is single-threaded, in contrast with Intel's Hyper-Threading, where two virtual simultaneous threads share the resources of a single physical core.
Floating-point library
Some floating-point hardware only supports the simplest operations: addition, subtraction, and multiplication. But even the most complex floating-point hardware has a finite number of operations it can support; for example, no FPU directly supports arbitrary-precision arithmetic.
When a CPU is executing a program that calls for a floating-point operation that is not directly supported by the hardware, the CPU uses a series of simpler floating-point operations. In systems without any floating-point hardware, the CPU emulates it using a series of simpler fixed-point arithmetic operations that run on the integer arithmetic logic unit.
The software that implements the necessary series of operations to emulate floating-point arithmetic is often packaged in a floating-point library.
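As a small illustration of the fixed-point idea underlying such emulation, the sketch below defines a toy Q16.16 format and multiplies two values using only integer operations; it is not taken from any real floating-point library.
/* Toy Q16.16 fixed-point arithmetic: 16 integer bits and 16 fraction bits in
   a 32-bit integer, multiplied with integer operations only. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;

static q16_16 q_from_double(double d) { return (q16_16)(d * 65536.0); }
static double q_to_double(q16_16 q)   { return (double)q / 65536.0; }

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    /* widen to 64 bits so the intermediate product cannot overflow */
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

int main(void)
{
    q16_16 a = q_from_double(1.5);
    q16_16 b = q_from_double(2.25);
    printf("1.5 * 2.25 = %f\n", q_to_double(q_mul(a, b))); /* prints 3.375000 */
    return 0;
}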
Integrated FPUs
In some cases, FPUs may be specialized, and divided between simpler floating-point operations (mainly addition and multiplication) and more complicated operations, like division. In some cases, only the simple operations may be implemented in hardware or microcode, while the more complex operations are implemented as software.
In some current architectures, the FPU functionality is combined with SIMD units to perform SIMD computation; an example of this is the augmentation of the x87 instructions set with SSE instruction set in the x86-64 architecture used in newer Intel and AMD processors.
Add-on FPUs
Several models of the PDP-11, such as the PDP-11/45, PDP-11/34a, PDP-11/44, and PDP-11/70, supported an add-on floating-point unit to support floating-point instructions. The PDP-11/60, MicroPDP-11/23 and several VAX models could execute floating-point instructions without an add-on FPU (the MicroPDP-11/23 required an add-on microcode option), and offered add-on accelerators to further speed the execution of those instructions.
In the 1980s, it was common in IBM PC/compatible microcomputers for the FPU to be entirely separate from the CPU, and typically sold as an optional add-on. It would only be purchased if needed to speed up or enable math-intensive programs.
The IBM PC, XT, and most compatibles based on the 8088 or 8086 had a socket for the optional 8087 coprocessor. The AT and 80286-based systems were generally socketed for the 80287, and 80386/80386SX-based machines for the 80387 and 80387SX respectively, although early ones were socketed for the 80287, since the 80387 did not exist yet. Other companies manufactured co-processors for the Intel x86 series. These included Cyrix and Weitek. Acorn Computers opted for the WE32206 to offer single, double and extended precision to its ARM-powered Archimedes range, introducing a gate array to interface the ARM2 processor with the WE32206 to support the additional ARM floating-point instructions. Acorn later offered the FPA10 coprocessor, developed by ARM, for various machines fitted with the ARM3 processor.
Coprocessors were available for the Motorola 68000 family, the 68881 and 68882. These were common in Motorola 68020/68030-based workstations, like the Sun-3 series. They were also commonly added to higher-end models of Apple Macintosh and Commodore Amiga series, but unlike IBM PC-compatible systems, sockets for adding the coprocessor were not as common in lower-end systems.
There are also add-on FPU coprocessor units for microcontroller units (MCUs/μCs)/single-board computer (SBCs), which serve to provide floating-point arithmetic capability. These add-on FPUs are host-processor-independent, possess their own programming requirements (operations, instruction sets, etc.) and are often provided with their own integrated development environments (IDEs).
| Technology | Computer hardware | null |
37138 | https://en.wikipedia.org/wiki/Arabidopsis%20thaliana | Arabidopsis thaliana | Arabidopsis thaliana, the thale cress, mouse-ear cress or arabidopsis, is a small plant from the mustard family (Brassicaceae), native to Eurasia and Africa. Commonly found along the shoulders of roads and in disturbed land, it is generally considered a weed.
A winter annual with a relatively short lifecycle, A. thaliana is a popular model organism in plant biology and genetics. For a complex multicellular eukaryote, A. thaliana has a relatively small genome of around 135 megabase pairs. It was the first plant to have its genome sequenced, and is an important tool for understanding the molecular biology of many plant traits, including flower development and light sensing.
Description
Arabidopsis thaliana is an annual (rarely biennial) plant, usually growing to 20–25 cm tall. The leaves form a rosette at the base of the plant, with a few leaves also on the flowering stem. The basal leaves are green to slightly purplish in color, 1.5–5 cm long, and 2–10 mm broad, with an entire to coarsely serrated margin; the stem leaves are smaller and unstalked, usually with an entire margin. Leaves are covered with small, unicellular hairs called trichomes. The flowers are 3 mm in diameter, arranged in a corymb; their structure is that of the typical Brassicaceae. The fruit is a silique 5–20 mm long, containing 20–30 seeds. Roots are simple in structure, with a single primary root that grows vertically downward, later producing smaller lateral roots. These roots form interactions with rhizosphere bacteria such as Bacillus megaterium.
A. thaliana can complete its entire lifecycle in six weeks. The central stem that produces flowers grows after about 3 weeks, and the flowers naturally self-pollinate. In the lab, A. thaliana may be grown in Petri plates, pots, or hydroponics, under fluorescent lights or in a greenhouse.
Taxonomy
The plant was first described in 1577 in the Harz Mountains by Johannes Thal (1542–1583), a physician from Nordhausen, Thüringen, Germany, who called it Pilosella siliquosa. In 1753, Carl Linnaeus renamed the plant Arabis thaliana in honor of Thal. In 1842, German botanist Gustav Heynhold erected the new genus Arabidopsis and placed the plant in that genus. The generic name, Arabidopsis, comes from Greek, meaning "resembling Arabis" (the genus in which Linnaeus had initially placed it).
Thousands of natural inbred accessions of A. thaliana have been collected from throughout its natural and introduced range. These accessions exhibit considerable genetic and phenotypic variation, which can be used to study the adaptation of this species to different environments.
Distribution and habitat
A. thaliana is native to Europe, Asia, and Africa, and its geographic distribution is rather continuous from the Mediterranean to Scandinavia and Spain to Greece. It also appears to be native in tropical alpine ecosystems in Africa and perhaps South Africa. It has been introduced and naturalized worldwide, including in North America around the 17th century.
A. thaliana readily grows and often pioneers rocky, sandy, and calcareous soils. It is generally considered a weed, due to its widespread distribution in agricultural fields, roadsides, railway lines, waste ground, and other disturbed habitats, but due to its limited competitive ability and small size, it is not categorized as a noxious weed. Like most Brassicaceae species, A. thaliana is edible by humans in a salad or cooked, but it does not enjoy widespread use as a spring vegetable.
Use as a model organism
Botanists and biologists began to research A. thaliana in the early 1900s, and the first systematic description of mutants was done around 1945. A. thaliana is now widely used for studying plant sciences, including genetics, evolution, population genetics, and plant development. Although A. thaliana the plant has little direct significance for agriculture, A. thaliana the model organism has revolutionized our understanding of the genetic, cellular, and molecular biology of flowering plants.
The first mutant in A. thaliana was documented in 1873 by Alexander Braun, describing a double flower phenotype (the mutated gene was likely Agamous, cloned and characterized in 1990). Friedrich Laibach (who had published the chromosome number in 1907) did not propose A. thaliana as a model organism, though, until 1943. His student, Erna Reinholz, published her thesis on A. thaliana in 1945, describing the first collection of A. thaliana mutants that they generated using X-ray mutagenesis. Laibach continued his important contributions to A. thaliana research by collecting a large number of accessions (often questionably referred to as "ecotypes"). With the help of Albert Kranz, these were organised into a large collection of 750 natural accessions of A. thaliana from around the world.
In the 1950s and 1960s, John Langridge and George Rédei played an important role in establishing A. thaliana as a useful organism for biological laboratory experiments. Rédei wrote several scholarly reviews instrumental in introducing the model to the scientific community. The start of the A. thaliana research community dates to a newsletter called Arabidopsis Information Service, established in 1964. The first International Arabidopsis Conference was held in 1965, in Göttingen, Germany.
In the 1980s, A. thaliana started to become widely used in plant research laboratories around the world. It was one of several candidates that included maize, petunia, and tobacco. The latter two were attractive, since they were easily transformable with the then-current technologies, while maize was a well-established genetic model for plant biology. The breakthrough year for A. thaliana as a model plant was 1986, in which T-DNA-mediated transformation and the first cloned A. thaliana gene were described.
Genomics
Nuclear genome
Due to the small size of its genome, and because it is diploid, Arabidopsis thaliana is useful for genetic mapping and sequencing — with about 157 megabase pairs and five chromosomes, A. thaliana has one of the smallest genomes among plants. It was long thought to have the smallest genome of all flowering plants, but that title is now considered to belong to plants in the genus Genlisea, order Lamiales, with Genlisea tuberosa, a carnivorous plant, showing a genome size of approximately 61 Mbp. It was the first plant genome to be sequenced, completed in 2000 by the Arabidopsis Genome Initiative. The most up-to-date version of the A. thaliana genome is maintained by the Arabidopsis Information Resource.
The genome encodes ~27,600 protein-coding genes and about 6,500 non-coding genes. However, the Uniprot database lists 39,342 proteins in their Arabidopsis reference proteome. Among the 27,600 protein-coding genes 25,402 (91.8%) are now annotated with "meaningful" product names, although a large fraction of these proteins is likely only poorly understood and only known in general terms (e.g. as "DNA-binding protein without known specificity"). Uniprot lists more than 3,000 proteins as "uncharacterized" as part of the reference proteome.
Chloroplast genome
The plastome of A. thaliana is a 154,478 base-pair-long DNA molecule, a size typically encountered in most flowering plants (see the list of sequenced plastomes). It comprises 136 genes coding for small subunit ribosomal proteins (rps, in yellow: see figure), large subunit ribosomal proteins (rpl, orange), hypothetical chloroplast open reading frame proteins (ycf, lemon), proteins involved in photosynthetic reactions (green) or in other functions (red), ribosomal RNAs (rrn, blue), and transfer RNAs (trn, black).
Mitochondrial genome
The mitochondrial genome of A. thaliana is 367,808 base pairs long and contains 57 genes. There are many repeated regions in the Arabidopsis mitochondrial genome. The largest repeats recombine regularly and isomerize the genome. Like most plant mitochondrial genomes, the Arabidopsis mitochondrial genome exists as a complex arrangement of overlapping branched and linear molecules in vivo.
Genetics
Genetic transformation of A. thaliana is routine, using Agrobacterium tumefaciens to transfer DNA into the plant genome. The current protocol, termed "floral dip", involves simply dipping flowers into a solution containing Agrobacterium carrying a plasmid of interest and a detergent. This method avoids the need for tissue culture or plant regeneration.
The A. thaliana gene knockout collections are a unique resource for plant biology made possible by the availability of high-throughput transformation and funding for genomics resources. The site of T-DNA insertions has been determined for over 300,000 independent transgenic lines, with the information and seeds accessible through online T-DNA databases. Through these collections, insertional mutants are available for most genes in A. thaliana.
Characterized accessions and mutant lines of A. thaliana serve as experimental material in laboratory studies. The most commonly used background lines are Ler (Landsberg erecta), and Col, or Columbia. Other background lines less-often cited in the scientific literature are Ws, or Wassilewskija, C24, Cvi, or Cape Verde Islands, Nossen, etc. (see for ex.) Sets of closely related accessions named Col-0, Col-1, etc., have been obtained and characterized; in general, mutant lines are available through stock centers, of which best-known are the Nottingham Arabidopsis Stock Center-NASC and the Arabidopsis Biological Resource Center-ABRC in Ohio, USA.
The Col-0 accession was selected by Rédei from within a (nonirradiated) population of seeds designated 'Landsberg' which he received from Laibach. Columbia (named for the location of Rédei's former institution, University of Missouri-Columbia) was the reference accession sequenced in the Arabidopsis Genome Initiative. The Ler (Landsberg erecta) line was selected by Rédei (because of its short stature) from a Landsberg population he had mutagenized with X-rays. As the Ler collection of mutants is derived from this initial line, Ler-0 does not correspond to the Landsberg accessions, which are designated La-0, La-1, etc.
Trichome formation is initiated by the GLABROUS1 protein. Knockouts of the corresponding gene lead to glabrous plants. This phenotype has already been used in gene editing experiments and might be of interest as visual marker for plant research to improve gene editing methods such as CRISPR/Cas9.
Non-Mendelian inheritance controversy
In 2005, scientists at Purdue University proposed that A. thaliana possessed an alternative to previously known mechanisms of DNA repair, producing an unusual pattern of inheritance, but the phenomenon observed (reversion of mutant copies of the HOTHEAD gene to a wild-type state) was later suggested to be an artifact because the mutants show increased outcrossing due to organ fusion.
Lifecycle
The plant's small size and rapid lifecycle are also advantageous for research. Having specialized as a spring ephemeral, it has been used to found several laboratory strains that take about 6 weeks from germination to mature seed. The small size of the plant is convenient for cultivation in a small space, and it produces many seeds. Further, the selfing nature of this plant assists genetic experiments. Also, as an individual plant can produce several thousand seeds, each of the above criteria leads to A. thaliana being valued as a genetic model organism.
Cellular biology
Arabidopsis is often the model for study of SNAREs in plants. This has shown SNAREs to be heavily involved in vesicle trafficking. Zheng et al. 1999 found that an Arabidopsis SNARE is probably essential to Golgi-to-vacuole trafficking. This is still a wide open field, and plant SNAREs' role in trafficking remains understudied.
DNA repair
The DNA of plants is vulnerable to ultraviolet light, and DNA repair mechanisms have evolved to avoid or repair genome damage caused by UV. Kaiser et al. showed that in A. thaliana cyclobutane pyrimidine dimers (CPDs) induced by UV light can be repaired by expression of CPD photolyase.
Germination in lunar regolith
On May 12, 2022, NASA announced that specimens of Arabidopsis thaliana had been successfully germinated and grown in samples of lunar regolith. While the plants successfully germinated and grew into seedlings, they were not as robust as specimens that had been grown in volcanic ash as a control group, although the experiments also found some variation in the plants grown in regolith based on the location the samples were taken from, as A. thaliana grown in regolith gathered during Apollo 12 & Apollo 17 were more robust than those grown in samples taken during Apollo 11.
Development
Flower development
A. thaliana has been extensively studied as a model for flower development. The developing flower has four basic organs - sepals, petals, stamens, and carpels (which go on to form pistils). These organs are arranged in a series of whorls, four sepals on the outer whorl, followed by four petals inside this, six stamens, and a central carpel region. Homeotic mutations in A. thaliana result in the change of one organ to another—in the case of the agamous mutation, for example, stamens become petals and carpels are replaced with a new flower, resulting in a recursively repeated sepal-petal-petal pattern.
Observations of homeotic mutations led to the formulation of the ABC model of flower development by E. Coen and E. Meyerowitz. According to this model, floral organ identity genes are divided into three classes - class A genes (which affect sepals and petals), class B genes (which affect petals and stamens), and class C genes (which affect stamens and carpels). These genes code for transcription factors that combine to cause tissue specification in their respective regions during development. Although developed through study of A. thaliana flowers, this model is generally applicable to other flowering plants.
Leaf development
Studies of A. thaliana have provided considerable insights with regards to the genetics of leaf morphogenesis, particularly in dicotyledon-type plants. Much of the understanding has come from analyzing mutants in leaf development, some of which were identified in the 1960s, but were not analysed with genetic and molecular techniques until the mid-1990s. A. thaliana leaves are well suited to studies of leaf development because they are relatively simple and stable.
Using A. thaliana, the genetics behind leaf shape development have become more clear and have been broken down into three stages: The initiation of the leaf primordium, the establishment of dorsiventrality, and the development of a marginal meristem. Leaf primordia are initiated by the suppression of the genes and proteins of class I KNOX family (such as SHOOT APICAL MERISTEMLESS). These class I KNOX proteins directly suppress gibberellin biosynthesis in the leaf primordium. Many genetic factors were found to be involved in the suppression of these class I KNOX genes in leaf primordia (such as ASYMMETRIC LEAVES1, BLADE-ON-PETIOLE1, SAWTOOTH1, etc.). Thus, with this suppression, the levels of gibberellin increase and leaf primordium initiate growth.
The establishment of leaf dorsiventrality is important since the dorsal (adaxial) surface of the leaf is different from the ventral (abaxial) surface.
Microscopy
A. thaliana is well suited for light microscopy analysis. Young seedlings on the whole, and their roots in particular, are relatively translucent. This, together with their small size, facilitates live cell imaging using both fluorescence and confocal laser scanning microscopy. By wet-mounting seedlings in water or in culture media, plants may be imaged uninvasively, obviating the need for fixation and sectioning and allowing time-lapse measurements. Fluorescent protein constructs can be introduced through transformation. The developmental stage of each cell can be inferred from its location in the plant or by using fluorescent protein markers, allowing detailed developmental analysis.
Physiology
Light sensing, light emission, and circadian biology
The photoreceptors phytochromes A, B, C, D, and E mediate red light-based phototropic response. Understanding the function of these receptors has helped plant biologists understand the signaling cascades that regulate photoperiodism, germination, de-etiolation, and shade avoidance in plants. The genes FCA, fy, fpa, LUMINIDEPENDENS (ld), fly, fve and FLOWERING LOCUS C (FLC) are involved in the photoperiod triggering of flowering and in vernalization. Specifically, Lee et al. 1994 found that ld encodes a homeodomain protein, and Blazquez et al. 2001 found that fve encodes a WD40-repeat protein.
The UVR8 protein detects UV-B light and mediates the response to this DNA-damaging wavelength.
A. thaliana was used extensively in the study of the genetic basis of phototropism, chloroplast alignment, stomatal aperture, and other blue light-influenced processes. These traits respond to blue light, which is perceived by the phototropin light receptors. Arabidopsis has also been important in understanding the functions of another blue light receptor, cryptochrome, which is especially important for light entrainment to control the plant's circadian rhythms. When the onset of darkness is unusually early, A. thaliana reduces its metabolism of starch by an amount that effectively requires division.
Light responses were even found in roots, previously thought to be largely insensitive to light. While the gravitropic response of A. thaliana root organs is their predominant tropic response, specimens treated with mutagens and selected for the absence of gravitropic action showed negative phototropic response to blue or white light, and positive response to red light, indicating that the roots also show positive phototropism.
In 2000, Dr. Janet Braam of Rice University genetically engineered A. thaliana to glow in the dark when touched. The effect was visible to ultrasensitive cameras.
Multiple efforts, including the Glowing Plant project, have sought to use A. thaliana to increase plant luminescence intensity towards commercially viable levels.
Thigmomorphogenesis (Touch response)
In 1990, Janet Braam and Ronald W. Davis determined that A. thaliana exhibits thigmomorphogenesis in response to wind, rain and touch. Four or more touch induced genes in A. thaliana were found to be regulated by such stimuli. In 2002, Massimo Pigliucci found that A. thaliana developed different patterns of branching in response to sustained exposure to wind, a display of phenotypic plasticity.
On the Moon
On January 2, 2019, China's Chang'e-4 lander brought A. thaliana to the moon. A small microcosm 'tin' in the lander contained A. thaliana, seeds of potatoes, and silkworm eggs. As plants would support the silkworms with oxygen, and the silkworms would in turn provide the plants with necessary carbon dioxide and nutrients through their waste, researchers will evaluate whether plants successfully perform photosynthesis, and grow and bloom in the lunar environment.
Secondary metabolites
One secondary metabolite studied in this species is an Arabidopsis root triterpene. Potter et al. 2018 found that its synthesis is induced by a combination of at least two factors: cell-specific transcription factors (TFs) and the accessibility of the chromatin.
Plant–pathogen interactions
Understanding how plants achieve resistance is important to protect the world's food production, and the agriculture industry. Many model systems have been developed to better understand interactions between plants and bacterial, fungal, oomycete, viral, and nematode pathogens. A. thaliana has been a powerful tool for the study of the subdiscipline of plant pathology, that is, the interaction between plants and disease-causing pathogens.
The use of A. thaliana has led to many breakthroughs in the advancement of knowledge of how plants manifest plant disease resistance. The reason most plants are resistant to most pathogens is through nonhost resistance - not all pathogens will infect all plants. An example where A. thaliana was used to determine the genes responsible for nonhost resistance is Blumeria graminis, the causal agent of powdery mildew of grasses. A. thaliana mutants were developed using the mutagen ethyl methanesulfonate and screened to identify mutants with increased infection by B. graminis. The mutants with higher infection rates are referred to as PEN mutants due to the ability of B. graminis to penetrate A. thaliana to begin the disease process. The PEN genes were later mapped to identify the genes responsible for nonhost resistance to B. graminis.
In general, when a plant is exposed to a pathogen, or nonpathogenic microbe, an initial response, known as PAMP-triggered immunity (PTI), occurs because the plant detects conserved motifs known as pathogen-associated molecular patterns (PAMPs). These PAMPs are detected by specialized receptors in the host known as pattern recognition receptors (PRRs) on the plant cell surface.
The best-characterized PRR in A. thaliana is FLS2 (Flagellin-Sensing2), which recognizes bacterial flagellin, a specialized organelle used by microorganisms for the purpose of motility, as well as the ligand flg22, which comprises the 22 amino acids recognized by FLS2. Discovery of FLS2 was facilitated by the identification of an A. thaliana ecotype, Ws-0, that was unable to detect flg22, leading to the identification of the gene encoding FLS2. FLS2 shows striking similarity to rice XA21, the first PRR isolated in 1995. Both flagellin and UV-C act similarly to increase homologous recombination in A. thaliana, as demonstrated by Molinier et al. 2006. Beyond this somatic effect, they found this to extend to subsequent generations of the plant.
A second PRR, EF-Tu receptor (EFR), identified in A. thaliana, recognizes the bacterial EF-Tu protein, the prokaryotic elongation factor used in protein synthesis, as well as the laboratory-used ligand elf18. Using Agrobacterium-mediated transformation, a technique that takes advantage of the natural process by which Agrobacterium transfers genes into host plants, the EFR gene was transformed into Nicotiana benthamiana, a tobacco plant that does not recognize EF-Tu; the transformed plants then recognized bacterial EF-Tu, thereby confirming EFR as the receptor of EF-Tu.
Both FLS2 and EFR use similar signal transduction pathways to initiate PTI. A. thaliana has been instrumental in dissecting these pathways to better understand the regulation of immune responses, the most notable one being the mitogen-activated protein kinase (MAP kinase) cascade. Downstream responses of PTI include callose deposition, the oxidative burst, and transcription of defense-related genes.
PTI is able to combat pathogens in a nonspecific manner. A stronger and more specific response in plants is that of effector-triggered immunity (ETI), which is dependent upon the recognition of pathogen effectors, proteins secreted by the pathogen that alter functions in the host, by plant resistance genes (R-genes), often described as a gene-for-gene relationship. This recognition may occur directly or indirectly via a guardee protein in a hypothesis known as the guard hypothesis. The first R-gene cloned in A. thaliana was RPS2 (resistance to Pseudomonas syringae 2), which is responsible for recognition of the effector avrRpt2. The bacterial effector avrRpt2 is delivered into A. thaliana via the Type III secretion system of P. syringae pv. tomato strain DC3000. Recognition of avrRpt2 by RPS2 occurs via the guardee protein RIN4, which is cleaved. Recognition of a pathogen effector leads to a dramatic immune response known as the hypersensitive response, in which the infected plant cells undergo cell death to prevent the spread of the pathogen.
Systemic acquired resistance (SAR) is another example of resistance that is better understood in plants because of research done in A. thaliana. Benzothiadiazole (BTH), a salicylic acid (SA) analog, has been used historically as an antifungal compound in crop plants. BTH, as well as SA, has been shown to induce SAR in plants. The initiation of the SAR pathway was first demonstrated in A. thaliana, in which increased SA levels are recognized by nonexpresser of PR genes 1 (NPR1) due to a redox change in the cytosol, resulting in the reduction of NPR1. NPR1, which usually exists in a multiplex (oligomeric) state, becomes monomeric (a single unit) upon reduction. When NPR1 becomes monomeric, it translocates to the nucleus, where it interacts with many TGA transcription factors and is able to induce pathogen-related genes such as PR1. Research with transgenic tobacco plants expressing the bacterial salicylate hydroxylase gene, nahG, further showed that SAR requires the accumulation of SA for its expression.
Although not directly immunological, intracellular transport affects susceptibility by incorporating - or being tricked into incorporating - pathogen particles. For example, the Dynamin-related protein 2b/drp2b gene helps to move invaginated material into cells, with some mutants increasing PstDC3000 virulence even further.
Evolutionary aspect of plant-pathogen resistance
Plants are affected by multiple pathogens throughout their lifetimes. In response to the presence of pathogens, plants have evolved receptors on their cell surfaces to detect and respond to pathogens. Arabidopsis thaliana is a model organism used to determine specific defense mechanisms of plant-pathogen resistance. These plants have special receptors on their cell surfaces that allow for detection of pathogens and initiate mechanisms to inhibit pathogen growth. They contain two such receptors, FLS2 (which recognizes bacterial flagellin) and EFR (which recognizes the bacterial EF-Tu protein), which use signal transduction pathways to initiate the disease response pathway. The pathway leads to recognition of the pathogen, causing the infected cells to undergo cell death to stop its spread. Plants with FLS2 and EFR receptors have been shown to have increased fitness in the population. This has led to the belief that plant-pathogen resistance is an evolutionary mechanism that has built up over generations to respond to dynamic environments, such as increased predation and extreme temperatures.
A. thaliana has also been used to study SAR.
This pathway uses benzothiadiazole, a chemical inducer, to induce transcription factors (mRNA) of SAR genes. The accumulation of these transcription factors leads to inhibition of pathogen-related genes.
Plant-pathogen interactions are important for an understanding of how plants have evolved to combat different types of pathogens that may affect them. Variation in resistance of plants across populations is due to variation in environmental factors. Plants that have evolved resistance, whether it be the general variation or the SAR variation, have been able to live longer and hold off necrosis of their tissue (premature death of cells), which leads to better adaptation and fitness for populations in rapidly changing environments. In the future, comparisons of the pathosystems of wild populations and their coevolved pathogens with wild-wild hybrids of known parentage may reveal new mechanisms of balancing selection. In life history theory, we may find that A. thaliana maintains certain alleles due to pleiotropy between plant-pathogen effects and other traits, as in livestock.
Research in A. thaliana suggests that the immunity regulator protein family EDS1 in general co-evolved with the CC family of nucleotide-binding leucine-rich-repeat receptors (NLRs). Xiao et al. 2005 have shown that the powdery mildew immunity mediated by A. thaliana's RPW8 (which has a CC domain) is dependent on two members of this family: EDS1 itself and PAD4.
RESISTANCE TO PSEUDOMONAS SYRINGAE 5/RPS5 is a disease resistance protein which guards AvrPphB SUSCEPTIBLE 1/PBS1. PBS1, as the name would suggest, is the target of AvrPphB, an effector produced by Pseudomonas syringae pv. phaseolicola.
Other research
Ongoing research on A. thaliana is being performed on the International Space Station by the European Space Agency. The goals are to study the growth and reproduction of plants from seed to seed in microgravity.
Plant-on-a-chip devices in which A. thaliana tissues can be cultured in semi-in vitro conditions have been described. Use of these devices may aid understanding of pollen-tube guidance and the mechanism of sexual reproduction in A. thaliana.
Researchers at the University of Florida were able to grow the plant in lunar soil originating from the Sea of Tranquillity.
Self-pollination
A. thaliana is a predominantly self-pollinating plant with an outcrossing rate estimated at less than 0.3%. An analysis of the genome-wide pattern of linkage disequilibrium suggested that self-pollination evolved roughly a million years ago or more. Meioses that lead to self-pollination are unlikely to produce significant beneficial genetic variability. However, these meioses can provide the adaptive benefit of recombinational repair of DNA damages during formation of germ cells at each generation. Such a benefit may have been sufficient to allow the long-term persistence of meioses even when followed by self-fertilization. A physical mechanism for self-pollination in A. thaliana is through pre-anthesis autogamy, such that fertilisation takes place largely before flower opening.
Databases and other resources
TAIR and NASC: curated sources for diverse genetic and molecular biology information, links to gene expression databases etc.
Arabidopsis Biological Resource Center (seed and DNA stocks)
Nottingham Arabidopsis Stock Centre (seed and DNA stocks)
Artade database
AraDiv: a dataset of functional traits and leaf hyperspectral reflectance of Arabidopsis thaliana: see data repository
| Biology and health sciences | Brassicales | null |
37149 | https://en.wikipedia.org/wiki/Cranial%20nerves | Cranial nerves | Cranial nerves are the nerves that emerge directly from the brain (including the brainstem), of which there are conventionally considered twelve pairs. Cranial nerves relay information between the brain and parts of the body, primarily to and from regions of the head and neck, including the special senses of vision, taste, smell, and hearing.
The cranial nerves emerge from the central nervous system above the level of the first vertebra of the vertebral column. Each cranial nerve is paired and is present on both sides.
There are conventionally twelve pairs of cranial nerves, which are described with Roman numerals I–XII. Some considered there to be thirteen pairs of cranial nerves, including the non-paired cranial nerve zero. The numbering of the cranial nerves is based on the order in which they emerge from the brain and brainstem, from front to back.
The terminal nerves (0), olfactory nerves (I) and optic nerves (II) emerge from the cerebrum, and the remaining ten pairs arise from the brainstem, which is the lower part of the brain.
The cranial nerves are considered components of the peripheral nervous system (PNS), although on a structural level the olfactory (I), optic (II), and trigeminal (V) nerves are more accurately considered part of the central nervous system (CNS).
The cranial nerves are in contrast to spinal nerves, which emerge from segments of the spinal cord.
Anatomy
Most typically, humans are considered to have twelve pairs of cranial nerves (I–XII), with the terminal nerve (0) more recently canonized. The nerves are: the olfactory nerve (I), the optic nerve (II), oculomotor nerve (III), trochlear nerve (IV), trigeminal nerve (V), abducens nerve (VI), facial nerve (VII), vestibulocochlear nerve (VIII), glossopharyngeal nerve (IX), vagus nerve (X), accessory nerve (XI), and the hypoglossal nerve (XII).
Terminology
Cranial nerves are generally named according to their structure or function. For example, the olfactory nerve (I) supplies smell, and the facial nerve (VII) supplies the muscles of the face. Because Latin was the lingua franca of the study of anatomy when the nerves were first documented, recorded, and discussed, many nerves maintain Latin or Greek names, including the trochlear nerve (IV), named according to its structure, as it supplies a muscle that attaches to a pulley (Latin trochlea). The trigeminal nerve (V) is named in accordance with its three components (Latin trigeminus, meaning triplets), and the vagus nerve (X) is named for its wandering course (Latin vagus, meaning wandering).
Cranial nerves are numbered based on their position from front to back (rostral-caudal) of their position on the brain, as, when viewing the forebrain and brainstem from below, they are often visible in their numeric order. For example, the olfactory nerves (I) and optic nerves (II) arise from the base of the forebrain, and the other nerves, III to XII, arise from the brainstem.
Cranial nerves have paths within and outside the skull. The paths within the skull are called "intracranial" and the paths outside the skull are called "extracranial". There are many holes in the skull called "foramina" by which the nerves can exit the skull. All cranial nerves are paired, which means they occur on both the right and left sides of the body. The muscle, skin, or additional function supplied by a nerve, on the same side of the body as the side it originates from, is an ipsilateral function. If the function is on the opposite side to the origin of the nerve, this is known as a contralateral function.
Intracranial course
Nuclei
Grossly, all cranial nerves have a nucleus. With the exception of the olfactory nerve (I) and optic nerve (II), all the nuclei are present in the brainstem.
The midbrain has the nuclei of the oculomotor nerve (III) and trochlear nerve (IV); the pons has the nuclei of the trigeminal nerve (V), abducens nerve (VI), facial nerve (VII) and vestibulocochlear nerve (VIII); and the medulla has the nuclei of the glossopharyngeal nerve (IX), vagus nerve (X), accessory nerve (XI) and hypoglossal nerve (XII). The olfactory nerve (I) emerges from the olfactory bulb, and depending slightly on division the optic nerve (II) is considered to emerge from the lateral geniculate nuclei.
Because each nerve may have several functions, the nerve fibres that make up the nerve may collect in more than one nucleus. For example, the trigeminal nerve (V), which has a sensory and a motor role, has at least four nuclei.
Exiting the brainstem
With the exception of the olfactory nerve (I) and optic nerve (II), the cranial nerves emerge from the brainstem. The oculomotor nerve (III) and trochlear nerve (IV) emerge from the midbrain, the trigeminal (V), abducens (VI), facial (VII) and vestibulocochlear (VIII) from the pons, and the glossopharyngeal (IX), vagus (X), accessory (XI) and hypoglossal (XII) emerge from the medulla.
The olfactory nerve (I) and optic nerve (II) emerge separately. The olfactory nerves emerge from the olfactory bulbs on either side of the crista galli, a bony projection below the frontal lobe, and the optic nerves (II) emerge from the lateral colliculus, swellings on either side of the temporal lobes of the brain.
Ganglia
The cranial nerves give rise to a number of ganglia, collections of the cell bodies of neurons in the nerves that are outside of the brain. These ganglia are both parasympathetic and sensory ganglia.
The sensory ganglia of the cranial nerves directly correspond to the dorsal root ganglia of spinal nerves and are known as cranial nerve ganglia. Sensory ganglia exist for nerves with sensory function: V, VII, VIII, IX, X. There are also a number of parasympathetic cranial nerve ganglia. Sympathetic ganglia supplying the head and neck reside in the upper regions of the sympathetic trunk and do not belong to the cranial nerves.
The ganglia of the sensory nerves, which are similar in structure to the dorsal root ganglia of the spinal cord, include:
The trigeminal ganglion of the trigeminal nerve (V), which occupies a space in the dura mater called Meckel's cave. This ganglion contains only the sensory fibres of the trigeminal nerve.
The geniculate ganglion of the facial nerve (VII), which occurs just after the nerve enters the facial canal.
The superior and inferior ganglia of the glossopharyngeal nerve (IX), which occur just after it passes through the jugular foramen.
Additional ganglia for nerves with parasympathetic function exist, and include the ciliary ganglion of the oculomotor nerve (III), the pterygopalatine ganglion of the maxillary nerve (V2), the submandibular ganglion of the lingual nerve, a branch of the facial nerve (VII), and the otic ganglion of the glossopharyngeal nerve (IX).
Exiting the skull and extracranial course
After emerging from the brain, the cranial nerves travel within the skull, and some must leave it in order to reach their destinations. Often the nerves pass through holes in the skull, called foramina, as they travel to their destinations. Other nerves pass through bony canals, longer pathways enclosed by bone. These foramina and canals may contain more than one cranial nerve and may also contain blood vessels.
The terminal nerve (0) is a thin network of fibers associated with the dura and lamina terminalis running rostral to the olfactory nerve, with projections through the cribriform plate.
The olfactory nerve (I) passes through perforations in the cribriform plate part of the ethmoid bone. The nerve fibres end in the upper nasal cavity.
The optic nerve (II) passes through the optic foramen in the sphenoid bone as it travels to the eye.
The oculomotor nerve (III), trochlear nerve (IV), abducens nerve (VI) and the ophthalmic branch of the trigeminal nerve (V1) travel through the cavernous sinus into the superior orbital fissure, passing out of the skull into the orbit.
The maxillary division of the trigeminal nerve (V2) passes through foramen rotundum in the sphenoid bone.
The mandibular division of the trigeminal nerve (V3) passes through foramen ovale of the sphenoid bone.
The facial nerve (VII) and vestibulocochlear nerve (VIII) both enter the internal auditory canal in the temporal bone. The facial nerve then reaches the side of the face by using the stylomastoid foramen, also in the temporal bone. Its fibers then spread out to reach and control all of the muscles of facial expression. The vestibulocochlear nerve reaches the organs that control balance and hearing in the temporal bone and therefore does not reach the external surface of the skull.
The glossopharyngeal (IX), vagus (X) and accessory nerve (XI) all leave the skull via the jugular foramen to enter the neck. The glossopharyngeal nerve provides sensation to the upper throat and the back of the tongue, the vagus supplies the muscles of the larynx and continues downward to provide parasympathetic supply to the chest and abdomen. The accessory nerve controls the trapezius and sternocleidomastoid muscles in the neck and shoulder.
The hypoglossal nerve (XII) exits the skull using the hypoglossal canal in the occipital bone.
Development
The cranial nerves are formed from the contribution of two specialized embryonic cell populations, cranial neural crest and ectodermal placodes. The components of the sensory nervous system of the head are derived from the neural crest and from an embryonic cell population developing in close proximity, the cranial sensory placodes (the olfactory, lens, otic, trigeminal, epibranchial and paratympanic placodes). The dual origin cranial nerves are summarized in the following table:
Contributions of neural crest cells and placodes to ganglia and cranial nerves
Abbreviations: CN, cranial nerve; m, purely motor nerve; mix, mixed nerve (sensory and motor); NC, neural crest; PA, pharyngeal (branchial) arch; r, rhombomere; s, purely sensory nerve. * There is no known ganglion of the accessory nerve. The cranial part of the accessory nerve sends occasional branches to the superior ganglion of the vagus nerve.
Function
The cranial nerves provide motor and sensory supply mainly to the structures within the head and neck. The sensory supply includes both "general" sensation such as temperature and touch, and "special" senses such as taste, vision, smell, balance and hearing. The vagus nerve (X) provides sensory and autonomic (parasympathetic) supply to structures in the neck and also to most of the organs in the chest and abdomen.
Terminal nerve (0)
The terminal nerve (0) may not have a role in humans, although it has been implicated in hormonal responses to smell, sexual response and mate selection.
Smell (I)
The olfactory nerve (I) conveys information giving rise to the sense of smell.
Damage to the olfactory nerve (I) can cause an inability to smell (anosmia), a distortion in the sense of smell (parosmia), or a distortion or lack of taste.
Vision (II)
The optic nerve (II) transmits visual information.
Damage to the optic nerve (II) affects specific aspects of vision that depend on the location of the damage. A person may not be able to see objects on their left or right sides (homonymous hemianopsia), or may have difficulty seeing objects from their outer visual fields (bitemporal hemianopsia) if the optic chiasm is involved. Inflammation (optic neuritis) may impact the sharpness of vision or color detection.
Eye movement (III, IV, VI)
The oculomotor nerve (III), trochlear nerve (IV) and abducens nerve (VI) coordinate eye movement. The oculomotor nerve (III) controls all muscles of the eye except for the superior oblique muscle controlled by the trochlear nerve (IV), and the lateral rectus muscle controlled by the abducens nerve (VI). This means the ability of the eye to look down and inwards is controlled by the trochlear nerve (IV), the ability to look outwards is controlled by the abducens nerve (VI), and all other movements are controlled by the oculomotor nerve (III).
Damage to these nerves may affect the movement of the eye. Damage may result in double vision (diplopia) because the movements of the eyes are not synchronized. Abnormalities of visual movement may also be seen on examination, such as jittering (nystagmus).
Damage to the oculomotor nerve (III) can cause double vision and inability to coordinate the movements of both eyes (strabismus), as well as eyelid drooping (ptosis) and pupil dilation (mydriasis). Lesions may also lead to inability to open the eye due to paralysis of the levator palpebrae muscle. Individuals suffering from a lesion to the oculomotor nerve may compensate by tilting their heads to alleviate symptoms due to paralysis of one or more of the eye muscles it controls.
Damage to the trochlear nerve (IV) can also cause double vision with the eye adducted and elevated. The result will be an eye which can not move downwards properly (especially downwards when in an inward position). This is due to impairment in the superior oblique muscle.
Damage to the abducens nerve (VI) can also result in double vision. This is due to impairment in the lateral rectus muscle, supplied by the abducens nerve.
Trigeminal nerve (V)
The trigeminal nerve (V) and its three main branches, the ophthalmic (V1), maxillary (V2), and mandibular (V3), provide sensation to the skin of the face and also control the muscles of chewing.
Damage to the trigeminal nerve leads to loss of sensation in an affected area. Other conditions affecting the trigeminal nerve (V) include trigeminal neuralgia, herpes zoster, sinusitis pain, presence of a dental abscess, and cluster headaches.
Facial expression (VII)
The facial nerve (VII) controls most muscles of facial expression, supplies the sensation of taste from the front two-thirds of the tongue, and controls the stapedius muscle. Most muscles are supplied by the cortex on the opposite side of the brain; the exception is the frontalis muscle of the forehead, in which the left and the right side of the muscle both receive inputs from both sides of the brain.
Damage to the facial nerve (VII) may cause facial palsy. This is where a person is unable to move the muscles on one or both sides of their face. The most common cause of this is Bell's palsy, the ultimate cause of which is unknown. Patients with Bell's palsy often have a drooping mouth on the affected side and often have trouble chewing because the buccinator muscle is affected. The facial nerve is also the most commonly affected cranial nerve in blunt trauma.
Hearing and balance (VIII)
The vestibulocochlear nerve (VIII) supplies information relating to balance and hearing via its two branches, the vestibular and cochlear nerves. The vestibular part is responsible for supplying sensation from the vestibule and semicircular canals of the inner ear, including information about balance, and is an important component of the vestibuloocular reflex, which keeps the head stable and allows the eyes to track moving objects. The cochlear nerve transmits information from the cochlea, allowing sound to be heard.
When damaged, the vestibular nerve may give rise to the sensation of spinning and dizziness (vertigo). Function of the vestibular nerve may be tested by putting cold and warm water in the ears and watching eye movements (caloric stimulation). Damage to the vestibulocochlear nerve can also present as repetitive and involuntary eye movements (nystagmus), particularly when the eye is moving horizontally. Damage to the cochlear nerve will cause partial or complete deafness in the affected ear.
Oral sensation, taste, and salivation (IX)
The glossopharyngeal nerve (IX) supplies the stylopharyngeus muscle and provides sensation to the oropharynx and back of the tongue. The glossopharyngeal nerve also provides parasympathetic input to the parotid gland.
Damage to the nerve may cause failure of the gag reflex; a failure may also be seen in damage to the vagus nerve (X).
Vagus nerve (X)
The vagus nerve (X) provides sensory and parasympathetic supply to structures in the neck and also to most of the organs in the chest and abdomen.
Loss of function of the vagus nerve (X) will lead to a loss of parasympathetic supply to a very large number of structures. Major effects of damage to the vagus nerve may include a rise in blood pressure and heart rate. Isolated dysfunction of only the vagus nerve is rare, but – if the lesion is located above the point at which the vagus first branches off – can be indicated by a hoarse voice, due to dysfunction of one of its branches, the recurrent laryngeal nerve.
Damage to this nerve may result in difficulties swallowing.
Shoulder elevation and head-turning (XI)
The accessory nerve (XI) supplies the sternocleidomastoid and trapezius muscles.
Damage to the accessory nerve (XI) will lead to weakness in the trapezius muscle on the same side as the damage. The trapezius lifts the shoulder when shrugging, so the affected shoulder will not be able to shrug and the shoulder blade (scapula) will protrude into a winged position. Depending on the location of the lesion there may also be weakness present in the sternocleidomastoid muscle, which acts to turn the head so that the face points to the opposite side.
Tongue movement (XII)
The hypoglossal nerve (XII) supplies the intrinsic muscles of the tongue, controlling tongue movement. The hypoglossal nerve (XII) is unique in that it is supplied by the motor cortices of both hemispheres of the brain.
Damage to the nerve may lead to fasciculations or wasting (atrophy) of the muscles of the tongue. This will lead to weakness of tongue movement on that side. When damaged and extended, the tongue will move towards the weaker or damaged side. The fasciculations of the tongue are sometimes said to look like a "bag of worms". Damage to the nerve tract or nucleus will not lead to atrophy or fasciculations, but only weakness of the muscles on the same side as the damage.
Clinical significance
Examination
Doctors, neurologists and other medical professionals may conduct a cranial nerve examination as part of a neurological examination. This is a highly formalised series of steps involving specific tests for each nerve. Dysfunction of a nerve identified during testing may point to a problem with the nerve or with a part of the brain.
A cranial nerve exam starts with observation of the patient, as some cranial nerve lesions may affect the symmetry of the eyes or face. Vision may be tested by examining the visual fields, or by examining the retina with an ophthalmoscope, using a process known as funduscopy. Visual field testing may be used to pin-point structural lesions in the optic nerve, or further along the visual pathways. Eye movement is tested and abnormalities such as nystagmus are observed for. The sensation of the face is tested, and patients are asked to perform different facial movements, such as puffing out of the cheeks. Hearing is checked by voice and tuning forks. The patient's uvula is examined. After performing a shrug and head turn, the patient's tongue function is assessed by various tongue movements.
Smell is not routinely tested, but if there is suspicion of a change in the sense of smell, each nostril is tested with substances of known odors such as coffee or soap. Intensely smelling substances, for example ammonia, may lead to the activation of pain receptors of the trigeminal nerve (V) located in the nasal cavity and this can confound olfactory testing.
Damage
Compression
Nerves may be compressed because of increased intracranial pressure, a mass effect of an intracerebral haemorrhage, or tumour that presses against the nerves and interferes with the transmission of impulses along the nerve. Loss of function of a cranial nerve may sometimes be the first symptom of an intracranial or skull base cancer.
An increase in intracranial pressure may lead to impairment of the optic nerves (II) due to compression of the surrounding veins and capillaries, causing swelling of the optic disc (papilloedema). A cancer, such as an optic nerve glioma, may also impact the optic nerve (II). A pituitary tumour may compress the optic tracts or the optic chiasm of the optic nerve (II), leading to visual field loss. A pituitary tumour may also extend into the cavernous sinus, compressing the oculomotor nerve (III), trochlear nerve (IV) and abducens nerve (VI), leading to double vision and strabismus. These nerves may also be affected by herniation of the temporal lobes of the brain through the tentorium cerebelli.
The cause of trigeminal neuralgia, in which one side of the face is exquisitely painful, is thought to be compression of the nerve by an artery as the nerve emerges from the brain stem. An acoustic neuroma, particularly at the junction between the pons and medulla, may compress the facial nerve (VII) and vestibulocochlear nerve (VIII), leading to hearing and sensory loss on the affected side.
Stroke
Occlusion of blood vessels that supply the nerves or their nuclei, an ischemic stroke, may cause specific signs and symptoms relating to the damaged area. If there is a stroke of the midbrain, pons or medulla, various cranial nerves may be damaged, resulting in dysfunction and symptoms of a number of different syndromes. Thrombosis, such as a cavernous sinus thrombosis, in which a clot (thrombus) obstructs the venous drainage from the cavernous sinus, affects the optic (II), oculomotor (III), trochlear (IV), ophthalmic branch of the trigeminal nerve (V1) and the abducens nerve (VI).
Inflammation
Inflammation of a cranial nerve can occur as a result of infection, such as viral causes like reactivated herpes simplex virus, or can occur spontaneously. Inflammation of the facial nerve (VII) may result in Bell's palsy.
Multiple sclerosis, an inflammatory process resulting in a loss of the myelin sheaths which surround the cranial nerves, may cause a variety of shifting symptoms affecting multiple cranial nerves. Inflammation may also affect other cranial nerves. Other rarer inflammatory causes affecting the function of multiple cranial nerves include sarcoidosis, miliary tuberculosis, and inflammation of arteries, such as granulomatosis with polyangiitis.
Other
Trauma to the skull, disease of bone, such as Paget's disease, and injury to nerves during surgery are other causes of nerve damage.
History
The Graeco-Roman anatomist Galen (AD 129–210) named seven pairs of cranial nerves. Much later, in 1664, English anatomist Sir Thomas Willis suggested that there were actually 9 pairs of nerves. Finally, in 1778, German anatomist Samuel Soemmering named the 12 pairs of nerves that are generally accepted today. However, because many of the nerves emerge from the brain stem as rootlets, there is continual debate as to how many nerves there actually are, and how they should be grouped. For example, there is reason to consider both the olfactory (I) and optic (II) nerves to be brain tracts, rather than cranial nerves.
Other animals
Cranial nerves are also present in other vertebrates. Other amniotes (non-amphibian tetrapods) have cranial nerves similar to those of humans. In anamniotes (fishes and amphibians), the accessory nerve (XI) and hypoglossal nerve (XII) do not exist, with the accessory nerve (XI) being an integral part of the vagus nerve (X); the hypoglossal nerve (XII) is represented by a variable number of spinal nerves emerging from vertebral segments fused into the occiput. These two nerves only became discrete nerves in the ancestors of amniotes. The very small terminal nerve (nerve N or O) exists in humans but may not be functional. In other animals, it appears to be important to sexual receptivity based on perceptions of pheromones.
| Biology and health sciences | Nervous system | Biology |
37153 | https://en.wikipedia.org/wiki/Supercomputer | Supercomputer | A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2022, supercomputers have existed which can perform over 10^18 FLOPS, so-called exascale supercomputers. For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have been essential in the field of cryptanalysis.
Supercomputers were introduced in the 1960s, and for several decades the fastest was made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran more quickly than their more general-purpose contemporaries. Through the decade, increasing amounts of parallelism were added, with one to four processors being typical. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 1990s, with China becoming increasingly active. As of November 2024, Lawrence Livermore National Laboratory's El Capitan is the world's fastest supercomputer. The US has five of the top 10; Japan, Finland, Switzerland, Italy and Spain have one each. In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark.
History
In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also, among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which then in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas Supervisor swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly and the overheating problem was solved by introducing refrigeration to the supercomputer design. Thus, the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, when one hundred computers were sold at $8 million each.
Cray left CDC in 1972 to form his own company, Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units (CPUs), liquid cooling and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.
Massively parallel designs
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept, the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate more quickly than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.
In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics. Fujitsu's VPP500 from 1992 is unusual since, to achieve higher speeds, its processors used GaAs, a material normally reserved for microwave applications due to its toxicity. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics units to the mix.
In 1998, David Bader developed the first Linux supercomputer using commodity parts. While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously. Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Though Linux-based clusters using consumer-grade parts, such as Beowulf, existed prior to the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.
Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organized as distributed, diverse administrative domains, is opportunistically used whenever a computer is available. In another approach, many processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.
As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them. However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application to it. However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.
High-performance computers have an expected life cycle of about three years before requiring an upgrade. The Gyoukou supercomputer is unique in that it uses both a massively parallel design and liquid immersion cooling.
Special purpose supercomputers
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess, Gravity Pipe for astrophysics, MDGRAPE-3 for protein structure prediction and molecular dynamics, and Deep Crack for breaking the DES cipher.
Energy usage and heat management
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity. The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
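The dollar figures in the paragraph above follow from simple arithmetic on the power draw and the electricity price. A minimal Python sketch (not an exact costing model), using the 4 MW and $0.10/kWh values quoted in the text:

    # Rough operating-cost estimate for a supercomputer's electrical power.
    power_mw = 4.0            # average electrical draw in megawatts (from the example above)
    price_per_kwh = 0.10      # electricity price in dollars per kilowatt-hour (assumed)

    power_kw = power_mw * 1000
    cost_per_hour = power_kw * price_per_kwh      # dollars per hour
    cost_per_year = cost_per_hour * 24 * 365      # dollars per year of continuous operation

    print(round(cost_per_hour))   # 400
    print(round(cost_per_year))   # 3504000, i.e. about $3.5 million

The yearly figure assumes the machine draws its full 4 MW around the clock; real facilities also pay for cooling overhead on top of the IT load.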
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.
The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure. However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.
In the Blue Gene system, IBM deliberately used low power processors to deal with heat density. The IBM Power 775, released in 2011, has closely packed elements that require water cooling. The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, Roadrunner by IBM operated at 376 MFLOPS/W. In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W and in June 2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W) with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.
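FLOPS per watt is simply the ratio of sustained performance to electrical power, but it also lets one project the power budget a machine of a given speed would need at a given efficiency. A small Python sketch using the Blue Gene/Q figure quoted above; the 1 exaFLOPS target here is purely illustrative:

    def flops_per_watt(flops, watts):
        # Energy efficiency: floating-point operations per second, per watt.
        return flops / watts

    # Blue Gene/Q figure from the text: 1,684 MFLOPS/W.
    efficiency = 1684e6   # FLOPS per watt

    # Power that a 1 exaFLOPS machine would need at that efficiency.
    target_flops = 1e18
    power_megawatts = target_flops / efficiency / 1e6
    print(round(power_megawatts))   # 594, i.e. roughly 600 MW

At 2011-era efficiencies this works out to several hundred megawatts, of the same order as the roughly 500 megawatt estimate for an exascale machine cited later in this article.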
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat, the ability of the cooling systems to remove waste heat is a limiting factor. Many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine: designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited: the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.
Software and system management
Operating systems
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.
Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a full Linux distribution on server and I/O nodes.
While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.
Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux distribution, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.
Software tools and message passing
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software such as Beowulf.
In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL.
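As a concrete illustration of the message-passing style described above, the sketch below uses mpi4py, a Python binding for MPI; production codes more often call MPI from C, C++ or Fortran, and the sum being computed here is only a stand-in for real work. Each rank works on its own slice of the problem, and a single collective call combines the results:

    # Launch with, e.g.: mpiexec -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # this process's id, 0 .. size-1
    size = comm.Get_size()    # total number of processes

    # Each rank sums its own strided share of the data; no rank waits on
    # another until the final reduction.
    local_sum = sum(x * x for x in range(rank, 1_000_000, size))

    # Combine the partial sums on rank 0 with one collective operation.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum of squares:", total)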
Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.
Distributed supercomputing
Opportunistic approaches
Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.
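An embarrassingly parallel problem of the kind mentioned above is one whose work units never need to communicate with each other, which is what makes volunteer and grid computing viable for it. A hedged sketch using Python's standard multiprocessing module; the work function is invented busywork standing in for, say, scoring one candidate protein conformation:

    from multiprocessing import Pool

    def work_unit(seed):
        # Each unit depends only on its own input, never on another unit's result.
        x = seed
        for _ in range(100_000):
            x = (1103515245 * x + 12345) % (2 ** 31)   # simple LCG as busywork
        return x % 1000

    if __name__ == "__main__":
        with Pool() as pool:                       # one worker process per CPU core
            results = pool.map(work_unit, range(64))
        print(len(results), "independent work units completed")

Because no unit depends on another, the same pattern scales from the cores of one desktop to thousands of volunteered machines; tightly coupled problems such as fluid dynamics simulations do not decompose this way.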
The fastest grid computing system is the volunteer computing project Folding@home (F@h). As of April 2020, F@h reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.
The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of volunteer computing projects. BOINC has recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network.
The Great Internet Mersenne Prime Search's (GIMPS) distributed Mersenne prime search has achieved about 0.313 PFLOPS through over 1.3 million computers. The PrimeNet server has supported GIMPS's grid computing approach, one of the earliest volunteer computing projects, since 1997.
Quasi-opportunistic approaches
Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked geographically dispersed computers performs computing tasks that demand huge processing power. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.
High-performance computing clouds
Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud, such as software as a service, platform as a service, and infrastructure as a service. HPC users may benefit from the cloud in different ways, such as scalability, on-demand resources, speed, and low cost. On the other hand, moving HPC applications to the cloud brings a set of challenges too. Good examples of such challenges are virtualization overhead in the cloud, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.
In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, Univa, Silicon Graphics International, Rescale, Sabalcore, and Gomput started to offer HPC cloud computing. The Penguin On Demand (POD) cloud is a bare-metal compute model to execute code, but each user is given a virtualized login node. POD computing nodes are connected via non-virtualized 10 Gbit/s Ethernet or QDR InfiniBand networks. User connectivity to the POD data center ranges from 50 Mbit/s to 1 Gbit/s. Citing Amazon's EC2 Elastic Compute Cloud, Penguin Computing argues that virtualization of compute nodes is not suitable for HPC. Penguin Computing has also criticized HPC clouds for allocating computing nodes to customers that are far apart, causing latency that impairs performance for some HPC applications.
Performance measurement
Capability versus capacity
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems. Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.
Performance metrics
In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced petaflops). Petascale supercomputers can process one quadrillion (10^15, or 1000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS). However, the performance of a supercomputer can be severely impacted by fluctuation brought on by elements like system load, network traffic, and concurrent processes, as mentioned by Brehm and Bruhwiler (2015).
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry. The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list. The LINPACK benchmark typically performs LU decomposition of a large matrix. The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.
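The Rpeak/Rmax distinction can be made concrete with a toy measurement: LU factorisation of an n x n matrix costs roughly (2/3)n^3 floating-point operations, so timing it yields an achieved FLOPS figure that can be compared with a processor's theoretical peak. The single-node Python sketch below uses NumPy and SciPy; it only illustrates the idea, it is not the actual HPL benchmark, and the peak figures are for a hypothetical chip:

    import time
    import numpy as np
    from scipy.linalg import lu_factor

    n = 2000
    a = np.random.rand(n, n)

    start = time.perf_counter()
    lu, piv = lu_factor(a)              # LU decomposition with partial pivoting
    elapsed = time.perf_counter() - start

    flop_count = (2.0 / 3.0) * n ** 3   # classical LU operation count
    achieved = flop_count / elapsed     # analogue of Rmax for this single run
    print(f"achieved about {achieved / 1e9:.1f} GFLOPS in {elapsed:.3f} s")

    # Analogue of Rpeak: cores x clock (Hz) x FLOPs per cycle, from a datasheet.
    rpeak = 8 * 3.0e9 * 16              # hypothetical 8-core, 3 GHz, 16 FLOPs/cycle CPU
    print(f"roughly {100 * achieved / rpeak:.0f}% of theoretical peak")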
The TOP500 list
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
This is a list of the computers which appeared at the top of the TOP500 list since June 1993, and the "Peak speed" is given as the "Rmax" rating. In 2018, Lenovo became the world's largest provider for the TOP500 supercomputers with 117 units produced.
Applications
The stages of supercomputer application are summarized in the following table:
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.
Modern weather forecasting relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.
The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.
In early 2020, during the COVID-19 pandemic, supercomputers were used to run different simulations to find compounds that could potentially stop the spread of the virus. These computers run for tens of hours using multiple CPUs working in parallel to model the different processes.
Development and trends
In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOP (10^18 or one quintillion FLOPS) supercomputer. Erik P. DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21 or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately. Such systems might be built around 2030.
Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; particularly, integro-differential equations describing physical transport processes, the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.
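A minimal sketch of the Monte Carlo pattern described above: every particle history runs the same algorithm on its own stream of random numbers, which is why such simulations map so naturally onto many independent processors. The one-dimensional shielding problem and all of its parameters below are invented purely for illustration:

    import random

    def transmitted(n_particles, slab_thickness=5.0, mean_free_path=1.0, p_absorb=0.3):
        # Fraction of particles that cross a 1-D slab, estimated by random walks.
        passed = 0
        for _ in range(n_particles):
            x, direction = 0.0, 1.0     # start at the front face, heading into the slab
            while 0.0 <= x < slab_thickness:
                x += direction * random.expovariate(1.0 / mean_free_path)  # free flight
                if x < 0.0 or x >= slab_thickness:
                    break               # escaped through one of the faces
                if random.random() < p_absorb:
                    break               # collision: particle absorbed
                direction = random.choice((-1.0, 1.0))   # collision: scattered
            if x >= slab_thickness:
                passed += 1
        return passed / n_particles

    print(transmitted(100_000))         # each history is independent of all the others

Since the histories share no data, the loop can be split across cores, nodes, or GPU threads with essentially no communication, matching the "same algorithm, randomly generated data" description above.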
The cost of operating high performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top 10 supercomputer required in the range of 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts. A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing. At the time a megawatt of power consumed for a year cost about 1 million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible. CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched the Partnership for Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across the European Union in porting, scaling and optimizing supercomputing applications. Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros. In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.
In fiction
Examples of supercomputers in fiction include HAL 9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict, Vulcan's Hammer, Colossus, WOPR, AM, and Deep Thought. A supercomputer from Thinking Machines was mentioned as the supercomputer used to sequence the DNA extracted from preserved parasites in the Jurassic Park series.
| Technology | Computer hardware | null |
37207 | https://en.wikipedia.org/wiki/Nuclear%20engineering | Nuclear engineering | Nuclear engineering is the engineering discipline concerned with designing and applying systems that utilize the energy released by nuclear processes.
The most prominent application of nuclear engineering is the generation of electricity. Worldwide, some 440 nuclear reactors in 32 countries generate about 10 percent of the world's electricity through nuclear fission. In the future, it is expected that nuclear fusion will add another nuclear means of generating energy. Both reactions make use of the nuclear binding energy released when atomic nucleons are either separated (fission) or brought together (fusion). The energy available is given by the binding energy curve, and the amount generated is much greater than that generated through chemical reactions. Fission of 1 gram of uranium yields as much energy as burning 3 tons of coal or 600 gallons of fuel oil, without adding carbon dioxide to the atmosphere.
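The comparison with coal above can be checked with back-of-the-envelope arithmetic. The sketch below assumes roughly 200 MeV released per fission of uranium-235 and a coal heating value of about 24 MJ/kg; both are typical textbook values, and the exact numbers vary with the fuel and the grade of coal:

    AVOGADRO = 6.022e23        # atoms per mole
    MEV_TO_J = 1.602e-13       # joules per MeV

    atoms_per_gram = AVOGADRO / 235.0            # uranium-235
    energy_per_fission = 200.0 * MEV_TO_J        # ~200 MeV per fission (assumed)

    fission_energy = atoms_per_gram * energy_per_fission   # complete fission of 1 g
    coal_energy = 3000 * 24e6                    # 3 tonnes of coal at ~24 MJ/kg (assumed)

    print(round(fission_energy / 1e9), "GJ from 1 g of U-235")   # about 82 GJ
    print(round(coal_energy / 1e9), "GJ from 3 t of coal")       # about 72 GJ

With these assumptions the two totals agree to within about 15 percent, which is why the gram-of-uranium versus tons-of-coal comparison is commonly quoted.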
History
Nuclear engineering was born in 1938, with the discovery of nuclear fission. The first artificial nuclear reactor, CP-1, was designed by a team of physicists who were concerned that Nazi Germany might also be seeking to build a bomb based on nuclear fission. (The earliest known nuclear reaction on Earth occurred naturally, 1.7 billion years ago, in Oklo, Gabon, Africa.) The second artificial nuclear reactor, the X-10 Graphite Reactor, was also a part of the Manhattan Project, as were the plutonium-producing reactors of the Hanford Engineer Works. The first nuclear bomb, code-named Gadget, was detonated in the Trinity nuclear test; the weapon was believed to have a yield of around 20 kilotons of TNT.
The first nuclear reactor to generate electricity was Experimental Breeder Reactor I (EBR-I), which did so near Arco, Idaho, in 1951. EBR-I was a standalone facility, not connected to a grid, but a later Idaho research reactor in the BORAX series did briefly supply power to the town of Arco in 1955.
The first commercial nuclear power plant, built to be connected to an electrical grid, is the Obninsk Nuclear Power Plant, which began operation in 1954. The second appears to be the Shippingport Atomic Power Station, which produced electricity in 1957.
For a brief chronology, from the discovery of uranium to the current era, see Outline History of Nuclear Energy or History of Nuclear Power.
See List of Commercial Nuclear Reactors for a comprehensive listing of nuclear power reactors and IAEA Power Reactor Information System (PRIS) for worldwide and country-level statistics on nuclear power generation.
Sub-disciplines
Nuclear engineers work in such areas as the following:
Nuclear reactor design, which has evolved from the Generation I, proof-of-concept reactors of the 1950s and 1960s, to Generation II, Generation III, and Generation IV concepts
Thermal hydraulics and heat transfer. In a typical nuclear power plant, heat generates steam that drives a steam turbine and a generator that produces electricity
Materials science as it relates to nuclear power applications
Managing the nuclear fuel cycle, in which fissile material is obtained, formed into fuel, removed when depleted, and safely stored or reprocessed
Nuclear propulsion, mainly for military naval vessels, but there have been concepts for aircraft and missiles. Nuclear power has been used in space since the 1960s
Plasma physics, which is integral to the development of fusion power
Weapons development and management
Generation of radionuclides, which have applications in industry, medicine, and many other areas
Nuclear waste management
Health physics
Nuclear medicine and Medical Physics
Health and safety
Instrumentation and control engineering
Process engineering
Project Management
Quality engineering
Reactor operations
Nuclear security (detection of clandestine nuclear materials)
Nuclear engineering even has a role in criminal investigation and agriculture.
Many chemical, electrical, mechanical, and other types of engineers also work in the nuclear industry, as do many scientists and support staff. In the U.S., nearly 100,000 people directly work in the nuclear industry. Including secondary sector jobs, the number of people supported by the U.S. nuclear industry is 475,000.
Employment
In the United States, nuclear engineers are employed as follows:
Electric power generation 25%
Federal government 18%
Scientific research and development 15%
Engineering services 5%
Manufacturing 10%
Other areas 27%
Worldwide, job prospects for nuclear engineers are likely best in those countries that are active in or exploring nuclear technologies:
Education
Organizations that provide study and training in nuclear engineering include the following:
Organizations
American Nuclear Society
Asian Network for Education in Nuclear Technology (ANENT) https://www.iaea.org/services/networks/anent
Canadian Nuclear Association
Chinese Nuclear Society
International Atomic Energy Agency
International Energy Agency (IEA)
Japan Atomic Industrial Forum (JAIF)
Korea Nuclear Energy Agency (KNEA)
Latin American Network for Education in Nuclear Technology (LANENT) https://www.iaea.org/services/networks/lanent
Minerals Council of Australia
Nucleareurope
Nuclear Institute
Nuclear Energy Institute (NEI)
Nuclear Industry Association of South Africa (NIASA)
Nuclear Technology Education Consortium https://www.ntec.ac.uk/
OECD Nuclear Energy Agency (NEA)
Regional Network for Education and Training in Nuclear Technology (STAR-NET) https://www.iaea.org/services/networks/star-net
World Nuclear Association
World Nuclear Transport Institute
| Technology | Disciplines | null |
37208 | https://en.wikipedia.org/wiki/Landslide | Landslide | Landslides, also known as landslips, or rockslides, are several forms of mass wasting that may include a wide range of ground movements, such as rockfalls, mudflows, shallow or deep-seated slope failures and debris flows. Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, in which case they are called submarine landslides.
Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as a heavy rainfall, an earthquake, a slope cut to build a road, and many others), although this is not always identifiable.
Landslides are frequently made worse by human development (such as urban sprawl) and resource exploitation (such as mining and deforestation). Land degradation frequently leads to less stabilization of soil by vegetation. Additionally, global warming caused by climate change and other human impacts on the environment can increase the frequency of natural events (such as extreme weather) which trigger landslides. Landslide mitigation describes the policy and practices for reducing the risk of human impacts of landslides and the risk of natural disaster.
Causes
Landslides occur when the slope (or a portion of it) undergoes some processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two; a simple factor-of-safety sketch illustrating this balance is given after the list of natural causes below. A change in the stability of a slope can be caused by a number of factors, acting together or alone. Natural causes of landslides include:
increase in water content (loss of suction) or saturation by rain water infiltration, snow melting, or glaciers melting;
rising of groundwater or increase of pore water pressure (e.g. due to aquifer recharge in rainy seasons, or by rain water infiltration);
increase of hydrostatic pressure in cracks and fractures;
loss or absence of vertical vegetative structure, soil nutrients, and soil structure (e.g. after a wildfire);
erosion of the top of a slope by rivers or sea waves;
physical and chemical weathering (e.g. by repeated freezing and thawing, heating and cooling, salt leaking in the groundwater or mineral dissolution);
ground shaking caused by earthquakes, which can destabilize the slope directly (e.g., by inducing soil liquefaction) or weaken the material and cause cracks that will eventually produce a landslide;
volcanic eruptions;
changes in pore fluid composition;
changes in temperature (seasonal or induced by climate change).
Landslides are aggravated by human activities, such as:
deforestation, cultivation and construction;
vibrations from machinery or traffic;
blasting and mining;
earthwork (e.g. by altering the shape of a slope, or imposing new loads);
in shallow soils, the removal of deep-rooted vegetation that binds colluvium to bedrock;
agricultural or forestry activities (logging), and urbanization, which change the amount of water infiltrating the soil;
temporal variation in land use and land cover (LULC), including the human abandonment of farming areas, e.g. due to the economic and social transformations that occurred in Europe after the Second World War; land degradation and extreme rainfall can then increase the frequency of erosion and landslide phenomena.
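The balance between shear strength and shear stress described above is commonly expressed as a factor of safety. As a simple illustration (a standard infinite-slope formulation from soil mechanics, sketched here rather than taken from this article), for a planar failure surface at vertical depth z below a slope inclined at angle \beta, with effective cohesion c', friction angle \phi', soil unit weight \gamma and pore water pressure u acting on the surface:

    FS = \frac{\text{available shear strength}}{\text{acting shear stress}} = \frac{c' + \left(\gamma z \cos^2\beta - u\right)\tan\phi'}{\gamma z \sin\beta \cos\beta}

Sliding becomes possible when FS falls below 1. The expression makes the listed causes concrete: infiltration and rising groundwater increase u, weathering and loss of roots reduce c', and undercutting or added loads increase \beta or z, all of which lower the factor of safety.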
Types
Hungr-Leroueil-Picarelli classification
In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, geologist David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. This scheme was later modified by Cruden and Varnes in 1996, and refined by Hutchinson (1988), Hungr et al. (2001), and finally by Hungr, Leroueil and Picarelli (2014). The classification resulting from the latest update is provided below.
Under this classification, six types of movement are recognized. Each type can be seen both in rock and in soil. A fall is a movement of isolated blocks or chunks of soil in free-fall. The term topple refers to blocks coming away by rotation from a vertical face. A slide is the movement of a body of material that generally remains intact while moving over one or several inclined surfaces or thin layers of material (also called shear zones) in which large deformations are concentrated. Slides are also sub-classified by the form of the surface(s) or shear zone(s) on which movement happens. The planes may be broadly parallel to the surface ("planar slides") or spoon-shaped ("rotational slides"). Slides can occur catastrophically, but movement on the surface can also be gradual and progressive. Spreads are a form of subsidence, in which a layer of material cracks, opens up, and expands laterally. Flows are the movement of fluidised material, which can be either dry or rich in water (such as in mud flows). Flows can move imperceptibly for years, or accelerate rapidly and cause disasters. Slope deformations are slow, distributed movements that can affect entire mountain slopes or portions of them. Some landslides are complex in the sense that they feature different movement types in different portions of the moving body, or they evolve from one movement type to another over time. For example, a landslide can initiate as a rock fall or topple and then, as the blocks disintegrate upon impact, transform into a debris slide or flow. An avalanching effect can also be present, in which the moving mass entrains additional material along its path.
Flows
Slope material that becomes saturated with water may produce a debris flow or mud flow. However, dry debris can also exhibit flow-like movement. Flowing debris or mud may pick up trees, houses and cars, and block bridges and rivers, causing flooding along its path. This phenomenon is particularly hazardous in alpine areas, where narrow gorges and steep valleys are conducive to faster flows. Debris and mud flows may initiate on the slopes or result from the fluidization of landslide material as it gains speed or incorporates further debris and water along its path. River blockages as the flow reaches a main stream can generate temporary dams. As the impoundments fail, a domino effect may be created, with a remarkable growth in the volume of the flowing mass, and in its destructive power.
An earthflow is the downslope movement of mostly fine-grained material. Earthflows can move at speeds within a very wide range, from as low as 1 mm/yr to many km/h. Though these are much like mudflows, overall they are slower-moving and are covered with solid material carried along by the flow from within. Clay, fine sand and silt, and fine-grained pyroclastic material are all susceptible to earthflows. These flows are usually controlled by the pore water pressures within the mass, which must be high enough to produce a low shearing resistance. On the slopes, some earthflows may be recognized by their elongated shape, with one or more lobes at their toes. As these lobes spread out, drainage of the mass increases and the margins dry out, lowering the overall velocity of the flow. This process also causes the flow to thicken. Earthflows occur more often during periods of high precipitation, which saturates the ground and builds up water pressures. However, earthflows that keep advancing even during dry seasons are not uncommon. Fissures may develop during the movement of clayey materials, which facilitate the intrusion of water into the moving mass and produce faster responses to precipitation.
A rock avalanche, sometimes referred to as sturzstrom, is a large and fast-moving landslide of the flow type. It is rarer than other types of landslides but is often very destructive. It typically exhibits a long runout, flowing very far over low-angle, flat, or even slightly uphill terrain. The mechanisms favoring the long runout can be different, but they typically result in the weakening of the sliding mass as the speed increases. The causes of this weakening are not completely understood. Especially for the largest landslides, it may involve the very quick heating of the shear zone due to friction, which may even cause the water that is present to vaporize and build up a large pressure, producing a sort of hovercraft effect. In some cases, the very high temperature may even cause some of the minerals to melt. During the movement, the rock in the shear zone may also be finely ground, producing a nanometer-size mineral powder that may act as a lubricant, reducing the resistance to motion and promoting larger speeds and longer runouts. The weakening mechanisms in large rock avalanches are similar to those occurring in seismic faults.
Slides
Slides can occur in any rock or soil material and are characterized by the movement of a mass over a planar or curvilinear surface or shear zone.
A debris slide is a type of slide characterized by the chaotic movement of material mixed with water and/or ice. It is usually triggered by the saturation of thickly vegetated slopes which results in an incoherent mixture of broken timber, smaller vegetation and other debris. Debris flows and avalanches differ from debris slides because their movement is fluid-like and generally much more rapid. This is usually a result of lower shear resistances and steeper slopes. Typically, debris slides start with the detachment of large rock fragments high on the slopes, which break apart as they descend.
Clay and silt slides are usually slow but can experience episodic acceleration in response to heavy rainfall or rapid snowmelt. They are often seen on gentle slopes and move over planar surfaces, such as over the underlying bedrock. Failure surfaces can also form within the clay or silt layer itself, and they usually have concave shapes, resulting in rotational slides.
Shallow and deep-seated landslides
Slope failure mechanisms often contain large uncertainties and can be significantly affected by the heterogeneity of soil properties. A landslide in which the sliding surface is located within the soil mantle or weathered bedrock (typically at a depth of a few decimeters to some meters) is called a shallow landslide. Debris slides and debris flows are usually shallow. Shallow landslides often occur in areas with highly permeable soils on top of soils of low permeability. The low-permeability soil traps the water in the shallower soil, generating high water pressures. As the top soil fills with water, it can become unstable and slide downslope.
Deep-seated landslides are those in which the sliding surface lies deep below the ground surface, for instance well below the maximum rooting depth of trees. They usually involve deep regolith, weathered rock, and/or bedrock and include large slope failures associated with translational, rotational, or complex movements. They tend to form along a plane of weakness such as a fault or bedding plane. They can be visually identified by concave scarps at the top and steep areas at the toe. Deep-seated landslides also shape landscapes over geological timescales and produce sediment that strongly alters the course of fluvial streams.
Related phenomena
An avalanche, similar in mechanism to a landslide, involves a large amount of ice, snow and rock falling quickly down the side of a mountain.
A pyroclastic flow is caused by a collapsing cloud of hot ash, gas and rocks from a volcanic explosion that moves rapidly down an erupting volcano.
Extreme precipitation and flow can cause gully formation in flatter environments not susceptible to landslides.
Resulting tsunamis
Landslides that occur undersea, or that impact water (e.g. significant rockfall or volcanic collapse into the sea), can generate tsunamis. Massive landslides can also generate megatsunamis, which can be hundreds of meters high. In 1958, one such tsunami occurred in Lituya Bay in Alaska.
Landslide prediction mapping
Landslide hazard analysis and mapping can provide useful information for catastrophic loss reduction, and assist in the development of guidelines for sustainable land-use planning. The analysis is used to identify the factors that are related to landslides, estimate the relative contribution of factors causing slope failures, establish a relation between the factors and landslides, and predict the landslide hazard in the future based on such a relationship. The factors that have been used for landslide hazard analysis can usually be grouped into geomorphology, geology, land use/land cover, and hydrogeology. Since many factors are considered for landslide hazard mapping, GIS is an appropriate tool because it has functions for the collection, storage, manipulation, display, and analysis of large amounts of spatially referenced data, which can be handled quickly and effectively. Cardenas reported evidence on the exhaustive use of GIS in conjunction with uncertainty modelling tools for landslide mapping. Remote sensing techniques are also widely employed for landslide hazard assessment and analysis. Before-and-after aerial photographs and satellite imagery are used to gather landslide characteristics, like distribution and classification, and factors like slope, lithology, and land use/land cover, which are used to help predict future events. Before-and-after imagery also helps to reveal how the landscape changed after an event, what may have triggered the landslide, and shows the process of regeneration and recovery.
Using satellite imagery in combination with GIS and on-the-ground studies, it is possible to generate maps of likely occurrences of future landslides. Such maps should show the locations of previous events as well as clearly indicate the probable locations of future events. In general, to predict landslides, one must assume that their occurrence is determined by certain geologic factors, and that future landslides will occur under the same conditions as past events. Therefore, it is necessary to establish a relationship between the geomorphologic conditions in which the past events took place and the expected future conditions.
Natural disasters are a dramatic example of people living in conflict with the environment. Early predictions and warnings are essential for the reduction of property damage and loss of life. Because landslides occur frequently and can represent some of the most destructive forces on earth, it is imperative to have a good understanding as to what causes them and how people can either help prevent them from occurring or simply avoid them when they do occur. Sustainable land management and development is also an essential key to reducing the negative impacts felt by landslides.
GIS offers a superior method for landslide analysis because it allows one to capture, store, manipulate, analyze, and display large amounts of data quickly and effectively. Because so many variables are involved, it is important to be able to overlay the many layers of data to develop a full and accurate portrayal of what is taking place on the Earth's surface. Researchers need to know which variables are the most important factors that trigger landslides in any given location. Using GIS, extremely detailed maps can be generated to show past events and likely future events which have the potential to save lives, property, and money.
Since the 1990s, GIS has also been successfully used in conjunction with decision support systems to show real-time risk evaluations on a map, based on monitoring data gathered in the area of the Val Pola disaster (Italy).
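The layer-overlay logic behind GIS-based susceptibility mapping can be sketched in a few lines of Python. The factor rasters, their 0–1 rescaling and the weights below are invented for illustration; a real analysis would derive the layers from elevation models, geological maps and land-cover data and calibrate the weights against a landslide inventory (for example by logistic regression) inside a GIS package.

    import numpy as np

    # Toy 4x4 "rasters" for three conditioning factors, each already rescaled
    # to 0-1 (1 = conditions most favourable to sliding). Values are invented.
    slope      = np.array([[0.2, 0.4, 0.7, 0.9],
                           [0.1, 0.3, 0.6, 0.8],
                           [0.1, 0.2, 0.5, 0.7],
                           [0.0, 0.1, 0.3, 0.5]])
    wetness    = np.array([[0.5, 0.5, 0.6, 0.8],
                           [0.4, 0.5, 0.6, 0.7],
                           [0.3, 0.4, 0.5, 0.6],
                           [0.2, 0.3, 0.4, 0.5]])
    land_cover = np.array([[0.3, 0.3, 0.9, 0.9],   # e.g. 0.9 = recently cleared
                           [0.3, 0.3, 0.9, 0.9],
                           [0.2, 0.2, 0.6, 0.6],
                           [0.2, 0.2, 0.4, 0.4]])

    # Illustrative weights; in practice they would be calibrated against a
    # mapped landslide inventory rather than chosen by hand.
    weights = {"slope": 0.5, "wetness": 0.3, "land_cover": 0.2}

    susceptibility = (weights["slope"] * slope
                      + weights["wetness"] * wetness
                      + weights["land_cover"] * land_cover)

    print(np.round(susceptibility, 2))  # higher values = more landslide-prone cells

Cells with the highest combined score would be the ones flagged for field checking, monitoring or land-use restrictions.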
Prehistoric landslides
Storegga Slide, some 8,000 years ago off the western coast of Norway. It caused massive tsunamis in Doggerland and other areas connected to the North Sea. The total volume of debris involved was comparable to a thick layer covering an area the size of Iceland. The landslide is thought to be among the largest in history.
Landslide which moved Heart Mountain to its current location, the largest continental landslide discovered so far. In the 48 million years since the slide occurred, erosion has removed most of the slide material.
Flims Rockslide, about , Switzerland, some 10,000 years ago in post-glacial Pleistocene/Holocene, the largest so far described in the Alps and on dry land that can be easily identified in a modestly eroded state.
The landslide around 200 BC which formed Lake Waikaremoana on the North Island of New Zealand, where a large block of the Ngamoko Range slid and dammed a gorge of Waikaretaheke River, forming a natural reservoir up to deep.
Cheekye Fan, British Columbia, Canada, about , Late Pleistocene in age.
The Manang-Braga rock avalanche/debris flow may have formed Marsyangdi Valley in the Annapurna Region, Nepal, during an interstadial period belonging to the last glacial period. Over of material are estimated to have been moved in the single event, making it one of the largest continental landslides.
Tsergo Ri landslide, a massive slope failure north of Kathmandu, Nepal, involving an estimated . Prior to this landslide the mountain may have been the world's 15th mountain above .
Historical landslides
The 1806 Goldau landslide on 2 September 1806
The Cap Diamant Québec rockslide on 19 September 1889
Frank Slide, Turtle Mountain, Alberta, Canada, on 29 April 1903
Khait landslide, Khait, Tajikistan, Soviet Union, on 10 July 1949
A magnitude 7.5 earthquake in Yellowstone Park (17 August 1959) caused a landslide that blocked the Madison River, and created Quake Lake.
Monte Toc landslide () falling into the Vajont Dam basin in Italy, causing a megatsunami and about 2000 deaths, on 9 October 1963
Hope Slide landslide () near Hope, British Columbia on 9 January 1965.
The 1966 Aberfan disaster
Tuve landslide in Gothenburg, Sweden on 30 November 1977.
The 1979 Abbotsford landslip, Dunedin, New Zealand on 8 August 1979.
The eruption of Mount St. Helens (18 May 1980) caused an enormous landslide when the top 1300 feet of the volcano suddenly gave way.
Val Pola landslide during Valtellina disaster (1987) Italy
Thredbo landslide, Australia, on 30 July 1997, destroyed a hostel.
Vargas mudslides, due to heavy rains in Vargas State, Venezuela, in December, 1999, causing tens of thousands of deaths.
2005 La Conchita landslide in Ventura, California causing 10 deaths.
2006 Southern Leyte mudslide in Saint Bernard, Southern Leyte, causing 1,126 deaths and burying the village of Guinsaugon.
2007 Chittagong mudslide, in Chittagong, Bangladesh, on 11 June 2007.
2008 Cairo landslide on 6 September 2008.
The 2009 Peloritani Mountains disaster caused 37 deaths, on 1 October.
The 2010 Uganda landslide caused over 100 deaths following heavy rain in Bududa region.
Zhouqu county mudslide in Gansu, China on 8 August 2010.
Devil's Slide, an ongoing landslide in San Mateo County, California
2011 Rio de Janeiro landslide in Rio de Janeiro, Brazil on 11 January 2011, causing 610 deaths.
2014 Pune landslide, in Pune, India.
2014 Oso mudslide, in Oso, Washington
2017 Mocoa landslide, in Mocoa, Colombia
2022 Ischia landslide
2024 Gofa landslides, in Gofa, Ethiopia
2024 Wayanad landslides, in Wayanad, Kerala, India
Extraterrestrial landslides
Evidence of past landslides has been detected on many bodies in the solar system, but since most observations are made by probes that only observe for a limited time, and most bodies in the solar system appear to be geologically inactive, not many landslides are known to have happened in recent times. Both Venus and Mars have been subject to long-term mapping by orbiting satellites, and examples of landslides have been observed on both planets.
Landslide mitigation
Landslide monitoring
The monitoring of landslides is essential for assessing dangerous situations, making it possible to issue alerts in time, to avoid losses of lives and property, and to have proper planning and risk-reduction measures in place. Currently, different types of techniques exist to monitor landslides:
Remote sensing techniques
InSAR (Interferometric Synthetic Aperture Radar): This remote sensing technique measures ground displacement over time with high precision. It is ideal for large-scale monitoring.
LiDAR (Light Detection and Ranging): Provides detailed 3D models of terrain to detect changes by comparing point clouds acquired at different times.
Optical satellite imagery: Useful for identifying surface changes, geomorphological features (e.g. cracks and scarps) and mapping landslide-prone areas.
UAVs (Unmanned Aerial Vehicles): This technique captures high-resolution images and topographic data in inaccessible areas.
Thermal imaging: Thermal images enable the detection of temperature variations that may indicate water movement or stress in the slope.
Ground-based techniques
GPS (Global Positioning System): Tracks ground movements at specific points over time using a constellation of satellites orbiting the Earth.
Topographic surveys: Measure displacements of marked targets on a slope.
Ground-based radar (GB-SAR): Continuously monitors surface deformation using a SAR sensor and detects movement in real time. It follows the same principle as InSAR.
Geotechnical instrumentation
Piezometers: Monitor groundwater levels and pore water pressure, which are critical triggers for landslides.
Load cells: Measure stress changes in retaining structures or anchors.
Tiltmeters: Detect small angular changes in the slope surface or retaining walls.
Extensometers: Measure displacement along cracks or tension zones.
Inclinometers: Detect subsurface movements by monitoring changes in the inclination of a borehole.
Seismic techniques
Geophones and accelerometers: Detect seismic vibrations or movements that might indicate slope instability.
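A minimal sketch of how readings from the instruments listed above can be turned into timely alerts is shown below; the displacement series and the velocity threshold are hypothetical, and operational early-warning systems combine several sensors, filtering and site-specific criteria.

    import numpy as np

    # Hypothetical daily cumulative displacement readings (mm) from a GPS
    # benchmark or extensometer on a monitored slope.
    days = np.arange(10)
    displacement_mm = np.array([0.0, 0.4, 0.9, 1.3, 1.9, 2.6, 3.8, 5.5, 8.1, 12.0])

    # Day-to-day velocity; sustained acceleration is a common empirical
    # warning sign of impending failure.
    velocity = np.diff(displacement_mm)   # mm/day
    ALERT_VELOCITY = 2.0                  # illustrative threshold, mm/day

    for day, v in zip(days[1:], velocity):
        if v > ALERT_VELOCITY:
            print(f"day {day}: velocity {v:.1f} mm/day exceeds threshold - issue alert")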
Climate-change impact on landslides
Climate-change impacts on temperature, average rainfall, rainfall extremes, and evapotranspiration may affect landslide distribution, frequency and intensity (62). However, this impact shows strong variability in different areas (63). Therefore, the effects of climate change on landslides need to be studied on a regional scale.
Climate change can have both positive and negative impacts on landslides.
Temperature rise may increase evapotranspiration, leading to a reduction in soil moisture, and may stimulate vegetation growth, also due to a CO2 increase in the atmosphere. Both effects may reduce landslides in some conditions.
On the other hand, temperature rise can cause an increase in landslides due to:
the acceleration of snowmelt and an increase of rain on snow during spring, leading to strong infiltration events (64).
Permafrost degradation that reduces the cohesion of soils and rock masses due to the loss of interstitial ice (65). This mainly occurs at high elevation.
Glacier retreat, which has the dual effect of removing support from mountain slopes and increasing their steepness.
Since average precipitation is expected to decrease or increase regionally (63), rainfall-induced landslides may change accordingly, due to changes in infiltration, groundwater levels and river bank erosion.
Weather extremes, including heavy precipitation, are expected to increase due to climate change (63). This has negative effects with respect to landslides, due to focused infiltration into soil and rock (66) and an increase in runoff events, which may trigger debris flows.
| Physical sciences | Natural disasters | null |
37220 | https://en.wikipedia.org/wiki/Infection | Infection | An infection is the invasion of tissues by pathogens, their multiplication, and the reaction of host tissues to the infectious agent and the toxins they produce. An infectious disease, also known as a transmissible disease or communicable disease, is an illness resulting from an infection.
Infections can be caused by a wide range of pathogens, most prominently bacteria and viruses. Hosts can fight infections using their immune systems. Mammalian hosts react to infections with an innate response, often involving inflammation, followed by an adaptive response.
Treatment for infections depends on the type of pathogen involved. Common medications include:
Antibiotics for bacterial infections.
Antivirals for viral infections.
Antifungals for fungal infections.
Antiprotozoals for protozoan infections.
Antihelminthics for infections caused by parasitic worms.
Infectious diseases remain a significant global health concern, causing approximately 9.2 million deaths in 2013 (17% of all deaths). The branch of medicine that focuses on infections is referred to as infectious diseases.
Types
Infections are caused by infectious agents (pathogens) including:
Bacteria (e.g. Mycobacterium tuberculosis, Staphylococcus aureus, Escherichia coli, Clostridium botulinum, and Salmonella spp.)
Viruses and related agents such as viroids. (E.g. HIV, Rhinovirus, Lyssaviruses such as Rabies virus, Ebolavirus and Severe acute respiratory syndrome coronavirus 2)
Fungi, further subclassified into:
Ascomycota, including yeasts such as Candida (the cause of the most common fungal infections); filamentous fungi such as Aspergillus; Pneumocystis species; and dermatophytes, a group of organisms causing infection of the skin and other superficial structures in humans.
Basidiomycota, including the human-pathogenic genus Cryptococcus.
Parasites, which are usually divided into:
Unicellular organisms (e.g. malaria, Toxoplasma, Babesia)
Macroparasites (worms or helminths) including nematodes such as parasitic roundworms and pinworms, tapeworms (cestodes), and flukes (trematodes, such as schistosomes). Diseases caused by helminths are sometimes termed infestations, but are sometimes called infections.
Arthropods such as ticks, mites, fleas, and lice can also cause human disease, which is conceptually similar to infection, but invasion of a human or animal body by these macroparasites is usually termed infestation.
Prions (although they do not secrete toxins)
Signs and symptoms
The signs and symptoms of an infection depend on the type of disease. Some signs of infection affect the whole body generally, such as fatigue, loss of appetite, weight loss, fevers, night sweats, chills, aches and pains. Others are specific to individual body parts, such as skin rashes, coughing, or a runny nose.
In certain cases, infectious diseases may be asymptomatic for much or even all of their course in a given host. In the latter case, the disease may only be defined as a "disease" (which by definition means an illness) in hosts who secondarily become ill after contact with an asymptomatic carrier. An infection is not synonymous with an infectious disease, as some infections do not cause illness in a host.
Bacterial or viral
As bacterial and viral infections can both cause the same kinds of symptoms, it can be difficult to distinguish which is the cause of a specific infection. Distinguishing the two is important, since viral infections cannot be cured by antibiotics whereas bacterial infections can.
Pathophysiology
There is a general chain of events that applies to infections, sometimes called the chain of infection or transmission chain. The chain of events involves several steps, which include the infectious agent, reservoir, entering a susceptible host, exit, and transmission to new hosts. Each of the links must be present in chronological order for an infection to develop. Understanding these steps helps health care workers target the infection and prevent it from occurring in the first place.
Colonization
Infection begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization. Most humans are not easily infected. Those with compromised or weakened immune systems have an increased susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly susceptible to opportunistic infections. Entrance to the host at host–pathogen interface, generally occurs through the mucosa in orifices like the oral cavity, nose, eyes, genitalia, anus, or the microbe can enter through open wounds. While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different organs. Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids.
Wound colonization refers to non-replicating microorganisms within the wound, while in infected wounds, replicating organisms exist and tissue is injured. All multicellular organisms are colonized to some degree by extrinsic organisms, and the vast majority of these exist in either a mutualistic or commensal relationship with the host. An example of the former are the anaerobic bacterial species that colonize the mammalian colon, and an example of the latter are the various species of staphylococcus that exist on human skin. Neither of these colonizations is considered an infection. The difference between an infection and a colonization is often only a matter of circumstance. Non-pathogenic organisms can become pathogenic given specific conditions, and even the most virulent organism requires certain circumstances to cause a compromising infection. Some colonizing bacteria, such as Corynebacteria sp. and Viridans streptococci, prevent the adhesion and colonization of pathogenic bacteria and thus have a symbiotic relationship with the host, preventing infection and speeding wound healing.
The variables that determine the outcome of a host becoming inoculated by a pathogen include:
the route of entry of the pathogen and the access to host regions that it gains
the intrinsic virulence of the particular organism
the quantity or load of the initial inoculant
the immune status of the host being colonized
As an example, several staphylococcal species remain harmless on the skin, but, when present in a normally sterile space, such as in the capsule of a joint or the peritoneum, multiply without resistance and cause harm.
An interesting fact that gas chromatography–mass spectrometry, 16S ribosomal RNA analysis, omics, and other advanced technologies have made more apparent to humans in recent decades is that microbial colonization is very common even in environments that humans think of as being nearly sterile. Because it is normal to have bacterial colonization, it is difficult to know which chronic wounds can be classified as infected and how much risk of progression exists. Despite the huge number of wounds seen in clinical practice, there are limited quality data for evaluated symptoms and signs. A review of chronic wounds in the Journal of the American Medical Association's "Rational Clinical Examination Series" quantified the importance of increased pain as an indicator of infection. The review showed that the most useful finding is an increase in the level of pain [likelihood ratio (LR) range, 11–20], which makes infection much more likely, but that the absence of pain (negative likelihood ratio range, 0.64–0.88) does not rule out infection.
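As a worked illustration of how such likelihood ratios are applied (the pretest probability below is assumed for the example and does not come from the cited review), post-test odds are obtained by multiplying pretest odds by the likelihood ratio:

    \text{post-test odds} = \frac{p}{1-p} \times LR

If a clinician's pretest probability of wound infection is 20% (odds 0.25) and increased pain is present with LR = 15 (within the quoted 11–20 range), the post-test odds become 0.25 × 15 = 3.75, i.e. a post-test probability of 3.75 / 4.75 ≈ 79%. Conversely, with no increase in pain and LR ≈ 0.7, the odds only fall to 0.25 × 0.7 = 0.175, a probability of about 15%, which is why the absence of pain cannot rule infection out.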
Disease
Disease can arise if the host's protective immune mechanisms are compromised and the organism inflicts damage on the host. Microorganisms can cause tissue damage by releasing a variety of toxins or destructive enzymes. For example, Clostridium tetani releases a toxin that paralyzes muscles, and staphylococcus releases toxins that produce shock and sepsis. Not all infectious agents cause disease in all hosts. For example, less than 5% of individuals infected with polio develop disease. On the other hand, some infectious agents are highly virulent. The prion causing mad cow disease and Creutzfeldt–Jakob disease invariably kills all animals and people that are infected.
Persistent infections occur because the body is unable to clear the organism after the initial infection. Persistent infections are characterized by the continual presence of the infectious organism, often as latent infection with occasional recurrent relapses of active infection. There are some viruses that can maintain a persistent infection by infecting different cells of the body. Some viruses once acquired never leave the body. A typical example is the herpes virus, which tends to hide in nerves and become reactivated when specific circumstances arise.
Persistent infections cause millions of deaths globally each year. Chronic infections by parasites account for a high morbidity and mortality in many underdeveloped countries.
Transmission
For infecting organisms to survive and repeat the infection cycle in other hosts, they (or their progeny) must leave an existing reservoir and cause infection elsewhere. Infection transmission can take place via many potential routes:
Droplet contact, also known as the respiratory route, and the resultant infection can be termed airborne disease. If an infected person coughs or sneezes on another person the microorganisms, suspended in warm, moist droplets, may enter the body through the nose, mouth or eye surfaces.
Fecal-oral transmission, wherein foodstuffs or water become contaminated (by people not washing their hands before preparing food, or untreated sewage being released into a drinking water supply) and the people who eat and drink them become infected. Common fecal-oral transmitted pathogens include Vibrio cholerae, Giardia species, rotaviruses, Entamoeba histolytica, Escherichia coli, and tapeworms. Most of these pathogens cause gastroenteritis.
Sexual transmission, with the result being called sexually transmitted infection.
Oral transmission, diseases that are transmitted primarily by oral means may be caught through direct oral contact such as kissing, or by indirect contact such as by sharing a drinking glass or a cigarette.
Transmission by direct contact; diseases transmissible by direct contact include athlete's foot, impetigo and warts.
Vehicle transmission, transmission by an inanimate reservoir (food, water, soil).
Vertical transmission, directly from the mother to an embryo, fetus or baby during pregnancy or childbirth. It can occur as a result of a pre-existing infection or one acquired during pregnancy.
Iatrogenic transmission, due to medical procedures such as injection or transplantation of infected material.
Vector-borne transmission, transmitted by a vector, which is an organism that does not cause disease itself but that transmits infection by conveying pathogens from one host to another.
The relationship between virulence and transmissibility is complex; studies have shown no clear relationship between the two. There is still a small amount of evidence that partially suggests a link between virulence and transmissibility.
Diagnosis
Diagnosis of infectious disease sometimes involves identifying an infectious agent either directly or indirectly. In practice most minor infectious diseases such as warts, cutaneous abscesses, respiratory system infections and diarrheal diseases are diagnosed by their clinical presentation and treated without knowledge of the specific causative agent. Conclusions about the cause of the disease are based upon the likelihood that a patient came in contact with a particular agent, the presence of a microbe in a community, and other epidemiological considerations. Given sufficient effort, all known infectious agents can be specifically identified.
Diagnosis of infectious disease is nearly always initiated by medical history and physical examination. More detailed identification techniques involve the culture of infectious agents isolated from a patient. Culture allows identification of infectious organisms by examining their microscopic features, by detecting the presence of substances produced by pathogens, and by directly identifying an organism by its genotype.
Many infectious organisms are identified without culture and microscopy. This is especially true for viruses, which cannot grow in culture. For some suspected pathogens, doctors may conduct tests that examine a patient's blood or other body fluids for antigens or antibodies that indicate presence of a specific pathogen that the doctor suspects.
Other techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal abnormalities resulting from the growth of an infectious agent. The images are useful in detection of, for example, a bone abscess or a spongiform encephalopathy produced by a prion.
The benefits of identification, however, are often greatly outweighed by the cost, as often there is no specific treatment, the cause is obvious, or the outcome of an infection is likely to be benign.
Symptomatic diagnostics
The diagnosis is aided by the presenting symptoms in any individual with an infectious disease, yet it usually needs additional diagnostic techniques to confirm the suspicion. Some signs are specifically characteristic and indicative of a disease and are called pathognomonic signs; but these are rare. Not all infections are symptomatic.
In children the presence of cyanosis, rapid breathing, poor peripheral perfusion, or a petechial rash increases the risk of a serious infection by more than fivefold. Other important indicators include parental concern, clinical instinct, and temperature greater than 40 °C.
Microbial culture
Many diagnostic approaches depend on microbiological culture to isolate a pathogen from the appropriate clinical specimen. In a microbial culture, a growth medium is provided for a specific agent. A sample taken from potentially diseased tissue or fluid is then tested for the presence of an infectious agent able to grow within that medium. Many pathogenic bacteria are easily grown on nutrient agar, a form of solid medium that supplies carbohydrates and proteins necessary for growth, along with copious amounts of water. A single bacterium will grow into a visible mound on the surface of the plate called a colony, which may be separated from other colonies or melded together into a "lawn". The size, color, shape and form of a colony are characteristic of the bacterial species, its specific genetic makeup (its strain), and the environment that supports its growth. Other ingredients are often added to the plate to aid in identification. Plates may contain substances that permit the growth of some bacteria and not others, or that change color in response to certain bacteria and not others. Bacteriological plates such as these are commonly used in the clinical identification of infectious bacteria. Microbial culture may also be used in the identification of viruses: the medium, in this case, being cells grown in culture that the virus can infect, and then alter or kill. In the case of viral identification, a region of dead cells results from viral growth, and is called a "plaque". Eukaryotic parasites may also be grown in culture as a means of identifying a particular agent.
In the absence of suitable plate culture techniques, some microbes require culture within live animals. Bacteria such as Mycobacterium leprae and Treponema pallidum can be grown in animals, although serological and microscopic techniques make the use of live animals unnecessary. Viruses are also usually identified using alternatives to growth in culture or animals. Some viruses may be grown in embryonated eggs. Another useful identification method is Xenodiagnosis, or the use of a vector to support the growth of an infectious agent. Chagas disease is the most significant example, because it is difficult to directly demonstrate the presence of the causative agent, Trypanosoma cruzi in a patient, which therefore makes it difficult to definitively make a diagnosis. In this case, xenodiagnosis involves the use of the vector of the Chagas agent T. cruzi, an uninfected triatomine bug, which takes a blood meal from a person suspected of having been infected. The bug is later inspected for growth of T. cruzi within its gut.
Microscopy
Another principal tool in the diagnosis of infectious disease is microscopy. Virtually all of the culture techniques discussed above rely, at some point, on microscopic examination for definitive identification of the infectious agent. Microscopy may be carried out with simple instruments, such as the compound light microscope, or with instruments as complex as an electron microscope. Samples obtained from patients may be viewed directly under the light microscope, and can often rapidly lead to identification. Microscopy is often also used in conjunction with biochemical staining techniques, and can be made exquisitely specific when used in combination with antibody-based techniques. For example, antibodies made artificially fluorescent (fluorescently labeled antibodies) can be directed to bind to and identify specific antigens present on a pathogen. A fluorescence microscope is then used to detect fluorescently labeled antibodies bound to internalized antigens within clinical samples or cultured cells. This technique is especially useful in the diagnosis of viral diseases, where the light microscope is incapable of identifying a virus directly.
Other microscopic procedures may also aid in identifying infectious agents. Almost all cells readily stain with a number of basic dyes due to the electrostatic attraction between negatively charged cellular molecules and the positive charge on the dye. A cell is normally transparent under a microscope, and using a stain increases the contrast of a cell with its background. Staining a cell with a dye such as Giemsa stain or crystal violet allows a microscopist to describe its size, shape, internal and external components and its associations with other cells. The response of bacteria to different staining procedures is used in the taxonomic classification of microbes as well. Two methods, the Gram stain and the acid-fast stain, are the standard approaches used to classify bacteria and to diagnose disease. The Gram stain identifies the bacterial groups Bacillota and Actinomycetota, both of which contain many significant human pathogens. The acid-fast staining procedure identifies the Actinomycetota genera Mycobacterium and Nocardia.
Biochemical tests
Biochemical tests used in the identification of infectious agents include the detection of metabolic or enzymatic products characteristic of a particular infectious agent. Since bacteria ferment carbohydrates in patterns characteristic of their genus and species, the detection of fermentation products is commonly used in bacterial identification. Acids, alcohols and gases are usually detected in these tests when bacteria are grown in selective liquid or solid media.
The isolation of enzymes from infected tissue can also provide the basis of a biochemical diagnosis of an infectious disease. For example, humans can make neither RNA replicases nor reverse transcriptase, and the presence of these enzymes is characteristic of specific types of viral infections. The ability of the viral protein hemagglutinin to bind red blood cells together into a detectable matrix may also be characterized as a biochemical test for viral infection, although strictly speaking hemagglutinin is not an enzyme and has no metabolic function.
Serological methods are highly sensitive, specific and often extremely rapid tests used to identify microorganisms. These tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen, usually a protein or carbohydrate made by an infectious agent, is bound by the antibody. This binding then sets off a chain of events that can be visibly obvious in various ways, dependent upon the test. For example, "Strep throat" is often diagnosed within minutes, based on the appearance of antigens made by the causative agent, S. pyogenes, retrieved from a patient's throat with a cotton swab. Serological tests, if available, are usually the preferred route of identification; however, the tests are costly to develop and the reagents used in the test often require refrigeration. Some serological methods are extremely costly, although when commonly used, such as with the "strep test", they can be inexpensive.
Complex serological techniques have been developed into what are known as immunoassays. Immunoassays can use the basic antibody–antigen binding as the basis to produce an electromagnetic or particle radiation signal, which can be detected by some form of instrumentation. The signal from unknowns can be compared to that of standards, allowing quantitation of the target antigen. To aid in the diagnosis of infectious diseases, immunoassays can detect or measure antigens from either infectious agents or proteins generated by an infected organism in response to a foreign agent. For example, immunoassay A may detect the presence of a surface protein from a virus particle. Immunoassay B, on the other hand, may detect or measure antibodies produced by an organism's immune system that are made to neutralize and allow the destruction of the virus.
Instrumentation can be used to read extremely small signals created by secondary reactions linked to the antibody – antigen binding. Instrumentation can control sampling, reagent use, reaction times, signal detection, calculation of results, and data management to yield a cost-effective automated process for diagnosis of infectious disease.
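The comparison of unknown signals against standards described above can be illustrated with a minimal sketch; the calibration concentrations and signals are hypothetical, and real immunoassay software typically fits a non-linear (e.g. four-parameter logistic) standard curve rather than interpolating linearly.

    import numpy as np

    # Hypothetical calibration standards: known antigen concentrations (ng/mL)
    # and the instrument signal each one produced.
    standard_conc   = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])
    standard_signal = np.array([0.02, 0.10, 0.45, 0.85, 3.90, 7.60])

    def quantify(sample_signal):
        """Estimate antigen concentration by linear interpolation on the
        standard curve (signals must fall within the calibrated range)."""
        return np.interp(sample_signal, standard_signal, standard_conc)

    print(quantify(0.60))  # a sample falling between the 5 and 10 ng/mL standards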
PCR-based diagnostics
Technologies based upon the polymerase chain reaction (PCR) method are expected to become nearly ubiquitous gold standards of diagnostics in the near future, for several reasons. First, the catalog of infectious agents has grown to the point that virtually all of the significant infectious agents of the human population have been identified. Second, an infectious agent must grow within the human body to cause disease; essentially it must amplify its own nucleic acids to cause a disease. This amplification of nucleic acid in infected tissue offers an opportunity to detect the infectious agent by using PCR. Third, the essential tools for directing PCR, primers, are derived from the genomes of infectious agents, and with time those genomes will be known if they are not already.
Thus, the technological ability to detect any infectious agent rapidly and specifically is currently available. The only remaining blockades to the use of PCR as a standard tool of diagnosis are in its cost and application, neither of which is insurmountable. The diagnosis of a few diseases will not benefit from the development of PCR methods, such as some of the clostridial diseases (tetanus and botulism). These diseases are fundamentally biological poisonings by relatively small numbers of infectious bacteria that produce extremely potent neurotoxins. A significant proliferation of the infectious agent does not occur, which limits the ability of PCR to detect the presence of any bacteria.
Metagenomic sequencing
Given the wide range of bacterial, viral, fungal, protozoal, and helminthic pathogens that cause debilitating and life-threatening illnesses, the ability to quickly identify the cause of infection is important yet often challenging. For example, more than half of cases of encephalitis, a severe illness affecting the brain, remain undiagnosed, despite extensive testing using the standard of care (microbiological culture) and state-of-the-art clinical laboratory methods. Metagenomic sequencing-based diagnostic tests are currently being developed for clinical use and show promise as a sensitive, specific, and rapid way to diagnose infection using a single all-encompassing test. This test is similar to current PCR tests; however, an untargeted whole genome amplification is used rather than primers for a specific infectious agent. This amplification step is followed by next-generation sequencing or third-generation sequencing, alignment comparisons, and taxonomic classification using large databases of thousands of pathogen and commensal reference genomes. Simultaneously, antimicrobial resistance genes within pathogen and plasmid genomes are sequenced and aligned to the taxonomically classified pathogen genomes to generate an antimicrobial resistance profile – analogous to antibiotic sensitivity testing – to facilitate antimicrobial stewardship and allow for the optimization of treatment using the most effective drugs for a patient's infection.
Metagenomic sequencing could prove especially useful for diagnosis when the patient is immunocompromised. An ever-wider array of infectious agents can cause serious harm to individuals with immunosuppression, so clinical screening must often be broader. Additionally, the expression of symptoms is often atypical, making a clinical diagnosis based on presentation more difficult. Thirdly, diagnostic methods that rely on the detection of antibodies are more likely to fail. A rapid, sensitive, specific, and untargeted test for all known human pathogens that detects the presence of the organism's DNA rather than antibodies is therefore highly desirable.
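The taxonomic-classification step of such a pipeline can be illustrated with a deliberately tiny k-mer-matching sketch; the reference sequences, reads and k-mer length below are invented for the example, whereas production tools match reads against indexed databases holding thousands of complete pathogen and commensal genomes.

    from collections import Counter

    def kmers(seq, k=8):
        """Return the set of all k-length substrings of a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # Hypothetical, vastly shortened reference "genomes".
    references = {
        "pathogen_A":  "ATGGCGTACGTTAGCATCGATCGGATCCGTAGCTAGCTAACGT",
        "commensal_B": "TTGACCGTAGGCTTAACGGATCGTACCGGTTAGCAATCGGCTA",
    }
    ref_index = {name: kmers(seq) for name, seq in references.items()}

    def classify(read, index):
        """Assign a read to the reference sharing the most k-mers with it."""
        scores = {name: len(kmers(read) & km) for name, km in index.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unclassified"

    reads = ["GCGTACGTTAGCATCGATCG", "GGATCGTACCGGTTAGCAAT", "CCCCCCCCCCCCCCCCCCCC"]
    print(Counter(classify(r, ref_index) for r in reads))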
Indication of tests
There is usually an indication for a specific identification of an infectious agent only when such identification can aid in the treatment or prevention of the disease, or to advance knowledge of the course of an illness prior to the development of effective therapeutic or preventative measures. For example, in the early 1980s, prior to the appearance of AZT for the treatment of AIDS, the course of the disease was closely followed by monitoring the composition of patient blood samples, even though the outcome would not offer the patient any further treatment options. In part, these studies on the appearance of HIV in specific communities permitted the advancement of hypotheses as to the route of transmission of the virus. By understanding how the disease was transmitted, resources could be targeted to the communities at greatest risk in campaigns aimed at reducing the number of new infections. The specific serological diagnostic identification, and later genotypic or molecular identification, of HIV also enabled the development of hypotheses as to the temporal and geographical origins of the virus, as well as a myriad of other hypothesis. The development of molecular diagnostic tools have enabled physicians and researchers to monitor the efficacy of treatment with anti-retroviral drugs. Molecular diagnostics are now commonly used to identify HIV in healthy people long before the onset of illness and have been used to demonstrate the existence of people who are genetically resistant to HIV infection. Thus, while there still is no cure for AIDS, there is great therapeutic and predictive benefit to identifying the virus and monitoring the virus levels within the blood of infected individuals, both for the patient and for the community at large.
Classification
Subclinical versus clinical (latent versus apparent)
Symptomatic infections are apparent and clinical, whereas an infection that is active but does not produce noticeable symptoms may be called inapparent, silent, subclinical, or occult. An infection that is inactive or dormant is called a latent infection. An example of a latent bacterial infection is latent tuberculosis. Some viral infections can also be latent; examples of latent viral infections include those from the Herpesviridae family.
The word infection can denote any presence of a particular pathogen at all (no matter how little) but also is often used in a sense implying a clinically apparent infection (in other words, a case of infectious disease). This fact occasionally creates some ambiguity or prompts some usage discussion; to get around this it is common for health professionals to speak of colonization (rather than infection) when they mean that some of the pathogens are present but that no clinically apparent infection (no disease) is present.
Course of infection
Different terms are used to describe how and where infections present over time. In an acute infection, symptoms develop rapidly; its course can either be rapid or protracted. In chronic infection, symptoms usually develop gradually over weeks or months and are slow to resolve. In subacute infections, symptoms take longer to develop than in acute infections but arise more quickly than those of chronic infections. A focal infection is an initial site of infection from which organisms travel via the bloodstream to another area of the body.
Primary versus opportunistic
Among the many varieties of microorganisms, relatively few cause disease in otherwise healthy individuals. Infectious disease results from the interplay between those few pathogens and the defenses of the hosts they infect. The appearance and severity of disease resulting from any pathogen depend upon the ability of that pathogen to damage the host as well as the ability of the host to resist the pathogen. However, a host's immune system can also cause damage to the host itself in an attempt to control the infection. Clinicians, therefore, classify infectious microorganisms or microbes according to the status of host defenses – either as primary pathogens or as opportunistic pathogens.
Primary pathogens
Primary pathogens cause disease as a result of their presence or activity within the normal, healthy host, and their intrinsic virulence (the severity of the disease they cause) is, in part, a necessary consequence of their need to reproduce and spread. Many of the most common primary pathogens of humans only infect humans, however, many serious diseases are caused by organisms acquired from the environment or that infect non-human hosts.
Opportunistic pathogens
Opportunistic pathogens can cause an infectious disease in a host with depressed resistance (immunodeficiency) or if they have unusual access to the inside of the body (for example, via trauma). Opportunistic infection may be caused by microbes ordinarily in contact with the host, such as pathogenic bacteria or fungi in the gastrointestinal or the upper respiratory tract, and they may also result from (otherwise innocuous) microbes acquired from other hosts (as in Clostridioides difficile colitis) or from the environment as a result of traumatic introduction (as in surgical wound infections or compound fractures). An opportunistic disease requires impairment of host defenses, which may occur as a result of genetic defects (such as chronic granulomatous disease), exposure to antimicrobial drugs or immunosuppressive chemicals (as might occur following poisoning or cancer chemotherapy), exposure to ionizing radiation, or as a result of an infectious disease with immunosuppressive activity (such as with measles, malaria or HIV disease). Primary pathogens may also cause more severe disease in a host with depressed resistance than would normally occur in an immunosufficient host.
Secondary infection
While a primary infection can practically be viewed as the root cause of an individual's current health problem, a secondary infection is a sequela or complication of that root cause. For example, an infection due to a burn or penetrating trauma (the root cause) is a secondary infection. Primary pathogens often cause primary infection and often cause secondary infection. Usually, opportunistic infections are viewed as secondary infections (because immunodeficiency or injury was the predisposing factor).
Other types of infection
Other types of infection consist of mixed, iatrogenic, nosocomial, and community-acquired infections. A mixed infection is an infection that is caused by two or more pathogens. An example of this is appendicitis, which is caused by Bacteroides fragilis and Escherichia coli. The second is an iatrogenic infection, a type of infection that is transmitted from a health care worker to a patient. A nosocomial infection is one that is acquired during a hospital stay or in another health care setting. Lastly, a community-acquired infection is one that is acquired outside of a health care setting, in the general community.
Infectious or not
One manner of proving that a given disease is infectious, is to satisfy Koch's postulates (first proposed by Robert Koch), which require that first, the infectious agent be identifiable only in patients who have the disease, and not in healthy controls, and second, that patients who contract the infectious agent also develop the disease. These postulates were first used in the discovery that Mycobacteria species cause tuberculosis.
However, Koch's postulates cannot usually be tested in modern practice for ethical reasons. Proving them would require experimental infection of a healthy individual with a pathogen produced as a pure culture. Conversely, even clearly infectious diseases do not always meet the infectious criteria; for example, Treponema pallidum, the causative spirochete of syphilis, cannot be cultured in vitro – however, the organism can be cultured in rabbit testes. It is less clear that a pure culture has been obtained when the organism comes from an animal host than when it is derived from plate culture.
Epidemiology, or the study and analysis of who, why and where disease occurs, and what determines whether various populations have a disease, is another important tool used to understand infectious disease. Epidemiologists may determine differences among groups within a population, such as whether certain age groups have a greater or lesser rate of infection; whether groups living in different neighborhoods are more likely to be infected; and by other factors, such as gender and race. Researchers also may assess whether a disease outbreak is sporadic, or just an occasional occurrence; endemic, with a steady level of regular cases occurring in a region; epidemic, with a fast arising, and unusually high number of cases in a region; or pandemic, which is a global epidemic. If the cause of the infectious disease is unknown, epidemiology can be used to assist with tracking down the sources of infection.
Contagiousness
Infectious diseases are sometimes called contagious diseases when they are easily transmitted by contact with an ill person or their secretions (e.g., influenza). Thus, a contagious disease is a subset of infectious disease that is especially infective or easily transmitted. All contagious diseases are infectious, but not vice versa. Other types of infectious, transmissible, or communicable diseases with more specialized routes of infection, such as vector transmission or sexual transmission, are usually not regarded as "contagious", and often do not require medical isolation (sometimes loosely called quarantine) of those affected. However, this specialized connotation of the word "contagious" and "contagious disease" (easy transmissibility) is not always respected in popular use.
Infectious diseases are commonly transmitted from person to person through direct contact. The types of direct contact are person-to-person contact and droplet spread. Indirect contact, such as airborne transmission, contaminated objects, food and drinking water, animal-to-person contact, animal reservoirs, insect bites, and environmental reservoirs, is another way infectious diseases are transmitted. The basic reproduction number of an infectious disease measures how easily it spreads through direct or indirect contact.
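The basic reproduction number mentioned above can be made concrete with the standard SIR compartmental model (a textbook formulation, given here as an illustration rather than as part of the original text). If \beta is the transmission rate per infectious individual and \gamma the recovery rate, then

    R_0 = \frac{\beta}{\gamma}, \qquad R_t = R_0 \cdot \frac{S(t)}{N}

where S(t)/N is the fraction of the population still susceptible. An outbreak can grow only while the effective reproduction number R_t exceeds 1, which is why interventions that reduce contact (lowering \beta) or the susceptible fraction (vaccination) can halt transmission.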
By anatomic location
Infections can be classified by the anatomic location or organ system infected, including:
Urinary tract infection
Skin infection
Respiratory tract infection
Odontogenic infection (an infection that originates within a tooth or in the closely surrounding tissues)
Vaginal infections
Intra-amniotic infection
In addition, locations of inflammation where infection is the most common cause include pneumonia, meningitis and salpingitis.
Prevention
Techniques like hand washing, wearing gowns, and wearing face masks can help prevent infections from being passed from one person to another. Aseptic technique was introduced in medicine and surgery in the late 19th century and greatly reduced the incidence of infections caused by surgery. Frequent hand washing remains the most important defense against the spread of unwanted organisms. There are other forms of prevention such as avoiding the use of illicit drugs, using a condom, wearing gloves, and having a healthy lifestyle with a balanced diet and regular exercise. Cooking foods well and avoiding foods that have been left outside for a long time is also important.
Antimicrobial substances used to prevent transmission of infections include:
antiseptics, which are applied to living tissue/skin
disinfectants, which destroy microorganisms found on non-living objects.
antibiotics, called prophylactic when given as prevention rather than as treatment of infection. However, long-term use of antibiotics leads to resistance of bacteria. While humans do not become immune to antibiotics, the bacteria do. Thus, avoiding the use of antibiotics for longer than necessary helps prevent bacteria from forming mutations that aid in antibiotic resistance.
One of the ways to prevent or slow down the transmission of infectious diseases is to recognize the different characteristics of various diseases. Some critical disease characteristics that should be evaluated include virulence, distance traveled by those affected, and level of contagiousness. The human strains of Ebola virus, for example, incapacitate those infected extremely quickly and kill them soon after. As a result, those affected by this disease do not have the opportunity to travel very far from the initial infection zone. Also, this virus must spread through skin lesions or permeable membranes such as the eye. Thus, the initial stage of Ebola is not very contagious since its victims experience only internal hemorrhaging. As a result of the above features, the spread of Ebola is not very rapid and usually stays within a relatively confined geographical area. In contrast, the human immunodeficiency virus (HIV) kills its victims very slowly by attacking their immune system. As a result, many of its victims transmit the virus to other individuals before even realizing that they are carrying the disease. Also, the relatively low virulence allows its victims to travel long distances, increasing the likelihood of an epidemic.
Another effective way to decrease the transmission rate of infectious diseases is to recognize the effects of small-world networks. In epidemics, there are often extensive interactions within hubs or groups of infected individuals and other interactions within discrete hubs of susceptible individuals. Despite the low interaction between discrete hubs, the disease can jump and spread in a susceptible hub via a single or few interactions with an infected hub. Thus, infection rates in small-world networks can be reduced somewhat if interactions between individuals within infected hubs are eliminated. However, infection rates can be drastically reduced if the main focus is on the prevention of transmission jumps between hubs. The use of needle exchange programs in areas with a high density of drug users with HIV is an example of the successful implementation of this prevention method. Another example is the use of ring culling or vaccination of potentially susceptible livestock in adjacent farms to prevent the spread of the foot-and-mouth virus in 2001.
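The hub-and-shortcut effect described above can be explored with a toy simulation. The following sketch is an illustration only: the network model, parameter values, and transmission/recovery probabilities are assumptions chosen for demonstration, not figures from the text. It uses a Watts–Strogatz small-world graph and a simple susceptible–infected–recovered update rule to compare an outbreak on a network that keeps its long-range "shortcut" links with one restricted to purely local contacts.

```python
# Toy SIR outbreak on a small-world graph (illustrative assumptions only).
import random
import networkx as nx

def simulate_outbreak(graph, p_transmit=0.2, p_recover=0.1, steps=50, seed=0):
    """Return how many nodes were ever infected within `steps` time steps."""
    rng = random.Random(seed)
    status = {node: "S" for node in graph.nodes}
    patient_zero = next(iter(graph.nodes))
    status[patient_zero] = "I"
    ever_infected = 1
    for _ in range(steps):
        newly_infected, newly_recovered = [], []
        for node, state in status.items():
            if state != "I":
                continue
            for neighbor in graph.neighbors(node):
                if status[neighbor] == "S" and rng.random() < p_transmit:
                    newly_infected.append(neighbor)
            if rng.random() < p_recover:
                newly_recovered.append(node)
        for node in newly_infected:
            if status[node] == "S":
                status[node] = "I"
                ever_infected += 1
        for node in newly_recovered:
            status[node] = "R"
    return ever_infected

# Small-world graph: a ring lattice with a few random long-range rewirings.
g_shortcuts = nx.watts_strogatz_graph(n=500, k=6, p=0.05, seed=1)
# The same lattice with no rewiring: only local (within-hub) contacts remain.
g_local_only = nx.watts_strogatz_graph(n=500, k=6, p=0.0, seed=1)

print("infected within 50 steps, with shortcuts:", simulate_outbreak(g_shortcuts))
print("infected within 50 steps, local contacts only:", simulate_outbreak(g_local_only))
```

In runs of this kind, the network with long-range links is typically overrun far faster than the purely local one, even though only a handful of edges differ – which is the point made in the paragraph above about preventing transmission jumps between hubs.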
A general method to prevent transmission of vector-borne pathogens is pest control.
In cases where infection is merely suspected, individuals may be quarantined until the incubation period has passed and the disease manifests itself or the person remains healthy. Groups may undergo quarantine, or in the case of communities, a cordon sanitaire may be imposed to prevent infection from spreading beyond the community, or in the case of protective sequestration, into a community. Public health authorities may implement other forms of social distancing, such as school closings, lockdowns or temporary restrictions (e.g. circuit breakers) to control an epidemic.
Immunity
Infection with most pathogens does not result in death of the host and the offending organism is ultimately cleared after the symptoms of the disease have waned. This process requires immune mechanisms to kill or inactivate the inoculum of the pathogen. Specific acquired immunity against infectious diseases may be mediated by antibodies and/or T lymphocytes. Immunity mediated by these two factors may be manifested by:
a direct effect upon a pathogen, such as antibody-initiated complement-dependent bacteriolysis, opsonization, phagocytosis and killing, as occurs for some bacteria,
neutralization of viruses so that these organisms cannot enter cells,
or by T lymphocytes, which will kill a cell parasitized by a microorganism.
The immune system response to a microorganism often causes symptoms such as a high fever and inflammation, and has the potential to be more devastating than direct damage caused by a microbe.
Resistance to infection (immunity) may be acquired following a disease, by asymptomatic carriage of the pathogen, by harboring an organism with a similar structure (crossreacting), or by vaccination. Knowledge of the protective antigens and specific acquired host immune factors is more complete for primary pathogens than for opportunistic pathogens. There is also the phenomenon of herd immunity which offers a measure of protection to those otherwise vulnerable people when a large enough proportion of the population has acquired immunity from certain infections.
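A rough, commonly used way to relate herd immunity to the basic reproduction number mentioned earlier is the herd-immunity threshold. This is a textbook approximation that assumes a homogeneously mixing population, so the figure below is indicative only:

$$p_c = 1 - \frac{1}{R_0},$$

so that a disease with $R_0 = 3$ would, under this simple model, require roughly $1 - 1/3 \approx 67\%$ of the population to be immune before transmission declines on its own.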
Immune resistance to an infectious disease requires a critical level of either antigen-specific antibodies and/or T cells when the host encounters the pathogen. Some individuals develop natural serum antibodies to the surface polysaccharides of some agents although they have had little or no contact with the agent; these natural antibodies confer specific protection to adults and are passively transmitted to newborns.
Host genetic factors
The organism that is the target of an infecting action of a specific infectious agent is called the host. The host harbouring an agent that is in its mature or sexually active stage is called the definitive host. The intermediate host harbours the agent during its immature or larval stage. A host can be any living organism and may support either asexual or sexual reproduction of the agent.
The clearance of the pathogens, whether treatment-induced or spontaneous, can be influenced by the genetic variants carried by individual patients. For instance, for genotype 1 hepatitis C treated with pegylated interferon-alpha-2a or pegylated interferon-alpha-2b (brand names Pegasys or PEG-Intron) combined with ribavirin, it has been shown that genetic polymorphisms near the human IL28B gene, encoding interferon lambda 3, are associated with significant differences in the treatment-induced clearance of the virus. This finding, originally reported in Nature, showed that genotype 1 hepatitis C patients carrying certain genetic variant alleles near the IL28B gene are more likely to achieve sustained virological response after treatment than others. A later report in Nature demonstrated that the same genetic variants are also associated with natural clearance of the genotype 1 hepatitis C virus.
Treatments
When infection attacks the body, anti-infective drugs can suppress the infection. Several broad types of anti-infective drugs exist, depending on the type of organism targeted; they include antibacterial (antibiotic; including antitubercular), antiviral, antifungal and antiparasitic (including antiprotozoal and antihelminthic) agents. Depending on the severity and the type of infection, the antibiotic may be given by mouth or by injection, or may be applied topically. Severe infections of the brain are usually treated with intravenous antibiotics. Sometimes, multiple antibiotics are used in case there is resistance to one antibiotic. Antibiotics only work for bacteria and do not affect viruses. Antibiotics work by slowing down the multiplication of bacteria or killing the bacteria. The most common classes of antibiotics used in medicine include penicillin, cephalosporins, aminoglycosides, macrolides, quinolones and tetracyclines.
Not all infections require treatment, and for many self-limiting infections the treatment may cause more side-effects than benefits. Antimicrobial stewardship is the concept that healthcare providers should treat an infection with an antimicrobial that specifically works well for the target pathogen for the shortest amount of time and to only treat when there is a known or highly suspected pathogen that will respond to the medication.
Susceptibility to infection
Pandemics such as COVID-19 show that people differ dramatically in their susceptibility to infection. This may be because of general health, age, or their immune status, e.g. when they have been infected previously. However, it has also become clear that there are genetic factors which determine susceptibility to infection. For instance, up to 40% of SARS-CoV-2 infections may be asymptomatic, suggesting that many people are naturally protected from disease. Large genetic studies have defined risk factors for severe SARS-CoV-2 infections, and genome sequences from 659 patients with severe COVID-19 revealed genetic variants that appear to be associated with life-threatening disease. One pathway implicated in these studies is type I interferon (IFN) signalling. Autoantibodies against type I IFNs were found in up to 13.7% of patients with life-threatening COVID-19, indicating that a complex interaction between genetics and the immune system is important for natural resistance to COVID-19.
Similarly, mutations in the ERAP2 gene, encoding endoplasmic reticulum aminopeptidase 2, seem to influence susceptibility to plague, the disease caused by infection with the bacterium Yersinia pestis. People who inherited two copies of a complete variant of the gene were twice as likely to have survived the plague as those who inherited two copies of a truncated variant.
Susceptibility also shapes the epidemiology of infection, given that different populations have different genetic and environmental conditions that affect infections.
Epidemiology
An estimated 1,680 million people died of infectious diseases in the 20th century and about 10 million in 2010.
The World Health Organization collects information on global deaths by International Classification of Disease (ICD) code categories. The following table lists the top infectious disease by number of deaths in 2002. 1993 data is included for comparison.
The top three single agent/disease killers are HIV/AIDS, TB and malaria. While the number of deaths due to nearly every disease has decreased, deaths due to HIV/AIDS have increased fourfold. Childhood diseases include pertussis, poliomyelitis, diphtheria, measles and tetanus. Children also make up a large percentage of lower respiratory and diarrheal deaths. In 2012, approximately 3.1 million people died due to lower respiratory infections, making them the fourth leading cause of death in the world.
Historic pandemics
With their potential for unpredictable and explosive impacts, infectious diseases have been major actors in human history. A pandemic (or global epidemic) is a disease that affects people over an extensive geographical area. For example:
Plague of Justinian, from 541 to 542, killed between 50% and 60% of Europe's population.
The Black Death of 1347 to 1352 killed 25 million in Europe over five years. The plague reduced the Old World population from an estimated 450 million to between 350 and 375 million in the 14th century.
The introduction of smallpox, measles, and typhus to the areas of Central and South America by European explorers during the 15th and 16th centuries caused pandemics among the native inhabitants. Between 1518 and 1568 disease pandemics are said to have caused the population of Mexico to fall from 20 million to 3 million.
The first European influenza epidemic occurred between 1556 and 1560, with an estimated mortality rate of 20%.
Smallpox killed an estimated 60 million Europeans during the 18th century (approximately 400,000 per year). Up to 30% of those infected, including 80% of the children under 5 years of age, died from the disease, and one-third of the survivors went blind.
In the 19th century, tuberculosis killed an estimated one-quarter of the adult population of Europe; by 1918 one in six deaths in France were still caused by TB.
The Influenza Pandemic of 1918 (or the Spanish flu) killed 25–50 million people (about 2% of world population of 1.7 billion). Today Influenza kills about 250,000 to 500,000 worldwide each year.
In 2021, COVID-19 was a major global health crisis, directly causing an estimated 8.7 million deaths and making it one of the leading causes of mortality worldwide.
Emerging diseases
In most cases, microorganisms live in harmony with their hosts via mutual or commensal interactions. Diseases can emerge when existing parasites become pathogenic or when new pathogenic parasites enter a new host.
Coevolution between parasite and host can lead to hosts becoming resistant to the parasites or the parasites may evolve greater virulence, leading to immunopathological disease.
Human activity is involved with many emerging infectious diseases, such as environmental change enabling a parasite to occupy new niches. When that happens, a pathogen that had been confined to a remote habitat has a wider distribution and possibly a new host organism. Parasites jumping from nonhuman to human hosts are known as zoonoses. Under disease invasion, when a parasite invades a new host species, it may become pathogenic in the new host.
Several human activities have led to the emergence of zoonotic human pathogens, including viruses, bacteria, protozoa, and rickettsia, and spread of vector-borne diseases, see also globalization and disease and wildlife disease:
Encroachment on wildlife habitats. The construction of new villages and housing developments in rural areas forces animals to live in dense populations, creating opportunities for microbes to mutate and emerge.
Changes in agriculture. The introduction of new crops attracts new crop pests and the microbes they carry to farming communities, exposing people to unfamiliar diseases.
The destruction of rain forests. As countries make use of their rain forests, by building roads through forests and clearing areas for settlement or commercial ventures, people encounter insects and other animals harboring previously unknown microorganisms.
Uncontrolled urbanization. The rapid growth of cities in many developing countries tends to concentrate large numbers of people into crowded areas with poor sanitation. These conditions foster transmission of contagious diseases.
Modern transport. Ships and other cargo carriers often harbor unintended "passengers" that can spread diseases to faraway destinations, while international jet-airplane travel allows people infected with a disease to carry it to distant lands, or home to their families, before their first symptoms appear.
Germ theory of disease
In Antiquity, the Greek historian Thucydides was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others. In his On the Different Types of Fever, the Greco-Roman physician Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. In the Sushruta Samhita, the ancient Indian physician Sushruta theorized: "Leprosy, fever, consumption, diseases of the eye, and other infectious diseases spread from one person to another by sexual union, physical contact, eating together, sleeping together, sitting together, and the use of same clothes, garlands and pastes." This book has been dated to about the sixth century BC.
A basic form of contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025), which later became the most authoritative medical textbook in Europe up until the 16th century. In Book IV of the Canon, Ibn Sina discussed epidemics, outlining the classical miasma theory and attempting to blend it with his own early contagion theory. He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt. The concept of invisible contagion was later discussed by several Islamic scholars in the Ayyubid Sultanate who referred to them as najasat ("impure substances"). The fiqh scholar Ibn al-Haj al-Abdari (–1336), while discussing Islamic diet and hygiene, gave warnings about how contagion can contaminate water, food, and garments, and could spread through the water supply, and may have implied contagion to be unseen particles.
When the Black Death bubonic plague reached Al-Andalus in the 14th century, the Arab physicians Ibn Khatima and Ibn al-Khatib (1313–1374) hypothesised that infectious diseases were caused by "minute bodies" and described how they can be transmitted through garments, vessels and earrings. Ideas of contagion became more popular in Europe during the Renaissance, particularly through the writing of the Italian physician Girolamo Fracastoro. Anton van Leeuwenhoek (1632–1723) advanced the science of microscopy by being the first to observe microorganisms, allowing for easy visualization of bacteria.
In the mid-19th century John Snow and William Budd did important work demonstrating the contagiousness of typhoid and cholera through contaminated water. Both are credited with decreasing epidemics of cholera in their towns by implementing measures to prevent contamination of water. Louis Pasteur proved beyond doubt that certain diseases are caused by infectious agents, and developed a vaccine for rabies. Robert Koch provided the study of infectious diseases with a scientific basis known as Koch's postulates. Edward Jenner, Jonas Salk and Albert Sabin developed effective vaccines for smallpox and polio, which would later result in the eradication and near-eradication of these diseases, respectively. Alexander Fleming discovered the world's first antibiotic, penicillin, which Florey and Chain then developed. Gerhard Domagk developed sulphonamides, the first broad spectrum synthetic antibacterial drugs.
Medical specialists
The medical treatment of infectious diseases falls into the medical field of Infectious Disease and in some cases the study of propagation pertains to the field of Epidemiology. Generally, infections are initially diagnosed by primary care physicians or internal medicine specialists. For example, an "uncomplicated" pneumonia will generally be treated by the internist or the pulmonologist (lung physician). The work of the infectious diseases specialist therefore entails working with both patients and general practitioners, as well as laboratory scientists, immunologists, bacteriologists and other specialists.
An infectious disease team may be alerted when:
The disease has not been definitively diagnosed after an initial workup
The patient is immunocompromised (for example, in AIDS or after chemotherapy);
The infectious agent is of an uncommon nature (e.g. tropical diseases);
The disease has not responded to first line antibiotics;
The disease might be dangerous to other patients, and the patient might have to be isolated
Society and culture
Several studies have reported associations between pathogen load in an area and human behavior. Higher pathogen load is associated with decreased size of ethnic and religious groups in an area. This may be due to high pathogen load favoring avoidance of other groups, which may reduce pathogen transmission, or to a high pathogen load preventing the creation of large settlements and armies that enforce a common culture. Higher pathogen load is also associated with more restricted sexual behavior, which may reduce pathogen transmission. It is also associated with higher preferences for health and attractiveness in mates. Higher fertility rates and shorter or less parental care per child are another association, which may be a compensation for the higher mortality rate. There is also an association with polygyny, which may be due to higher pathogen load making the selection of males with high genetic resistance increasingly important. Higher pathogen load is also associated with more collectivism and less individualism, which may limit contacts with outside groups and infections. There are alternative explanations for at least some of the associations, although some of these explanations may in turn ultimately be due to pathogen load. Thus, polygyny may also be due to a lower male-to-female ratio in these areas, but this may ultimately be due to male infants having increased mortality from infectious diseases. Another example is that poor socioeconomic factors may ultimately be due in part to high pathogen load preventing economic development.
Fossil record
Evidence of infection in fossil remains is a subject of interest for paleopathologists, scientists who study occurrences of injuries and illness in extinct life forms. Signs of infection have been discovered in the bones of carnivorous dinosaurs. When present, however, these infections tend to be confined to small regions of the body. A skull attributed to the early carnivorous dinosaur Herrerasaurus ischigualastensis exhibits pit-like wounds surrounded by swollen and porous bone. The unusual texture of the bone around the wounds suggests they were affected by a short-lived, non-lethal infection. Scientists who studied the skull speculated that the bite marks were received in a fight with another Herrerasaurus. Other carnivorous dinosaurs with documented evidence of infection include Acrocanthosaurus, Allosaurus, Tyrannosaurus and a tyrannosaur from the Kirtland Formation. The infections from both tyrannosaurs were received by being bitten during a fight, like the Herrerasaurus specimen.
Outer space
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. On April 29, 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence". More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space.
| Biology and health sciences | Illness and injury | null |
37232 | https://en.wikipedia.org/wiki/Fermat%27s%20principle | Fermat's principle | Fermat's principle, also known as the principle of least time, is the link between ray optics and wave optics. Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time.
First proposed by the French mathematician Pierre de Fermat in 1662, as a means of explaining the ordinary law of refraction of light (Fig.1), Fermat's principle was initially controversial because it seemed to ascribe knowledge and intent to nature. Not until the 19th century was it understood that nature's ability to test alternative paths is merely a fundamental property of waves. If points A and B are given, a wavefront expanding from A sweeps all possible ray paths radiating from A, whether they pass through B or not. If the wavefront reaches point B, it sweeps not only the ray path(s) from A to B, but also an infinitude of nearby paths with the same endpoints. Fermat's principle describes any ray that happens to reach point B; there is no implication that the ray "knew" the quickest path or "intended" to take that path.
In its original "strong" form, Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time. In order to be true in all cases, this statement must be weakened by replacing the "least" time with a time that is "stationary" with respect to variations of the path – so that a deviation in the path causes, at most, a second-order change in the traversal time. To put it loosely, a ray path is surrounded by close paths that can be traversed in very close times. It can be shown that this technical definition corresponds to more intuitive notions of a ray, such as a line of sight or the path of a narrow beam.
For the purpose of comparing traversal times, the time from one point to the next nominated point is taken as if the first point were a point-source. Without this condition, the traversal time would be ambiguous; for example, if the propagation time between two points were reckoned from an arbitrary wavefront W containing the first point (Fig.2), that time could be made arbitrarily small by suitably angling the wavefront.
Treating a point on the path as a source is the minimum requirement of Huygens' principle, and is part of the explanation of Fermat's principle. But it can also be shown that the geometric construction by which Huygens tried to apply his own principle (as distinct from the principle itself) is simply an invocation of Fermat's principle. Hence all the conclusions that Huygens drew from that construction – including, without limitation, the laws of rectilinear propagation of light, ordinary reflection, ordinary refraction, and the extraordinary refraction of "Iceland crystal" (calcite) – are also consequences of Fermat's principle.
Derivation
Sufficient conditions
Let us suppose that:
A disturbance propagates sequentially through a medium (a vacuum or some material, not necessarily homogeneous or isotropic), without action at a distance;
During propagation, the influence of the disturbance at any intermediate point P upon surrounding points has a non-zero angular spread (as if P were a source), so that a disturbance originating from any point A arrives at any other point B via an infinitude of paths, by which B receives an infinitude of delayed versions of the disturbance at A; and
These delayed versions of the disturbance will reinforce each other at B if they are synchronized within some tolerance.
Then the various propagation paths from A to B will help each other, or interfere constructively, if their traversal times agree within the said tolerance. For a small tolerance (in the limiting case), the permissible range of variations of the path is maximized if the path is such that its traversal time is stationary with respect to the variations, so that a variation of the path causes at most a second-order change in the traversal time.
The most obvious example of a stationarity in traversal time is a (local or global) minimum – that is, a path of least time, as in the "strong" form of Fermat's principle. But that condition is not essential to the argument.
Having established that a path of stationary traversal time is reinforced by a maximally wide corridor of neighboring paths, we still need to explain how this reinforcement corresponds to intuitive notions of a ray. But, for brevity in the explanations, let us first define a ray path as a path of stationary traversal time.
A ray as a signal path (line of sight)
If the corridor of paths reinforcing a ray path from A to B is substantially obstructed, this will significantly alter the disturbance reaching B from A – unlike a similar-sized obstruction outside any such corridor, blocking paths that do not reinforce each other. The former obstruction will significantly disrupt the signal reaching B from A, while the latter will not; thus the ray path marks a signal path. If the signal is visible light, the former obstruction will significantly affect the appearance of an object at A as seen by an observer at B, while the latter will not; so the ray path marks a line of sight.
In optical experiments, a line of sight is routinely assumed to be a ray path.
A ray as an energy path (beam)
If the corridor of paths reinforcing a ray path from A to B is substantially obstructed, this will significantly affect the energy reaching B from A – unlike a similar-sized obstruction outside any such corridor. Thus the ray path marks an energy path – as does a beam.
Suppose that a wavefront expanding from point A passes point P, which lies on a ray path from point A to point B. By definition, all points on the wavefront have the same propagation time from A. Now let the wavefront be blocked except for a window, centered on P, and small enough to lie within the corridor of paths that reinforce the ray path from A to B. Then all points on the unobstructed portion of the wavefront will have, nearly enough, equal propagation times to B, but not to points in other directions, so that B will be in the direction of peak intensity of the beam admitted through the window. So the ray path marks the beam. And in optical experiments, a beam is routinely considered as a collection of rays or (if it is narrow) as an approximation to a ray (Fig.3).
Analogies
According to the "strong" form of Fermat's principle, the problem of finding the path of a light ray from point A in a medium of faster propagation, to point B in a medium of slower propagation (Fig.1), is analogous to the problem faced by a lifeguard in deciding where to enter the water in order to reach a drowning swimmer as soon as possible, given that the lifeguard can run faster than (s)he can swim. But that analogy falls short of explaining the behavior of the light, because the lifeguard can think about the problem (even if only for an instant) whereas the light presumably cannot. The discovery that ants are capable of similar calculations does not bridge the gap between the animate and the inanimate.
In contrast, the above assumptions (1) to (3) hold for any wavelike disturbance and explain Fermat's principle in purely mechanistic terms, without any imputation of knowledge or purpose.
The principle applies to waves in general, including (e.g.) sound waves in fluids and elastic waves in solids. In a modified form, it even works for matter waves: in quantum mechanics, the classical path of a particle is obtainable by applying Fermat's principle to the associated wave – except that, because the frequency may vary with the path, the stationarity is in the phase shift (or number of cycles) and not necessarily in the time.
Fermat's principle is most familiar, however, in the case of visible light: it is the link between geometrical optics, which describes certain optical phenomena in terms of rays, and the wave theory of light, which explains the same phenomena on the hypothesis that light consists of waves.
Equivalence to Huygens' construction
In this article we distinguish between Huygens' principle, which states that every point crossed by a traveling wave becomes the source of a secondary wave, and Huygens' construction, which is described below.
Let the surface W be a wavefront at time t, and let the surface W′ be the same wavefront at the later time t + Δt (Fig.4). Let P be a general point on W. Then, according to Huygens' construction,
W′ is the envelope (common tangent surface), on the forward side of W, of all the secondary wavefronts each of which would expand in time Δt from a point on W, and
if the secondary wavefront expanding from point P in time Δt touches the surface W′ at point P′, then P and P′ lie on a ray.
The construction may be repeated in order to find successive positions of the primary wavefront, and successive points on the ray.
The ray direction given by this construction is the radial direction of the secondary wavefront, and may differ from the normal of the secondary wavefront (cf. Fig.2), and therefore from the normal of the primary wavefront at the point of tangency. Hence the ray velocity, in magnitude and direction, is the radial velocity of an infinitesimal secondary wavefront, and is generally a function of location and direction.
Now let Q be a point on W close to P, and let Q′ be a point on W′ close to P′. Then, by the construction,
(i) the time taken for a secondary wavefront from P to reach Q′ has at most a second-order dependence on the displacement P′Q′, and
(ii) the time taken for a secondary wavefront to reach P′ from Q has at most a second-order dependence on the displacement PQ.
By (i), the ray path is a path of stationary traversal time from P to W′; and by (ii), it is a path of stationary traversal time from a point on W to P′.
So Huygens' construction implicitly defines a ray path as a path of stationary traversal time between successive positions of a wavefront, the time being reckoned from a point-source on the earlier wavefront. This conclusion remains valid if the secondary wavefronts are reflected or refracted by surfaces of discontinuity in the properties of the medium, provided that the comparison is restricted to the affected paths and the affected portions of the wavefronts.
Fermat's principle, however, is conventionally expressed in point-to-point terms, not wavefront-to-wavefront terms. Accordingly, let us modify the example by supposing that the wavefront which becomes surface W at time t, and which becomes surface W′ at the later time t + Δt, is emitted from point A at time 0. Let P be a point on W (as before), and B a point on W′. And let A, W, W′, and B be given, so that the problem is to find P.
If P satisfies Huygens' construction, so that the secondary wavefront expanding from P is tangential to W′ at B, then PB is a path of stationary traversal time from W to B. Adding the fixed time from A to W, we find that the path APB is the path of stationary traversal time from A to B (possibly with a restricted domain of comparison, as noted above), in accordance with Fermat's principle. The argument works just as well in the converse direction, provided that W′ has a well-defined tangent plane at B. Thus Huygens' construction and Fermat's principle are geometrically equivalent.
Through this equivalence, Fermat's principle sustains Huygens' construction and thence all the conclusions that Huygens was able to draw from that construction. In short, "The laws of geometrical optics may be derived from Fermat's principle". With the exception of the Fermat–Huygens principle itself, these laws are special cases in the sense that they depend on further assumptions about the media. Two of them are mentioned under the next heading.
Special cases
Isotropic media: rays normal to wavefronts
In an isotropic medium, because the propagation speed is independent of direction, the secondary wavefronts that expand from points on a primary wavefront in a given infinitesimal time are spherical, so that their radii are normal to their common tangent surface at the points of tangency. But their radii mark the ray directions, and their common tangent surface is a general wavefront. Thus the rays are normal (orthogonal) to the wavefronts.
Because much of the teaching of optics concentrates on isotropic media, treating anisotropic media as an optional topic, the assumption that the rays are normal to the wavefronts can become so pervasive that even Fermat's principle is explained under that assumption, although in fact Fermat's principle is more general.
Homogeneous media: rectilinear propagation
In a homogeneous medium (also called a uniform medium), all the secondary wavefronts that expand from a given primary wavefront W in a given time Δt are congruent and similarly oriented, so that their envelope W′ may be considered as the envelope of a single secondary wavefront which preserves its orientation while its center (source) moves over W. If P is its center while P′ is its point of tangency with W′, then P′ moves parallel to P, so that the plane tangential to W′ at P′ is parallel to the plane tangential to W at P. Let another (congruent and similarly orientated) secondary wavefront be centered on P′, moving with P′, and let it meet its envelope W″ at point P″. Then, by the same reasoning, the plane tangential to W″ at P″ is parallel to the other two planes. Hence, due to the congruence and similar orientations, the ray directions PP′ and P′P″ are the same (but not necessarily normal to the wavefronts, since the secondary wavefronts are not necessarily spherical). This construction can be repeated any number of times, giving a straight ray of any length. Thus a homogeneous medium admits rectilinear rays.
Modern version
Formulation in terms of refractive index
Let a path Γ extend from point A to point B. Let s be the arc length measured along the path from A, and let t be the time taken to traverse that arc length at the ray speed $v_r$ (that is, at the radial speed of the local secondary wavefront, for each location and direction on the path). Then the traversal time of the entire path is
$$T = \int_A^B dt = \int_A^B \frac{ds}{v_r} \qquad (1)$$
(where A and B simply denote the endpoints and are not to be construed as values of t or s). The condition for Γ to be a ray path is that the first-order change in T due to a change in Γ is zero; that is,
$$\delta T = \delta\int_A^B \frac{ds}{v_r} = 0.$$
Now let us define the optical length of a given path (optical path length, OPL) as the distance traversed by a ray in a homogeneous isotropic reference medium (e.g., a vacuum) in the same time that it takes to traverse the given path at the local ray velocity. Then, if c denotes the propagation speed in the reference medium (e.g., the speed of light in vacuum), the optical length of a path traversed in time dt is c dt, and the optical length of a path traversed in time T is cT. So, multiplying equation (1) through by c, we obtain
$$S = \int_A^B c\,dt = \int_A^B n_r\,ds,$$
where $n_r = c/v_r$ is the ray index – that is, the refractive index calculated on the ray velocity instead of the usual phase velocity (wave-normal velocity). For an infinitesimal path, we have $dS = n_r\,ds$, indicating that the optical length is the physical length multiplied by the ray index: the OPL is a notional geometric quantity, from which time has been factored out. In terms of OPL, the condition for Γ to be a ray path (Fermat's principle) becomes
$$\delta S = \delta\int_A^B n_r\,ds = 0. \qquad (2)$$
This has the form of Maupertuis's principle in classical mechanics (for a single particle), with the ray index in optics taking the role of momentum or velocity in mechanics.
In an isotropic medium, for which the ray velocity is also the phase velocity, we may substitute the usual refractive index for .
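As a simple illustrative case (the numbers are chosen for the example, not taken from the article): for light crossing a uniform slab of thickness $d$ and refractive index $n$ at normal incidence, the optical path length is

$$\mathrm{OPL} = n\,d,$$

so a 1 cm slab of glass with $n = 1.5$ has an OPL of 1.5 cm and is traversed in the same time as 1.5 cm of vacuum, namely $t = nd/c \approx 5 \times 10^{-11}\ \mathrm{s}$.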
Relation to Hamilton's principle
If x, y, z are Cartesian coordinates and an overdot denotes differentiation with respect to s, Fermat's principle (2) may be written
$$\delta S = \delta\int_A^B n_r(x,y,z,\dot{x},\dot{y},\dot{z})\,ds = 0.$$
In the case of an isotropic medium, we may replace $n_r$ with the normal refractive index $n(x,y,z)$, which is simply a scalar field. If we then define the optical Lagrangian as
$$L(x,y,z,\dot{x},\dot{y},\dot{z}) = n(x,y,z)\sqrt{\dot{x}^2+\dot{y}^2+\dot{z}^2},$$
Fermat's principle becomes
$$\delta\int_A^B L\,ds = 0.$$
If the direction of propagation is always such that we can use z instead of s as the parameter of the path (and the overdot to denote differentiation w.r.t. z instead of s), the optical Lagrangian can instead be written
$$L(x,y,z,\dot{x},\dot{y}) = n(x,y,z)\sqrt{1+\dot{x}^2+\dot{y}^2},$$
so that Fermat's principle becomes
$$\delta\int_{z_A}^{z_B} L\,dz = 0.$$
This has the form of Hamilton's principle in classical mechanics, except that the time dimension is missing: the third spatial coordinate in optics takes the role of time in mechanics. The optical Lagrangian is the function which, when integrated w.r.t. the parameter of the path, yields the OPL; it is the foundation of Lagrangian and Hamiltonian optics.
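As a standard consequence (spelled out here as an illustration rather than quoted from the article), applying the Euler–Lagrange equations to the optical Lagrangian above and rewriting the result in terms of arc length $s$ and the position vector $\mathbf{r}$ gives the familiar ray equation of an isotropic medium:

$$\frac{d}{ds}\!\left(n\,\frac{d\mathbf{r}}{ds}\right) = \nabla n,$$

which reduces to straight-line propagation when $n$ is constant, in agreement with the rectilinear-propagation special case discussed earlier.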
History
If a ray follows a straight line, it obviously takes the path of least length. Hero of Alexandria, in his Catoptrics (1st century CE), showed that the ordinary law of reflection off a plane surface follows from the premise that the total length of the ray path is a minimum. Ibn al-Haytham, an 11th-century polymath, later extended this principle to refraction, giving an early version of Fermat's principle.
Fermat vs. the Cartesians
In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of a newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.
Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed "resistance" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.
Fermat's solution was a landmark in that it unified the then-known laws of geometrical optics under a variational principle or action principle, setting the precedent for the principle of least action in classical mechanics and the corresponding principles in other fields (see History of variational principles in physics). It was the more notable because it used the method of adequality, which may be understood in retrospect as finding the point where the slope of an infinitesimally short chord is zero, without the intermediate step of finding a general expression for the slope (the derivative).
It was also immediately controversial. The ordinary law of refraction was at that time attributed to René Descartes (d.1650), who had tried to explain it by supposing that light was a force that propagated instantaneously, or that light was analogous to a tennis ball that traveled faster in the denser medium, either premise being inconsistent with Fermat's. Descartes' most prominent defender, Claude Clerselier, criticized Fermat for apparently ascribing knowledge and intent to nature, and for failing to explain why nature should prefer to economize on time rather than distance. Clerselier wrote in part:
1. The principle that you take as the basis of your demonstration, namely that nature always acts in the shortest and simplest ways, is merely a moral principle and not a physical one; it is not, and cannot be, the cause of any effect in nature .... For otherwise we would attribute knowledge to nature; but here, by "nature", we understand only this order and this law established in the world as it is, which acts without foresight, without choice, and by a necessary determination.
2. This same principle would make nature irresolute ... For I ask you ... when a ray of light must pass from a point in a rare medium to a point in a dense one, is there not reason for nature to hesitate if, by your principle, it must choose the straight line as soon as the bent one, since if the latter proves shorter in time, the former is shorter and simpler in length? Who will decide and who will pronounce?
Fermat, being unaware of the mechanistic foundations of his own principle, was not well placed to defend it, except as a purely geometric and kinematic proposition. The wave theory of light, first proposed by Robert Hooke in the year of Fermat's death, and rapidly improved by Ignace-Gaston Pardies and (especially) Christiaan Huygens, contained the necessary foundations; but the recognition of this fact was surprisingly slow.
Huygens's oversight
In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave; the sum of these secondary waves determines the form of the wave at any subsequent time. Huygens repeatedly referred to the envelope of his secondary wavefronts as the termination of the movement, meaning that the later wavefront was the outer boundary that the disturbance could reach in a given time, which was therefore the minimum time in which each point on the later wavefront could be reached. But he did not argue that the direction of minimum time was that from the secondary source to the point of tangency; instead, he deduced the ray direction from the extent of the common tangent surface corresponding to a given extent of the initial wavefront. His only endorsement of Fermat's principle was limited in scope: having derived the law of ordinary refraction, for which the rays are normal to the wavefronts, Huygens gave a geometric proof that a ray refracted according to this law takes the path of least time. He would hardly have thought this necessary if he had known that the principle of least time followed directly from the same common-tangent construction by which he had deduced not only the law of ordinary refraction, but also the laws of rectilinear propagation and ordinary reflection (which were also known to follow from Fermat's principle), and a previously unknown law of extraordinary refraction – the last by means of secondary wavefronts that were spheroidal rather than spherical, with the result that the rays were generally oblique to the wavefronts. It was as if Huygens had not noticed that his construction implied Fermat's principle, and even as if he thought he had found an exception to that principle. Manuscript evidence cited by Alan E.Shapiro tends to confirm that Huygens believed the principle of least time to be invalid "in double refraction, where the rays are not normal to the wave fronts".
Shapiro further reports that the only three authorities who accepted "Huygens' principle" in the 17th and 18th centuries, namely Philippe de La Hire, Denis Papin, and Gottfried Wilhelm Leibniz, did so because it accounted for the extraordinary refraction of "Iceland crystal" (calcite) in the same manner as the previously known laws of geometrical optics. But, for the time being, the corresponding extension of Fermat's principle went unnoticed.
Laplace, Young, Fresnel, and Lorentz
On 30 January 1809, Pierre-Simon Laplace, reporting on the work of his protégé Étienne-Louis Malus, claimed that the extraordinary refraction of calcite could be explained under the corpuscular theory of light with the aid of Maupertuis's principle of least action: that the integral of speed with respect to distance was a minimum. The corpuscular speed that satisfied this principle was proportional to the reciprocal of the ray speed given by the radius of Huygens' spheroid. Laplace continued:
According to Huygens, the velocity of the extraordinary ray, in the crystal, is simply expressed by the radius of the spheroid; consequently his hypothesis does not agree with the principle of the least action: but it is remarkable that it agrees with the principle of Fermat, which is, that light passes, from a given point without the crystal, to a given point within it, in the least possible time; for it is easy to see that this principle coincides with that of the least action, if we invert the expression of the velocity.
Laplace's report was the subject of a wide-ranging rebuttal by Thomas Young, who wrote in part:
The principle of Fermat, although it was assumed by that mathematician on hypothetical, or even imaginary grounds, is in fact a fundamental law with respect to undulatory motion, and is the basis of every determination in the Huygenian theory... Mr. Laplace seems to be unacquainted with this most essential principle of one of the two theories which he compares; for he says, that "it is remarkable" that the Huygenian law of extraordinary refraction agrees with the principle of Fermat; which he would scarcely have observed, if he had been aware that the law was an immediate consequence of the principle.
In fact Laplace was aware that Fermat's principle follows from Huygens' construction in the case of refraction from an isotropic medium to an anisotropic one; a geometric proof was contained in the long version of Laplace's report, printed in 1810.
Young's claim was more general than Laplace's, and likewise upheld Fermat's principle even in the case of extraordinary refraction, in which the rays are generally not perpendicular to the wavefronts. Unfortunately, however, the omitted middle sentence of the quoted paragraph by Young began "The motion of every undulation must necessarily be in a direction perpendicular to its surface ..." (emphasis added), and was therefore bound to sow confusion rather than clarity.
No such confusion subsists in Augustin-Jean Fresnel's "Second Memoir" on double refraction (Fresnel, 1827), which addresses Fermat's principle in several places (without naming Fermat), proceeding from the special case in which rays are normal to wavefronts, to the general case in which rays are paths of least time or stationary time. (In the following summary, page numbers refer to Alfred W.Hobson's translation.)
For refraction of a plane wave at parallel incidence on one face of an anisotropic crystalline wedge (pp.291–2), in order to find the "first ray arrived" at an observation point beyond the other face of the wedge, it suffices to treat the rays outside the crystal as normal to the wavefronts, and within the crystal to consider only the parallel wavefronts (whatever the ray direction). So in this case, Fresnel does not attempt to trace the complete ray path.
Next, Fresnel considers a ray refracted from a point-source M inside a crystal, through a point A on the surface, to an observation point B outside (pp.294–6). The surface passing through B and given by the "locus of the disturbances which arrive first" is, according to Huygens' construction, normal to "the ray AB of swiftest arrival". But this construction requires knowledge of the "surface of the wave" (that is, the secondary wavefront) within the crystal.
Then he considers a plane wavefront propagating in a medium with non-spherical secondary wavefronts, oriented so that the ray path given by Huygens' construction – from the source of the secondary wavefront to its point of tangency with the subsequent primary wavefront – is not normal to the primary wavefronts (p.296). He shows that this path is nevertheless "the path of quickest arrival of the disturbance" from the earlier primary wavefront to the point of tangency.
In a later heading (p.305) he declares that "The construction of Huygens, which determines the path of swiftest arrival" is applicable to secondary wavefronts of any shape. He then notes that when we apply Huygens' construction to refraction into a crystal with a two-sheeted secondary wavefront, and draw the lines from the two points of tangency to the center of the secondary wavefront, "we shall have the directions of the two paths of swiftest arrival, and consequently of the ordinary and of the extraordinary ray."
Under the heading "Definition of the word Ray" (p.309), he concludes that this term must be applied to the line which joins the center of the secondary wave to a point on its surface, whatever the inclination of this line to the surface.
As a "new consideration" (pp.310–11), he notes that if a plane wavefront is passed through a small hole centered on point E, then the direction ED of maximum intensity of the resulting beam will be that in which the secondary wave starting from E will "arrive there the first", and the secondary wavefronts from opposite sides of the hole (equidistant from E) will "arrive at D in the same time" as each other. This direction is not assumed to be normal to any wavefront.
Thus Fresnel showed, even for anisotropic media, that the ray path given by Huygens' construction is the path of least time between successive positions of a plane or diverging wavefront, that the ray velocities are the radii of the secondary "wave surface" after unit time, and that a stationary traversal time accounts for the direction of maximum intensity of a beam. However, establishing the general equivalence between Huygens' construction and Fermat's principle would have required further consideration of Fermat's principle in point-to-point terms.
Hendrik Lorentz, in a paper written in 1886 and republished in 1907, deduced the principle of least time in point-to-point form from Huygens' construction. But the essence of his argument was somewhat obscured by an apparent dependence on aether and aether drag.
Lorentz's work was cited in 1959 by Adriaan J. de Witte, who then offered his own argument, which "although in essence the same, is believed to be more cogent and more general". De Witte's treatment is more original than that description might suggest, although limited to two dimensions; it uses calculus of variations to show that Huygens' construction and Fermat's principle lead to the same differential equation for the ray path, and that in the case of Fermat's principle, the converse holds. De Witte also noted that "The matter seems to have escaped treatment in textbooks."
In popular culture
The short story Story of Your Life by the speculative fiction writer Ted Chiang contains visual depictions of Fermat's Principle along with a discussion of its teleological dimension. Keith Devlin's The Math Instinct contains a chapter, "Elvis the Welsh Corgi Who Can Do Calculus" that discusses the calculus "embedded" in some animals as they solve the "least time" problem in actual situations.
| Physical sciences | Waves | Physics |
37277 | https://en.wikipedia.org/wiki/Rubidium%E2%80%93strontium%20dating | Rubidium–strontium dating | The rubidium–strontium dating method (Rb–Sr) is a radiometric dating technique, used by scientists to determine the age of rocks and minerals from their content of specific isotopes of rubidium (87Rb) and strontium (87Sr, 86Sr). One of the two naturally occurring isotopes of rubidium, 87Rb, decays to 87Sr with a half-life of 49.23 billion years. The radiogenic daughter, 87Sr, produced in this decay process is the only one of the four naturally occurring strontium isotopes that was not produced exclusively by stellar nucleosynthesis predating the formation of the Solar System. Over time, decay of 87Rb increases the amount of radiogenic 87Sr while the amount of other Sr isotopes remains unchanged.
The ratio 87Sr/86Sr in a mineral sample can be accurately measured using a mass spectrometer. If the amount of Sr and Rb isotopes in the sample when it formed can be determined, the age can be calculated from the increase in 87Sr/86Sr. Different minerals that crystallized from the same silicic melt will all have the same initial 87Sr/86Sr as the parent melt. However, because Rb substitutes for K in minerals and these minerals have different K/Ca ratios, the minerals will have had different starting Rb/Sr ratios, and the final 87Sr/86Sr ratio will not have increased as much in the minerals poorer in Rb. Typically, Rb/Sr increases in the order plagioclase, hornblende, K-feldspar, biotite, muscovite. Therefore, given sufficient time for significant production (ingrowth) of radiogenic 87Sr, measured 87Sr/86Sr values will be different in the minerals, increasing in the same order. Comparison of different minerals in a rock sample thus allows scientists to infer the original 87Sr/86Sr ratio and determine the age of the rock.
In addition, Rb is a highly incompatible element that, during partial melting of the mantle, prefers to join the magmatic melt rather than remain in mantle minerals. As a result, Rb is enriched in crustal rocks relative to the mantle, and 87Sr/86Sr is higher for crust rock than mantle rock. This allows scientists to distinguish magma produced by melting of crust rock from magma produced by melting of mantle rock, even if subsequent magma differentiation produces similar overall chemistry. Scientists can also estimate from 87Sr/86Sr when crust rock was first formed from magma extracted from the mantle, even if the rock is subsequently metamorphosed or even melted and recrystallized. This provides clues to the age of the Earth's continents.
Development of this process was aided by German chemists Otto Hahn and Fritz Strassmann, who later went on to discover nuclear fission in December 1938.
Example
For example, consider the case of an igneous rock such as a granite that contains several major Sr-bearing minerals including plagioclase feldspar, K-feldspar, hornblende, biotite, and muscovite. Each of these minerals has a different initial rubidium/strontium ratio dependent on their potassium content, the concentration of Rb and K in the melt and the temperature at which the minerals formed. Rubidium substitutes for potassium within the lattice of minerals at a rate proportional to its concentration within the melt.
The ideal scenario according to Bowen's reaction series would see a granite melt begin crystallizing a cumulate assemblage of plagioclase and hornblende (i.e., tonalite or diorite), which is low in K (and hence Rb) but high in Sr (as this substitutes for Ca), which proportionally enriches the melt in K and Rb. This then causes orthoclase and biotite, both K-rich minerals into which Rb can substitute, to precipitate. The resulting Rb–Sr ratios and Rb and Sr abundances of both the whole rocks and their component minerals will be markedly different. This allows a different rate of radiogenic Sr to evolve in the separate rocks and their component minerals as time progresses.
Calculating the age
The age of a sample is determined by analysing several minerals within multiple subsamples from different parts of the original sample. The 87Sr/86Sr ratio for each subsample is plotted against its 87Rb/86Sr ratio on a graph called an isochron. If these form a straight line then the subsamples are consistent, and the age probably reliable. The slope of the line dictates the age of the sample.
Given the universal law of radioactive decay and the following rubidium beta decay: ^{87}_{37}Rb ->[{\beta^-}]~^{87}_{38}Sr ~+ e^- + \bar{\nu}_e, we obtain the expression which describes the growth of strontium-87 from the decay of rubidium-87: ^{87}Sr = {^{87}Sr}_0 + {^{87}Rb}\,(e^{\lambda t} - 1), \lambda being the decay constant of rubidium-87. Furthermore, we consider the amount of ^{86}_{38}Sr as a constant, since it is stable and not radiogenic. Hence, dividing through by ^{86}Sr gives the isochron equation: \frac{^{87}Sr}{^{86}Sr} = \left(\frac{^{87}Sr}{^{86}Sr}\right)_0 + \frac{^{87}Rb}{^{86}Sr}\,(e^{\lambda t} - 1). After measuring the rubidium and strontium isotope concentrations in the mineral, the age t of the sample can be determined.
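As a numerical illustration of this calculation, the following short Python sketch (with hypothetical measured ratios invented purely for the example) fits a straight line to (87Rb/86Sr, 87Sr/86Sr) pairs from several minerals of one rock and converts the isochron slope into an age using the decay constant implied by the 49.23-billion-year half-life of 87Rb.

import numpy as np

# Decay constant of 87Rb from its half-life (about 49.23 billion years)
T_HALF_YEARS = 49.23e9
DECAY_CONSTANT = np.log(2) / T_HALF_YEARS  # per year

# Hypothetical measured ratios for several minerals from the same rock
rb87_sr86 = np.array([0.10, 0.50, 1.20, 2.50])            # 87Rb/86Sr
sr87_sr86 = np.array([0.7010, 0.7067, 0.7166, 0.7350])    # 87Sr/86Sr

# Least-squares isochron fit: 87Sr/86Sr = intercept + slope * (87Rb/86Sr)
slope, intercept = np.polyfit(rb87_sr86, sr87_sr86, 1)

# Isochron relation: slope = exp(lambda * t) - 1, so t = ln(1 + slope) / lambda
age_years = np.log(1.0 + slope) / DECAY_CONSTANT

print(f"initial 87Sr/86Sr ~ {intercept:.4f}")
print(f"age ~ {age_years / 1e9:.2f} billion years")

With these invented ratios the slope is roughly 0.014, giving an age of about one billion years, and the intercept recovers the initial 87Sr/86Sr of the rock.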
Sources of error
Rb–Sr dating relies on correctly measuring the Rb–Sr ratio of a mineral or whole rock sample, plus deriving an accurate 87Sr/86Sr ratio for the mineral or whole rock sample.
Several preconditions must be satisfied before a Rb–Sr date can be considered as representing the time of emplacement or formation of a rock.
The system must have remained closed to Rb and Sr diffusion from the time at which the rock formed or fell below the closure temperature (generally considered to be 650 °C);
The minerals which are taken from a rock to construct an isochron must have formed in chemical equilibrium with one another or in the case of sediments, be deposited at the same time;
The rock must not have undergone any metasomatism which could have disturbed the Rb–Sr system either thermally or chemically
One of the major drawbacks (and, conversely, the most important use) of utilizing Rb and Sr to derive a radiometric date is their relative mobility, especially in hydrothermal fluids. Rb and Sr are relatively mobile alkaline elements and as such are relatively easily moved around by the hot, often carbonated hydrothermal fluids present during metamorphism or magmatism.
Conversely, these fluids may metasomatically alter a rock, introducing new Rb and Sr into the rock (generally during potassic alteration or calcic (albitisation) alteration. Rb–Sr can then be used on the altered mineralogy to date the time of this alteration, but not the date at which the rock formed.
Thus, assigning age significance to a result requires studying the metasomatic and thermal history of the rock, any metamorphic events, and any evidence of fluid movement. A Rb–Sr date which is at variance with other geochronometers is not necessarily useless; it may be recording an event other than the formation of the rock.
Uses
Geochronology
The Rb–Sr dating method has been used extensively in dating terrestrial and lunar rocks, and meteorites. If the initial amount of Sr is known or can be extrapolated, the age can be determined by measurement of the Rb and Sr concentrations and the 87Sr/86Sr ratio. The dates indicate the true age of the minerals only if the rocks have not been subsequently altered.
The important concept in isotopic tracing is that Sr derived from any mineral through weathering reactions will have the same 87Sr/86Sr as the mineral. Although this is a potential source of error for terrestrial rocks, it is irrelevant for lunar rocks and meteorites, as there are no chemical weathering reactions in those environments.
Isotope geochemistry
Initial 87Sr/86Sr ratios are a useful tool in archaeology, forensics and paleontology because the 87Sr/86Sr of a skeleton, sea shell or indeed a clay artefact is directly comparable to the source rocks upon which it was formed or upon which the organism lived. Thus, by measuring the current-day 87Sr/86Sr ratio (and often the 143Nd–144Nd ratios as well) the geological fingerprint of an object or skeleton can be measured, allowing migration patterns to be determined.
Strontium isotope stratigraphy
Strontium isotope stratigraphy relies on recognised variations in the 87Sr/86Sr ratio of seawater over time. The application of Sr isotope stratigraphy is generally limited to carbonate samples for which the Sr seawater curve is well defined. This is well known for the Cenozoic time-scale but, due to poorer preservation of carbonate sequences in the Mesozoic and earlier, it is not completely understood for older sequences. In older sequences diagenetic alteration combined with greater uncertainties in estimating absolute ages due to lack of overlap between other geochronometers (for example U–Th) leads to greater uncertainties in the exact shape of the Sr isotope seawater curve.
| Physical sciences | Geochronology | Earth science |
37278 | https://en.wikipedia.org/wiki/Galago | Galago | Galagos, also known as bush babies or nagapies (meaning "night monkeys" in Afrikaans), are small nocturnal primates native to continental sub-Saharan Africa, and make up the family Galagidae (also sometimes called Galagonidae). They are considered a sister group of the Lorisidae.
According to some accounts, the name "bush baby" comes from either the animal's cries or its appearance. The Ghanaian name aposor is given to them because of their firm grip on branches.
In both variety and abundance, the bush babies are the most successful strepsirrhine primates in Africa, according to the African Wildlife Foundation.
Taxonomic classification and phylogeny
Galagos are currently grouped into six genera. Euoticus is a basal sister taxon to all the other galagids. Based on genetic data, supported by analyses of vocalisations and morphology, the 'dwarf' galagids recently grouped under the genus Galagoides have been found to consist of two clades, one in eastern and one in western/central Africa (separated by the Rift Valley), which are not sister taxa. The latter are basal to all the other non-Euoticus galagids. The former group is sister to Galago and has been elevated to full genus status as Paragalago. The genera Otolemur and Sciurocheirus are also sisters.
Family Galagidae - galagos, or bushbabies
Genus Euoticus, needle-clawed bushbabies
Southern needle-clawed bushbaby, E. elegantulus
Northern needle-clawed bushbaby, E. pallidus
Genus Galago, lesser galagos, or lesser bushbabies
Galago senegalensis group
Somali bushbaby, G. gallarum
Mohol bushbaby, G. moholi
Senegal bushbaby, G. senegalensis
Galago matschiei group
Dusky bushbaby, G. matschiei
Genus Galagoides, western dwarf galagos
Prince Demidoff's bushbaby, Gs. demidovii
Angolan dwarf galago, Gs. kumbirensis
Thomas's bushbaby, Gs. thomasi
Genus †Laetolia
†Laetolia sadimanensis
Genus Otolemur, greater galagos, or thick-tailed bushbabies
Brown greater galago, O. crassicaudatus
Northern greater galago, O. garnettii
Silvery greater galago, O. monteiri
Genus Paragalago, eastern dwarf galagos
Paragalago zanzibaricus group
Kenya coast galago, P. cocos
Grant's bushbaby, P. granti
Zanzibar bushbaby, P. zanzibaricus
Paragalago orinus group
Uluguru bushbaby, P. orinus
Rondo bushbaby, P. rondoensis
Genus Sciurocheirus, squirrel galagos
Bioko Allen's bushbaby, S. alleni
Cross River bushbaby, S. cameronensis
Gabon bushbaby, S. gabonensis
Makandé squirrel galago, S. makandensis
The phylogeny of Galagidae follows Masters et al. (2017).
Characteristics
Galagos have large eyes, allowing them good night vision, in addition to other characteristics, like strong hind limbs, acute hearing, and long tails that help them balance. Their ears are bat-like and allow them to track insects in the dark. They catch insects on the ground or snatch them out of the air. They are fast, agile creatures. As they bound through the thick bushes, they fold their delicate ears back to protect them. They also fold them during rest. They have nails on most of their digits, except for the second toe of the hind foot, which bears a grooming claw. Their diet is a mixture of insects and other small animals, fruit, and tree gums. They have pectinate (comb-like) incisors called toothcombs, and the dental formula 2.1.3.3/2.1.3.3. They are active at night.
After a gestation period of 110–133 days, young galagos are born with half-closed eyes and are initially unable to move about independently. After a few (6–8) days, the mother carries the infant in her mouth, and places it on branches while feeding. Females may have singles, twins, or triplets, and may become very aggressive. Each newborn weighs less than . For the first three days, the infant is kept in constant contact with the mother. The young are fed by the mother for six weeks and can feed themselves at two months. The young grow rapidly, often causing the mother to walk awkwardly as she transports them.
Females maintain a territory shared with their offspring, though males leave their mothers' territories after puberty. Thus social groups consist of closely related females and their young. Adult males maintain separate territories, which overlap with those of the female social groups; generally, one adult male mates with all the females in an area. Males that have not established such territories sometimes form small bachelor groups.
Bush-babies are sometimes kept as pets, although this is not advised because, like many other nonhuman primates, they are likely sources of diseases that can cross species barriers. Equally, they are very likely to attract attention from customs officials on importation into many countries. Reports from veterinary and zoological sources indicate captive lifetimes of 12.0 to 16.5 years, suggesting a natural lifetime over a decade.
Galagos communicate by calling to each other and by marking their paths with urine. By following the scent of urine, they can land on exactly the same branch every time. Each species produces a unique set of loud calls that have different functions. One function is to identify individuals as members of a particular species across distances. Scientists can recognize all known galago species by their 'loud calls'. At the end of the night, group members use a special rallying call and gather to sleep in a nest of leaves, a group of branches, or a hole in a tree.
Jumping
Galagos have remarkable jumping abilities. The highest reliably reported jump for a galago is . According to a study published by the Royal Society, given the body mass of each animal and the fact that the leg muscles amount to about 25% of this, a galago's jumping muscles should perform six to nine times better than those of a frog. This is thought to be due to elastic energy storage in tendons of the lower leg, allowing far greater jumps than would otherwise be possible for an animal of their size. In mid-flight, they tuck their arms and legs close to the body; they bring them out at the last second to grab a branch. In a series of leaps, a galago can cover ten yards in mere seconds. The tail, which is longer than the length of the head and body combined, assists the legs in powering the jumps. They may also hop like a kangaroo or simply run or walk on four legs. Such strong, complicated, and coordinated movements are due to the rostral half of the posterior parietal cortex that is linked to the motor, premotor, and visuomotor areas of the frontal cortex.
Behaviour
Generally, the social structure of the galago has components of both social life and solitary life. This can be seen in their play. They swing off branches or climb high and throw things. Social play includes play fights, play grooming, and following-play. During following-play, two galagos jump sporadically and chase each other through the trees. The older galagos in a group prefer to rest alone, while younger ones are in constant contact with one another. This is observed in the Galago garnetti species. Mothers often leave infants alone for long periods and do not try to stop them from leaving. On the other hand, the offspring tries to stay close to, and initiate social interactions with the mother.
Grooming is a very important part of galago daily life. They often groom themselves before, during, and after rest. Social grooming is done more often by males in the group. Females often reject attempts by males to groom them.
Relationship with humans
The name "bush baby" also refers to a myth that is used to scare children into staying indoors at night. Their baby-like cry is most likely the basis of the myth, about a powerful animal that can kidnap humans. It is also said that wild bush babies/galagos in Nigeria can never be found dead on plain ground. Rather, they make a nest of sticks, leaves or branches to die in. Endangerment of the species in sub-Saharan Africa has made this claim difficult to verify.
| Biology and health sciences | Primates | null |
37284 | https://en.wikipedia.org/wiki/Brain%20tumor | Brain tumor | A brain tumor occurs when a group of abnormal cells forms within the brain and grows out of control, creating a mass. There are two main types of tumors: malignant (cancerous) tumors and benign (non-cancerous) tumors. These can be further classified as primary tumors, which start within the brain, and secondary tumors, which most commonly have spread from tumors located outside the brain, known as brain metastasis tumors. All types of brain tumors may produce symptoms that vary depending on the size of the tumor and the part of the brain that is involved. Where symptoms exist, they may include headaches, seizures, problems with vision, vomiting and mental changes. Other symptoms may include difficulty walking or speaking, problems with sensation, or loss of consciousness.
The cause of most brain tumors is unknown, though up to 4% of brain cancers may be caused by CT scan radiation. Uncommon risk factors include exposure to vinyl chloride, Epstein–Barr virus, ionizing radiation, and inherited syndromes such as neurofibromatosis, tuberous sclerosis, and von Hippel-Lindau Disease. Studies on mobile phone exposure have not shown a clear risk. The most common types of primary tumors in adults are meningiomas (usually benign) and astrocytomas such as glioblastomas. In children, the most common type is a malignant medulloblastoma. Diagnosis is usually by medical examination along with computed tomography (CT) or magnetic resonance imaging (MRI). The result is then often confirmed by a biopsy. Based on the findings, the tumors are divided into different grades of severity.
Treatment may include some combination of surgery, radiation therapy and chemotherapy. If seizures occur, anticonvulsant medication may be needed. Dexamethasone and furosemide are medications that may be used to decrease swelling around the tumor. Some tumors grow gradually, requiring only monitoring and possibly needing no further intervention. Treatments that use a person's immune system are being studied. Outcomes for malignant tumors vary considerably depending on the type of tumor and how far it has spread at diagnosis. Although benign tumors only grow in one area, they may still be life-threatening depending on their size and location. Malignant glioblastomas usually have very poor outcomes, while benign meningiomas usually have good outcomes. The average five-year survival rate for all (malignant) brain cancers in the United States is 33%.
Secondary, or metastatic, brain tumors are about four times as common as primary brain tumors, with about half of metastases coming from lung cancer. Primary brain tumors occur in around 250,000 people a year globally, and make up less than 2% of cancers. In children younger than 15, brain tumors are second only to acute lymphoblastic leukemia as the most common form of cancer. In New South Wales, Australia in 2005, the average lifetime economic cost of a case of brain cancer was AU$1.9 million, the greatest of any type of cancer.
Signs and symptoms
The signs and symptoms of brain tumors are broad. People may experience symptoms regardless of whether the tumor is benign (not cancerous) or cancerous. Primary and secondary brain tumors present with similar symptoms, depending on the location, size, and rate of growth of the tumor. For example, larger tumors in the frontal lobe can cause changes in the ability to think. However, a smaller tumor in an area such as Wernicke's area (small area responsible for language comprehension) can result in a greater loss of function.
Headaches
Headaches as a result of raised intracranial pressure can be an early symptom of brain cancer. However, isolated headache without other symptoms is rare, and other symptoms including visual abnormalities may occur before headaches become common. Certain warning signs for headache exist which make the headache more likely to be associated with brain cancer. These are defined as "abnormal neurological examination, headache worsened by Valsalva maneuver, headache causing awakening from sleep, new headache in the older population, progressively worsening headache, atypical headache features, or patients who do not fulfill the strict definition of migraine". Other associated signs are headaches that are worse in the morning or that subside after vomiting.
Location-specific symptoms
The brain is divided into lobes and each lobe or area has its own function. A tumour in any of these lobes may affect the area's performance. The symptoms experienced are often linked to the location of the tumour, but each person may experience something different.
Frontal lobe: Tumours may contribute to poor reasoning, inappropriate social behavior, personality changes, poor planning, lower inhibition, and decreased production of speech (Broca's area).
Temporal lobe: Tumours in this lobe may contribute to poor memory, loss of hearing, and difficulty in language comprehension (Wernicke's area is located in this lobe).
Parietal lobe: Tumours here may result in poor interpretation of languages, difficulty with speaking, writing, drawing, naming, and recognizing, and poor spatial and visual perception.
Occipital lobe: Damage to this lobe may result in poor vision or loss of vision.
Cerebellum: Tumours in this area may cause poor balance, muscle movement, and posture.
Brain stem: Tumours on the brainstem can cause seizures, endocrine problems, respiratory changes, visual changes, headaches and partial paralysis.
Leptomeninges: Tumours that spread to the leptomeninges, the lining of the brain, may cause cranial nerve palsies such as facial paralysis, abnormalities of eye movement, abnormalities of facial sensation or swallowing difficulty, depending on which cranial nerves are involved.
Behaviour changes
A person's personality may be altered due to the tumor damaging lobes of the brain. Since the frontal, temporal, and parietal lobes control inhibition, emotions, mood, judgement, reasoning, and behavior, a tumor in those regions can cause inappropriate social behavior, temper tantrums, laughing at things which merit no laughter, and even psychological symptoms such as depression and anxiety. More research is needed into the effectiveness and safety of medication for depression in people with brain tumors.
Personality changes can have damaging effects such as unemployment, unstable relationships, and a lack of control.
Cause
A known cause of brain cancers is ionizing radiation. Approximately 4% of brain cancers in the general population are caused by CT-scan radiation. For brain cancers that follow a CT scan at lags of 2 years or more, it has been estimated that 40% are attributable to CT-scan radiation. The risk of brain cancer is dose dependent, with the relative risk increasing by 0.8 for each 100 gray of ionizing radiation received. At this dose, approximately 6391 people would have to be exposed to cause 1 case of brain cancer. Ionizing radiation to the head as part of treatment for other cancers is also a risk factor for developing brain cancer.
Mutations and deletions of tumor suppressor genes, such as P53, are thought to be the cause of some forms of brain tumor. Inherited conditions, such as Von Hippel–Lindau disease, tuberous sclerosis, multiple endocrine neoplasia, and neurofibromatosis type 2 carry a high risk for the development of brain tumors. People with celiac disease have a slightly increased risk of developing brain tumors. Smoking may increase the risk, but evidence of this remains unclear.
Although studies have not shown any link between cell-phone or mobile-phone radiation and the occurrence of brain tumors, the World Health Organization has classified mobile-phone radiation on the IARC scale into Group 2B – possibly carcinogenic.
The claim that cell-phone usage may cause brain cancer is likely based on epidemiological studies which observed a slight increase in glioma risk among heavy users of wireless phones. When those studies were conducted, GSM (2G) phones were in use. Modern, third-generation (3G) phones emit, on average, about 1% of the energy emitted by those GSM (2G) phones, and therefore the finding of an association between cell-phone usage and increased risk of brain cancer is not based upon current phone usage.
Pathophysiology
Meninges
Human brains are surrounded by a system of connective tissue membranes called meninges that separate the brain from the skull. This three-layered covering is composed of (from the outside in) the dura mater, arachnoid mater, and pia mater. The arachnoid and pia are physically connected and thus often considered as a single layer, the leptomeninges. Between the arachnoid mater and the pia mater is the subarachnoid space which contains cerebrospinal fluid (CSF). This fluid circulates in the narrow spaces between cells and through the cavities in the brain called ventricles, to support and protect the brain tissue. Blood vessels enter the central nervous system through the perivascular space above the pia mater. The cells in the blood vessel walls are joined tightly, forming the blood–brain barrier which protects the brain from toxins that might enter through the blood.
Tumors of the meninges are meningiomas and are often benign. Though not technically a tumor of brain tissue, they are often considered brain tumors since they protrude into the space where the brain is, causing symptoms. Since they are usually slow-growing tumors, meningiomas can be quite large by the time symptoms appear.
Brain matter
The three largest divisions of the brain are the cerebral cortex, cerebellum and the brainstem. These areas are composed of two broad classes of cells: neurons and glia. These two cell types are equally numerous in the brain as a whole, although glial cells outnumber neurons roughly 4 to 1 in the cerebral cortex. Glia come in several types, which perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Primary tumors of the glial cells are called gliomas and often are malignant by the time they are diagnosed.
The thalamus and hypothalamus are major divisions of the diencephalon, with the pituitary gland and pineal gland attached at the bottom; tumors of the pituitary and pineal gland are often benign.
The brainstem lies between the large cerebral cortex and the spinal cord. It is divided into the midbrain, pons, and medulla oblongata.
Diagnosis
There are no specific signs or symptoms for brain cancer, but the presence of a combination of symptoms and the lack of alternative causes may indicate a brain tumor. A medical history aids in the diagnosis. Clinical and laboratory investigations will serve to exclude infections as the cause of the symptoms.
Brain tumors, when compared to tumors in other areas of the body, pose a challenge for diagnosis. Commonly, radioactive tracers are taken up in large volumes by tumors due to the high activity of tumor cells, allowing for radioactive imaging of the tumor. However, most of the brain is separated from the blood by the blood–brain barrier (BBB), a membrane that exerts a strict control over what substances are allowed to pass into the brain. Therefore, many tracers that may reach tumors in other areas of the body easily would be unable to reach brain tumors until there was a disruption of the BBB by the tumor. Disruption of the BBB is well imaged via MRI or CT scan, and is therefore regarded as the main diagnostic indicator for malignant gliomas, meningiomas, and brain metastases.
Imaging
Medical imaging plays a central role in the diagnosis of brain tumors. Early imaging methods – invasive and sometimes dangerous – such as pneumoencephalography and cerebral angiography have been replaced by non-invasive, high-resolution techniques, especially magnetic resonance imaging (MRI) and computed tomography (CT) scans. MRI with contrast enhancement is the preferred imaging test in the diagnosis of brain tumors. Glioblastomas usually enhance with contrast on T1-weighted MRI and show hyperintense surrounding cerebral edema on T2-weighted FLAIR imaging. Low-grade gliomas are usually hypointense on T1-weighted MRI and hyperintense on T2-weighted FLAIR MRI. Meningiomas usually enhance homogeneously, with dural thickening, on MRI.
Treatment with radiation can lead to treatment induced changes in the brain, including radiation necrosis (death of brain tissue due to radiation treatments) visible on brain imaging and which can be difficult to differentiate from tumor recurrence.
Different Types of MRI Scans
Magnetic Resonance Angiography (MRA) looks at the blood vessels in the brain. In the diagnosis of brain tumor, MRAs are typically carried out before surgery to help surgeons get a better understanding of the tumor vasculature. For example, a study was done where surgeons were able to separate benign brain tumors from malignant ones by analyzing the shapes of the blood vessels that were extracted from MRA. Although not required, a contrast agent such as gadolinium may be injected for some MRA studies to obtain an enhanced image.
Magnetic Resonance Spectroscopy (MRS) measures the metabolic changes or chemical changes inside the tumor. The most common MRS is proton spectroscopy, with its frequency measured in parts per million (ppm). Gliomas or malignant brain tumors have different spectra from normal brain tissue in that they have greater choline levels and lower N-acetyl aspartate (NAA) signals. Using MRS in brain tumor diagnosis can help doctors identify the type of tumor and its aggressiveness. For example, benign brain tumors such as meningiomas have increased alanine levels. MRS can also help to distinguish brain tumors from scar tissue or dead tissue caused by previous radiation treatment, which does not show the increased choline levels that brain tumors have, and from tumor-mimicking lesions such as abscesses or infarcts.
Perfusion Magnetic Resonance Imaging (pMRI) assesses the blood volume and blood flow of different parts of the brain and brain tumors. pMRI requires the injection of a contrast agent, usually gadopentetate dimeglumine (Gd-DTPA), into the veins in order to enhance the contrast. pMRI provides a cerebral blood volume map that shows the tumor vascularity and angiogenesis. Brain tumors require a larger blood supply and thus show a high cerebral blood volume on the pMRI map. The vascular morphology and degree of angiogenesis from pMRI help to determine the grade and malignancy of brain tumors. For brain tumor diagnosis, pMRI is useful in determining the best site to perform a biopsy and in helping to reduce sampling error. pMRI is also valuable after treatment to determine whether an abnormal area is remaining tumor or scar tissue. For patients undergoing anti-angiogenesis cancer therapy, pMRI can give doctors a better sense of the efficacy of the treatment by monitoring tumor cerebral blood volume.
Functional MRI (fMRI) measures blood flow changes in active parts of the brain while the patient is performing tasks and provides the specific locations of the brain that are responsible for certain functions. Before performing brain tumor surgery, neurosurgeons use fMRI to avoid damage to structures of the brain that correspond with important brain functions while resecting the tumor. Preoperative fMRI is important because it is often difficult to distinguish the anatomy near the tumor, as the tumor distorts its surrounding regions. Neurosurgeons use fMRI to plan whether to perform a resection, in which as much of the tumor as possible is surgically removed, a biopsy, in which a small surgical sample is taken to provide a diagnosis, or no surgery at all. For example, a neurosurgeon may be opposed to resecting a tumor near the motor cortex as that would affect the patient's movements. Without preoperative fMRI, the neurosurgeon would have to perform an awake craniotomy, in which the patient has to interact during open surgery to see whether tumor removal would affect important brain functions.
Diffusion Weighted Imaging (DWI) is a form of MRI that measures the random Brownian motion of water molecules along a magnetic field gradient. For brain tumor diagnosis, measurement of the apparent diffusion coefficient (ADC) in brain tumors allows doctors to categorize tumor type. Most brain tumors have a higher ADC than normal brain tissue, and doctors can match the observed ADC of the patient's brain tumor against a list of accepted ADC values to identify the tumor type. DWI is also useful for treatment and therapy purposes, where changes in diffusion can be analyzed in response to drug, radiation, or gene therapy. A successful response results in apoptosis and an increase in diffusion, while failed treatment results in unchanged diffusion values.
Other Types of Imaging Techniques
Computed Tomography (CT) Scan uses x-rays to take pictures from different angles and computer processing to combine the pictures into a 3D image. A CT scan usually serves as an alternative to MRI in cases where the patient cannot have an MRI, for example due to claustrophobia or a pacemaker. Compared to MRI, a CT scan shows a more detailed image of the bone structures near the tumor and can be used to measure the tumor's size. Like an MRI, a contrast dye may also be injected into the veins or ingested by mouth before a CT scan to better outline any tumors that may be present. CT scans use contrast materials that are iodine-based and barium sulfate compounds. The downside of using CT scans as opposed to MRI is that some brain tumors do not show up well on CT scans because some intra-axial masses are faint and resemble normal brain tissue. In some scenarios, brain tumors in CT scans may be mistaken for infarction, infection, and demyelination. To suspect that an intra-axial mass is a brain tumor instead of these other possibilities, there must be unexplained calcifications in the brain, preservation of the cortex, and disproportionate mass effect.
CT Angiography (CTA) provides information about the blood vessels in the brain using X-rays. A contrast agent is always required to be injected into the patient in the CT scanner. CTA serves as an alternative to MRA.
Positron Emission Tomography (PET) Scan uses radiolabelled substances, such as FDG, which are taken up by cells that are actively dividing. Tumor cells divide more actively, so they absorb more of the radioactive substance. After injection, a scanner is used to create an image of the radioactive areas in the brain. PET scans are used more often for high-grade tumors than for low-grade tumors. They are useful after treatment to help doctors determine if an abnormal area on an MRI image is remaining tumor or scar tissue. Scar tissue will not show up on PET scans, while tumors will.
Pathology
Maximal safe surgical resection (to preserve as much neurological function as possible) and histologic examination of the tumor are also required to aid in the diagnosis. Cancer cells may have specific characteristics:
Atypia: an indication of abnormality of a cell (which may be indicative of malignancy). Significance of the abnormality is highly dependent on context.
Neoplasia: the (uncontrolled) division of cells that is characteristic of cancer.
Necrosis: the (premature) death of cells, caused by external factors such as infection, toxin or trauma. Necrotic cells send the wrong chemical signals which prevent phagocytes from disposing of the dead cells, leading to a buildup of dead tissue, cell debris and toxins at or near the site of the necrotic cells
Local hypoxia, or the deprivation of adequate oxygen supply to certain areas of the brain, including within the tumor, as the tumor grows and recruits local blood vessels.
Classification
Tumors can be benign or malignant, can occur in different parts of the brain, and may be classified as primary or secondary. A primary tumor is one that has started in the brain, as opposed to a metastatic tumor, which is one that has spread to the brain from another area of the body. The incidence of metastatic tumors is approximately four times greater than primary tumors. Tumors may or may not be symptomatic: some tumors are discovered because the patient has symptoms, others show up incidentally on an imaging scan, or at an autopsy.
Grading of the tumors of the central nervous system commonly occurs on a 4-point scale (I-IV) created by the World Health Organization in 1993. Grade I tumors are the least severe and commonly associated with long-term survival, with severity and prognosis worsening as the grade increases. Low-grade tumors are often benign, while higher grades are aggressively malignant and/or metastatic. Other grading scales do exist, many based upon the same criteria as the WHO scale and graded from I-IV.
Primary
The most common primary brain tumors are:
Gliomas (50.4%)
Meningiomas (20.8%)
Pituitary adenomas (15%)
Nerve sheath tumors (10%)
These common tumors can also be organized according to tissue of origin as shown below:
Secondary
Secondary tumors of the brain are metastatic and have spread to the brain from cancers originating in another organ. Metastatic spread is usually by the blood. The most common types of cancers that spread to the brain are lung cancer (accounting for over half of all cases), breast cancer, melanoma skin cancer, kidney cancer and colon cancer.
By behavior
Brain tumors can be cancerous (malignant) or non-cancerous (benign). However, the definitions of malignant or benign neoplasms differ from those commonly used in other types of cancerous or non-cancerous neoplasms in the body.
In cancers elsewhere in the body, three malignant properties differentiate benign tumors from malignant forms of cancer: benign tumors are self-limited and do not invade or metastasize. Characteristics of malignant tumors include:
uncontrolled mitosis (growth by division beyond the normal limits)
anaplasia: the cells in the neoplasm have an obviously different form (in size and shape). Anaplastic cells display marked pleomorphism. The cell nuclei are characteristically extremely hyperchromatic (darkly stained) and enlarged; the nucleus might have the same size as the cytoplasm of the cell (nuclear-cytoplasmic ratio may approach 1:1, instead of the normal 1:4 or 1:6 ratio). Giant cells – considerably larger than their neighbors – may form and possess either one enormous nucleus or several nuclei (syncytia). Anaplastic nuclei are variable and bizarre in size and shape.
invasion or infiltration:
Invasion or invasiveness is the spatial expansion of the tumor through uncontrolled mitosis, in the sense that the neoplasm invades the space occupied by adjacent tissue, thereby pushing the other tissue aside and eventually compressing the tissue. These tumors often appear clearly outlined in imaging.
Infiltration is the behavior of the tumor either to grow (microscopic) tentacles that push into the surrounding tissue (often making the outline of the tumor undefined or diffuse) or to have tumor cells "seeded" into the tissue beyond the circumference of the tumorous mass.
metastasis (spread to other locations in the body via lymph or blood).
By genetics
In 2016, the WHO restructured their classifications of some categories of gliomas to include distinct genetic mutations that have been useful in differentiating tumor types, prognoses, and treatment responses. Genetic mutations are typically detected via immunohistochemistry, a technique that visualizes the presence or absence of a targeted protein via staining.
Mutations in IDH1 and IDH2 genes are commonly found in low-grade gliomas
Mutation of the IDH genes combined with loss of chromosome arms 1p and 19q indicates the tumor is an oligodendroglioma
Loss of TP53 and ATRX characterizes astrocytomas
Genes EGFR, TERT, and PTEN are commonly altered in gliomas and are useful in differentiating tumor grade and biology
Specific types
Anaplastic astrocytoma, Anaplastic oligodendroglioma, Astrocytoma, Central neurocytoma, Choroid plexus carcinoma, Choroid plexus papilloma, Choroid plexus tumor, Colloid cyst, Dysembryoplastic neuroepithelial tumour, Ependymal tumor, Fibrillary astrocytoma, Giant-cell glioblastoma, Glioblastoma, Gliomatosis cerebri, Gliosarcoma, Hemangiopericytoma, Medulloblastoma, Medulloepithelioma, Meningeal carcinomatosis, Neuroblastoma, Neurocytoma, Oligoastrocytoma, Oligodendroglioma, Optic nerve sheath meningioma, Pediatric ependymoma, Pilocytic astrocytoma, Pinealoblastoma, Pineocytoma, Pleomorphic anaplastic neuroblastoma, Pleomorphic xanthoastrocytoma, Primary central nervous system lymphoma, Sphenoid wing meningioma, Subependymal giant cell astrocytoma, Subependymoma, Trilateral retinoblastoma.
Treatment
A medical team generally assesses the treatment options and presents them to the person affected and their family. Various types of treatment are available depending on tumor type and location, and may be combined to produce the best chances of survival:
Surgery: complete or partial resection of the tumor with the objective of removing as many tumor cells as possible.
Radiotherapy: the most commonly used treatment for brain tumors; the tumor is irradiated with beta rays, X-rays, or gamma rays.
Chemotherapy: a treatment option for cancer, however, it is not always used to treat brain tumors as the blood–brain barrier can prevent some drugs from reaching the cancerous cells.
A variety of experimental therapies are available through clinical trials.
Survival rates in primary brain tumors depend on the type of tumor, age, functional status of the patient, the extent of surgical removal and other factors specific to each case.
Standard care for anaplastic oligodendrogliomas and anaplastic oligoastrocytomas is surgery followed by radiotherapy. One study found a survival benefit for the addition of chemotherapy to radiotherapy after surgery, compared with radiotherapy alone.
Surgery
Surgical resection of the greatest extent of contrast enhancing tumor possible (gross total resection) is associated with increased overall and progression free survival in those with glioblastoma. Gross total resection is often required in other brain tumors. Minimally invasive techniques are becoming the dominant trend in neurosurgical oncology. The main objective of surgery is to remove as many tumor cells as possible, with complete removal being the best outcome and cytoreduction ("debulking") of the tumor may otherwise be done. Due to the infiltrative nature of glioblastomas, total resection is usually unachievable and progression after surgery usually occurs, with progression occurring about 7 months after surgery.
Many meningiomas, with the exception of some tumors located at the skull base, can be successfully removed surgically.
Most pituitary adenomas can be removed surgically, often using a minimally invasive approach through the nasal cavity and skull base (trans-nasal, trans-sphenoidal approach). Large pituitary adenomas require a craniotomy (opening of the skull) for their removal. Radiotherapy, including stereotactic approaches, is reserved for inoperable cases.
Postoperative radiotherapy and chemotherapy are integral parts of the therapeutic standard for malignant tumors.
Multiple metastatic tumors are generally treated with radiotherapy and chemotherapy rather than surgery and the prognosis in such cases is determined by the primary tumor, and is generally poor.
Radiation therapy
The goal of radiation therapy is to kill tumor cells while leaving normal brain tissue unharmed. In standard external beam radiation therapy, multiple treatments of standard-dose "fractions" of radiation are applied to the brain. This process is repeated for a total of 10 to 30 treatments, depending on the type of tumor. This additional treatment provides some patients with improved outcomes and longer survival.
Radiosurgery is a treatment method that uses computerized calculations to focus radiation at the site of the tumor while minimizing the radiation dose to the surrounding brain. Radiosurgery may be an adjunct to other treatments, or it may represent the primary treatment technique for some tumors. Forms used include stereotactic radiosurgery, such as Gamma knife, Cyberknife or Novalis Tx radiosurgery.
Radiotherapy is the most common treatment for secondary brain tumors. The amount of radiotherapy depends on the size of the area of the brain affected by cancer. Conventional external beam "whole-brain radiotherapy treatment" (WBRT) or "whole-brain irradiation" may be suggested if there is a risk that other secondary tumors will develop in the future. Stereotactic radiotherapy is usually recommended in cases involving fewer than three small secondary brain tumors. Radiotherapy may be used following, or in some cases in place of, resection of the tumor. Forms of radiotherapy used for brain cancer include external beam radiation therapy, the most common, and brachytherapy and proton therapy, the last especially used for children.
People who receive stereotactic radiosurgery (SRS) and whole-brain radiation therapy (WBRT) for the treatment of metastatic brain tumors have more than twice the risk of developing learning and memory problems than those treated with SRS alone. A 2021 systematic review found that, when SRS was used as the initial treatment, survival and death related to brain metastasis did not differ between SRS alone and SRS combined with WBRT.
Postoperative conventional daily radiotherapy improves survival for adults with good functional well-being and high grade glioma compared to no postoperative radiotherapy. Hypofractionated radiation therapy has similar efficacy for survival as compared to conventional radiotherapy, particularly for individuals aged 60 and older with glioblastoma.
Chemotherapy
Patients undergoing chemotherapy are administered drugs designed to kill tumor cells. Although chemotherapy may improve overall survival in patients with the most malignant primary brain tumors, it does so in only about 20 percent of patients. Chemotherapy is often used in young children instead of radiation, as radiation may have negative effects on the developing brain. The decision to prescribe this treatment is based on a patient's overall health, type of tumor, and extent of cancer. The toxicity and many side effects of the drugs, and the uncertain outcome of chemotherapy in brain tumors puts this treatment further down the line of treatment options with surgery and radiation therapy preferred.
UCLA Neuro-Oncology publishes real-time survival data for patients with a diagnosis of glioblastoma. They are the only institution in the United States that displays how brain tumor patients are performing on current therapies. They also show a listing of chemotherapy agents used to treat high-grade glioma tumors.
Genetic mutations have significant effects on the effectiveness of chemotherapy. Gliomas with IDH1 or IDH2 mutations respond better to chemotherapy than those without the mutation. Loss of chromosome arms 1p and 19q also indicate better response to chemoradiation.
Other
A shunt may be used to relieve symptoms caused by intracranial pressure, by reducing the build-up of fluid (hydrocephalus) caused by the blockage of the free flow of cerebrospinal fluid.
For those with brain tumors, anti-seizure prophylactic (preventative) medications are not usually recommended. However, anti-epileptics are used in those with seizures.
Cerebral edema secondary to brain tumors is managed by corticosteroids. Dexamethasone is the preferred corticosteroid due to its long half-life and reduced effect on water retention (mineralocorticoid activity). Bevacizumab (an anti-VEGFA antibody) may improve cerebral edema in those that are unresponsive to steroids.
Prognosis
The prognosis of brain cancer depends on the type of cancer diagnosed. Medulloblastoma has a good prognosis with chemotherapy, radiotherapy, and surgical resection, while glioblastoma has a median survival of only 15 months even with aggressive chemoradiotherapy and surgery. Brainstem gliomas have the poorest prognosis of any form of brain cancer, with most patients dying within one year, even with therapy that typically consists of radiation to the tumor along with corticosteroids. However, one type, focal brainstem glioma in children, is an exception: it carries a comparatively good prognosis, and long-term survival has frequently been reported.
Prognosis is also affected by presentation of genetic mutations. Certain mutations provide better prognosis than others. IDH1 and IDH2 mutations in gliomas, as well as deletion of chromosome arms 1p and 19q, generally indicate better prognosis. TP53, ATRX, EGFR, PTEN, and TERT mutations are also useful in determining prognosis.
Glioblastoma
Glioblastoma is the most aggressive (grade 4) and most common form of a malignant primary brain tumor. Even when aggressive multimodality therapy consisting of radiotherapy, chemotherapy, and surgical excision is used, median survival is only 15–18 months. Standard therapy for glioblastoma consists of maximal surgical resection of the tumor, followed by radiotherapy between two and four weeks after the surgical procedure to remove the cancer, then by chemotherapy, such as temozolomide. Most patients with glioblastoma take a corticosteroid, typically dexamethasone, during their illness to relieve symptoms. Experimental treatments include targeted therapy, gamma knife radiosurgery, boron neutron capture therapy, gene therapy, and chemowafer implants.
Oligodendrogliomas
Oligodendrogliomas are incurable but slowly progressive malignant brain tumors. They can be treated with surgical resection, chemotherapy, radiotherapy or a combination. For some suspected low-grade (grade II) tumors, only a course of watchful waiting and symptomatic therapy is opted for. These tumors show co-deletions of the p and q arms of chromosome 1 and chromosome 19 respectively (1p19q co-deletion) and have been found to be especially chemosensitive with one report claiming them to be one of the most chemosensitive tumors. A median survival of up to 16.7 years has been reported for grade II oligodendrogliomas.
Acoustic neuroma
Acoustic neuromas are non-cancerous tumors. They can be treated with surgery, radiation therapy, or observation. Early intervention with surgery or radiation is recommended to prevent progressive hearing loss.
Epidemiology
The incidence of brain tumors is higher in developed countries. This could be explained by undiagnosed tumor-related deaths in resource limited or lower income countries or by early deaths caused by other poverty-related causes that preempt a patient's life before tumors develop.
The incidence of CNS tumors in the United States, Israel, and the Nordic countries is relatively high, while Japan and Asian countries have a lower incidence.
United States
In the United States in 2015, approximately 166,039 people were living with brain or other central nervous system tumors. For 2018, it was projected that there would be 23,880 new cases of brain tumors and 16,830 resulting deaths, accounting for 1.4 percent of all cancers and 2.8 percent of all cancer deaths. The median age of diagnosis was 58 years, while the median age of death was 65. Diagnosis was slightly more common in males, at approximately 7.5 cases per 100,000 people, compared with 5.4 per 100,000 in females. Deaths as a result of brain cancer were 5.3 per 100,000 for males and 3.6 per 100,000 for females, making brain cancer the 10th leading cause of cancer death in the United States. The overall lifetime risk of developing brain cancer is approximately 0.6 percent for men and women.
UK
Brain, other CNS or intracranial tumors are the ninth most common cancer in the UK (around 10,600 people were diagnosed in 2013), and it is the eighth most common cause of cancer death (around 5,200 people died in 2012). White British patients with brain tumour are 30% more likely to die within a year of diagnosis than patients from other ethnicities. The reason for this is unknown.
Children
In the United States more than 28,000 people under 20 are estimated to have a brain tumor. About 3,720 new cases of brain tumors are expected to be diagnosed in those under 15 in 2019. Higher rates were reported in 1985–1994 than in 1975–1983. There is some debate as to the reasons; one theory is that the trend is the result of improved diagnosis and reporting, since the jump occurred at the same time that MRIs became available widely, and there was no coincident jump in mortality. Central nervous system tumors make up 20–25 percent of cancers in children.
The average survival rate for all primary brain cancers in children is 74%. Brain cancers are the most common cancer in children under 19 and result in more deaths in this group than leukemia. Younger children tend to do less well.
The most common brain tumor types in children (0–14) are: pilocytic astrocytoma, malignant glioma, medulloblastoma, neuronal and mixed neuronal-glial tumors, and ependymoma.
In children under 2, about 70% of brain tumors are medulloblastomas, ependymomas, and low-grade gliomas. Less commonly, and seen usually in infants, are teratomas and atypical teratoid rhabdoid tumors. Germ cell tumors, including teratomas, make up just 3% of pediatric primary brain tumors, but the worldwide incidence varies significantly.
In the UK, 429 children aged 14 and under are diagnosed with a brain tumour on average each year, and 563 children and young people under the age of 19 are diagnosed.
Research
Immunotherapy
Cancer immunotherapy is being actively studied. For malignant gliomas no therapy has been shown to improve life expectancy as of 2015.
Vesicular stomatitis virus
In 2000, researchers used the vesicular stomatitis virus (VSV) to infect and kill cancer cells without affecting healthy cells.
Retroviral replicating vectors
Led by Prof. Nori Kasahara, researchers from USC, who are now at UCLA, reported in 2001 the first successful example of applying the use of retroviral replicating vectors towards transducing cell lines derived from solid tumors. Building on this initial work, the researchers applied the technology to in vivo models of cancer and in 2005 reported a long-term survival benefit in an experimental brain tumor animal model. Subsequently, in preparation for human clinical trials, this technology was further developed by Tocagen (a pharmaceutical company primarily focused on brain cancer treatments) as a combination treatment (Toca 511 & Toca FC). This has been under investigation since 2010 in a Phase I/II clinical trial for the potential treatment of recurrent high-grade glioma including glioblastoma and anaplastic astrocytoma. No results have yet been published.
Non-invasive detection
Efforts to detect and monitor development and treatment response of brain tumors by liquid biopsy from blood, cerebrospinal fluid or urine, are in the early stages of development.
| Biology and health sciences | Cancer | null |
37315 | https://en.wikipedia.org/wiki/Computer-aided%20design | Computer-aided design | Computer-aided design (CAD) is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design. This software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and to create a database for manufacturing. Designs made through CAD software help protect products and inventions when used in patent applications. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The terms computer-aided drafting (CAD) and computer-aided design and drafting (CADD) are also used.
Its use in designing electronic systems is known as electronic design automation (EDA). In mechanical design it is known as mechanical design automation (MDA), which includes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces, and solids in three-dimensional (3D) space.
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design (building information modeling), prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC digital content creation. The modern ubiquity and power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry.
The design of geometric models for object shapes, in particular, is occasionally called computer-aided geometric design (CAGD).
Overview
Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question.
CAD is one part of the whole digital product development (DPD) activity within the product lifecycle management (PLM) processes, and as such is used together with other tools, which are either integrated modules or stand-alone products, such as:
Computer-aided engineering (CAE) and finite element analysis (FEA, FEM)
Computer-aided manufacturing (CAM) including instructions to computer numerical control (CNC) machines
Photorealistic rendering and motion simulation
Document management and revision control using product data management (PDM)
CAD is also used for the accurate creation of photo simulations that are often required in the preparation of environmental impact reports, in which computer-aided designs of intended buildings are superimposed into photographs of existing environments to represent what that locale will be like, where the proposed facilities are allowed to be built. Potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
Types
There are several different types of CAD, each requiring the operator to think differently about how to use them and design their virtual components in a different manner. Virtually all CAD tools rely on constraint concepts that are used to define geometric or non-geometric elements of a model.
2D CAD
There are many producers of the lower-end 2D sketching systems, including a number of free and open-source programs. These provide an approach to the drawing process where scale and placement on the drawing sheet can easily be adjusted in the final draft as required, unlike in hand drafting.
3D CAD
3D wireframe is an extension of 2D drafting into a three-dimensional space. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
3D "dumb" solids are created in a way analogous to manipulations of real-world objects. Basic three-dimensional geometric forms (e.g., prisms, cylinders, spheres, or rectangles) have solid volumes added or subtracted from them as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids do not usually include tools to easily allow the motion of the components, set their limits to their motion, or identify interference between components.
There are several types of 3D solid modeling:
Parametric modeling allows the operator to use what is referred to as "design intent". The objects and features created are modifiable. Any future modifications can be made by changing how the original part was created. If a feature was intended to be located from the center of the part, the operator should locate it from the center of the model. The feature could be located using any geometric object already available in the part, but such an arbitrary placement would defeat the design intent. If the operator designs the part as it functions, the parametric modeler is able to make changes to the part while maintaining geometric and functional relationships.
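A minimal sketch of this idea in Python follows; the plate, its parameters, and the derived hole position are hypothetical stand-ins for the constraint solving a real parametric kernel performs.

```python
from dataclasses import dataclass

@dataclass
class Plate:
    width: float    # driving dimensions: the "design intent" lives here
    height: float

    @property
    def hole_center(self):
        # The hole is defined relative to the plate's center rather than by
        # absolute coordinates, so it follows any later change in size.
        return (self.width / 2, self.height / 2)

plate = Plate(width=100.0, height=60.0)
print(plate.hole_center)   # (50.0, 30.0)

plate.width = 140.0        # a later edit to a driving dimension
print(plate.hole_center)   # (70.0, 30.0): the dependent feature updates
```

A real parametric modeler generalizes this idea to full constraint graphs linking sketches, features, and assemblies.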
Direct or explicit modeling provides the ability to edit geometry without a history tree. With direct modeling, once a sketch is used to create geometry, the sketch is incorporated into the new geometry and the designer simply modifies the geometry afterward without needing the original sketch. As with parametric modeling, direct modeling has the ability to include relationships between selected geometry (e.g., tangency, concentricity).
Assembly modelling is a process which incorporates the results of the previous single-part modelling into a final product containing several parts. Assemblies can be hierarchical, depending on the specific CAD software vendor, and highly complex models can be achieved (e.g. in building engineering by using computer-aided architectural design software).
Freeform CAD
Top-end CAD systems offer the capability to incorporate more organic, aesthetic and ergonomic features into designs. Freeform surface modeling is often combined with solids to allow the designer to create products that fit the human form and visual requirements as well as how they interface with the machine.
Technology
Originally, software for CAD systems was developed with computer languages such as Fortran and ALGOL, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modelers and freeform surface systems are built around a number of key C modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
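As a rough, self-contained illustration of the kind of geometry evaluation such a kernel performs, the Python sketch below evaluates a point on a rational Bézier curve (a single-span special case of NURBS) with the de Casteljau algorithm in homogeneous coordinates; the control points and weights are arbitrary example values, not any particular kernel's API.

```python
def rational_bezier_point(ctrl_pts, weights, t):
    """Evaluate a rational Bezier curve (a one-span NURBS) at t in [0, 1]."""
    # Lift (x, y) control points to homogeneous coordinates (w*x, w*y, w).
    pts = [(w * x, w * y, w) for (x, y), w in zip(ctrl_pts, weights)]
    # Repeated linear interpolation (de Casteljau) down to a single point.
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    xw, yw, w = pts[0]
    return (xw / w, yw / w)          # project back to Cartesian coordinates

# With these weights the curve traces an exact quarter of the unit circle.
point = rational_bezier_point([(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
                              [1.0, 2 ** 0.5 / 2, 1.0], 0.5)
print(point)   # approximately (0.7071, 0.7071), a point on the unit circle
```

Production kernels layer B-rep topology, constraint solving, and associativity on top of this kind of curve and surface evaluation.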
The capabilities of these associative relationships have led to a new form of prototyping called digital prototyping. In contrast to physical prototypes, which add manufacturing time to the design process, digital prototypes can be built and revised without fabricating anything. That said, CAD models can also be generated by a computer after the physical prototype has been scanned, using an industrial CT scanning machine. Depending on the nature of the business, digital or physical prototypes can be initially chosen according to specific needs.
Today, CAD systems exist for all the major platforms (Windows, Linux, UNIX and Mac OS X); some packages support multiple platforms.
Currently, no special hardware is required for most CAD software. However, some CAD systems can do graphically and computationally intensive tasks, so a modern graphics card, high speed (and possibly multiple) CPUs and large amounts of RAM may be recommended.
The human-machine interface is generally via a computer mouse but can also be via a pen and digitizing graphics tablet. Manipulation of the view of the model on the screen is also sometimes done with the use of a Spacemouse/SpaceBall. Some systems also support stereoscopic glasses for viewing the 3D model. Technologies that in the past were limited to larger installations or specialist applications have become available to a wide group of users. These include the CAVE or HMDs and interactive devices like motion-sensing technology.
Software
Starting with the IBM Drafting System in the mid-1960s, computer-aided design systems began to provide more capabilities than just the ability to reproduce manual drafting electronically, and the cost-benefit for companies to switch to CAD became apparent. The software automated many tasks that are taken for granted from computer systems today, such as automated generation of bills of materials, auto layout in integrated circuits, interference checking, and many others. Eventually, CAD provided the designer with the ability to perform engineering calculations. During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the engineering industry, where the previously separate roles of draftsman, designer, and engineer began to merge. CAD is an example of the pervasive effect computers were beginning to have on industry.
Current computer-aided design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers. Modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling.
CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).
CAD is mainly used for detailed design of 3D models or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components. It can also be used to design objects such as jewelry, furniture, appliances, etc. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs. 4D BIM is a type of virtual construction engineering simulation incorporating time or schedule-related information for project management.
CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.
License management software
In the 2000s, some CAD software vendors shipped their distributions with dedicated license manager software that controlled how often or how many users could utilize the CAD system. It could run either on a local machine (loading from a local storage device) or on a local network fileserver, and in the latter case was usually tied to a specific IP address.
List of software packages
CAD software enables engineers and architects to design, inspect and manage engineering projects within an integrated graphical user interface (GUI) on a personal computer system. Most applications support solid modeling with boundary representation (B-Rep) and NURBS geometry, and enable the same to be published in a variety of formats.
Based on market statistics, commercial software from Autodesk, Dassault Systèmes, Siemens PLM Software, and PTC dominates the CAD industry. The following is a list of major CAD applications, grouped by usage statistics.
Commercial software
ABViewer
AC3D
Alibre Design
ArchiCAD (Graphisoft)
AutoCAD (Autodesk)
AutoTURN
AxSTREAM
BricsCAD
CATIA (Dassault Systèmes)
Cobalt
CorelCAD
EAGLE
Fusion 360 (Autodesk)
IntelliCAD
Inventor (Autodesk)
IRONCAD
KeyCreator (Kubotek)
Landscape Express
MEDUSA4
MicroStation (Bentley Systems)
Modelur (AgiliCity)
Onshape (PTC)
NX (Siemens Digital Industries Software)
PTC Creo (successor to Pro/ENGINEER) (PTC)
PunchCAD
Remo 3D
Revit (Autodesk)
Rhinoceros 3D
SketchUp
Solid Edge (Siemens Digital Industries Software)
SOLIDWORKS (Dassault Systèmes)
SpaceClaim
T-FLEX CAD
TranslateCAD
TurboCAD
Vectorworks (Nemetschek)
Open-source software
BRL-CAD
FreeCAD
LibreCAD
LeoCAD
OpenSCAD
QCAD
Salome (software)
SolveSpace
Freeware
BricsCAD Shape
Tinkercad (successor to Autodesk 123D)
CAD kernels
ACIS by (Spatial Corp owned by Dassault Systèmes)
C3D Toolkit by C3D Labs
Open CASCADE Open Source
Parasolid by (Siemens Digital Industries Software)
ShapeManager by (Autodesk)
| Technology | Basics | null |
37329 | https://en.wikipedia.org/wiki/Aldehyde | Aldehyde | In organic chemistry, an aldehyde () is an organic compound containing a functional group with the structure . The functional group itself (without the "R" side chain) can be referred to as an aldehyde but can also be classified as a formyl group. Aldehydes are a common motif in many chemicals important in technology and biology.
Structure and bonding
Aldehyde molecules have a central carbon atom that is connected by a double bond to oxygen, a single bond to hydrogen and another single bond to a third substituent, which is carbon or, in the case of formaldehyde, hydrogen. The central carbon is often described as being sp2-hybridized. The aldehyde group is somewhat polar. The bond length is about 120–122 picometers.
Physical properties and characterization
Aldehydes have properties that are diverse and that depend on the remainder of the molecule. Smaller aldehydes such as formaldehyde and acetaldehyde are soluble in water, and the volatile aldehydes have pungent odors.
Aldehydes can be identified by spectroscopic methods. Using IR spectroscopy, they display a strong νCO band near 1700 cm−1. In their 1H NMR spectra, the formyl hydrogen center absorbs near δH 9.5 to 10, which is a distinctive part of the spectrum. This signal shows the characteristic coupling to any protons on the α carbon, with a small coupling constant typically less than 3.0 Hz. The 13C NMR spectra of aldehydes and ketones give a suppressed (weak) but distinctive signal at δC 190 to 205.
Applications and occurrence
Important aldehydes and related compounds. The aldehyde group (or formyl group) is colored red. From the left: (1) formaldehyde and (2) its trimer 1,3,5-trioxane, (3) acetaldehyde and (4) its enol vinyl alcohol, (5) glucose (pyranose form as α--glucopyranose), (6) the flavorant cinnamaldehyde, (7) retinal, which forms with opsins photoreceptors, and (8) the vitamin pyridoxal.
Naturally occurring aldehydes
Traces of many aldehydes are found in essential oils and often contribute to their pleasant odours, including cinnamaldehyde, cilantro, and vanillin. Possibly due to the high reactivity of the formyl group, aldehydes are not commonly found in organic "building block" molecules, such as amino acids, nucleic acids, and lipids. However, most sugars are derivatives of aldehydes. These aldoses exist as hemiacetals, a sort of masked form of the parent aldehyde. For example, in aqueous solution only a tiny fraction of glucose exists as the aldehyde.
Synthesis
Hydroformylation
Of the several methods for preparing aldehydes, one dominant technology is hydroformylation. Hydroformylation is conducted on a very large scale for diverse aldehydes. It involves treatment of the alkene with a mixture of hydrogen gas and carbon monoxide in the presence of a metal catalyst. Illustrative is the generation of butyraldehyde by hydroformylation of propylene:
One complication with this process is the formation of isomers, such as isobutyraldehyde:
Oxidative routes
The largest operations are the oxidations of methanol and ethanol to formaldehyde and acetaldehyde, respectively, which are produced on a multimillion-ton scale annually. Other large-scale aldehydes are produced by autoxidation of hydrocarbons: benzaldehyde from toluene, acrolein from propylene, and methacrolein from isobutene. The Wacker process, the oxidation of ethylene to acetaldehyde in the presence of copper and palladium catalysts, is also used. "Green" and cheap oxygen (or air) is the oxidant of choice.
Laboratories may instead apply a wide variety of specialized oxidizing agents, which are often consumed stoichiometrically. Chromium(VI) reagents are popular. Oxidation can be achieved by heating the alcohol with an acidified solution of potassium dichromate. In this case, excess dichromate will further oxidize the aldehyde to a carboxylic acid, so either the aldehyde is distilled out as it forms (if volatile) or milder reagents such as PCC are used.
A variety of reagent systems achieve aldehydes under chromium-free conditions. One such class comprises the hypervalent organoiodine compounds (i.e., IBX acid, Dess–Martin periodinane), although these often also oxidize the α position. A Lux-Flood acid will activate other pre-oxidized substrates: various sulfoxides (e.g. the Swern oxidation), or amine oxides (e.g., the Ganem oxidation). Sterically hindered nitroxyls (i.e., TEMPO) can catalyze aldehyde formation with a cheaper oxidant.
Alternatively, vicinal diols or their oxidized sequelae (acyloins or α-hydroxy acids) can be oxidized with cleavage to two aldehydes or an aldehyde and carbon dioxide.
Specialty methods
Common reactions
Aldehydes participate in many reactions. From the industrial perspective, important reactions are:
condensations, e.g., to prepare plasticizers and polyols, and
reduction to produce alcohols, especially "oxo-alcohols". From the biological perspective, the key reactions involve addition of nucleophiles to the formyl carbon in the formation of imines (oxidative deamination) and hemiacetals (structures of aldose sugars).
Acid-base reactions
Because of resonance stabilization of the conjugate base, an α-hydrogen in an aldehyde is weakly acidic, with a pKa near 17. Note, however, that this is much more acidic than an alkane or ether hydrogen, which has a pKa near 50, and is even more acidic than a ketone α-hydrogen, which has a pKa near 20. This acidification of the α-hydrogen in aldehydes is attributed to:
the electron-withdrawing quality of the formyl center and
the fact that the conjugate base, an enolate anion, delocalizes its negative charge.
The formyl proton itself does not readily undergo deprotonation.
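To put these approximate pKa values in perspective, they can be converted into acid-dissociation constants; the short Python sketch below does the arithmetic (the figures are the rough values quoted above, not measurements for any specific compound).

```python
# Convert approximate pKa values to Ka and compare relative acidities.
pka = {"aldehyde alpha-H": 17, "ketone alpha-H": 20, "alkane/ether H": 50}
ka = {name: 10 ** (-value) for name, value in pka.items()}

print(ka["aldehyde alpha-H"] / ka["ketone alpha-H"])   # ~1e3: a thousandfold more acidic
print(ka["aldehyde alpha-H"] / ka["alkane/ether H"])   # ~1e33: vastly more acidic
```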
Enolization
Aldehydes (except those without an alpha carbon, or without protons on the alpha carbon, such as formaldehyde and benzaldehyde) can exist in either the keto or the enol tautomer. Keto–enol tautomerism is catalyzed by either acid or base. In neutral solution, the enol is the minority tautomer, and the two forms interconvert several times per second. But it becomes the dominant tautomer in strong acid or base solutions, and enolized aldehydes undergo nucleophilic attack at the α position.
Reduction
The formyl group can be readily reduced to a primary alcohol (). Typically this conversion is accomplished by catalytic hydrogenation either directly or by transfer hydrogenation. Stoichiometric reductions are also popular, as can be effected with sodium borohydride.
Oxidation
The formyl group readily oxidizes to the corresponding carboxyl group (). The preferred oxidant in industry is oxygen or air. In the laboratory, popular oxidizing agents include potassium permanganate, nitric acid, chromium(VI) oxide, and chromic acid. The combination of manganese dioxide, cyanide, acetic acid and methanol will convert the aldehyde to a methyl ester.
Another oxidation reaction is the basis of the silver-mirror test. In this test, an aldehyde is treated with Tollens' reagent, which is prepared by adding a drop of sodium hydroxide solution into silver nitrate solution to give a precipitate of silver(I) oxide, and then adding just enough dilute ammonia solution to redissolve the precipitate in aqueous ammonia to produce the diamminesilver(I) complex. This reagent converts aldehydes to carboxylic acids without attacking carbon–carbon double bonds. The name silver-mirror test arises because this reaction produces a precipitate of silver, whose presence can be used to test for an aldehyde.
A further oxidation reaction involves Fehling's reagent as a test. The complex ions are reduced to a red-brick-coloured precipitate.
If the aldehyde cannot form an enolate (e.g., benzaldehyde), addition of strong base induces the Cannizzaro reaction. This reaction results in disproportionation, producing a mixture of alcohol and carboxylic acid.
Nucleophilic addition reactions
Nucleophiles add readily to the carbonyl group. In the product, the carbonyl carbon becomes sp3-hybridized, being bonded to the nucleophile, and the oxygen center becomes protonated:
In many cases, a water molecule is removed after the addition takes place; in this case, the reaction is classed as an addition–elimination or addition–condensation reaction. There are many variations of nucleophilic addition reactions.
Oxygen nucleophiles
In the acetalisation reaction, under acidic or basic conditions, an alcohol adds to the carbonyl group and a proton is transferred to form a hemiacetal. Under acidic conditions, the hemiacetal and the alcohol can further react to form an acetal and water. Simple hemiacetals are usually unstable, although cyclic ones such as glucose can be stable. Acetals are stable, but revert to the aldehyde in the presence of acid. Aldehydes can react with water to form hydrates, . These diols are stable when strong electron withdrawing groups are present, as in chloral hydrate. The mechanism of formation is identical to hemiacetal formation.
Another aldehyde molecule can also act as the nucleophile to give polymeric or oligomeric acetals called paraldehydes.
Nitrogen nucleophiles
In alkylimino-de-oxo-bisubstitution, a primary or secondary amine adds to the carbonyl group and a proton is transferred from the nitrogen to the oxygen atom to create a carbinolamine. In the case of a primary amine, a water molecule can be eliminated from the carbinolamine intermediate to yield an imine or its trimer, a hexahydrotriazine. This reaction is catalyzed by acid. Hydroxylamine () can also add to the carbonyl group; after the elimination of water, this results in an oxime. An ammonia derivative such as hydrazine () or 2,4-dinitrophenylhydrazine can also be the nucleophile; after the elimination of water, a hydrazone results, and hydrazones are usually orange crystalline solids. This reaction forms the basis of a test for aldehydes and ketones.
Carbon nucleophiles
The cyano group in HCN can add to the carbonyl group to form cyanohydrins, . In this reaction the ion is the nucleophile that attacks the partially positive carbon atom of the carbonyl group. The mechanism involves a pair of electrons from the carbonyl-group double bond transferring to the oxygen atom, leaving it single-bonded to carbon and giving the oxygen atom a negative charge. This intermediate ion rapidly reacts with a proton, such as one from the HCN molecule, to form the alcohol group of the cyanohydrin.
Organometallic compounds, such as organolithium reagents, Grignard reagents, or acetylides, undergo nucleophilic addition reactions, yielding a substituted alcohol group. Related reactions include organostannane additions, Barbier reactions, and the Nozaki–Hiyama–Kishi reaction.
In the aldol reaction, the metal enolates of ketones, esters, amides, and carboxylic acids add to aldehydes to form β-hydroxycarbonyl compounds (aldols). Acid or base-catalyzed dehydration then leads to α,β-unsaturated carbonyl compounds. The combination of these two steps is known as the aldol condensation.
The Prins reaction occurs when a nucleophilic alkene or alkyne reacts with an aldehyde as electrophile. The product of the Prins reaction varies with reaction conditions and substrates employed.
Bisulfite reaction
Aldehydes characteristically form "addition compounds" with bisulfites: RCHO + HSO3− → RCH(OH)SO3−.
This reaction is used as a test for aldehydes and is useful for separation or purification of aldehydes.
More complex reactions
Dialdehydes
A dialdehyde is an organic chemical compound with two aldehyde groups. The names of dialdehydes have the ending -dial or sometimes -dialdehyde. Short aliphatic dialdehydes are sometimes named after the diacid from which they can be derived. An example is butanedial, which is also called succinaldehyde (from succinic acid).
Biochemistry
Some aldehydes are substrates for aldehyde dehydrogenase enzymes which metabolize aldehydes in the body. There are toxicities associated with some aldehydes that are related to neurodegenerative disease, heart disease, and some types of cancer.
Examples of aldehydes
Formaldehyde (methanal)
Acetaldehyde (ethanal)
Propionaldehyde (propanal)
Butyraldehyde (butanal)
Isovaleraldehyde
Benzaldehyde (phenylmethanal)
Cinnamaldehyde
Vanillin
Tolualdehyde
Furfural
Retinaldehyde
Glycolaldehyde
Examples of dialdehydes
Glutaraldehyde
Glyoxal
Malondialdehyde
Phthalaldehyde
Succindialdehyde
Uses
Of all aldehydes, formaldehyde is produced on the largest scale, about . It is mainly used in the production of resins when combined with urea, melamine, and phenol (e.g., Bakelite). It is a precursor to methylene diphenyl diisocyanate ("MDI"), a precursor to polyurethanes. The second main aldehyde is butyraldehyde, of which about are prepared by hydroformylation. It is the principal precursor to 2-ethylhexanol, which is used as a plasticizer. Acetaldehyde once was a dominating product, but production levels have declined to less than because it mainly served as a precursor to acetic acid, which is now prepared by carbonylation of methanol. Many other aldehydes find commercial applications, often as precursors to alcohols, the so-called oxo alcohols, which are used in detergents. Some aldehydes are produced only on a small scale (less than 1000 tons per year) and are used as ingredients in flavours and perfumes such as Chanel No. 5. These include cinnamaldehyde and its derivatives, citral, and lilial.
Nomenclature
IUPAC names for aldehydes
The common names for aldehydes do not strictly follow official guidelines, such as those recommended by IUPAC, but these rules are useful. IUPAC prescribes the following nomenclature for aldehydes:
Acyclic aliphatic aldehydes are named as derivatives of the longest carbon chain containing the aldehyde group. Thus, HCHO is named as a derivative of methane, and is named as a derivative of butane. The name is formed by changing the suffix -e of the parent alkane to -al, so that HCHO is named methanal, and is named butanal.
In other cases, such as when a group is attached to a ring, the suffix -carbaldehyde may be used. Thus, is known as cyclohexanecarbaldehyde. If the presence of another functional group demands the use of a suffix, the aldehyde group is named with the prefix formyl-. This prefix is preferred to methanoyl-.
If the compound is a natural product or a carboxylic acid, the prefix oxo- may be used to indicate which carbon atom is part of the aldehyde group; for example, is named 2-oxoethanoic acid.
If replacing the aldehyde group with a carboxyl group () would yield a carboxylic acid with a trivial name, the aldehyde may be named by replacing the suffix -ic acid or -oic acid in this trivial name by -aldehyde.
Etymology
The word aldehyde was coined by Justus von Liebig as a contraction of the Latin (dehydrogenated alcohol). In the past, aldehydes were sometimes named after the corresponding alcohols, for example, vinous aldehyde for acetaldehyde. (Vinous is from Latin "wine", the traditional source of ethanol, cognate with vinyl.)
The term formyl group is derived from the Latin word for "ant". This root can be recognized in the simplest aldehyde, formaldehyde, and in the simplest carboxylic acid, formic acid.
| Physical sciences | Carbon–oxygen bond | null |
37335 | https://en.wikipedia.org/wiki/Drought | Drought | A drought is a period of drier-than-normal conditions. A drought can last for days, months or years. Drought often has large impacts on the ecosystems and agriculture of affected regions, and causes harm to the local economy. Annual dry seasons in the tropics significantly increase the chances of a drought developing, with subsequent increased wildfire risks. Heat waves can significantly worsen drought conditions by increasing evapotranspiration. This dries out forests and other vegetation, and increases the amount of fuel for wildfires.
Drought is a recurring feature of the climate in most parts of the world, and it is becoming more extreme and less predictable due to climate change; dendrochronological studies date this trend back to 1900. There are three kinds of drought effects: environmental, economic and social. Environmental effects include the drying of wetlands, more and larger wildfires, and loss of biodiversity.
Economic impacts of drought result from disruptions to agriculture and livestock farming (causing food insecurity), forestry, public water supplies, maritime navigation (due to e.g. lower water levels) and electric power supply (by affecting hydropower systems), as well as impacts on human health.
Social and health costs include the negative effect on the health of people directly exposed to this phenomenon (excessive heat waves), high food costs, stress caused by failed harvests, water scarcity, etc. Drought can also lead to increased air pollution due to increased dust concentrations and wildfires. Prolonged droughts have caused mass migrations and humanitarian crisis.
Examples of regions with increased drought risk are the Amazon basin, Australia, the Sahel region and India. For example, in 2005, parts of the Amazon basin experienced the worst drought in 100 years. Australia could experience more severe and more frequent droughts in the future, a government-commissioned report said on July 6, 2008. The long Australian Millennial drought broke in 2010. The 2020–2022 Horn of Africa drought has surpassed the severe 2010–2011 drought in both duration and severity. More than 150 districts in India are drought vulnerable, mostly concentrated in the states of Rajasthan, Gujarat, Madhya Pradesh and adjoining Chhattisgarh, Uttar Pradesh, northern Karnataka and adjoining Maharashtra.
Throughout history, humans have usually viewed droughts as disasters due to the impact on food availability and the rest of society. People have viewed drought as a natural disaster or as something influenced by human activity, or as a result of supernatural forces.
Definition
The IPCC Sixth Assessment Report defines a drought simply as "drier than normal conditions". This means that a drought is "a moisture deficit relative to the average water availability at a given location and season".
According to National Integrated Drought Information System, a multi-agency partnership, drought is generally defined as "a deficiency of precipitation over an extended period of time (usually a season or more), resulting in a water shortage". The National Weather Service office of the NOAA defines drought as "a deficiency of moisture that results in adverse impacts on people, animals, or vegetation over a sizeable area".
Drought is a complex phenomenon, relating to the absence of water, which is difficult to monitor and define. By the early 1980s, over 150 definitions of "drought" had already been published. The range of definitions reflects differences in regions, needs, and disciplinary approaches.
Categories
There are three major categories of drought based on where in the water cycle the moisture deficit occurs: meteorological drought, hydrological drought, and agricultural or ecological drought. A meteorological drought occurs due to lack of precipitation. A hydrological drought is related to low runoff, streamflow, and reservoir and groundwater storage. An agricultural or ecological drought causes plant stress from a combination of evaporation and low soil moisture. Some organizations add another category: socioeconomic drought, which occurs when the demand for an economic good exceeds supply as a result of a weather-related shortfall in water supply. Socioeconomic drought is a similar concept to water scarcity.
The different categories of droughts have different causes but similar effects:
Meteorological drought occurs when there is a prolonged time with less than average precipitation. Meteorological drought usually precedes the other kinds of drought. As a drought persists, the conditions surrounding it gradually worsen and its impact on the local population gradually increases.
Hydrological drought happens when water reserves available in sources such as aquifers, lakes and reservoirs fall below average or a locally significant threshold. Hydrological drought tends to develop more slowly because it involves stored water that is used but not replenished. Due to the close interaction with water use, this type of drought can be heavily influenced by water management. Both positive and negative human influences have been documented, and strategic water management seems key to mitigating drought impact. Like agricultural droughts, hydrological droughts can be triggered by more than just a loss of rainfall. For instance, around 2007 Kazakhstan was awarded a large amount of money by the World Bank to restore water that had been diverted to other nations from the Aral Sea under Soviet rule. Similar circumstances also place Kazakhstan's largest lake, Balkhash, at risk of completely drying out.
Agricultural or ecological droughts affect crop production or ecosystems in general. This condition can also arise independently from any change in precipitation levels when either increased irrigation or soil conditions and erosion triggered by poorly planned agricultural endeavors cause a shortfall in water available to the crops.
Indices and monitoring
Several indices have been defined to quantify and monitor drought at different spatial and temporal scales. A key property of drought indices is their spatial comparability, and they must be statistically robust. Drought indices include:
Palmer drought index (sometimes called the Palmer drought severity index (PDSI)): a regional drought index commonly used for monitoring drought events and studying areal extent and severity of drought episodes. The index uses precipitation and temperature data to study moisture supply and demand using a simple water balance model.
Keetch-Byram Drought Index: an index that is calculated based on rainfall, air temperature, and other meteorological factors.
Standardized precipitation index (SPI): computed from precipitation alone, which makes it a simple and easy-to-apply indicator for monitoring and prediction of droughts in different parts of the world. The World Meteorological Organization recommends this index for identifying and monitoring meteorological droughts in different climates and time periods. A simplified computation sketch is given after this list.
Standardized Precipitation Evapotranspiration Index (SPEI): a multiscalar drought index based on climatic data. The SPEI accounts also for the role of the increased atmospheric evaporative demand on drought severity. Evaporative demand is particularly dominant during periods of precipitation deficit. The SPEI calculation requires long-term and high-quality precipitation and atmospheric evaporative demand datasets. These can be obtained from ground stations or gridded data based on reanalysis as well as satellite and multi-source datasets.
Indices related to vegetation: root-zone soil moisture, vegetation condition index (VCI) and vegetation health index (VHI). The VCI and VHI are computed based on vegetation indices such as the normalized difference vegetation index (NDVI) and temperature datasets.
Deciles index
Standardized runoff index
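As a simplified illustration of how a precipitation-only index such as the SPI is computed, the Python sketch below fits a gamma distribution to accumulated precipitation and maps the resulting cumulative probabilities onto a standard normal scale. It is a single-climatology toy version: operational SPI fits each calendar month or season separately and treats zero-precipitation cases explicitly.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, accum=3):
    """Rough Standardized Precipitation Index for an accumulation window."""
    # Accumulate precipitation over the chosen window (e.g. 3 months for SPI-3).
    acc = np.convolve(monthly_precip, np.ones(accum), mode="valid")
    # Fit a gamma distribution to the accumulated totals.
    shape, loc, scale = stats.gamma.fit(acc, floc=0)
    cdf = stats.gamma.cdf(acc, shape, loc=loc, scale=scale)
    # Map cumulative probabilities onto the standard normal distribution;
    # negative values indicate drier-than-normal conditions.
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
example = rng.gamma(shape=2.0, scale=40.0, size=120)   # synthetic monthly totals (mm)
print(spi(example)[:5])
```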
High-resolution drought information helps to better assess the spatial and temporal changes and variability in drought duration, severity, and magnitude at a much finer scale. This supports the development of site-specific adaptation measures.
The application of multiple indices using different datasets helps to better manage and monitor droughts than using a single dataset. This is particularly the case in regions of the world where not enough data is available, such as Africa and South America. Using a single dataset can be limiting, as it may not capture the full spectrum of drought characteristics and impacts.
Careful monitoring of moisture levels can also help predict increased risk for wildfires.
Causes
General precipitation deficiency
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation over a longer duration.
Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice.
Droughts occur mainly in areas where normal levels of rainfall are, in themselves, low. If the mechanisms that normally produce precipitation fail to deliver enough moisture to the surface over a sufficient time, the result is a drought. Drought can be triggered by a high level of reflected sunlight and an above-average prevalence of high pressure systems; winds carrying continental, rather than oceanic, air masses; and ridges of high pressure aloft, which can prevent or restrict the development of thunderstorm activity or rainfall over a given region. Once a region is within drought, feedback mechanisms such as local arid air, hot conditions which can promote warm core ridging, and minimal evapotranspiration can worsen drought conditions.
Dry season
Within the tropics, distinct wet and dry seasons emerge due to the movement of the Intertropical Convergence Zone or monsoon trough. The dry season greatly increases drought occurrence and is characterized by low humidity, with watering holes and rivers drying up. Because of the loss of these watering holes, many grazing animals are forced to migrate in search of water and more fertile lands; examples of such animals are zebras, elephants, and wildebeest. Because of the lack of water in the plants, bushfires are common. Since water vapor becomes more energetic with increasing temperature, more water vapor is required to increase relative humidity values to 100% at higher temperatures (or to get the temperature to fall to the dew point). Periods of warmth quicken the pace of fruit and vegetable production, increase evaporation and transpiration from plants, and worsen drought conditions.
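The temperature dependence described above can be made quantitative with a saturation vapour pressure formula. The Python sketch below uses a Magnus-type approximation (one common coefficient set; exact coefficients vary between sources) to show how much more water vapour saturated air holds as temperature rises.

```python
import math

def saturation_vapour_pressure(temp_c):
    """Approximate saturation vapour pressure over liquid water, in hPa,
    from a Magnus-type formula (coefficients are one common choice)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

for t in (10, 20, 30, 40):
    print(f"{t} degC: about {saturation_vapour_pressure(t):.1f} hPa")
# The value roughly doubles for each ~10 degC of warming, so far more water
# vapour is needed to reach 100% relative humidity in hot air than in cool air.
```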
El Niño–Southern Oscillation (ENSO)
The El Niño–Southern Oscillation (ENSO) phenomenon can sometimes play a significant role in drought. ENSO comprises two patterns of temperature anomalies in the central Pacific Ocean, known as La Niña and El Niño. La Niña events are generally associated with drier and hotter conditions and further exacerbation of drought in California and the Southwestern United States, and to some extent the U.S. Southeast. Meteorological scientists have observed that La Niñas have become more frequent over time.
Conversely, during El Niño events, drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. Winters during the El Niño are warmer and drier than average conditions in the Northwest, northern Midwest, and northern Mideast United States, so those regions experience reduced snowfalls. Conditions are also drier than normal from December to February in south-central Africa, mainly in Zambia, Zimbabwe, Mozambique, and Botswana. Direct effects of El Niño resulting in drier conditions occur in parts of Southeast Asia and Northern Australia, increasing bush fires, worsening haze, and decreasing air quality dramatically. Drier-than-normal conditions are also in general observed in Queensland, inland Victoria, inland New South Wales, and eastern Tasmania from June to August. As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it causes extensive drought in the western Pacific. Singapore experienced the driest February in 2014 since records began in 1869, with only 6.3 mm of rain falling in the month and temperatures hitting as high as 35 °C on 26 February. The years 1968 and 2005 had the next driest Februaries, when 8.4 mm of rain fell.
Climate change
Globally, the occurrence of droughts has increased as a result of the increase in temperature and atmospheric evaporative demand. In addition, increased climate variability has increased the frequency and severity of drought events. Moreover, the occurrence and impact of droughts are aggravated by anthropogenic activities such as land use change and water management and demand.
The IPCC Sixth Assessment Report also pointed out that "Warming over land drives an increase in atmospheric evaporative demand and in the severity of drought events" and "Increased atmospheric evaporative demand increases plant water stress, leading to agricultural and ecological drought".
There is a rise of compound warm-season droughts in Europe that are concurrent with an increase in potential evapotranspiration.
Erosion and human activities
Human activity can directly trigger exacerbating factors such as over-farming, excessive irrigation, deforestation, and erosion, which adversely impact the ability of the land to capture and hold water. In arid climates, the main source of erosion is wind. Erosion can be the result of material movement by the wind: the wind can lift small particles and carry them to another region (deflation), and suspended particles within the wind may strike solid objects, causing erosion by abrasion. Wind erosion generally occurs in areas with little or no vegetation, often in areas where there is insufficient rainfall to support vegetation.
Impacts
Drought is one of the most complex and major natural hazards, and it has devastating impacts on the environment, economy, water resources, agriculture, and society worldwide.
One can divide the impacts of droughts and water shortages into three groups: environmental, economic and social (including health).
Environmental and economic impacts
Environmental effects of droughts include: lower surface and subterranean water levels, lower flow levels (with a decrease below the minimum leading to direct danger for amphibian life), increased pollution of surface water, the drying out of wetlands, more and larger wildfires, higher deflation intensity, loss of biodiversity, worse health of trees, and the appearance of pests and dendroid diseases. Drought-induced tree mortality is missing from most climate models' representation of forests as a land carbon sink.
Economic losses as a result of droughts include lower agricultural, forest, game and fishing output; higher food-production costs; lower energy production in hydro plants; losses caused by depleted water tourism and transport revenue; problems with water supply for the energy sector and for technological processes in metallurgy, mining, and the chemical, paper, wood and foodstuff industries; and disruption of water supplies for municipal economies.
Further examples of common environmental and economic consequences of drought include:
Alteration of diversity of plant communities, which can have an impact on net primary production and other ecosystem services.
Wildfires, such as Australian bushfires and wildfires in the United States, become more common during times of drought and may cause human deaths.
Dust Bowls, themselves a sign of erosion, which further erode the landscape
Dust storms, when drought hits an area suffering from desertification and erosion
Habitat damage, affecting both terrestrial and aquatic wildlife
Snake migration, which results in snake-bites
Reduced electricity production due to reduced water-flow through hydroelectric dams
Shortages of water for industrial users
Agricultural impacts
Droughts can cause land degradation and loss of soil moisture, resulting in the destruction of cropland productivity. This can result in diminished crop growth or yields and reduced carrying capacity for livestock. Drought in combination with high levels of grazing pressure can function as the tipping point for an ecosystem, causing woody encroachment.
Water stress affects plant development and quality in a variety of ways. Firstly, drought can cause poor germination and impaired seedling development. At the same time, plant growth relies on cellular division, cell enlargement, and differentiation; drought stress impairs mitosis and cell elongation via loss of turgor pressure, which results in poor growth. Development of leaves is also dependent upon turgor pressure, concentration of nutrients, and carbon assimilates, all of which are reduced by drought conditions, so drought stress leads to a decrease in leaf size and number. Plant height, biomass, leaf size and stem girth have been shown to decrease in maize under water-limiting conditions. Crop yield is also negatively affected by drought stress; the reduction in yield results from decreases in photosynthetic rate, changes in leaf development, and altered allocation of resources, all due to drought stress. Crop plants exposed to drought stress suffer from reductions in leaf water potential and transpiration rate. Water-use efficiency increases in crops such as wheat while decreasing in others, such as potatoes.
Plants need water for the uptake of nutrients from the soil and for the transport of nutrients throughout the plant; drought conditions limit these functions, leading to stunted growth. Drought stress also causes a decrease in photosynthetic activity in plants due to the reduction of photosynthetic tissues, stomatal closure, and reduced performance of photosynthetic machinery. This reduction in photosynthetic activity contributes to the reduction in plant growth and yields. Another factor influencing reduced plant growth and yields is the allocation of resources: following drought stress, plants allocate more resources to roots to aid in water uptake, increasing root growth while reducing the growth of other plant parts and decreasing yields.
Social and health impacts
The most negative impacts of drought for humans include crop failure, food crisis, famine, malnutrition, and poverty, which lead to loss of life and mass migration of people.
There are negative effects on the health of people who are directly exposed to this phenomenon (excessive heat waves). Droughts can also cause limitations of water supplies, increased water pollution levels, high food-costs, stress caused by failed harvests, water scarcity, etc. Reduced water quality can occur because lower water-flows reduce dilution of pollutants and increase contamination of remaining water sources.
This explains why droughts and water scarcity operate as a factor which increases the gap between developed and developing countries.
Effects vary according to vulnerability. For example, subsistence farmers are more likely to migrate during drought because they do not have alternative food-sources. Areas with populations that depend on water sources as a major food-source are more vulnerable to famine.
Further examples of social and health consequences include:
Water scarcity, crop failure, famine and hunger – drought provides too little water to support food crops; malnutrition, dehydration and related diseases
Mass migration, resulting in internal displacement and international refugees
Social unrest
War over natural resources, including water and food
Cyanotoxin accumulation within food chains and water supply (some of which are among the most potent toxins known to science) can cause cancer with low exposure over the long term. High levels of microcystin appeared in San Francisco Bay Area salt-water shellfish and fresh-water supplies throughout the state of California in 2016.
Loss of fertile soils
Wind erosion is much more severe in arid areas and during times of drought. For example, in the Great Plains, it is estimated that soil loss due to wind erosion can be as much as 6100 times greater in drought years than in wet years.
Loess is a homogeneous, typically nonstratified, porous, friable, slightly coherent, often calcareous, fine-grained, silty, pale yellow or buff, windblown (Aeolian) sediment. It generally occurs as a widespread blanket deposit that covers areas of hundreds of square kilometers and tens of meters thick. Loess often stands in either steep or vertical faces. Loess tends to develop into highly rich soils. Under appropriate climatic conditions, areas with loess are among the most agriculturally productive in the world. Loess deposits are geologically unstable by nature, and will erode very readily. Therefore, windbreaks (such as big trees and bushes) are often planted by farmers to reduce the wind erosion of loess.
Regions particularly affected
Amazon basin
In 2005, parts of the Amazon basin experienced the worst drought in 100 years. A 2006 article reported results showing that the forest in its present form could survive only three years of drought. Scientists at the Brazilian National Institute of Amazonian Research argue in the article that this drought response, coupled with the effects of deforestation on regional climate, are pushing the rainforest towards a "tipping point" where it would irreversibly start to die. It concludes that the rainforest is on the brink of being turned into savanna or desert, with catastrophic consequences for the world's climate. According to the WWF, the combination of climate change and deforestation increases the drying effect of dead trees that fuels forest fires.
Australia
The 1997–2009 Millennium Drought in Australia led to a water supply crisis across much of the country. As a result, many desalination plants were built for the first time (see list).
By far the largest part of Australia is desert or semi-arid lands commonly known as the outback. A 2005 study by Australian and American researchers investigated the desertification of the interior, and suggested that one explanation was related to human settlers who arrived about 50,000 years ago. Regular burning by these settlers could have prevented monsoons from reaching interior Australia. In June 2008 it became known that an expert panel had warned of long-term, maybe irreversible, severe ecological damage for the whole Murray-Darling basin if it did not receive sufficient water by October 2008. Australia could experience more severe droughts and they could become more frequent in the future, a government-commissioned report said on July 6, 2008. Australian environmentalist Tim Flannery predicted that unless it made drastic changes, Perth in Western Australia could become the world's first ghost metropolis, an abandoned city with no more water to sustain its population. The long Australian Millennial drought broke in 2010.
East Africa
East Africa, including for example Ethiopia, Eritrea, Kenya, Somalia, South Sudan, Sudan, Tanzania, and Uganda, has a diverse climate, ranging from hot, dry regions to cooler, wetter highland regions. The region has considerable variability in seasonal rainfall and a very complex topography. In the northern parts of the region within the Nile basin (Ethiopia, Sudan), the rainfall is characterized by an unimodal cycle with a wet season from July to September. The rest of the region has a bimodal annual cycle, featuring long rains from March to May and the short rains from October to December. The frequent occurrence of hydrological extremes, like droughts and floods, harms the already vulnerable population suffering from severe poverty and economic turmoil. Droughts prompted food shortages for example in 1984–85, 2006 and 2011.
The Eastern African region experiences the impacts of climate change in different forms. For instance, below-average rainfall occurred for six consecutive rainy seasons in the Horn of Africa during the period 2020–2023 leading to the third longest and most widespread drought on record with dire implications for food security (see Horn of Africa drought (2020–present)). Conversely, other parts experienced extreme floods, e.g., the 2020 East Africa floods in Ethiopia, Rwanda, Kenya, Burundi, and Uganda, and the 2022 floods in South Sudan.
A key feature in the region is the heterogeneous distribution of hydrologic extremes in space and time. For instance, El Niño can cause droughts in one part of the region and floods in the other. This is also a common situation within a country, e.g., in Ethiopia. The recent years with consecutive droughts followed by floods are a testament to the need to better forecast these kinds of events and their impacts.
Himalayan river basins
Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in coming decades. More than 150 districts in India are drought vulnerable, mostly concentrated in the states of Rajasthan, Gujarat, Madhya Pradesh and adjoining Chhattisgarh, Uttar Pradesh, northern Karnataka and adjoining Maharashtra. Drought in India affecting the Ganges is of particular concern, as it provides drinking water and agricultural irrigation for more than 500 million people.
North America
The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, also would be affected.
By country or region
Droughts in particular countries:
| Physical sciences | Natural disasters | null |
37379 | https://en.wikipedia.org/wiki/Relative%20density | Relative density | Relative density, also called specific gravity, is a dimensionless quantity defined as the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity for solids and liquids is nearly always measured with respect to water at its densest (at ); for gases, the reference is air at room temperature (). The term "relative density" (abbreviated r.d. or RD) is preferred in SI, whereas the term "specific gravity" is gradually being abandoned.
If a substance's relative density is less than 1 then it is less dense than the reference; if greater than 1 then it is denser than the reference. If the relative density is exactly 1 then the densities are equal; that is, equal volumes of the two substances have the same mass. If the reference material is water, then a substance with a relative density (or specific gravity) less than 1 will float in water. For example, an ice cube, with a relative density of about 0.91, will float. A substance with a relative density greater than 1 will sink.
Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm (101.325 kPa). Where it is not, it is more usual to specify the density directly. Temperatures for both sample and reference vary from industry to industry. In British brewing practice, the specific gravity, as specified above, is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, must weight (syrups, juices, honeys, brewers wort, must, etc.) and acids.
Basic calculation
Relative density (RD) or specific gravity (SG) is a dimensionless quantity, as it is the ratio of either densities or weights:

\( \mathrm{RD} = \dfrac{\rho_{\mathrm{substance}}}{\rho_{\mathrm{reference}}} \)

where \( \mathrm{RD} \) is relative density, \( \rho_{\mathrm{substance}} \) is the density of the substance being measured, and \( \rho_{\mathrm{reference}} \) is the density of the reference. (By convention \( \rho \), the Greek letter rho, denotes density.)
The reference material can be indicated using subscripts: which means "the relative density of substance with respect to reference". If the reference is not explicitly stated then it is normally assumed to be water at 4 °C (or, more precisely, 3.98 °C, which is the temperature at which water reaches its maximum density). In SI units, the density of water is (approximately) 1000 kg/m3 or 1 g/cm3, which makes relative density calculations particularly convenient: the density of the object only needs to be divided by 1000 or 1, depending on the units.
The relative density of gases is often measured with respect to dry air at a temperature of 20 °C and a pressure of 101.325 kPa absolute, which has a density of 1.205 kg/m3. Relative density with respect to air can be obtained by

\( \mathrm{RD} = \dfrac{\rho_{\mathrm{gas}}}{\rho_{\mathrm{air}}} \approx \dfrac{M_{\mathrm{gas}}}{M_{\mathrm{air}}} \)

where \( M \) is the molar mass and the approximately equal sign is used because equality pertains only if 1 mol of the gas and 1 mol of air occupy the same volume at a given temperature and pressure, i.e., they are both ideal gases. Ideal behaviour is usually only seen at very low pressure. For example, one mol of an ideal gas occupies 22.414 L at 0 °C and 1 atmosphere whereas carbon dioxide has a molar volume of 22.259 L under those same conditions.
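As a numerical illustration of the molar-mass approximation above, the short Python snippet below estimates the relative density of carbon dioxide with respect to dry air; the molar masses are standard textbook values.

```python
# Relative density of CO2 with respect to dry air, ideal-gas approximation.
M_CO2 = 44.01   # g/mol
M_AIR = 28.96   # g/mol, approximate mean molar mass of dry air

rd_co2 = M_CO2 / M_AIR
print(f"RD of CO2 relative to air is about {rd_co2:.2f}")   # ~1.52, denser than air
```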
Those with SG greater than 1 are denser than water and will, disregarding surface tension effects, sink in it. Those with an SG less than 1 are less dense than water and will float on it. In scientific work, the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons.
True specific gravity of a liquid can be expressed mathematically as:

\( \mathrm{SG}_{\mathrm{true}} = \dfrac{\rho_{\mathrm{sample}}}{\rho_{\mathrm{H_2O}}} \)

where \( \rho_{\mathrm{sample}} \) is the density of the sample and \( \rho_{\mathrm{H_2O}} \) is the density of water.
The apparent specific gravity is simply the ratio of the weights of equal volumes of sample and water in air:

\( \mathrm{SG}_{\mathrm{apparent}} = \dfrac{W_{\mathrm{A,\,sample}}}{W_{\mathrm{A,\,H_2O}}} \)

where \( W_{\mathrm{A,\,sample}} \) represents the weight of the sample measured in air and \( W_{\mathrm{A,\,H_2O}} \) the weight of an equal volume of water measured in air.
It can be shown that true specific gravity can be computed from different properties:

\( \mathrm{SG}_{\mathrm{true}} = \dfrac{\rho_{\mathrm{sample}}}{\rho_{\mathrm{H_2O}}} = \dfrac{m_{\mathrm{sample}}/V}{m_{\mathrm{H_2O}}/V} = \dfrac{m_{\mathrm{sample}}}{m_{\mathrm{H_2O}}} = \dfrac{g\,m_{\mathrm{sample}}}{g\,m_{\mathrm{H_2O}}} = \dfrac{W_{V,\mathrm{sample}}}{W_{V,\mathrm{H_2O}}} \)

where g is the local acceleration due to gravity, V is the volume of the sample and of water (the same for both), \( \rho_{\mathrm{sample}} \) is the density of the sample, \( \rho_{\mathrm{H_2O}} \) is the density of water, \( W_V \) represents a weight obtained in vacuum, \( m_{\mathrm{sample}} \) is the mass of the sample and \( m_{\mathrm{H_2O}} \) is the mass of an equal volume of water.
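The following Python fragment illustrates the two ratios with made-up weighings; the numbers are purely illustrative, and the small difference between the apparent and true values stands in for the effect of air buoyancy.

```python
# Illustrative, made-up weighings of equal volumes of sample and water.
w_air_sample = 124.60   # weight of the sample measured in air (grams-force)
w_air_water = 100.00    # weight of an equal volume of water measured in air

sg_apparent = w_air_sample / w_air_water     # ratio of weights in air
print(f"apparent SG = {sg_apparent:.3f}")    # 1.246

m_sample = 124.72       # mass of the sample (grams), i.e. weight in vacuum / g
m_water = 100.03        # mass of the equal volume of water (grams)
sg_true = m_sample / m_water                 # ratio of masses (vacuum weights)
print(f"true SG     = {sg_true:.3f}")        # 1.247, slightly different
```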
The density of water and of the sample varies with temperature and pressure, so it is necessary to specify the temperatures and pressures at which the densities or weights were determined. Measurements are nearly always made at 1 nominal atmosphere (101.325 kPa ± variations from changing weather patterns), but as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations, air pressure must be considered (see below). Temperatures are specified by the notation (Ts/Tr), with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, while SGH2O = (20 °C/20 °C), it is also the case that SGH2O = = (20 °C/4 °C). Here, temperature is being specified using the current ITS-90 scale and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale, the densities at 20 °C and 4 °C are and respectively, resulting in an SG (20 °C/4 °C) value for water of .
As the principal use of specific gravity measurements in industry is determination of the concentrations of substances in aqueous solutions and as these are found in tables of SG versus concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table lists sucrose concentration by weight against true SG, and was originally (20 °C/4 °C) i.e. based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C which is very close to the temperature at which water has its maximum density, ρH2O equal to 999.972 kg/m3 in SI units ( in cgs units or 62.43 lb/cu ft in United States customary units). The ASBC table in use today in North America for apparent specific gravity measurements at (20 °C/20 °C) is derived from the original Plato table using Plato et al.’s value for SG(20 °C/4 °C) = . In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by weight is taken from a table prepared by A. Brix, which uses SG (17.5 °C/17.5 °C). As a final example, the British SG units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C).
Given the specific gravity of a substance, its actual density can be calculated by rearranging the above formula:

\( \rho_{\mathrm{substance}} = \mathrm{RD} \times \rho_{\mathrm{reference}} \)
Occasionally a reference substance other than water is specified (for example, air), in which case specific gravity means density relative to that reference.
Temperature dependence
See Density for a table of the measured densities of water at various temperatures.
The density of substances varies with temperature and pressure so that it is necessary to specify the temperatures and pressures at which the densities or masses were determined. It is nearly always the case that measurements are made at nominally 1 atmosphere (101.325 kPa ignoring the variations caused by changing weather patterns) but as relative density usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products) variations in density caused by pressure are usually neglected at least where apparent relative density is being measured. For true (in vacuo) relative density calculations air pressure must be considered (see below). Temperatures are specified by the notation (Ts/Tr) with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, while SGH2O = 1.000000 (20 °C/20 °C), it is also the case that RDH2O = 0.9982288 (20 °C/4 °C). Here temperature is being specified using the current ITS-90 scale and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale the densities at 20 °C and 4 °C are, respectively, 0.9982041 and 0.9999720, resulting in an RD (20 °C/4 °C) value for water of 0.99823205.
The temperatures of the two materials may be explicitly stated in the density symbols; for example:

relative density: $8.15^{T_s}_{T_r}$; or specific gravity: $2.432^{T_s}_{T_r}$

where the superscript ($T_s$) indicates the temperature at which the density of the material is measured, and the subscript ($T_r$) indicates the temperature of the reference substance to which it is compared.
Uses
Relative density can also help to quantify the buoyancy of a substance in a fluid or gas, or determine the density of an unknown substance from the known density of another. Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones. Water is preferred as the reference because measurements are then easy to carry out in the field (see below for examples of measurement methods).
As the principal use of relative density measurements in industry is determination of the concentrations of substances in aqueous solutions, and these are found in tables of RD vs concentration, it is extremely important that the analyst enter the table with the correct form of relative density. For example, in the brewing industry, the Plato table, which lists sucrose concentration by mass against true RD, was originally (20 °C/4 °C), that is, based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C, which is very close to the temperature at which water has its maximum density, ρH2O, equal to 0.999972 g/cm3 (or 62.43 lb·ft−3). The ASBC table in use today in North America, while derived from the original Plato table, is for apparent relative density measurements at (20 °C/20 °C) on the IPTS-68 scale, where the density of water is 0.9982071 g/cm3. In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by mass is taken from the Brix table, which uses SG (17.5 °C/17.5 °C). As a final example, the British RD units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C).
Measurement
Relative density can be calculated directly by measuring the density of a sample and dividing it by the (known) density of the reference substance. The density of the sample is simply its mass divided by its volume. Although mass is easy to measure, the volume of an irregularly shaped sample can be more difficult to ascertain. One method is to put the sample in a water-filled graduated cylinder and read off how much water it displaces. Alternatively the container can be filled to the brim, the sample immersed, and the volume of overflow measured. The surface tension of the water may keep a significant amount of water from overflowing, which is especially problematic for small samples. For this reason it is desirable to use a water container with as small a mouth as possible.
For each substance, the density, ρ, is given by

$\rho = \frac{\text{mass}}{\text{volume}} = \frac{\text{spring deflection} \times \text{spring constant} / g}{\text{change in water level} \times \text{cross-sectional area of the cylinder}}$

When these densities are divided, references to the spring constant, gravity and cross-sectional area simply cancel, leaving

$RD = \frac{\rho_\text{sample}}{\rho_\text{ref}} = \frac{\text{deflection}_\text{sample}}{\text{deflection}_\text{ref}}$
Hydrostatic weighing
Relative density is more easily and perhaps more accurately measured without measuring volume. Using a spring scale, the sample is weighed first in air and then in water. Relative density (with respect to water) can then be calculated using the following formula:

$RD = \frac{W_\text{air}}{W_\text{air} - W_\text{water}}$
where
Wair is the weight of the sample in air (measured in newtons, pounds-force or some other unit of force)
Wwater is the weight of the sample in water (measured in the same units).
This technique cannot easily be used to measure relative densities less than one, because the sample will then float. Wwater becomes a negative quantity, representing the force needed to keep the sample underwater.
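A minimal Python sketch of the two-weighing calculation (the weights are illustrative values, not data from the article):

```python
# Relative density by hydrostatic weighing: RD = W_air / (W_air - W_water).
def relative_density_hydrostatic(w_air, w_water):
    """Weights in any consistent force unit (N, lbf, ...)."""
    return w_air / (w_air - w_water)

# Example: a sample weighing 5.00 N in air and 3.00 N when suspended in water
print(relative_density_hydrostatic(5.00, 3.00))  # 2.5, i.e. denser than water
```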
Another practical method uses three measurements. The sample is weighed dry. Then a container filled to the brim with water is weighed, and weighed again with the sample immersed, after the displaced water has overflowed and been removed. Subtracting the last reading from the sum of the first two readings gives the weight of the displaced water. The relative density result is the dry sample weight divided by that of the displaced water. This method allows the use of scales which cannot handle a suspended sample. A sample less dense than water can also be handled, but it has to be held down, and the error introduced by the fixing material must be considered.
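The three-measurement variant can be sketched the same way (again with made-up numbers):

```python
# Overflow method: displaced-water weight = dry sample + brim-full container - container with sample immersed.
def relative_density_overflow(w_sample_dry, w_container_full, w_container_with_sample):
    w_displaced_water = w_sample_dry + w_container_full - w_container_with_sample
    return w_sample_dry / w_displaced_water

# Example: 50 g sample, 400 g brim-full beaker, 430 g after immersion and overflow removal
print(relative_density_overflow(50.0, 400.0, 430.0))  # 2.5
```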
Hydrometer
The relative density of a liquid can be measured using a hydrometer. This consists of a bulb attached to a stalk of constant cross-sectional area, as shown in the adjacent diagram.
First the hydrometer is floated in the reference liquid (shown in light blue), and the displacement (the level of the liquid on the stalk) is marked (blue line). The reference could be any liquid, but in practice it is usually water.
The hydrometer is then floated in a liquid of unknown density (shown in green). The change in displacement, Δx, is noted. In the example depicted, the hydrometer has dropped slightly in the green liquid; hence its density is lower than that of the reference liquid. It is necessary that the hydrometer floats in both liquids.
The application of simple physical principles allows the relative density of the unknown liquid to be calculated from the change in displacement. (In practice the stalk of the hydrometer is pre-marked with graduations to facilitate this measurement.)
In the explanation that follows,
ρref is the known density (mass per unit volume) of the reference liquid (typically water).
ρnew is the unknown density of the new (green) liquid.
RDnew/ref is the relative density of the new liquid with respect to the reference.
V is the volume of reference liquid displaced, i.e. the red volume in the diagram.
m is the mass of the entire hydrometer.
g is the local gravitational constant.
Δx is the change in displacement. In accordance with the way in which hydrometers are usually graduated, Δx is here taken to be negative if the displacement line rises on the stalk of the hydrometer, and positive if it falls. In the example depicted, Δx is negative.
A is the cross sectional area of the shaft.
Since the floating hydrometer is in static equilibrium, the downward gravitational force acting upon it must exactly balance the upward buoyancy force. The gravitational force acting on the hydrometer is simply its weight, mg. From the Archimedes buoyancy principle, the buoyancy force acting on the hydrometer is equal to the weight of liquid displaced. This weight is equal to the mass of liquid displaced multiplied by g, which in the case of the reference liquid is ρrefVg. Setting these equal, we have

$mg = \rho_\text{ref} V g$

or just

$m = \rho_\text{ref} V \qquad (1)$

Exactly the same equation applies when the hydrometer is floating in the liquid being measured, except that the displaced volume is now $V - A\,\Delta x$ (see note above about the sign of Δx). Thus,

$m = \rho_\text{new} (V - A\,\Delta x) \qquad (2)$

Combining (1) and (2) yields

$RD_\text{new/ref} = \frac{\rho_\text{new}}{\rho_\text{ref}} = \frac{V}{V - A\,\Delta x} \qquad (3)$

But from (1) we have $V = m/\rho_\text{ref}$. Substituting into (3) gives

$RD_\text{new/ref} = \frac{1}{1 - \dfrac{A\,\Delta x\,\rho_\text{ref}}{m}} \qquad (4)$

This equation allows the relative density to be calculated from the change in displacement, the known density of the reference liquid, and the known properties of the hydrometer. If Δx is small then, as a first-order approximation of the geometric series, equation (4) can be written as:

$RD_\text{new/ref} \approx 1 + \frac{A\,\Delta x\,\rho_\text{ref}}{m}$

This shows that, for small Δx, changes in displacement are approximately proportional to changes in relative density.
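A short Python sketch of equation (4) and its linear approximation (the hydrometer mass, stalk diameter and reading are illustrative values only):

```python
import math

def hydrometer_rd(m, A, dx, rho_ref=998.2):
    """m: hydrometer mass (kg); A: stalk cross-sectional area (m^2);
    dx: change in displacement (m), negative if the liquid line rises;
    rho_ref: reference liquid density (kg/m^3). Returns (exact, linear approximation)."""
    term = A * dx * rho_ref / m
    return 1.0 / (1.0 - term), 1.0 + term

# Example: 30 g hydrometer with a 5 mm diameter stalk; the line falls 10 mm in the new liquid
area = math.pi * (0.005 / 2) ** 2
print(hydrometer_rd(0.030, area, 0.010))  # ~ (1.0066, 1.0065): slightly denser than the reference
```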
Pycnometer
A pycnometer (from Greek πυκνός (puknos) 'dense'), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance.
If the flask is weighed empty, full of water, and full of a liquid whose relative density is desired, the relative density of the liquid can easily be calculated. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The weight of the displaced liquid can then be determined, and hence the relative density of the powder.
A gas pycnometer, the gas-based manifestation of a pycnometer, compares the change in pressure caused by a measured change in a closed volume containing a reference (usually a steel sphere of known volume) with the change in pressure caused by the sample under the same conditions. The difference in change of pressure represents the volume of the sample as compared to the reference sphere, and is usually used for solid particulates that may dissolve in the liquid medium of the pycnometer design described above, or for porous materials into which the liquid would not fully penetrate.
When a pycnometer is filled to a specific, but not necessarily accurately known volume, V, and is placed upon a balance, it will exert a force

$F_b = g\left(m_b - \rho_a \frac{m_b}{\rho_b}\right)$

where mb is the mass of the bottle and g the gravitational acceleration at the location at which the measurements are being made; ρa is the density of the air at the ambient pressure and ρb is the density of the material of which the bottle is made (usually glass), so that the second term is the mass of air displaced by the glass of the bottle, whose weight, by Archimedes' principle, must be subtracted. The bottle is filled with air, but as that air displaces an equal amount of air, the weight of that air is canceled by the weight of the air displaced. Now we fill the bottle with the reference fluid, e.g. pure water. The force exerted on the pan of the balance becomes:

$F_w = g\left(m_b - \rho_a \frac{m_b}{\rho_b} + V\rho_w - V\rho_a\right)$

If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement) we obtain

$F_{w,n} = gV(\rho_w - \rho_a)$

where the subscript n indicates that this force is net of the force of the empty bottle. The bottle is now emptied, thoroughly dried and refilled with the sample. The force, net of the empty bottle, is now:

$F_{s,n} = gV(\rho_s - \rho_a)$

where ρs is the density of the sample. The ratio of the sample and water forces is:

$RD_A = \frac{F_{s,n}}{F_{w,n}} = \frac{\rho_s - \rho_a}{\rho_w - \rho_a}$

This is called the apparent relative density, denoted by subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance. The only requirement on it is that it read linearly with force. Nor does RDA depend on the actual volume of the pycnometer.

Further manipulation, and finally substitution of RDV, the true relative density (the subscript V is used because this is often referred to as the relative density in vacuo), for ρs/ρw gives the relationship between apparent and true relative density:

$RD_A = \frac{RD_V - \rho_a/\rho_w}{1 - \rho_a/\rho_w}$

In the usual case we will have measured weights and want the true relative density. This is found from

$RD_V = RD_A - \frac{\rho_a}{\rho_w}\,(RD_A - 1)$
Since the density of dry air at 101.325 kPa at 20 °C is 0.001205 g/cm3 and that of water is 0.998203 g/cm3 we see that the difference between true and apparent relative densities for a substance with relative density (20 °C/20 °C) of about 1.100 would be 0.000120. Where the relative density of the sample is close to that of water (for example dilute ethanol solutions) the correction is even smaller.
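The magnitude of this buoyancy correction can be checked with a few lines of Python, using the air and water densities quoted above:

```python
# Apparent-to-true relative density correction: RD_V = RD_A - (rho_air/rho_water)*(RD_A - 1).
RHO_AIR = 0.001205    # g/cm^3, dry air at 101.325 kPa and 20 degC
RHO_WATER = 0.998203  # g/cm^3, water at 20 degC

def true_from_apparent(rd_apparent):
    return rd_apparent - (RHO_AIR / RHO_WATER) * (rd_apparent - 1.0)

rd_a = 1.100
print(rd_a - true_from_apparent(rd_a))  # ~0.00012, the difference quoted in the text
```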
The pycnometer is used in ISO standard: ISO 1183-1:2004, ISO 1014–1985 and ASTM standard: ASTM D854.
Types
Gay-Lussac, pear shaped, with perforated stopper, adjusted, capacity 1, 2, 5, 10, 25, 50 and 100 mL
as above, with ground-in thermometer, adjusted, side tube with cap
Hubbard, for bitumen and heavy crude oils, cylindrical type, ASTM D 70, 24 mL
as above, conical type, ASTM D 115 and D 234, 25 mL
Boot, with vacuum jacket and thermometer, capacity 5, 10, 25 and 50 mL
Digital density meters
Hydrostatic Pressure-based Instruments: This technology relies upon Pascal's principle, which states that the pressure difference between two points within a vertical column of fluid depends upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gauging applications as a convenient means of liquid level and density measurement.
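A minimal Python sketch of that relationship (Δp = ρgh rearranged for density; the pressure and height values are illustrative):

```python
# Infer liquid density from the hydrostatic pressure difference across a known height.
G = 9.80665  # m/s^2, standard gravity

def density_from_pressure(delta_p_pa, height_m, g=G):
    return delta_p_pa / (g * height_m)

rho = density_from_pressure(19_600, 2.0)  # 19.6 kPa measured across a 2.0 m column
print(rho, rho / 999.972)                 # ~999 kg/m^3 and the corresponding relative density
```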
Vibrating Element Transducers: This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories precise measurements of relative density are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80 °C but as they are microprocessor based can calculate apparent or true relative density and contain tables relating these to the strengths of common acids, sugar solutions, etc.
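In simplified terms, a vibrating U-tube behaves like a mass-spring oscillator whose moving mass is the tube plus the fluid it contains, so density is an affine function of the squared oscillation period; the two constants are found by calibrating with fluids of known density. The Python sketch below illustrates this idea only; the calibration numbers are made up and this is not any vendor's actual characterization:

```python
# Oscillating U-tube model: rho = A * tau**2 + B, calibrated with two reference fluids.
def calibrate(tau1, rho1, tau2, rho2):
    A = (rho2 - rho1) / (tau2**2 - tau1**2)
    B = rho1 - A * tau1**2
    return A, B

def density(tau, A, B):
    return A * tau**2 + B

# Illustrative calibration against air (1.2 kg/m^3) and water (998.2 kg/m^3)
A, B = calibrate(2.000e-3, 1.2, 2.400e-3, 998.2)
print(density(2.350e-3, A, B))  # density of an unknown sample in kg/m^3 (~864 here)
```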
Ultrasonic Transducer: Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectroscopy of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum.
Radiation-based Gauge: Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" will decrease. The source is typically the radioactive isotope caesium-137, with a half-life of about 30 years. A key advantage for this technology is that the instrument is not required to be in contact with the fluid—typically the source and detector are mounted on the outside of tanks or piping.
Buoyant Force Transducer: the buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring relative density with an accuracy of ± 0.005 RD units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors. The output signals of the sensors are mixed in a dedicated electronics module that provides a single output voltage whose magnitude is a direct linear measure of the quantity to be measured.
Relative density in soil mechanics
Relative density, a measure of the current void ratio in relation to the maximum and minimum void ratios, together with the applied effective stress, controls the mechanical behavior of cohesionless soil. Relative density is defined by

$D_r = \frac{e_\text{max} - e}{e_\text{max} - e_\text{min}}$

in which $e_\text{max}$, $e_\text{min}$ and $e$ are the maximum, minimum and actual void ratios.
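A small Python sketch of this definition (the void ratios are illustrative values):

```python
# Relative density (density index) of a cohesionless soil, often reported as a percentage.
def relative_density_soil(e, e_max, e_min):
    return (e_max - e) / (e_max - e_min)

print(relative_density_soil(e=0.62, e_max=0.92, e_min=0.48))  # ~0.68, i.e. about 68%
```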
Limitations
Specific gravity (SG) is a useful concept but has several limitations. One major issue is its sensitivity to temperature, since the densities of both the substance being measured and the reference change with temperature, affecting accuracy. It also assumes materials are incompressible, which is not true for gases or for some liquids under varying pressures. It does not provide detailed information about a material's composition or properties beyond density. Errors can also occur due to impurities, incomplete mixing, or air bubbles in liquids, which can skew results.
Examples
(Samples may vary, and these figures are approximate.)
Substances with a relative density of 1 are neutrally buoyant, those with RD greater than one are denser than water, and so (ignoring surface tension effects) will sink in it, and those with an RD of less than one are less dense than water, and so will float.
Example:
Helium gas has a density of 0.164 g/L; it is 0.139 times as dense as air, which has a density of 1.18 g/L.
Urine normally has a specific gravity between 1.003 and 1.030. The Urine Specific Gravity diagnostic test is used to evaluate renal concentration ability for assessment of the urinary system. Low concentration may indicate diabetes insipidus, while high concentration may indicate albuminuria or glycosuria.
Blood normally has a specific gravity of approximately 1.060.
Vodka 80° proof (40% v/v) has a specific gravity of 0.9498.
| Physical sciences | Ratio | Basics and measurement |
37400 | https://en.wikipedia.org/wiki/Mole%20%28unit%29 | Mole (unit) | The mole (symbol mol) is a unit of measurement, the base unit in the International System of Units (SI) for amount of substance, an SI base quantity proportional to the number of elementary entities of a substance. One mole is an aggregate of exactly 6.02214076 × 10²³ elementary entities (approximately 602 sextillion or 602 billion times a trillion), which can be atoms, molecules, ions, ion pairs, or other particles. The number of particles in a mole is the Avogadro number (symbol N0) and the numerical value of the Avogadro constant (symbol NA) expressed in mol−1. The value was chosen on the basis of the historical definition of the mole as the amount of substance that corresponds to the number of atoms in 12 grams of 12C, which made the mass of a mole of a compound expressed in grams numerically equal to the average molecular mass or formula mass of the compound expressed in daltons. With the 2019 revision of the SI, the numerical equivalence is now only approximate but may be assumed for all practical purposes.
The mole is widely used in chemistry as a convenient way to express amounts of reactants and amounts of products of chemical reactions. For example, the chemical equation 2 H2 + O2 → 2 H2O can be interpreted to mean that for each 2 mol molecular hydrogen (H2) and 1 mol molecular oxygen (O2) that react, 2 mol of water (H2O) form. The concentration of a solution is commonly expressed by its molar concentration, defined as the amount of dissolved substance per unit volume of solution, for which the unit typically used is mole per litre (mol/L).
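A short Python sketch of this kind of mole-based bookkeeping (molar masses are rounded; the input mass is an arbitrary example):

```python
# Mass-to-mass conversion through the reaction 2 H2 + O2 -> 2 H2O.
M_H2, M_H2O = 2.016, 18.015  # approximate molar masses in g/mol

def water_mass_from_hydrogen(mass_h2_g):
    n_h2 = mass_h2_g / M_H2   # moles of H2 reacted
    n_h2o = n_h2              # 2 mol H2O formed per 2 mol H2
    return n_h2o * M_H2O      # grams of water formed

print(water_mass_from_hydrogen(4.032))  # 2 mol of H2 gives 2 mol (~36.0 g) of water
```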
Concepts
Relation to the Avogadro constant
The number of entities (symbol N) in a one-mole sample equals the Avogadro number (symbol N0), a dimensionless quantity.
Historically, N0 approximates the number of nucleons (protons or neutrons) in one gram of ordinary matter.
The Avogadro constant (symbol NA) has a numerical value equal to the Avogadro number, with the unit reciprocal mole (mol−1).
The ratio n = N/NA is a measure of the amount of substance (with the unit mole).
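Expressed as a trivial Python sketch (the entity count is an arbitrary example):

```python
# Amount of substance n = N / N_A, in moles.
N_A = 6.02214076e23  # mol^-1, the Avogadro constant

def amount_of_substance(n_entities):
    return n_entities / N_A

print(amount_of_substance(1.2044e24))  # ~2.0 mol
```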
Nature of the entities
Depending on the nature of the substance, an elementary entity may be an atom, a molecule, an ion, an ion pair, or a subatomic particle such as a proton. For example, 10 moles of water (a chemical compound) and 10 moles of mercury (a chemical element) contain equal amounts of substance and equal numbers of elementary entities, with one atom of mercury for each molecule of water, despite the two samples having different volumes and different masses.
The mole corresponds to a given count of entities. Usually, the entities counted are chemically identical and individually distinct. For example, a solution may contain a certain number of dissolved molecules that are more or less independent of each other. However, the constituent entities in a solid are fixed and bound in a lattice arrangement, yet they may be separable without losing their chemical identity. Thus, the solid is composed of a certain number of moles of such entities. In yet other cases, such as diamond, where the entire crystal is essentially a single molecule, the mole is still used to express the number of atoms bound together, rather than a count of molecules. Thus, common chemical conventions apply to the definition of the constituent entities of a substance, in other cases exact definitions may be specified.
The molar mass of a substance is equal to its relative atomic (or molecular) mass multiplied by the molar mass constant, which is almost exactly 1 g/mol.
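As a minimal illustration (relative molecular mass of water rounded; input mass arbitrary):

```python
# Grams-to-moles conversion via molar mass = relative molecular mass * molar mass constant.
MOLAR_MASS_CONSTANT = 1.0  # g/mol, almost exactly

def moles_from_grams(mass_g, relative_molecular_mass):
    molar_mass = relative_molecular_mass * MOLAR_MASS_CONSTANT  # g/mol
    return mass_g / molar_mass

print(moles_from_grams(36.03, 18.015))  # ~2.0 mol of water
```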
Similar units
Like chemists, chemical engineers use the unit mole extensively, but different unit multiples may be more suitable for industrial use. For example, the SI unit for volume is the cubic metre, a much larger unit than the commonly used litre in the chemical laboratory. When amount of substance is also expressed in kmol (1000 mol) in industrial-scaled processes, the numerical value of molarity remains the same, since 1 mol/L = 1 kmol/m3. Chemical engineers once used the kilogram-mole (notation kg-mol), which is defined as the number of entities in 12 kg of 12C, and often referred to the mole as the gram-mole (notation g-mol), then defined as the number of entities in 12 g of 12C, when dealing with laboratory data.
Late 20th-century chemical engineering practice came to use the kilomole (kmol), which was numerically identical to the kilogram-mole (until the 2019 revision of the SI, which redefined the mole by fixing the value of the Avogadro constant, making it very nearly equivalent to but no longer exactly equal to the gram-mole), but whose name and symbol adopt the SI convention for standard multiples of metric units – thus, kmol means 1000 mol. This is equivalent to the use of kg instead of g. The use of kmol is not only for "magnitude convenience" but also makes the equations used for modelling chemical engineering systems coherent. For example, the conversion of a flowrate of kg/s to kmol/s only requires dividing by the molar mass in g/mol (since the molar mass in kg/kmol is numerically identical to that in g/mol) without multiplying by 1000, unless the basic SI unit of mol/s were to be used, which would otherwise require the molar mass to be converted to kg/mol.
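A brief Python sketch of that unit bookkeeping (the stream and its molar mass are illustrative values):

```python
# Mass flow (kg/s) to molar flow (kmol/s): divide by molar mass in g/mol (= kg/kmol).
def molar_flow_kmol_per_s(mass_flow_kg_per_s, molar_mass_g_per_mol):
    return mass_flow_kg_per_s / molar_mass_g_per_mol

# Example: 3.2 kg/s of methane (M ~ 16.04 g/mol)
print(molar_flow_kmol_per_s(3.2, 16.04))  # ~0.1995 kmol/s, i.e. ~199.5 mol/s
```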
For convenience in avoiding conversions in the imperial (or US customary) units, some engineers adopted the pound-mole (notation lb-mol or lbmol), which is defined as the number of entities in 12 lb of 12C. One lb-mol is equal to 453.59237 mol, which is the same numerical value as the number of grams in an international avoirdupois pound.
Greenhouse and growth chamber lighting for plants is sometimes expressed in micromoles per square metre per second, where 1 mol photons ≈ 6.02 × 10²³ photons. The obsolete unit einstein is variously defined as the energy in one mole of photons and also as simply one mole of photons.
Derived units and SI multiples
The only SI derived unit with a special name derived from the mole is the katal, defined as one mole per second of catalytic activity. Like other SI units, the mole can also be modified by adding a metric prefix that multiplies it by a power of 10, giving the millimole (mmol), micromole (μmol), nanomole (nmol) and so on.
One femtomole is exactly 602,214,076 molecules; attomole and smaller quantities cannot be exactly realized. The yoctomole, equal to around 0.6 of an individual molecule, did make appearances in scientific journals in the year the yocto- prefix was officially implemented.
History
The history of the mole is intertwined with that of units of molecular mass, and the Avogadro constant.
The first table of standard atomic weights was published by John Dalton (1766–1844) in 1805, based on a system in which the relative atomic mass of hydrogen was defined as 1. These relative atomic masses were based on the stoichiometric proportions of chemical reactions and compounds, a fact that greatly aided their acceptance: it was not necessary for a chemist to subscribe to atomic theory (an unproven hypothesis at the time) to make practical use of the tables. This would lead to some confusion between atomic masses (promoted by proponents of atomic theory) and equivalent weights (promoted by its opponents and which sometimes differed from relative atomic masses by an integer factor), which would last throughout much of the nineteenth century.
Jöns Jacob Berzelius (1779–1848) was instrumental in the determination of relative atomic masses to ever-increasing accuracy. He was also the first chemist to use oxygen as the standard to which other masses were referred. Oxygen is a useful standard, as, unlike hydrogen, it forms compounds with most other elements, especially metals. However, he chose to fix the atomic mass of oxygen as 100, which did not catch on.
Charles Frédéric Gerhardt (1816–56), Henri Victor Regnault (1810–78) and Stanislao Cannizzaro (1826–1910) expanded on Berzelius' works, resolving many of the problems of unknown stoichiometry of compounds, and the use of atomic masses attracted a large consensus by the time of the Karlsruhe Congress (1860). The convention had reverted to defining the atomic mass of hydrogen as 1, although at the level of precision of measurements at that time – relative uncertainties of around 1% – this was numerically equivalent to the later standard of oxygen = 16. However the chemical convenience of having oxygen as the primary atomic mass standard became ever more evident with advances in analytical chemistry and the need for ever more accurate atomic mass determinations.
The name mole is an 1897 translation of the German unit Mol, coined by the chemist Wilhelm Ostwald in 1894 from the German word Molekül (molecule). The related concept of equivalent mass had been in use at least a century earlier.
In chemistry, it has been known since Proust's law of definite proportions (1794) that knowledge of the mass of each of the components in a chemical system is not sufficient to define the system. Amount of substance can be described as mass divided by Proust's "definite proportions", and contains information that is missing from the measurement of mass alone. As demonstrated by Dalton's law of partial pressures (1803), a measurement of mass is not even necessary to measure the amount of substance (although in practice it is usual). There are many physical relationships between amount of substance and other physical quantities, the most notable one being the ideal gas law (where the relationship was first demonstrated in 1857). The term "mole" was first used in a textbook describing these colligative properties.
Standardization
Developments in mass spectrometry led to the adoption of oxygen-16 as the standard substance, in lieu of natural oxygen.
The oxygen-16 definition was replaced with one based on carbon-12 during the 1960s. The International Bureau of Weights and Measures defined the mole as "the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilograms of carbon-12." Thus, by that definition, one mole of pure 12C had a mass of exactly 12 g. The four different definitions were equivalent to within 1%.
Because a dalton, a unit commonly used to measure atomic mass, is exactly 1/12 of the mass of a carbon-12 atom, this definition of the mole entailed that the mass of one mole of a compound or element in grams was numerically equal to the average mass of one molecule or atom of the substance in daltons, and that the number of daltons in a gram was equal to the number of elementary entities in a mole. Because the mass of a nucleon (i.e. a proton or neutron) is approximately 1 dalton and the nucleons in an atom's nucleus make up the overwhelming majority of its mass, this definition also entailed that the mass of one mole of a substance was roughly equivalent to the number of nucleons in one atom or molecule of that substance.
Since the definition of the gram was not mathematically tied to that of the dalton, the number of molecules per mole NA (the Avogadro constant) had to be determined experimentally. The experimental value adopted by CODATA in 2010 is NA = 6.02214129(27) × 10²³ mol−1.
In 2011 the measurement was refined to 6.02214078(18) × 10²³ mol−1.
The mole was made the seventh SI base unit in 1971 by the 14th CGPM.
2019 revision of the SI
Before the 2019 revision of the SI, the mole was defined as the amount of substance of a system that contains as many elementary entities as there are atoms in 12 grams of carbon-12 (the most common isotope of carbon).
The term gram-molecule was formerly used to mean one mole of molecules, and gram-atom for one mole of atoms. For example, 1 mole of MgBr2 is 1 gram-molecule of MgBr2 but 3 gram-atoms of MgBr2.
In 2011, the 24th meeting of the General Conference on Weights and Measures (CGPM) agreed to a plan for a possible revision of the SI base unit definitions at an undetermined date.
On 16 November 2018, after a meeting of scientists from more than 60 countries at the CGPM in Versailles, France, all SI base units were defined in terms of physical constants. This meant that each SI unit, including the mole, would not be defined in terms of any physical objects but rather they would be defined by physical constants that are, in their nature, exact.
Such changes officially came into effect on 20 May 2019. Following such changes, "one mole" of a substance was redefined as containing "exactly 6.02214076 × 10²³ elementary entities" of that substance.
Criticism
Since its adoption into the International System of Units in 1971, numerous criticisms of the concept of the mole as a unit like the metre or the second have arisen:
the number of molecules, etc. in a given amount of material is a fixed dimensionless quantity that can be expressed simply as a number, not requiring a distinct base unit;
the SI thermodynamic mole is irrelevant to analytical chemistry and could cause avoidable costs to advanced economies;
the mole is not a true metric (i.e. measuring) unit; rather, it is a parametric unit, and amount of substance is a parametric base quantity;
the SI defines numbers of entities as quantities of dimension one, and thus ignores the ontological distinction between entities and units of continuous quantities;
the mole is often used interchangeably and inconsistently to refer to both a unit and a quantity, without appropriate reference to amount of substance, potentially causing confusion for novice chemistry students.
Mole Day
October 23, denoted 10/23 in the US, is recognized by some as Mole Day. It is an informal holiday in honor of the unit among chemists. The date is derived from the Avogadro number, which is approximately 6.02 × 10²³. It starts at 6:02 a.m. and ends at 6:02 p.m. Alternatively, some chemists celebrate June 2 (6/02), June 22 (6/22), or 6 February (06.02), a reference to the 6.02 or 6.022 part of the constant.
| Physical sciences | Amount of substance | null |
37401 | https://en.wikipedia.org/wiki/Fertilizer | Fertilizer | A fertilizer (American English) or fertiliser (British English) is any material of natural or synthetic origin that is applied to soil or to plant tissues to supply plant nutrients. Fertilizers may be distinct from liming materials or other non-nutrient soil amendments. Many sources of fertilizer exist, both natural and industrially produced. For most modern agricultural practices, fertilization focuses on three main macro nutrients: nitrogen (N), phosphorus (P), and potassium (K) with occasional addition of supplements like rock flour for micronutrients. Farmers apply these fertilizers in a variety of ways: through dry or pelletized or liquid application processes, using large agricultural equipment, or hand-tool methods.
Historically, fertilization came from natural or organic sources: compost, animal manure, human manure, harvested minerals, crop rotations, and byproducts of human-nature industries (e.g. fish processing waste, or bloodmeal from animal slaughter). However, starting in the 19th century, after innovations in plant nutrition, an agricultural industry developed around synthetically created agrochemical fertilizers. This transition was important in transforming the global food system, allowing for larger-scale industrial agriculture with large crop yields.
Nitrogen-fixing chemical processes, such as the Haber process invented at the beginning of the 20th century, and amplified by production capacity created during World War II, led to a boom in using nitrogen fertilizers. In the latter half of the 20th century, increased use of nitrogen fertilizers (800% increase between 1961 and 2019) has been a crucial component of the increased productivity of conventional food systems (more than 30% per capita) as part of the so-called "Green Revolution".
The use of artificial and industrially-applied fertilizers has caused environmental consequences such as water pollution and eutrophication due to nutritional runoff; carbon and other emissions from fertilizer production and mining; and contamination and pollution of soil. Various sustainable agriculture practices can be implemented to reduce the adverse environmental effects of fertilizer and pesticide use and environmental damage caused by industrial agriculture.
History
Management of soil fertility has preoccupied farmers since the beginning of agriculture. Middle Eastern, Chinese, Mesoamerican, and Central Andean cultures were all early adopters of agriculture. This is thought to have led to their cultures growing faster in population, which allowed an exportation of culture to neighboring hunter-gatherer groups. Fertilizer use along with agriculture allowed some of these early societies a critical advantage over their neighbors, leading them to become dominant cultures in their respective regions (P Bellwood - 2023). Egyptians, Romans, Babylonians, and early Germans are all recorded as using minerals or manure to enhance the productivity of their farms. The scientific research of plant nutrition started well before the work of the German chemist Justus von Liebig, although his name is most often mentioned as the "father of the fertilizer industry". Nicolas Théodore de Saussure and scientific colleagues at the time were quick to disprove the simplifications of von Liebig. Prominent scientists on whose work von Liebig drew were Carl Ludwig Sprenger and Hermann Hellriegel. In this field, a 'knowledge erosion' took place, partly driven by an intermingling of economics and research. John Bennet Lawes, an English entrepreneur, began experimenting on the effects of various manures on plants growing in pots in 1837, and a year or two later the experiments were extended to crops in the field. One immediate consequence was that in 1842 he patented a manure formed by treating phosphates with sulfuric acid, and thus was the first to create the artificial manure industry. In the succeeding year, he enlisted the services of Joseph Henry Gilbert; together they performed crop experiments at the Institute of Arable Crops Research.
The Birkeland–Eyde process was one of the competing industrial processes at the beginning of nitrogen-based fertilizer production. This process was used to fix atmospheric nitrogen (N2) into nitric acid (HNO3), one of several chemical processes called nitrogen fixation. The resultant nitric acid was then used as a source of nitrate (NO3−). A factory based on the process was built in Rjukan and Notodden in Norway and large hydroelectric power facilities were built.
The 1910s and 1920s witnessed the rise of the Haber process and the Ostwald process. The Haber process produces ammonia (NH3) from hydrogen derived from methane (CH4, natural gas) and from molecular nitrogen (N2) in the air. The ammonia from the Haber process is then partially converted into nitric acid (HNO3) in the Ostwald process. It is estimated that a third of annual global food production uses ammonia from the Haber–Bosch process and that this supports nearly half the world's population. After World War II, nitrogen production plants that had ramped up for wartime bomb manufacturing were pivoted towards agricultural uses. The use of synthetic nitrogen fertilizers has increased steadily over the last 50 years, rising almost 20-fold to the current rate of 100 million tonnes of nitrogen per year.
The development of synthetic nitrogen fertilizers has significantly supported global population growth. It has been estimated that almost half the people on the Earth are currently fed due to synthetic nitrogen fertilizer use. The use of phosphate fertilizers has also increased from 9 million tonnes per year in 1960 to 40 million tonnes per year in 2000.
Agricultural use of inorganic fertilizers in 2021 was 195 million tonnes of nutrients, of which 56% was nitrogen. Asia represented 53% of the world's total agricultural use of inorganic fertilizers in 2021, followed by the Americas (29%), Europe (12%), Africa (4%) and Oceania (2%). This ranking of the regions is the same for all nutrients. The main users of inorganic fertilizers are, in descending order, China, India, Brazil, and the United States of America (see Table 15), with China the largest user of each nutrient.
A maize crop yielding 6–9 tonnes of grain per hectare requires roughly twice as much phosphate fertilizer as a soybean crop, which needs about 20–25 kg per hectare. Yara International is the world's largest producer of nitrogen-based fertilizers.
Mechanism
Fertilizers enhance the growth of plants. This goal is met in two ways, the traditional one being additives that provide nutrients. The second mode by which some fertilizers act is to enhance the effectiveness of the soil by modifying its water retention and aeration. This article, like many on fertilizers, emphasizes the nutritional aspect.
Fertilizers typically provide, in varying proportions:
Three main macronutrients (NPK):
Nitrogen (N): leaf growth and stems
Phosphorus (P): development of roots, flowers, seeds and fruit;
Potassium (K): strong stem growth, movement of water in plants, promotion of flowering and fruiting;
three secondary macronutrients: calcium (Ca), magnesium (Mg), and sulfur (S);
Micronutrients: copper (Cu), iron (Fe), manganese (Mn), molybdenum (Mo), zinc (Zn), and boron (B). Of occasional significance are silicon (Si), cobalt (Co), and vanadium (V).
The nutrients required for healthy plant life are classified according to the elements, but the elements are not used as fertilizers. Instead, compounds containing these elements are the basis of fertilizers. The macronutrients are consumed in larger quantities and are present in plant tissue in quantities from 0.15% to 6.0% on a dry matter (DM) (0% moisture) basis. Plants are made up of four main elements: hydrogen, oxygen, carbon, and nitrogen. Carbon, hydrogen, and oxygen are widely available from carbon dioxide and water. Although nitrogen makes up most of the atmosphere, it is in a form that is unavailable to plants. Nitrogen is the most important fertilizer nutrient since nitrogen is present in proteins (amide bonds between amino acids), DNA (purine and pyrimidine bases), and other components (e.g., the tetrapyrrole ring of chlorophyll). To be nutritious to plants, nitrogen must be made available in a "fixed" form. Only some bacteria and their host plants (notably legumes) can fix atmospheric nitrogen (N2) by converting it to ammonia (NH3). Phosphate (PO43−) is required for the production of DNA (genetic code) and ATP, the main energy carrier in cells, as well as certain lipids (phospholipids, the main components of the lipidic double layer of the cell membranes).
Microbiological considerations
Two sets of enzymatic reactions are highly relevant to the efficiency of nitrogen-based fertilizers.
Urease
The first is the hydrolysis (reaction with water) of urea (CO(NH2)2). Many soil bacteria possess the enzyme urease, which catalyzes the conversion of urea to ammonium ion (NH4+) and bicarbonate ion (HCO3−).
Ammonia oxidation
Ammonia-oxidizing bacteria (AOB), such as species of Nitrosomonas, oxidize ammonia (NH3) to nitrite (NO2−), a process termed nitrification. Nitrite-oxidizing bacteria, especially Nitrobacter, oxidize nitrite (NO2−) to nitrate (NO3−), which is extremely soluble and mobile and is a major cause of eutrophication and algal blooms.
Classification
Fertilizers are classified in several ways. They are classified according to whether they provide a single nutrient (e.g., K, P, or N), in which case they are classified as "straight fertilizers". "Multinutrient fertilizers" (or "complex fertilizers") provide two or more nutrients, for example, N and P. Fertilizers are also sometimes classified as inorganic (the topic of most of this article) versus organic. Inorganic fertilizers exclude carbon-containing materials except ureas. Organic fertilizers are usually (recycled) plant- or animal-derived matter. Inorganic fertilizers are sometimes called synthetic fertilizers since various chemical treatments are required for their manufacture.
Single nutrient ("straight") fertilizers
The main nitrogen-based straight fertilizers are ammonia (NH3), ammonium (NH4+) salts, and their solutions, including:
Ammonium nitrate (NH4NO3) with 34-35% nitrogen is also widely used.
Urea (CO(NH2)2), with 45-46% nitrogen, another popular source of nitrogen, having the advantage that it is solid and non-explosive, unlike ammonia and ammonium nitrate.
Calcium ammonium nitrate is a blend of 20–30% limestone (CaCO3) or dolomite ((Ca,Mg)CO3) and 70–80% ammonium nitrate, with 24–28% nitrogen.
Calcium nitrate, with 15.5% nitrogen and 19% calcium, reportedly holding a small share of the nitrogen fertilizer market (4% in 2007).
The main straight phosphate fertilizers are the superphosphates:
"Single superphosphate" (SSP) consisting of 14–18% P2O5, again in the form of Ca(H2PO4)2, but also phosphogypsum ().
Triple superphosphate (TSP) typically consists of 44–48% of P2O5 and no gypsum.
A mixture of single superphosphate and triple superphosphate is called double superphosphate. More than 90% of a typical superphosphate fertilizer is water-soluble.
The main potassium-based straight fertilizer is muriate of potash (MOP, 95–99% KCl). It is typically available as 0-0-60 or 0-0-62 fertilizer.
Multinutrient fertilizers
These fertilizers are common. They consist of two or more nutrient components.
Binary (NP, NK, PK) fertilizers
Major two-component fertilizers provide both nitrogen and phosphorus to the plants. These are called NP fertilizers. The main NP fertilizers are
monoammonium phosphate (MAP) NH4H2PO4. With 11% nitrogen and 48% P2O5.
diammonium phosphate (DAP). (NH4)2HPO4. With 18% nitrogen and 46% P2O5
About 85% of MAP and DAP fertilizers are soluble in water.
NPK fertilizers
NPK fertilizers are three-component fertilizers providing nitrogen, phosphorus, and potassium. There exist two types of NPK fertilizers: compound and blends. Compound NPK fertilizers contain chemically bound ingredients, while blended NPK fertilizers are physical mixtures of single nutrient components.
NPK rating is a rating system describing the amount of nitrogen, phosphorus, and potassium in a fertilizer. NPK ratings consist of three numbers separated by dashes (e.g., 10-10-10 or 16-4-8) describing the chemical content of fertilizers. The first number represents the percentage of nitrogen in the product; the second number, P2O5; the third, K2O. Fertilizers do not actually contain P2O5 or K2O, but the system is a conventional shorthand for the amount of the phosphorus (P) or potassium (K) in a fertilizer. A 50-pound bag of fertilizer labeled 16-4-8 contains 8 pounds of nitrogen (16% of the 50 pounds), an amount of phosphorus equivalent to that in 2 pounds of P2O5 (4% of 50 pounds), and 4 pounds of K2O (8% of 50 pounds). Most fertilizers are labeled according to this N-P-K convention, although Australian convention, following an N-P-K-S system, adds a fourth number for sulfur, and uses elemental values for all values including P and K.
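A small Python sketch of this label arithmetic (the oxide-to-element factors, roughly 43.6% P in P2O5 and 83% K in K2O, are standard conversions; the bag size matches the example above):

```python
# Nutrient content from an N-P-K label; P and K are conventionally quoted as P2O5 and K2O.
P_IN_P2O5 = 0.436   # mass fraction of elemental P in P2O5
K_IN_K2O = 0.830    # mass fraction of elemental K in K2O

def npk_breakdown(bag_weight, n_pct, p2o5_pct, k2o_pct):
    n = bag_weight * n_pct / 100
    p2o5 = bag_weight * p2o5_pct / 100
    k2o = bag_weight * k2o_pct / 100
    return {"N": n, "P2O5": p2o5, "P": p2o5 * P_IN_P2O5, "K2O": k2o, "K": k2o * K_IN_K2O}

print(npk_breakdown(50, 16, 4, 8))  # N: 8.0 lb, P2O5: 2.0 lb (P ~0.87 lb), K2O: 4.0 lb (K ~3.3 lb)
```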
Micronutrients
Micronutrients are consumed in smaller quantities and are present in plant tissue on the order of parts-per-million (ppm), ranging from 0.15 to 400 ppm or less than 0.04% dry matter. These elements are often required for enzymes essential to the plant's metabolism. Because these elements enable catalysts (enzymes), their impact far exceeds their weight percentage. Typical micronutrients are boron, zinc, molybdenum, iron, and manganese. These elements are provided as water-soluble salts. Iron presents special problems because it converts to insoluble (bio-unavailable) compounds at moderate soil pH and phosphate concentrations. For this reason, iron is often administered as a chelate complex, e.g., the EDTA or EDDHA derivatives. The micronutrient needs depend on the plant and the environment. For example, sugar beets appear to require boron, and legumes require cobalt, while environmental conditions such as heat or drought make boron less available for plants.
Production
The production of synthetic, or inorganic, fertilizers requires prepared chemicals, whereas organic fertilizers are derived from plant and animal matter through biological processes.
Nitrogen fertilizers
Nitrogen fertilizers are made from ammonia (NH3) produced by the Haber–Bosch process. In this energy-intensive process, natural gas (CH4) usually supplies the hydrogen, and the nitrogen (N2) is derived from the air. This ammonia is used as a feedstock for all other nitrogen fertilizers, such as anhydrous ammonium nitrate (NH4NO3) and urea (CO(NH2)2).
Deposits of sodium nitrate (NaNO3) (Chilean saltpeter) are also found in the Atacama Desert in Chile; this was one of the original (1830) nitrogen-rich fertilizers used. It is still mined for fertilizer. Nitrates are also produced from ammonia by the Ostwald process.
Phosphate fertilizers
Phosphate fertilizers are obtained by extraction from phosphate rock, which contains two principal phosphorus-containing minerals, fluorapatite Ca5(PO4)3F (CFA) and hydroxyapatite Ca5(PO4)3OH. Billions of kg of phosphate rock are mined annually, but the size and quality of the remaining ore is decreasing. These minerals are converted into water-soluble phosphate salts by treatment with acids. The large production of sulfuric acid is primarily motivated by this application. In the nitrophosphate process or Odda process (invented in 1927), phosphate rock with up to a 20% phosphorus (P) content is dissolved with nitric acid (HNO3) to produce a mixture of phosphoric acid (H3PO4) and calcium nitrate (Ca(NO3)2). This mixture can be combined with a potassium fertilizer to produce a compound fertilizer with the three macronutrients N, P and K in easily dissolved form.
Potassium fertilizers
Potash is a mixture of potassium minerals used to make potassium (chemical symbol: K) fertilizers. Potash is soluble in water, so the main effort in producing this nutrient from the ore involves some purification steps, e.g., to remove sodium chloride (NaCl) (common salt). Sometimes potash is referred to as K2O, as a matter of convenience to those describing the potassium content. In fact, potash fertilizers are usually potassium chloride, potassium sulfate, potassium carbonate, or potassium nitrate.
NPK fertilizers
There are three major routes for manufacturing NPK fertilizers (named for their main ingredients: nitrogen (N), phosphorus (P), and potassium (K)):
bulk blending. The individual fertilizers are combined in the desired nutrient ratio.
The wet process is based on chemical reactions between liquid raw materials (such as phosphoric acid, sulfuric acid, and ammonia) and solid raw materials (such as potassium chloride).
The Nitrophosphate Process. Step 1. Nitrophosphates are made by acidulating phosphate rock with nitric acid (a worked mass-balance example is given after this list).
Nitric acid + Phosphate rock → Phosphoric acid + Calcium nitrate + Hydrofluoric acid (which reacts with silica to give hexafluorosilicic acid).
Ca5F(PO4)3 + 10 HNO3 → 3 H3PO4 + 5 Ca(NO3)2 + HF
6 HF + SiO2 → H2SiF6 + 2 H2O
Step 2. Removal of calcium nitrate. It is important to remove the calcium nitrate because calcium nitrate is extremely hygroscopic.
Method 1 (Odda process): calcium nitrate crystals are removed by centrifugation.
Method 2 (Sulfonitric process): Ca(NO3)2 + H2SO4 + 2 NH3 → CaSO4 + 2 NH4NO3
Method 3 (Phosphonitric process): Ca(NO3)2 + H3PO4 + 2 NH3 → CaHPO4 + 2 NH4NO3
Method 4 (Carbonitric process): Ca(NO3)2 + CO2 + H2O + 2 NH3 → CaCO3 + 2 NH4NO3
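As a hedged, back-of-the-envelope illustration of the step-1 stoichiometry above (Python; the molar masses are rounded and the per-tonne figures assume pure fluorapatite, which real phosphate rock is not):

```python
# Rough mass balance for Ca5F(PO4)3 + 10 HNO3 -> 3 H3PO4 + 5 Ca(NO3)2 + HF.
M_FLUORAPATITE = 504.3  # g/mol, Ca5F(PO4)3
M_HNO3 = 63.0           # g/mol
M_H3PO4 = 98.0          # g/mol

def acid_and_product_per_tonne_rock(rock_kg=1000.0):
    n_rock = rock_kg / M_FLUORAPATITE        # kmol of fluorapatite
    hno3_kg = 10 * n_rock * M_HNO3           # nitric acid consumed
    h3po4_kg = 3 * n_rock * M_H3PO4          # phosphoric acid produced
    return hno3_kg, h3po4_kg

print(acid_and_product_per_tonne_rock())  # roughly (1250, 583) kg per tonne of pure mineral
```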
Organic fertilizers
"Organic fertilizers" can describe those fertilizers with a biologic origin—derived from living or formerly living materials. Organic fertilizers can also describe commercially available and frequently packaged products that strive to follow the expectations and restrictions adopted by "organic agriculture" and "environmentally friendly" gardening – related systems of food and plant production that significantly limit or strictly avoid the use of synthetic fertilizers and pesticides. The "organic fertilizer" products typically contain both some organic materials as well as acceptable additives such as nutritive rock powders, ground seashells (crab, oyster, etc.), other prepared products such as seed meal or kelp, and cultivated microorganisms and derivatives.
Fertilizers of an organic origin (the first definition) include animal wastes, plant wastes from agriculture, seaweed, compost, and treated sewage sludge (biosolids). Beyond manures, animal sources can include products from the slaughter of animals – bloodmeal, bone meal, feather meal, hides, hoofs, and horns all are typical components. Organically derived materials available to industry such as sewage sludge may not be acceptable components of organic farming and gardening, because of factors ranging from residual contaminants to public perception. On the other hand, marketed "organic fertilizers" may include, and promote, processed organics because the materials have consumer appeal. No matter the definition nor composition, most of these products contain less-concentrated nutrients, and the nutrients are not as easily quantified. They can offer soil-building advantages as well as be appealing to those who are trying to farm / garden more "naturally".
In terms of volume, peat is the most widely used packaged organic soil amendment. It is an immature form of coal and improves the soil by aeration and absorbing water but confers no nutritional value to the plants. It is therefore not a fertilizer as defined in the beginning of the article, but rather an amendment. Coir (derived from coconut husks), bark, and sawdust, when added to soil, all act similarly (but not identically) to peat and are also considered organic soil amendments – or texturizers – because of their limited nutritive inputs. Some organic additives can have a reverse effect on nutrients – fresh sawdust can consume soil nutrients as it breaks down and may lower soil pH – but these same organic texturizers (as well as compost, etc.) may increase the availability of nutrients through improved cation exchange, or through increased growth of microorganisms that in turn increase availability of certain plant nutrients. Organic fertilizers such as composts and manures may be distributed locally without going into industry production, making actual consumption more difficult to quantify.
Fertilizer consumption
China has become the largest producer and consumer of nitrogen fertilizers, while Africa has little reliance on nitrogen fertilizers. The industrial use of agricultural and chemical minerals for fertilizers is valued at approximately $200 billion. Nitrogen accounts for the largest share of this global mineral use, followed by potash and phosphate. The production of nitrogen has drastically increased since the 1960s. Phosphate and potash prices have increased since the 1960s faster than the consumer price index. Potash is produced in Canada, Russia and Belarus, together making up over half of the world production. Potash production in Canada rose in 2017 and 2018 by 18.6%. Conservative estimates report that 30 to 50% of crop yields are attributed to natural or synthetic commercial fertilizers. In the United States, fertilizer consumption has grown faster than the amount of farmland.
Data on the fertilizer consumption per hectare arable land in 2012 are published by The World Bank. The diagram below shows fertilizer consumption by the European Union (EU) countries as kilograms per hectare (pounds per acre). The total consumption of fertilizer in the EU is 15.9 million tons for 105 million hectare arable land area (or 107 million hectare arable land according to another estimate). This figure equates to 151 kg of fertilizers consumed per ha arable land on average by the EU countries.
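The per-hectare figure can be reproduced directly from the totals quoted above:

```python
# EU average fertilizer consumption per hectare of arable land, from the totals in the text.
total_fertilizer_kg = 15.9e9   # 15.9 million tonnes
arable_land_ha = 105e6         # 105 million hectares

print(total_fertilizer_kg / arable_land_ha)  # ~151 kg/ha, as stated
```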
Application
Fertilizers are commonly used for growing all crops, with application rates depending on the soil fertility, usually as measured by a soil test and according to the particular crop. Legumes, for example, fix nitrogen from the atmosphere and generally do not require nitrogen fertilizer.
Liquid vs solid
Fertilizers are applied to crops both as solids and as liquids. About 90% of fertilizers are applied as solids. The most widely used solid inorganic fertilizers are urea, diammonium phosphate and potassium chloride. Solid fertilizer is typically granulated or powdered. Often solids are available as prills, a solid globule. Liquid fertilizers comprise anhydrous ammonia, aqueous solutions of ammonia, and aqueous solutions of ammonium nitrate or urea. These concentrated products may be diluted with water to form a concentrated liquid fertilizer (e.g., UAN). Advantages of liquid fertilizer are its more rapid effect and easier coverage. The addition of fertilizer to irrigation water is called "fertigation". Granulated fertilizers are more economical to ship and store, as well as easier to apply.
Urea
Urea is highly soluble in water and is therefore also very suitable for use in fertilizer solutions (in combination with ammonium nitrate: UAN), e.g., in 'foliar feed' fertilizers. For fertilizer use, granules are preferred over prills because of their narrower particle size distribution, which is an advantage for mechanical application.
Urea is usually spread at rates of between 40 and 300 kg/ha (35 to 270 lbs/acre) but rates vary. Smaller applications incur lower losses due to leaching. During summer, urea is often spread just before or during rain to minimize losses from volatilization (a process wherein nitrogen is lost to the atmosphere as ammonia gas).
Because of the high nitrogen concentration in urea, it is very important to achieve an even spread. Drilling must not occur on contact with or close to seed, due to the risk of germination damage. Urea dissolves in water for application as a spray or through irrigation systems.
In grain and cotton crops, urea is often applied at the time of the last cultivation before planting. In high rainfall areas and on sandy soils (where nitrogen can be lost through leaching) and where good in-season rainfall is expected, urea can be side- or top-dressed during the growing season. Top-dressing is also popular on pasture and forage crops. In cultivating sugarcane, urea is side dressed after planting and applied to each ratoon crop.
Because it absorbs moisture from the atmosphere, urea is often stored in closed containers.
Overdose or placing urea near seed is harmful.
Slow- and controlled-release fertilizers
Foliar application
Foliar fertilizers are applied directly to leaves. This method is almost invariably used to apply water-soluble straight nitrogen fertilizers and used especially for high-value crops such as fruits. Urea is the most common foliar fertilizer.
Chemicals that affect nitrogen uptake
Various chemicals are used to enhance the efficiency of nitrogen-based fertilizers. In this way farmers can limit the polluting effects of nitrogen run-off. Nitrification inhibitors (also known as nitrogen stabilizers) suppress the conversion of ammonia into nitrate, an anion that is more prone to leaching. 1-Carbamoyl-3-methylpyrazole (CMP), dicyandiamide, nitrapyrin (2-chloro-6-trichloromethylpyridine) and 3,4-dimethylpyrazole phosphate (DMPP) are popular. Urease inhibitors are used to slow the hydrolytic conversion of urea into ammonia, which is prone to evaporation as well as nitrification. The conversion of urea to ammonia is catalyzed by enzymes called ureases. A popular inhibitor of ureases is N-(n-butyl)thiophosphoric triamide (NBPT).
Overfertilization
Careful use of fertilization technologies is important because excess nutrients can be detrimental. Fertilizer burn can occur when too much fertilizer is applied, resulting in damage or even death of the plant. Fertilizers vary in their tendency to burn roughly in accordance with their salt index.
Environmental effects
Synthetic fertilizer used in agriculture has wide-reaching environmental consequences.
According to the Intergovernmental Panel on Climate Change (IPCC) Special Report on Climate Change and Land, the production of these fertilizers and the associated land use practices are drivers of global warming. The use of fertilizer has also led to a number of direct environmental consequences: agricultural runoff that causes downstream effects such as ocean dead zones and waterway contamination, degradation of the soil microbiome, and accumulation of toxins in ecosystems. Indirect environmental impacts include the environmental effects of fracking for the natural gas used in the Haber process, the contribution of the resulting agricultural boom to rapid human population growth, and the habitat destruction, pressure on biodiversity and loss of agricultural soil associated with large-scale industrial agriculture.
In order to mitigate environmental and food security concerns, the international community has included food systems in Sustainable Development Goal 2, which focuses on creating a climate-friendly and sustainable food production system. Most policy and regulatory approaches to these issues focus on pivoting agricultural practices towards sustainable or regenerative agriculture: using less synthetic fertilizer, better soil management (for example no-till agriculture) and more organic fertilizers.
For each ton of phosphoric acid produced by the processing of phosphate rock, about five tons of waste are generated. This waste takes the form of phosphogypsum, an impure, radioactive solid with few uses. Estimates of the phosphogypsum waste produced annually worldwide range from 100 million to 280 million tons.
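As a rough cross-check of these figures, five tons of phosphogypsum per ton of phosphoric acid and 100–280 million tons of annual waste together imply on the order of 20–56 million tons of phosphoric acid production per year. A minimal Python sketch of that arithmetic (variable names are illustrative):

# Implied phosphoric acid production from phosphogypsum waste estimates.
WASTE_PER_TON_ACID = 5                   # tons of phosphogypsum per ton of phosphoric acid
waste_low, waste_high = 100e6, 280e6     # tons of phosphogypsum per year (range quoted above)

acid_low = waste_low / WASTE_PER_TON_ACID
acid_high = waste_high / WASTE_PER_TON_ACID
print(f"Implied phosphoric acid production: {acid_low/1e6:.0f}-{acid_high/1e6:.0f} Mt/yr")
# Output: Implied phosphoric acid production: 20-56 Mt/yr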
Water
Phosphorus and nitrogen fertilizers can affect soil, surface water, and groundwater: high rainfall and snowmelt disperse the minerals into waterways, and the nutrients can also leach into groundwater over time. Agricultural run-off is a major contributor to the eutrophication of freshwater bodies. For example, in the US about half of all lakes are eutrophic. The main contributor to eutrophication is phosphate, which is normally a limiting nutrient; high concentrations promote the growth of cyanobacteria and algae, whose demise consumes oxygen. Cyanobacterial blooms ('algal blooms') can also produce harmful toxins that accumulate in the food chain and can be harmful to humans. Fertilizer run-off can be reduced by using weather-optimized fertilization strategies.
The nitrogen-rich compounds found in fertilizer runoff are the primary cause of serious oxygen depletion in many parts of oceans, especially in coastal zones, lakes and rivers. The resulting lack of dissolved oxygen greatly reduces the ability of these areas to sustain oceanic fauna. The number of oceanic dead zones near inhabited coastlines is increasing.
As of 2006, the application of nitrogen fertilizer is being increasingly controlled in northwestern Europe and the United States. In cases where eutrophication can be reversed, it may nevertheless take decades and significant soil management before the accumulated nitrates in groundwater can be broken down by natural processes.
Nitrate pollution
Only a fraction of the nitrogen-based fertilizers is converted to plant matter. The remainder accumulates in the soil or is lost as run-off. High application rates of nitrogen-containing fertilizers combined with the high water solubility of nitrate leads to increased runoff into surface water as well as leaching into groundwater, thereby causing groundwater pollution. The excessive use of nitrogen-containing fertilizers (be they synthetic or natural) is particularly damaging, as much of the nitrogen that is not taken up by plants is transformed into nitrate which is easily leached.
Nitrate levels above 10 mg/L (10 ppm) in groundwater can cause 'blue baby syndrome' (acquired methemoglobinemia). The nutrients, especially nitrates, in fertilizers can cause problems for natural habitats and for human health if they are washed off soil into watercourses or leached through soil into groundwater. Run-off can lead to fertilizing blooms of algae that use up all the oxygen and leave huge "dead zones" behind where other fish and aquatic life can not live.
Soil
Acidification
Soil acidification is the process by which soil pH decreases over time, and it is a significant concern in agriculture and horticulture. Soil pH is a measure of the soil's acidity or alkalinity on a scale from 0 to 14, with 7 being neutral; values below 7 indicate acidic soil, while values above 7 indicate alkaline (basic) soil.
Nitrogen-containing fertilizers can cause soil acidification when added, which may decrease nutrient availability; this can be offset by liming. These fertilizers release ammonium or nitrate ions, which acidify the soil as they undergo chemical reactions: the reactions increase the concentration of hydrogen ions (H+) in the soil solution, lowering the soil's pH.
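Because pH is the negative base-10 logarithm of the hydrogen-ion concentration (strictly, activity), even a modest rise in H+ produces a measurable drop in pH. A minimal Python sketch, assuming ideal behaviour so that activity can be approximated by concentration:

import math

def ph_from_h_concentration(h_molar: float) -> float:
    # pH = -log10 of the hydrogen-ion concentration in mol/L (activity ≈ concentration).
    return -math.log10(h_molar)

# Doubling [H+] from 1e-6 to 2e-6 mol/L lowers the pH from about 6.0 to about 5.7.
print(ph_from_h_concentration(1e-6))   # ≈ 6.0
print(ph_from_h_concentration(2e-6))   # ≈ 5.70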
Accumulation of toxic elements
Cadmium
The concentration of cadmium in phosphorus-containing fertilizers varies considerably and can be problematic. For example, mono-ammonium phosphate fertilizer may have a cadmium content as low as 0.14 mg/kg or as high as 50.9 mg/kg. The phosphate rock used in their manufacture can contain as much as 188 mg/kg cadmium (examples are deposits on Nauru and the Christmas Islands). Continuous use of high-cadmium fertilizer can contaminate soil (as shown in New Zealand) and plants. Limits on the cadmium content of phosphate fertilizers have been considered by the European Commission. Producers of phosphorus-containing fertilizers now select phosphate rock based on its cadmium content.
Fluoride
Phosphate rocks contain high levels of fluoride. Consequently, the widespread use of phosphate fertilizers has increased soil fluoride concentrations. It has been found that food contamination from fertilizer is of little concern as plants accumulate little fluoride from the soil; of greater concern is the possibility of fluoride toxicity to livestock that ingest contaminated soils. Also of possible concern are the effects of fluoride on soil microorganisms.
Radioactive elements
The radioactive content of the fertilizers varies considerably and depends both on their concentrations in the parent mineral and on the fertilizer production process. Uranium-238 concentrations can range from 7 to 100 pCi/g (picocuries per gram) in phosphate rock and from 1 to 67 pCi/g in phosphate fertilizers. Where high annual rates of phosphorus fertilizer are used, this can result in uranium-238 concentrations in soils and drainage waters that are several times greater than are normally present. However, the impact of these increases on the risk to human health from radionuclide contamination of foods is very small (less than 0.05 mSv/y).
Other metals
Steel industry wastes are recycled into fertilizers for their high levels of zinc (essential to plant growth), but these wastes can include the following toxic metals: lead, arsenic, cadmium, chromium, and nickel. The most common toxic elements in this type of fertilizer are mercury, lead, and arsenic. These potentially harmful impurities can be removed, but doing so significantly increases cost. Highly pure fertilizers are widely available and are perhaps best known as the highly water-soluble fertilizers containing blue dyes used around households, such as Miracle-Gro. These highly water-soluble fertilizers are also used in the plant nursery business and are available in larger packages at significantly lower cost than retail quantities. Some inexpensive retail granular garden fertilizers are made with high-purity ingredients.
Trace mineral depletion
Attention has been addressed to the decreasing concentrations of elements such as iron, zinc, copper and magnesium in many foods over the last 50–60 years. Intensive farming practices, including the use of synthetic fertilizers are frequently suggested as reasons for these declines and organic farming is often suggested as a solution. Although improved crop yields resulting from NPK fertilizers are known to dilute the concentrations of other nutrients in plants, much of the measured decline can be attributed to the use of progressively higher-yielding crop varieties that produce foods with lower mineral concentrations than their less-productive ancestors. It is, therefore, unlikely that organic farming or reduced use of fertilizers will solve the problem; foods with high nutrient density are posited to be achieved using older, lower-yielding varieties or the development of new high-yield, nutrient-dense varieties.
Fertilizers are, in fact, more likely to solve trace mineral deficiency problems than cause them: In Western Australia deficiencies of zinc, copper, manganese, iron and molybdenum were identified as limiting the growth of broad-acre crops and pastures in the 1940s and 1950s. Soils in Western Australia are very old, highly weathered and deficient in many of the major nutrients and trace elements. Since this time these trace elements are routinely added to fertilizers used in agriculture in this state. Many other soils around the world are deficient in zinc, leading to deficiency in both plants and humans, and zinc fertilizers are widely used to solve this problem.
Changes in soil biology
High levels of fertilizer may cause the breakdown of the symbiotic relationships between plant roots and mycorrhizal fungi.
Organic agriculture
Two broad types of agricultural management are organic agriculture and conventional agriculture. The former encourages soil fertility using local resources to maximize efficiency and avoids synthetic agrochemicals; conventional agriculture, by contrast, makes use of the synthetic inputs that organic agriculture avoids.
Hydrogen consumption and sustainability
Most nitrogen fertilizer relies on hydrogen produced from fossil fuels: ammonia is made from natural gas and air, and the cost of natural gas makes up about 90% of the cost of producing ammonia. The increase in the price of natural gas over the past decade, along with other factors such as increasing demand, has contributed to an increase in fertilizer prices.
Contribution to climate change
The greenhouse gases carbon dioxide, methane and nitrous oxide produced during the manufacture and use of nitrogen fertilizer are estimated to account for around 5% of anthropogenic greenhouse gas emissions. About one third is produced during manufacture and two thirds during use of the fertilizers. Nitrogen fertilizer can be converted by soil bacteria to nitrous oxide, a greenhouse gas. Human nitrous oxide emissions, most of which come from fertilizer, were estimated at 7 million tonnes per year between 2007 and 2016, a level incompatible with limiting global warming to below 2 °C.
Atmosphere
Through the increasing use of nitrogen fertilizer (about 110 million tons of N per year in 2012), which adds to the already existing amount of reactive nitrogen, nitrous oxide (N2O) has become the third most important greenhouse gas after carbon dioxide and methane. It has a global warming potential 296 times that of an equal mass of carbon dioxide and it also contributes to stratospheric ozone depletion.
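Combining the figures quoted in this section gives a sense of scale: roughly 7 million tonnes of N2O per year at a global warming potential of 296 corresponds to about 2 billion tonnes of CO2-equivalent annually. A minimal Python sketch of that arithmetic (illustrative only; GWP values differ between time horizons and assessment reports):

# CO2-equivalent of annual fertilizer-related N2O emissions.
n2o_emissions_mt = 7        # million tonnes of N2O per year (figure quoted above)
gwp_n2o = 296               # global warming potential relative to CO2 (figure quoted above)

co2e_mt = n2o_emissions_mt * gwp_n2o
print(f"≈ {co2e_mt} Mt CO2e/yr, i.e. about {co2e_mt/1000:.1f} Gt CO2e/yr")
# Output: ≈ 2072 Mt CO2e/yr, i.e. about 2.1 Gt CO2e/yr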
By changing processes and procedures, it is possible to mitigate some, but not all, of these effects on anthropogenic climate change.
Methane emissions from crop fields (notably rice paddy fields) are increased by the application of ammonium-based fertilizers. These emissions contribute to global climate change as methane is a potent greenhouse gas.
Policy
Regulation
In Europe, problems with high nitrate concentrations in runoff are being addressed by the European Union's Nitrates Directive. Within Britain, farmers are encouraged to manage their land more sustainably in 'catchment-sensitive farming'. In the US, high concentrations of nitrate and phosphorus in runoff and drainage water are classified as nonpoint source pollutants due to their diffuse origin; this pollution is regulated at the state level. Oregon and Washington, both in the United States, have fertilizer registration programs with on-line databases listing chemical analyses of fertilizers. Carbon emission trading and eco-tariffs affect the production and price of fertilizer.
Subsidies
In China, regulations have been implemented to control the use of nitrogen fertilizers in farming. In 2008, the Chinese government began to partially withdraw fertilizer subsidies, including subsidies for fertilizer transportation and for the electricity and natural gas used by the industry. As a consequence, the price of fertilizer has risen and large-scale farms have begun to use less of it. If subsidies continue to be reduced, large-scale farms will have little choice but to use the fertilizer they apply more efficiently, which could increase both grain yield and profit.
In March 2022, the United States Department of Agriculture announced a new $250M grant to promote American fertilizer production. Part of the Commodity Credit Corporation, the grant program will support fertilizer production that is independent of dominant fertilizer suppliers, made in America, and utilizing innovative production techniques to jumpstart future competition.
| Technology | Food and health | null |
37402 | https://en.wikipedia.org/wiki/Chicken | Chicken | The chicken (Gallus gallus domesticus) is a large and round short-winged bird, domesticated from the red junglefowl of Southeast Asia around 8,000 years ago. Most chickens are raised for food, providing meat and eggs; others are kept as pets or for cockfighting.
Chickens are common and widespread domestic animals, with a total population of 26.5 billion, and an annual production of more than 50 billion birds. A hen bred for laying can produce over 300 eggs per year. There are numerous cultural references to chickens in folklore, religion, and literature.
Nomenclature
Terms for chickens include:
Biddy: a chicken, or a newly hatched chicken
Capon: a castrated or neutered male chicken
Chick: a young chicken
Chook: a chicken (Australia/New Zealand, informal)
Cock: a fertile adult male chicken
Cockerel: a young male chicken
Hen: an adult female chicken
Pullet: a young female chicken less than a year old. In the poultry industry, a pullet is a sexually immature chicken less than 22 weeks of age.
Rooster: a fertile adult male chicken, especially in North America. Originated in the 18th century, possibly as a euphemism to avoid the sexual connotation of the word cock.
Yardbird: a chicken (southern United States, dialectal)
Chicken can mean a chick, as in William Shakespeare's play Macbeth, where Macduff laments the death of "all my pretty chickens and their dam". The usage is preserved in placenames such as the Hen and Chicken Islands. In older sources, and still often in trade and scientific contexts, chickens as a species are described as common fowl or domestic fowl.
Description
Chickens are relatively large birds, active by day. The body is round, the legs are unfeathered in most breeds, and the wings are short. Wild junglefowl can fly; chickens and their flight muscles are too heavy to allow them to fly more than a short distance. Size and coloration vary widely between breeds. Newly hatched chicks of modern and heritage varieties weigh about the same. Modern varieties, however, grow much faster: by day 35 a Ross 708 broiler weighs considerably more than a heritage chicken of the same age.
Adult chickens of both sexes have a fleshy crest on their heads called a comb or cockscomb, and hanging flaps of skin on either side under their beaks called wattles; combs and wattles are more prominent in males. Some breeds have a mutation that causes extra feathering under the face, giving the appearance of a beard.
Chickens are omnivores. In the wild, they scratch at the soil to search for seeds, insects, and animals as large as lizards, small snakes, and young mice. A chicken may live for 5–10 years, depending on the breed. The world's oldest known chicken lived for 16 years.
Chickens are gregarious, living in flocks, and incubate eggs and raise young communally. Individual chickens dominate others, establishing a pecking order; dominant individuals take priority for access to food and nest sites. The concept of dominance, involving pecking, was described in female chickens by Thorleif Schjelderup-Ebbe in 1921 as the "pecking order". Male chickens tend to leap and use their claws in conflicts. Chickens are capable of mobbing and killing a weak or inexperienced predator, such as a young fox.
A male's crowing is a loud and sometimes shrill call, serving as a territorial signal to other males, and in response to sudden disturbances within their surroundings. Hens cluck loudly after laying an egg and to call their chicks. Chickens give different warning calls to indicate that a predator is approaching from the air or on the ground.
Reproduction and life-cycle
To initiate courting, some roosters may dance in a circle around or near a hen (a circle dance), often lowering the wing which is closest to the hen. The dance triggers a response in the hen and when she responds to his call, the rooster may mount the hen and proceed with the mating. Mating typically involves a sequence in which the male approaches the female and performs a waltzing display. If the female is unreceptive, she runs off; otherwise, she crouches, and the male mounts, treading with both feet on her back. After copulation the male does a tail-bending display.
Sperm transfer occurs by cloacal contact between the male and female, in an action called the 'cloacal kiss'. As with all birds, reproduction is controlled by a neuroendocrine system, the Gonadotropin-Releasing Hormone-I neurons in the hypothalamus. Reproductive hormones including estrogen, progesterone, and gonadotropins (luteinizing hormone and follicle-stimulating hormone) initiate and maintain sexual maturation changes. Reproduction declines with age, thought to be due to a decline in GnRH-I-N.
Hens often try to lay in nests that already contain eggs and sometimes move eggs from neighbouring nests into their own. A flock thus uses only a few preferred locations, rather than having a different nest for every bird. Under natural conditions, most birds lay only until a clutch is complete; they then incubate all the eggs. This is called "going broody". The hen sits on the nest, fluffing up or pecking defensively if disturbed. She rarely leaves the nest until the eggs have hatched.
Eggs of chickens from the high-altitude region of Tibet have special physiological adaptations that result in a higher hatching rate in low oxygen environments. When eggs are placed in a hypoxic environment, chicken embryos from these populations express much more hemoglobin than embryos from other chicken populations. This hemoglobin has a greater affinity for oxygen, binding oxygen more readily.
Fertile chicken eggs hatch at the end of the incubation period, about 21 days; the chick uses its egg tooth to break out of the shell. Hens remain on the nest for about two days after the first chick hatches; during this time the newly hatched chicks feed by absorbing the internal yolk sac. The hen guards her chicks and broods them to keep them warm. She leads them to food and water and calls them towards food. The chicks imprint on the hen and subsequently follow her continually. She continues to care for them until they are several weeks old.
Inbreeding of White Leghorn chickens tends to cause inbreeding depression expressed as reduced egg number and delayed sexual maturity. Strongly inbred Langshan chickens display obvious inbreeding depression in reproduction, particularly for traits such as age when the first egg is laid and egg number.
Origin
Phylogeny
Water or ground-dwelling fowl similar to modern partridges, in the Galliformes, the order of bird that chickens belong to, survived the Cretaceous–Paleogene extinction event that killed all tree-dwelling birds and their dinosaur relatives. Chickens are descended primarily from the red junglefowl (Gallus gallus) and are scientifically classified as the same species. Domesticated chickens freely interbreed with populations of red junglefowl. The domestic chicken has subsequently hybridised with grey junglefowl, Sri Lankan junglefowl and green junglefowl; a gene for yellow skin, for instance, was incorporated into domestic birds from the grey junglefowl (G. sonneratii). It is estimated that chickens share between 71 and 79% of their genome with red junglefowl.
Domestication
According to one early study, a single domestication event of the red junglefowl in present-day Thailand gave rise to the modern chicken with minor transitions separating the modern breeds. The red junglefowl is well adapted to take advantage of the vast quantities of seed produced during the end of the multi-decade bamboo seeding cycle, to boost its own reproduction. In domesticating the chicken, humans took advantage of the red junglefowl's ability to reproduce prolifically when exposed to a surge in its food supply.
Exactly when and where the chicken was domesticated remains controversial. Genomic studies estimate that the chicken was domesticated 8,000 years ago in Southeast Asia and spread to China and India 2,000 to 3,000 years later. Archaeological evidence supports domestic chickens in Southeast Asia well before 6000 BC, China by 6000 BC and India by 2000 BC. A landmark 2020 Nature study that fully sequenced 863 chickens across the world suggests that all domestic chickens originate from a single domestication event of red junglefowl whose present-day distribution is predominantly in southwestern China, northern Thailand and Myanmar. These domesticated chickens spread across Southeast and South Asia where they interbred with local wild species of junglefowl, forming genetically and geographically distinct groups. Analysis of the most popular commercial breed shows that the White Leghorn breed possesses a mosaic of divergent ancestries inherited from subspecies of red junglefowl.
Dispersal
Austronesia
A word for the domestic chicken (*manuk) is part of the reconstructed Proto-Austronesian language, indicating they were domesticated by the Austronesian peoples since ancient times. Chickens, together with dogs and pigs, were carried throughout the entire range of the prehistoric Austronesian maritime migrations to Island Southeast Asia, Micronesia, Island Melanesia, Polynesia, and Madagascar, starting from at least 3000 BC from Taiwan. These chickens may have been introduced during pre-Columbian times to South America via Polynesian seafarers, but this is disputed.
Americas
The possibility that domestic chickens were in the Americas before Western contact is debated by researchers, but blue-egged chickens, found only in the Americas and Asia, suggest an Asian origin for early American chickens. A lack of data from Thailand, Russia, the Indian subcontinent, Southeast Asia and Sub-Saharan Africa makes it difficult to lay out a clear map of the spread of chickens in these areas; better description and genetic analysis of local breeds threatened by extinction may also help with research into this area. Chicken bones from the Arauco Peninsula in south-central Chile were radiocarbon dated as pre-Columbian, and DNA analysis suggested they were related to prehistoric populations in Polynesia. However, further study of the same bones cast doubt on the findings.
Eurasia
Chicken remains have been difficult to date, given the small and fragile bird bones; this may account for discrepancies in dates given by different sources. Archaeological evidence is supplemented by mentions in historical texts from the last few centuries BC, and by depictions in prehistoric artworks, such as across Central Asia. Chickens were widespread throughout southern Central Asia by the 4th century BC.
Middle Eastern chicken remains go back to a little earlier than 2000 BC in Syria. Phoenicians spread chickens along the Mediterranean coasts as far as Iberia. During the Hellenistic period (4th–2nd centuries BC), in the southern Levant, chickens began to be widely domesticated for food. The first pictures of chickens in Europe are found on Corinthian pottery of the 7th century BC.
Breeding increased under the Roman Empire and reduced in the Middle Ages. Genetic sequencing of chicken bones from archaeological sites in Europe revealed that in the High Middle Ages chickens became less aggressive and began to lay eggs earlier in the breeding season.
Africa
Chickens reached Egypt via the Middle East for purposes of cockfighting about 1400 BC and became widely bred in Egypt around 300 BC. Three possible routes of introduction into Africa around the early first millennium AD could have been through the Egyptian Nile Valley, the East Africa Roman-Greek or Indian trade, or from Carthage and the Berbers, across the Sahara. The earliest known remains are from Mali, Nubia, East Coast, and South Africa and date back to the middle of the first millennium AD.
Diseases
Chickens are susceptible both to parasites such as mites, and to diseases caused by pathogens such as bacteria and viruses. The parasite Dermanyssus gallinae feeds on blood, causing irritation and reducing egg production, and acts as a vector for bacterial diseases such as salmonellosis and spirochaetosis.
Viral diseases include avian influenza.
Use by humans
Farming
Chickens are common and widespread domestic animals, with a total population of 23.7 billion. More than 50 billion chickens are reared annually as a source of meat and eggs. In the United States alone, more than 8 billion chickens are slaughtered each year for meat, and more than 300 million chickens are reared for egg production. The vast majority of poultry is raised in factory farms. According to the Worldwatch Institute, 74% of the world's poultry meat and 68% of eggs are produced this way. An alternative to intensive poultry farming is free-range farming. Friction between these two main methods has led to long-term issues of ethical consumerism. Opponents of intensive farming argue that it harms the environment, creates human health risks and is inhumane towards sentient animals. Advocates of intensive farming say that their efficient systems save land and food resources owing to increased productivity, and that the animals are looked after in a controlled environment. Chickens farmed for meat are called broilers. Broiler breeds typically take less than six weeks to reach slaughter size, some weeks longer for free range and organic broilers.
Chickens farmed primarily for eggs are called layer hens. The UK alone consumes more than 34 million eggs per day. Hens of some breeds can produce over 300 eggs per year; the highest authenticated rate of egg laying is 371 eggs in 364 days. After 12 months of laying, the commercial hen's egg-laying ability declines to the point where the flock is commercially unviable. Hens, particularly from battery cage systems, are sometimes infirm or have lost a significant amount of their feathers, and their life expectancy has been reduced from around seven years to less than two years. In the UK and Europe, laying hens are then slaughtered and used in processed foods, or sold as 'soup hens'. In some other countries, flocks are sometimes force moulted rather than being slaughtered to re-invigorate egg-laying. This involves complete withdrawal of food (and sometimes water) for 7–14 days or sufficiently long to cause a body weight loss of 25 to 35%, or up to 28 days under experimental conditions. This stimulates the hen to lose her feathers but also re-invigorates egg-production. Some flocks may be force-moulted several times. In 2003, more than 75% of all flocks were moulted in the US.
As pets
Keeping chickens as pets became increasingly popular in the 2000s among urban and suburban residents. Many people obtain chickens for their egg production but often name them and treat them as any other pet like cats or dogs. Chickens provide companionship and have individual personalities. While many do not cuddle much, they will eat from one's hand, jump onto one's lap, respond to and follow their handlers, as well as show affection. Chickens are social, inquisitive, intelligent birds, and many people find their behaviour entertaining. Certain breeds, such as silkies and many bantam varieties, are generally docile and are often recommended as good pets around children with disabilities.
Cockfighting
A cockfight is a contest held in a ring called a cockpit between two cocks. Cockfighting is outlawed in many countries as involving cruelty to animals. The activity seems to have been practised in the Indus Valley civilisation from 2500 to 2100 BC. In the process of domestication, chickens were apparently kept initially for cockfighting, and only later used for food.
In science
Chickens have long been used as model organisms to study developing embryos. Large numbers of embryos can be provided commercially; fertilized eggs can easily be opened and used to observe the developing embryo. Equally important, embryologists can carry out experiments on such embryos, close the egg again and study the effects later in development. For instance, many important discoveries in limb development have been made using chicken embryos, such as the discovery of the apical ectodermal ridge and the zone of polarizing activity.
The chicken was the first bird species to have its genome sequenced. At 1.21 Gb, the chicken genome is similar in size to those of other birds, but smaller than nearly all mammalian genomes: the human genome is 3.2 Gb. The final gene set contained 26,640 genes (including noncoding genes and pseudogenes), with a total of 19,119 protein-coding genes, a similar number to the human genome. In 2006, scientists researching the ancestry of birds switched on a chicken recessive gene, talpid2, and found that the embryo jaws initiated formation of teeth, like those found in ancient bird fossils.
In culture, folklore, and religion
Chickens are featured widely in folklore, religion, literature, and popular culture. The chicken is a sacred animal in many cultures and deeply embedded in belief systems and religious practices.
Roosters are sometimes used for divination, a practice called alectryomancy. This involves the sacrifice of a sacred rooster, often during a ritual cockfight, used as a form of communication with the gods. In Gabriel García Márquez's Nobel-Prize-winning 1967 novel One Hundred Years of Solitude, cockfighting is outlawed in the town of Macondo after the patriarch of the Buendia family murders his cockfighting rival and is haunted by the man's ghost. Chicken jokes have been made at least since The Knickerbocker published one in 1847. Chickens have been featured in art in farmyard scenes such as Adriaen van Utrecht's 1646 Turkeys and Chickens and Walter Osborne's 1885 Feeding the Chickens. The nursery rhyme "Cock a doodle doo", its chorus line imitating the cockerel's call, was published in Mother Goose's Melody in 1765.
The 2000 animated adventure comedy film Chicken Run, directed by Peter Lord and Nick Park, featured anthropomorphic chickens with many chicken jokes.
| Biology and health sciences | Biology | null |
37411 | https://en.wikipedia.org/wiki/Alkaline%20earth%20metal | Alkaline earth metal |
The alkaline earth metals are six chemical elements in group 2 of the periodic table. They are beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra). The elements have very similar properties: they are all shiny, silvery-white, somewhat reactive metals at standard temperature and pressure.
Together with helium, these elements have in common an outer s orbital which is full—that is, this orbital contains its full complement of two electrons, which the alkaline earth metals readily lose to form cations with charge +2, and an oxidation state of +2. Helium is grouped with the noble gases and not with the alkaline earth metals, but it is theorized to have some similarities to beryllium when forced into bonding and has sometimes been suggested to belong to group 2.
All the discovered alkaline earth metals occur in nature, although radium occurs only through the decay chain of uranium and thorium and not as a primordial element. There have been experiments, all unsuccessful, to try to synthesize element 120, the next potential member of the group.
Characteristics
Chemical
As with other groups, the members of this family show patterns in their electronic configuration, especially the outermost shells, resulting in trends in chemical behavior:
Most of the chemistry has been observed only for the first five members of the group. The chemistry of radium is not well-established due to its radioactivity; thus, the presentation of its properties here is limited.
The alkaline earth metals are all silver-colored and soft, and have relatively low densities, melting points, and boiling points. In chemical terms, all of the alkaline earth metals react with the halogens to form the alkaline earth metal halides, all of which are ionic crystalline compounds (except for beryllium chloride, beryllium bromide and beryllium iodide, which are covalent). All the alkaline earth metals except beryllium also react with water to form strongly alkaline hydroxides and, thus, should be handled with great care. The heavier alkaline earth metals react more vigorously than the lighter ones. The alkaline earth metals have the second-lowest first ionization energies in their respective periods of the periodic table because of their somewhat low effective nuclear charges and the ability to attain a full outer shell configuration by losing just two electrons. The second ionization energy of all of the alkaline earth metals is also somewhat low.
Beryllium is an exception: It does not react with water or steam unless at very high temperatures, and its halides are covalent. If beryllium did form compounds with an ionization state of +2, it would polarize electron clouds that are near it very strongly and would cause extensive orbital overlap, since beryllium has a high charge density. All compounds that include beryllium have a covalent bond. Even the compound beryllium fluoride, which is the most ionic beryllium compound, has a low melting point and a low electrical conductivity when melted.
All the alkaline earth metals have two electrons in their valence shell, so the energetically preferred state of achieving a filled electron shell is to lose two electrons to form doubly charged positive ions.
Compounds and reactions
The alkaline earth metals all react with the halogens to form ionic halides, such as calcium chloride (), as well as reacting with oxygen to form oxides such as strontium oxide (). Calcium, strontium, and barium react with water to produce hydrogen gas and their respective hydroxides (magnesium also reacts, but much more slowly), and also undergo transmetalation reactions to exchange ligands.
Solubility-related constants for alkaline-earth-metal fluorides (hydration and lattice energies are magnitudes of the negative, exothermic quantities):

Metal | M2+ hydration (-MJ/mol) | "MF2" unit hydration (-MJ/mol) | MF2 lattice (-MJ/mol) | Solubility (mol/kL)
Be    | 2.455                   | 3.371                          | 3.526                 | soluble
Mg    | 1.922                   | 2.838                          | 2.978                 | 1.2
Ca    | 1.577                   | 2.493                          | 2.651                 | 0.2
Sr    | 1.415                   | 2.331                          | 2.513                 | 0.8
Ba    | 1.361                   | 2.277                          | 2.373                 | 6
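The table can be read as a competition between the lattice energy that must be overcome and the hydration energy released on dissolution; the difference between the two gives a rough, though imperfect, guide to relative solubility. A minimal Python sketch using the values above (dissolution enthalpy estimated simply as lattice minus hydration energy, neglecting entropy; illustrative only):

# Rough dissolution enthalpy of each MF2: energy to break the lattice minus energy released on hydration.
# Values in MJ/mol, taken from the table above (magnitudes of the negative quantities).
data = {
    # metal: ("MF2" unit hydration, MF2 lattice, solubility in mol/kL)
    "Be": (3.371, 3.526, "soluble"),
    "Mg": (2.838, 2.978, 1.2),
    "Ca": (2.493, 2.651, 0.2),
    "Sr": (2.331, 2.513, 0.8),
    "Ba": (2.277, 2.373, 6),
}

for metal, (hydration, lattice, solubility) in data.items():
    delta_h = lattice - hydration   # MJ/mol; smaller values favour dissolution
    print(f"{metal}F2: estimated dissolution enthalpy {delta_h:+.3f} MJ/mol, solubility {solubility} mol/kL")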
Physical and atomic
Nuclear stability
Isotopes of all six alkaline earth metals are present in the Earth's crust and the solar system at varying concentrations, dependent upon the nuclides' half lives and, hence, their nuclear stabilities. The first five have one, three, five, four, and six stable (or observationally stable) isotopes respectively, for a total of 19 stable nuclides, as listed here: beryllium-9; magnesium-24, -25, -26; calcium-40, -42, -43, -44, -46; strontium-84, -86, -87, -88; barium-132, -134, -135, -136, -137, -138. The four underlined isotopes in the list are predicted by radionuclide decay energetics to be only observationally stable and to decay with extremely long half-lives through double-beta decay, though no decays attributed definitively to these isotopes have yet been observed as of 2024. Radium has no stable nor primordial isotopes.
In addition to the stable species, calcium and barium each have one extremely long-lived primordial radionuclide: calcium-48 and barium-130. Their half-lives are far longer than the current age of the universe (roughly 4.7 billion and 117 billion times longer, respectively), and less than one part per ten billion of each has decayed since the formation of the Earth. The two isotopes are stable for practical purposes.
Apart from the 21 stable or nearly-stable isotopes, the six alkaline earth elements each possess a large number of known radioisotopes. None of the isotopes other than the aforementioned 21 are primordial: all have half lives too short for even a single atom to have survived since the solar system's formation, after the seeding of heavy nuclei by nearby supernovae and collisions between neutron stars, and any present are derived from ongoing natural processes. Beryllium-7, beryllium-10, and calcium-41 are trace, as well as cosmogenic, nuclides, formed by the impact of cosmic rays with atmospheric or crustal atoms. The longest half-lives among them are 1.387 million years for beryllium-10, 99.4 thousand years for calcium-41, 1599 years for radium-226 (radium's longest-lived isotope), 28.90 years for strontium-90, 10.51 years for barium-133, and 5.75 years for radium-228. All others have half-lives of less than half a year, most significantly shorter.
Calcium-48 and barium-130, the two primordial but non-stable isotopes, decay only through double beta emission and have extremely long half-lives, by virtue of the extremely low probability of both beta decays occurring at the same time. All isotopes of radium are highly radioactive and are primarily generated through the decay of heavier radionuclides. The longest-lived of them is radium-226, a member of the decay chain of uranium-238. Strontium-90 and barium-140 are common fission products of uranium in nuclear reactors, accounting for 5.73% and 6.31% of uranium-235's fission products respectively when bombarded by thermal neutrons. The two isotopes have half-lives of 28.90 years and 12.7 days, respectively. Strontium-90 is produced in appreciable quantities in operating nuclear reactors running on uranium-235 or plutonium-239 fuel, and a minuscule secular equilibrium concentration is also present due to rare spontaneous fission decays in naturally occurring uranium.
Calcium-48 is the lightest nuclide known to undergo double beta decay. Naturally occurring calcium and barium are very weakly radioactive: calcium contains about 0.1874% calcium-48, and barium contains about 0.1062% barium-130. On average, one double-beta decay of calcium-48 will occur per second for every 90 tons of natural calcium, or 230 tons of limestone (calcium carbonate). Through the same decay mechanism, one decay of barium-130 will occur per second for every 16,000 tons of natural barium, or 27,000 tons of baryte (barium sulfate).
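The "one decay per second per 90 tons of calcium" figure can be reproduced from the standard decay relation A = (ln 2 / half-life) · N. A minimal Python sketch, assuming a double-beta half-life for calcium-48 of roughly 6 × 10^19 years (an assumed value, not stated in the text) and the 0.1874% natural abundance quoted above:

import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

half_life_yr = 6e19          # assumed Ca-48 double-beta half-life in years (not from the text)
abundance = 0.001874         # fraction of natural calcium that is Ca-48 (quoted above)
molar_mass_ca = 40.08        # g/mol for natural calcium

def decays_per_second(mass_tons: float) -> float:
    # Number of Ca-48 atoms in the sample times the decay constant gives the activity.
    atoms_ca48 = mass_tons * 1e6 / molar_mass_ca * AVOGADRO * abundance
    decay_constant = math.log(2) / (half_life_yr * SECONDS_PER_YEAR)
    return atoms_ca48 * decay_constant

print(decays_per_second(90))   # ≈ 1 decay per second, consistent with the figure above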
The longest lived isotope of radium is radium-226 with a half-life of 1600 years; it along with radium-223, -224, and -228 occur naturally in the decay chains of primordial thorium and uranium. Beryllium-8 is notable by its absence as it splits in half virtually instantaneously into two alpha particles whenever it is formed. The triple alpha process in stars can only occur at energies high enough for beryllium-8 to fuse with a third alpha particle before it can decay, forming carbon-12. This thermonuclear rate-limiting bottleneck is the reason most main sequence stars spend billions of years fusing hydrogen within their cores, and only rarely manage to fuse carbon before collapsing into a stellar remnant, and even then merely for a timescale of ~1000 years. The radioisotopes of alkaline earth metals tend to be "bone seekers" as they behave chemically similar to calcium, an integral component of hydroxyapatite in compact bone, and gradually accumulate in the human skeleton. The incorporated radionuclides inflict significant damage to the bone marrow over time through the emission of ionizing radiation, primarily alpha particles. This property is made use of in a positive manner in the radiotherapy of certain bone cancers, since the radionuclides' chemical properties causes them to preferentially target cancerous growths in bone matter, leaving the rest of the body relatively unharmed.
Compared to their neighbors in the periodic table, alkaline earth metals tend to have a larger number of stable isotopes as they all possess an even number of protons, owing to their status as group 2 elements. Their isotopes are generally more stable due to nucleon pairing. This stability is further enhanced if the isotope also has an even number of neutrons, as both kinds of nucleons can then participate in pairing and contribute to nuclei stability.
History
Etymology
The alkaline earth metals are named after their oxides, the alkaline earths, whose old-fashioned names were beryllia, magnesia, lime, strontia, and baria. These oxides are basic (alkaline) when combined with water. "Earth" was a term applied by early chemists to nonmetallic substances that are insoluble in water and resistant to heating—properties shared by these oxides. The realization that these earths were not elements but compounds is attributed to the chemist Antoine Lavoisier. In his Traité Élémentaire de Chimie (Elements of Chemistry) of 1789 he called them salt-forming earth elements. Later, he suggested that the alkaline earths might be metal oxides, but admitted that this was mere conjecture. In 1808, acting on Lavoisier's idea, Humphry Davy became the first to obtain samples of the metals by electrolysis of their molten earths, thus supporting Lavoisier's hypothesis and causing the group to be named the alkaline earth metals.
Discovery
The calcium compounds calcite and lime have been known and used since prehistoric times. The same is true for the beryllium compounds beryl and emerald. The other compounds of the alkaline earth metals were discovered starting in the early 15th century. The magnesium compound magnesium sulfate was first discovered in 1618 by a farmer at Epsom in England. Strontium carbonate was discovered in minerals in the Scottish village of Strontian in 1790. The last element is the least abundant: radioactive radium, which was extracted from uraninite in 1898.
All elements except beryllium were isolated by electrolysis of molten compounds. Magnesium, calcium, and strontium were first produced by Humphry Davy in 1808, whereas beryllium was independently isolated by Friedrich Wöhler and Antoine Bussy in 1828 by reacting beryllium compounds with potassium. In 1910, radium was isolated as a pure metal by Curie and André-Louis Debierne also by electrolysis.
Beryllium
Beryl, a mineral that contains beryllium, has been known since the time of the Ptolemaic Kingdom in Egypt. Although it was originally thought that beryl was an aluminum silicate, beryl was later found to contain a then-unknown element when, in 1797, Louis-Nicolas Vauquelin dissolved aluminum hydroxide from beryl in an alkali. In 1828, Friedrich Wöhler and Antoine Bussy independently isolated this new element, beryllium, by the same method, which involved a reaction of beryllium chloride with metallic potassium; this reaction was not able to produce large ingots of beryllium. It was not until 1898, when Paul Lebeau performed an electrolysis of a mixture of beryllium fluoride and sodium fluoride, that large pure samples of beryllium were produced.
Magnesium
Magnesium was first produced by Humphry Davy in England in 1808 using electrolysis of a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was magnium, but the name magnesium is now used.
Calcium
Lime has been used as a material for building since 7000 to 14,000 BCE, and kilns used for lime have been dated to 2,500 BCE in Khafaja, Mesopotamia. Calcium as a material has been known since at least the first century, as the ancient Romans were known to have used calcium oxide by preparing it from lime. Calcium sulfate has been known to be able to set broken bones since the tenth century. Calcium itself, however, was not isolated until 1808, when Humphry Davy, in England, used electrolysis on a mixture of lime and mercuric oxide, after hearing that Jöns Jakob Berzelius had prepared a calcium amalgam from the electrolysis of lime in mercury.
Strontium
In 1790, physician Adair Crawford discovered ores with distinctive properties, which were named strontites in 1793 by Thomas Charles Hope, a chemistry professor at the University of Glasgow, who confirmed Crawford's discovery. Strontium was eventually isolated in 1808 by Humphry Davy by electrolysis of a mixture of strontium chloride and mercuric oxide. The discovery was announced by Davy on 30 June 1808 at a lecture to the Royal Society.
Barium
Barite, a mineral containing barium, was first recognized as containing a new element in 1774 by Carl Scheele, although he was able to isolate only barium oxide. Barium oxide was isolated again two years later by Johan Gottlieb Gahn. Later in the 18th century, William Withering noticed a heavy mineral in the Cumberland lead mines, which are now known to contain barium. Barium itself was finally isolated in 1808 when Humphry Davy used electrolysis with molten salts, and Davy named the element barium, after baryta. Later, Robert Bunsen and Augustus Matthiessen isolated pure barium by electrolysis of a mixture of barium chloride and ammonium chloride.
Radium
While studying uraninite, on 21 December 1898, Marie and Pierre Curie discovered that, even after uranium had decayed, the material created was still radioactive. The material behaved somewhat similarly to barium compounds, although some properties, such as the color of the flame test and spectral lines, were much different. They announced the discovery of a new element on 26 December 1898 to the French Academy of Sciences. Radium was named in 1899 from the word radius, meaning ray, as radium emitted power in the form of rays.
Occurrence
Beryllium occurs in the Earth's crust at a concentration of two to six parts per million (ppm), much of which is in soils, where it has a concentration of six ppm. Beryllium is one of the rarest elements in seawater, even rarer than elements such as scandium, with a concentration of 0.2 parts per trillion. However, in freshwater, beryllium is somewhat more common, with a concentration of 0.1 parts per billion.
Magnesium and calcium are very common in the Earth's crust, being respectively the fifth and eighth most abundant elements. None of the alkaline earth metals are found in their elemental state. Common magnesium-containing minerals are carnallite, magnesite, and dolomite. Common calcium-containing minerals are chalk, limestone, gypsum, and anhydrite.
Strontium is the 15th most abundant element in the Earth's crust. The principal minerals are celestite and strontianite. Barium is slightly less common, much of it in the mineral barite.
Radium, being a decay product of uranium, is found in all uranium-bearing ores. Due to its relatively short half-life, radium from the Earth's early history has decayed, and present-day samples have all come from the much slower decay of uranium.
Production
Most beryllium is extracted from beryllium hydroxide. One production method is sintering, done by mixing beryl, sodium fluorosilicate, and soda at high temperatures to form sodium fluoroberyllate, aluminum oxide, and silicon dioxide. A solution of sodium fluoroberyllate and sodium hydroxide in water is then used to form beryllium hydroxide by precipitation. Alternatively, in the melt method, powdered beryl is heated to high temperature, cooled with water, then heated again slightly in sulfuric acid, eventually yielding beryllium hydroxide. The beryllium hydroxide from either method then produces beryllium fluoride and beryllium chloride through a somewhat long process. Electrolysis or heating of these compounds can then produce beryllium.
In general, strontium carbonate is extracted from the mineral celestite through two methods: by leaching the celestite with sodium carbonate, or in a more complicated way involving coal.
To produce barium, barite (impure barium sulfate) is converted to barium sulfide by carbothermic reduction (such as with coke). The sulfide is water-soluble and easily reacted to form pure barium sulfate, used for commercial pigments, or other compounds, such as barium nitrate. These in turn are calcined into barium oxide, which eventually yields pure barium after reduction with aluminum. The most important supplier of barium is China, which produces more than 50% of world supply.
Applications
Beryllium is used mainly in military applications, but non-military uses exist. In electronics, beryllium is used as a p-type dopant in some semiconductors, and beryllium oxide is used as a high-strength electrical insulator and heat conductor. Beryllium alloys are used for mechanical parts when stiffness, light weight, and dimensional stability are required over a wide temperature range. Beryllium-9 is used in small-scale neutron sources that use the reaction , the reaction used by James Chadwick when he discovered the neutron. Its low atomic weight and low neutron absorption cross-section would make beryllium suitable as a neutron moderator, but its high price and the readily available alternatives such as water, heavy water and nuclear graphite have limited this to niche applications. In the FLiBe eutectic used in molten salt reactors, beryllium's role as a moderator is more incidental than the desired property leading to its use.
Magnesium has many uses. It offers advantages over other structural materials such as aluminum, but magnesium's usage is hindered by its flammability. Magnesium is often alloyed with aluminum, zinc and manganese to increase its strength and corrosion resistance. Magnesium has many other industrial applications, such as its role in the production of iron and steel, and in the Kroll process for production of titanium.
Calcium is used as a reducing agent in the separation of other metals such as uranium from ore. It is a major component of many alloys, especially aluminum and copper alloys, and is also used to deoxidize alloys. Calcium has roles in the making of cheese, mortars, and cement.
Strontium and barium have fewer applications than the lighter alkaline earth metals. Strontium carbonate is used in the manufacturing of red fireworks. Pure strontium is used in the study of neurotransmitter release in neurons. Radioactive strontium-90 finds some use in RTGs, which utilize its decay heat. Barium is used in vacuum tubes as a getter to remove gases. Barium sulfate has many uses in the petroleum industry, and other industries.
Radium has many former applications based on its radioactivity, but its use is no longer common because of the adverse health effects and long half-life. Radium was frequently used in luminous paints, although this use was stopped after it sickened workers. The nuclear quackery that alleged health benefits of radium formerly led to its addition to drinking water, toothpaste, and many other products. Radium is no longer used even when its radioactive properties are desired because its long half-life makes safe disposal challenging. For example, in brachytherapy, shorter-lived alternatives such as iridium-192 are usually used instead.
Representative reactions of alkaline earth metals
Reaction with halogens
Ca + Cl2 → CaCl2
Anhydrous calcium chloride is a hygroscopic substance that is used as a desiccant. Exposed to air, it will absorb water vapour from the air, forming a solution. This property is known as deliquescence.
Reaction with oxygen
Ca + 1/2O2 → CaO
Mg + 1/2O2 → MgO
Reaction with sulfur
Ca + 1/8S8 → CaS
Reaction with carbon
With carbon, they form acetylides directly. Beryllium forms a carbide.
2Be + C → Be2C
CaO + 3C → CaC2 + CO (at 2500 °C in furnace)
CaC2 + 2H2O → Ca(OH)2 + C2H2
Mg2C3 + 4H2O → 2Mg(OH)2 + C3H4
Reaction with nitrogen
Only Be and Mg form nitrides directly.
3Be + N2 → Be3N2
3Mg + N2 → Mg3N2
Reaction with hydrogen
Alkaline earth metals react with hydrogen to generate saline hydrides that are unstable in water.
Ca + H2 → CaH2
Reaction with water
Ca, Sr, and Ba readily react with water to form hydroxide and hydrogen gas. Be and Mg are passivated by an impervious layer of oxide. However, amalgamated magnesium will react with water vapor.
Mg + H2O → MgO + H2
Reaction with acidic oxides
Alkaline earth metals reduce the nonmetal from its oxide.
2Mg + SiO2 → 2MgO + Si
2Mg + CO2 → 2MgO + C (in solid carbon dioxide)
Reaction with acids
Mg + 2HCl → MgCl2 + H2
Be + 2HCl → BeCl2 + H2
Reaction with bases
Be exhibits amphoteric properties. It dissolves in concentrated sodium hydroxide.
Be + NaOH + 2H2O → Na[Be(OH)3] + H2
Reaction with alkyl halides
Magnesium reacts with alkyl halides via an insertion reaction to generate Grignard reagents.
RX + Mg → RMgX (in anhydrous ether)
Identification of alkaline earth cations
The flame test
The table below presents the colors observed when the flame of a Bunsen burner is exposed to salts of alkaline earth metals. Be and Mg do not impart colour to the flame due to their small size.
In solution
Mg2+
Disodium phosphate is a very selective reagent for magnesium ions and, in the presence of ammonium salts and ammonia, forms a white precipitate of ammonium magnesium phosphate.
Mg2+ + NH3 + Na2HPO4 → (NH4)MgPO4 + 2Na+
Ca2+
Ca2+ forms a white precipitate with ammonium oxalate. Calcium oxalate is insoluble in water, but is soluble in mineral acids.
Ca2+ + (COO)2(NH4)2 → (COO)2Ca + 2NH4+
Sr2+
Strontium ions precipitate with soluble sulfate salts.
Sr2+ + Na2SO4 → SrSO4 + 2Na+
All ions of alkaline earth metals form a white precipitate with ammonium carbonate in the presence of ammonium chloride and ammonia.
Compounds of alkaline earth metals
Oxides
The alkaline earth metal oxides are formed from the thermal decomposition of the corresponding carbonates.
CaCO3 → CaO + CO2 (at approx. 900°C)
In the laboratory, they are obtained from hydroxides:
Mg(OH)2 → MgO + H2O
or nitrates:
Ca(NO3)2 → CaO + 2NO2 + 1/2O2
The oxides exhibit basic character: they turn phenolphthalein red and litmus, blue. They react with water to form hydroxides in an exothermic reaction.
CaO + H2O → Ca(OH)2 + Q
Calcium oxide reacts with carbon to form acetylide.
CaO + 3C → CaC2 + CO (at 2500°C)
CaC2 + N2 → CaCN2 + C
CaCN2 + H2SO4 → CaSO4 + H2N—CN
H2N—CN + H2O → (H2N)2CO (urea)
CaCN2 + 2H2O → CaCO3 + NH3
Hydroxides
They are generated from the corresponding oxides on reaction with water. They exhibit basic character: they turn phenolphthalein pink and litmus, blue. Beryllium hydroxide is an exception as it exhibits amphoteric character.
Be(OH)2 + 2HCl → BeCl2 + 2 H2O
Be(OH)2 + NaOH → Na[Be(OH)3]
Salts
Ca and Mg are found in nature in many compounds such as dolomite, aragonite, and magnesite (carbonate rocks). Calcium and magnesium ions are also found in hard water. Hard water causes a range of problems, so there is great interest in removing these ions and thereby softening the water. This can be done using reagents such as calcium hydroxide, sodium carbonate or sodium phosphate. A more common method is to use ion-exchange aluminosilicates or ion-exchange resins that trap Ca2+ and Mg2+ and liberate Na+ instead:
Na2O·Al2O3·6SiO2 + Ca2+ → CaO·Al2O3·6SiO2 + 2Na+
Biological role and precautions
Magnesium and calcium are ubiquitous and essential to all known living organisms. They are involved in more than one role, with, for example, magnesium or calcium ion pumps playing a role in some cellular processes, magnesium functioning as the active center in some enzymes, and calcium salts taking a structural role, most notably in bones.
Strontium plays an important role in marine aquatic life, especially hard corals, which use strontium to build their exoskeletons. It and barium have some uses in medicine, for example "barium meals" in radiographic imaging, whilst strontium compounds are employed in some toothpastes. Excessive amounts of strontium-90 are toxic due to its radioactivity; strontium-90 mimics calcium (i.e. behaves as a "bone seeker") and bioaccumulates with a significant biological half-life. While the bones themselves have higher radiation tolerance than other tissues, the rapidly dividing bone marrow does not and can thus be significantly harmed by Sr-90. The effect of ionizing radiation on bone marrow is also the reason why acute radiation syndrome can have anemia-like symptoms and why donation of red blood cells can increase survivability.
Beryllium and radium, however, are toxic. Beryllium's low aqueous solubility means it is rarely available to biological systems; it has no known role in living organisms and, when encountered by them, is usually highly toxic. Radium has a low availability and is highly radioactive, making it toxic to life.
Extensions
The next alkaline earth metal after radium is thought to be element 120, although this may not be true due to relativistic effects. The synthesis of element 120 was first attempted in March 2007, when a team at the Flerov Laboratory of Nuclear Reactions in Dubna bombarded plutonium-244 with iron-58 ions; however, no atoms were produced, leading to a limit of 400 fb for the cross-section at the energy studied. In April 2007, a team at the GSI attempted to create element 120 by bombarding uranium-238 with nickel-64, although no atoms were detected, leading to a limit of 1.6 pb for the reaction. Synthesis was again attempted at higher sensitivities, although no atoms were detected. Other reactions have been tried, although all have been met with failure.
The chemistry of element 120 is predicted to be closer to that of calcium or strontium instead of barium or radium. This noticeably contrasts with periodic trends, which would predict element 120 to be more reactive than barium and radium. This lowered reactivity is due to the expected energies of element 120's valence electrons, increasing element 120's ionization energy and decreasing the metallic and ionic radii.
The next alkaline earth metal after element 120 has not been definitely predicted. Although a simple extrapolation using the Aufbau principle would suggest that element 170 is a congener of 120, relativistic effects may render such an extrapolation invalid. The next element with properties similar to the alkaline earth metals has been predicted to be element 166, though due to overlapping orbitals and lower energy gap below the 9s subshell, element 166 may instead be placed in group 12, below copernicium.
| Physical sciences | Chemical element groups | null |
37421 | https://en.wikipedia.org/wiki/Bowline | Bowline | The bowline ( or ) is an ancient and simple knot used to form a fixed loop at the end of a rope. It has the virtues of being both easy to tie and untie; most notably, it is easy to untie after being subjected to a load. The bowline is sometimes referred to as king of the knots because of its importance. Along with the sheet bend and the clove hitch, the bowline is often considered one of the most essential knots.
The common bowline shares some structural similarity with the sheet bend. Virtually all end-to-end joining knots (i.e., bends) have a corresponding loop knot.
Although the bowline is generally considered a reliable knot, its main deficiencies are a tendency to work loose when not under load (or under cyclic loading), to slip when pulled sideways, and for the bight portion of the knot to capsize in certain circumstances. To address these shortcomings, a number of more secure variations of the bowline have been developed for use in safety-critical applications; alternatively, the knot can be secured with an overhand knot backup.
History
The bowline's name has an earlier meaning, dating to the age of sail. On a square-rigged ship, a bowline (sometimes spelled as two words, bow line) is a rope that holds the edge of a square sail towards the bow of the ship and into the wind, preventing it from being taken aback. A ship is said to be on a "taut bowline" when these lines are made as taut as possible in order to sail close-hauled to the wind.
The bowline knot is thought to have been first mentioned in John Smith's 1627 work A Sea Grammar under the name Boling knot. Smith considered the knot to be strong and secure, saying, "The Boling knot is also so firmly made and fastened by the bridles into the cringles of the sails, they will break, or the sail split before it will slip."
A possible earlier instance of the knot was found on the rigging of the Ancient Egyptian Pharaoh Khufu's solar ship during an excavation in 1954.
Usage
The bowline is used to make a loop at one end of a line. It is tied with the rope's working end also known as the "tail" or "end". The loop may pass around or through an object during the making of the knot. The knot tightens when loaded at (pulled by) the standing part of the line.
The bowline is commonly used in sailing small craft, for example to fasten a halyard to the head of a sail or to tie a jib sheet to a clew of a jib. The bowline is well known as a rescue knot for such purposes as rescuing people who might have fallen down a hole, or off a cliff onto a ledge. This knot is particularly useful in such a situation because it is possible to tie with one hand. As such, a person needing rescue could hold onto the rope with one hand and use the other to tie the knot around their waist before being pulled to safety by rescuers. The Federal Aviation Administration recommends the bowline knot for tying down light aircraft.
A rope with a bowline retains approximately 2/3 of its strength; the exact figure varies with the nature of the rope, since in practice the strength depends on a variety of factors.
In the United Kingdom, the knot is listed as part of the training objectives for the Qualified Firefighter Assessment.
Tying
A mnemonic used to teach the tying of the bowline is to imagine the working end of the rope as a rabbit.
1,2 – a loop is made into the standing part which will act as the rabbit's hole
3 – the "rabbit" comes up the hole,
4 – goes round the tree (standing part) right to left
5 – and back down the hole
This can be taught to children with the rhyme: "Up through the rabbit hole, round the big tree; down through the rabbit hole and off goes he."
A single-handed method of tying can also be used.
Beginners sometimes tie the bowline incorrectly. The faulty knot stems from an incorrect first step when forming the rabbit hole: if the loop is made backwards, so that the working end of the rope is on the bottom, the resulting knot is the Eskimo bowline, which looks like a sideways bowline and is also a stable knot.
Security
As noted above, the simplicity of the bowline makes it a good knot for a general purpose end-of-line loop. However, in situations that require additional security, several variants have been developed:
Round turn bowline
The round turn bowline is made by the addition of an extra turn in the formation of the "rabbit hole" before the working end is threaded through.
Water bowline
Similar to the double bowline, the water bowline is made by forming a clove hitch before the working end is threaded through. It is said to be stronger and also more resistant to jamming than the other variations, especially when wet.
Yosemite bowline
In this variation the knot's working end is taken round the loop in the direction of the original round turn, then threaded back up through the original round turn before the knot is drawn tight. The Yosemite bowline is often used in climbing.
Other variants
The cowboy bowline (also called Dutch bowline), French bowline, and Portuguese bowline are variations of the bowline, each of which makes one loop. (Names of knots are mostly traditional and may not reflect their origins.) A running bowline can be used to make a noose which draws tighter as tension is placed on the standing part of the rope. The Birmingham bowline has two loops; the working part is passed twice around the standing part (the "rabbit" makes two trips out of the hole and around the tree). Other two-loop bowline knots include the Spanish bowline and the bowline on the bight; these can be tied in the middle of a rope without access to the ends. A triple bowline is used to make three loops. A Cossack knot is a bowline where the running end goes around the loop-start rather than the main part and has a more symmetric triangular shaped knot. A slipped version of the Cossack knot is called Kalmyk loop.
| Technology | Flexible components | null |
37427 | https://en.wikipedia.org/wiki/Le%20Chatelier%27s%20principle | Le Chatelier's principle | In chemistry, Le Chatelier's principle (pronounced or ) is a principle used to predict the effect of a change in conditions on chemical equilibrium. Other names include Chatelier's principle, Braun–Le Chatelier principle, Le Chatelier–Braun principle or the equilibrium law.
The principle is named after the French chemist Henry Louis Le Chatelier, who enunciated it in 1884 by extending the reasoning from the Van 't Hoff relation, which describes how temperature variations change the equilibrium, to variations of pressure and of what is now called chemical potential; it is sometimes also credited to Karl Ferdinand Braun, who discovered it independently in 1887. It can be stated as follows: when a system at equilibrium is subjected to a change in concentration, temperature, volume, or pressure, the system shifts to a new equilibrium in a way that partly counteracts the applied change.
In scenarios outside thermodynamic equilibrium, there can arise phenomena in contradiction to an over-general statement of Le Chatelier's principle.
Le Chatelier's principle is sometimes alluded to in discussions of topics other than thermodynamics.
Thermodynamic statement
The Le Chatelier–Braun principle analyzes the qualitative behaviour of a thermodynamic system when a particular one of its externally controlled state variables, the 'driving' variable, changes by some amount, the 'driving change', causing a change, the 'response of prime interest', in its conjugate state variable, all other externally controlled state variables remaining constant. The response illustrates 'moderation' in ways evident in two related thermodynamic equilibria. Obviously, one of these two conjugate variables has to be intensive, the other extensive. Also, as a necessary part of the scenario, there is some particular auxiliary 'moderating' state variable, with its own conjugate state variable. For this to be of interest, the 'moderating' variable must undergo a change in some part of the experimental protocol; this can be either by imposition of a change, or by its being held constant. For the principle to hold with full generality, the 'moderating' variable must correspondingly be extensive or intensive. Obviously, to give this scenario physical meaning, the 'driving' variable and the 'moderating' variable must be subject to separate independent experimental controls and measurements.
Explicit statement
The principle can be stated in two ways, formally different, but substantially equivalent, and, in a sense, mutually 'reciprocal'. The two ways illustrate the Maxwell relations, and the stability of thermodynamic equilibrium according to the second law of thermodynamics, evident as the spread of energy amongst the state variables of the system in response to an imposed change.
The two ways of statement differ in their experimental protocols. They share an index protocol, which may be described as 'changed driver, moderation permitted'. Along with the driver change, it holds constant the variable conjugate to the 'moderating' variable, and it allows the uncontrolled response of the 'moderating' variable along with the 'index' response of interest in the variable conjugate to the driver.
The two ways of statement differ in their respective compared protocols. One form of compared protocol posits 'changed driver, no moderation'. The other form of compared protocol posits 'fixed driver, imposed moderation'.
Forced 'driver' change, free or fixed 'moderation'
This way compares the index protocol with the 'changed driver, no moderation' protocol, so as to compare the effects of the imposed driving change with and without moderation. The compared protocol prevents 'moderation' by enforcing that the 'moderating' variable stays fixed, through an adjustment of its conjugate, and it observes the resulting 'no-moderation' response of the variable of prime interest. Provided that the observed response is indeed the unmoderated one, the principle states that the moderated response is smaller in magnitude than the unmoderated response.
In other words, change in the 'moderating' state variable moderates the effect of the driving change in on the responding conjugate variable
Forcedly changed or fixed 'driver', respectively free or forced 'moderation'
This way also uses two experimental protocols, the index protocol and the 'fixed driver, imposed moderation' protocol, to compare the index effect with the effect of 'moderation' alone. The index protocol is executed first; the response of prime interest is observed, and the response of the 'moderating' variable is also measured. With that knowledge, the 'fixed driver, imposed moderation' protocol then holds the driving variable fixed; through an adjustment, it imposes on the 'moderating' variable the change learnt from the just previous measurement, and it measures the resulting change in the variable of prime interest. Provided that the 'moderated' response is indeed the imposed one, the principle states that the signs of the two responses of the variable of prime interest are opposite.
Again, in other words, change in the 'moderating' state variable opposes the effect of the driving change in on the responding conjugate variable
Other statements
The duration of adjustment depends on the strength of the negative feedback to the initial shock. The principle is typically used to describe closed negative-feedback systems, but applies, in general, to thermodynamically closed and isolated systems in nature, since the second law of thermodynamics ensures that the disequilibrium caused by an instantaneous shock is eventually followed by a new equilibrium.
While well rooted in chemical equilibrium, Le Chatelier's principle can also be used in describing mechanical systems in that a system put under stress will respond in such a way as to reduce or minimize that stress. Moreover, the response will generally be via the mechanism that most easily relieves that stress. Shear pins and other such sacrificial devices are design elements that protect systems against stress applied in undesired manners to relieve it so as to prevent more extensive damage to the entire system, a practical engineering application of Le Chatelier's principle.
Chemistry
Effect of change in concentration
Changing the concentration of a chemical will shift the equilibrium to the side that counters that change in concentration. The chemical system will attempt to partly oppose the change imposed on the original state of equilibrium. In turn, the rate of reaction, extent of reaction, and yield of products will be altered in accordance with the impact on the system.
This can be illustrated by the equilibrium of carbon monoxide and hydrogen gas, reacting to form methanol.
CO + 2 H2 ⇌ CH3OH
Suppose we were to increase the concentration of CO in the system. Using Le Chatelier's principle, we can predict that the concentration of methanol will increase, partially offsetting the increase in CO. If we add a species to the overall reaction, the reaction will favor the side opposing the addition of that species. Likewise, the subtraction of a species would cause the reaction to "fill the gap" and favor the side where the species was reduced. This observation is supported by collision theory: as the concentration of CO is increased, the frequency of successful collisions of that reactant would also increase, allowing an increase in the rate of the forward reaction and in the generation of the product. Even if the desired product is not thermodynamically favored, the end-product can be obtained if it is continuously removed from the solution.
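This shift can also be seen from the reaction quotient. For the methanol equilibrium above, the equilibrium constant in terms of concentrations is

K_c = \frac{[\mathrm{CH_3OH}]}{[\mathrm{CO}]\,[\mathrm{H_2}]^2}.

Adding CO enlarges the denominator of the corresponding reaction quotient Q, so that momentarily Q < K_c; the forward reaction then proceeds, consuming CO and H2 and forming CH3OH, until Q again equals K_c.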
The effect of a change in concentration is often exploited synthetically for condensation reactions (i.e., reactions that extrude water) that are equilibrium processes (e.g., formation of an ester from carboxylic acid and alcohol or an imine from an amine and aldehyde). This can be achieved by physically sequestering water, by adding desiccants like anhydrous magnesium sulfate or molecular sieves, or by continuous removal of water by distillation, often facilitated by a Dean-Stark apparatus.
Effect of change in temperature
The effect of changing the temperature in the equilibrium can be made clear by 1) incorporating heat as either a reactant or a product, and 2) assuming that an increase in temperature increases the heat content of a system. When the reaction is exothermic (ΔH is negative and energy is released), heat is included as a product, and when the reaction is endothermic (ΔH is positive and energy is consumed), heat is included as a reactant. Hence, whether increasing or decreasing the temperature would favor the forward or the reverse reaction can be determined by applying the same principle as with concentration changes.
Take, for example, the reversible reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Because this reaction is exothermic, it produces heat:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) + heat
If the temperature were increased, the heat content of the system would increase, so the system would consume some of that heat by shifting the equilibrium to the left, thereby producing less ammonia. More ammonia would be produced if the reaction were run at a lower temperature, but a lower temperature also lowers the rate of the process, so, in practice (the Haber process) the temperature is set at a compromise value that allows ammonia to be made at a reasonable rate with an equilibrium concentration that is not too unfavorable.
In exothermic reactions, an increase in temperature decreases the equilibrium constant, K, whereas in endothermic reactions, an increase in temperature increases K.
Le Chatelier's principle applied to changes in concentration or pressure can be understood by giving K a constant value. The effect of temperature on equilibria, however, involves a change in the equilibrium constant. The dependence of K on temperature is determined by the sign of ΔH. The theoretical basis of this dependence is given by the Van 't Hoff equation.
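In one common form, treating ΔH° as approximately constant over the temperature range considered, the Van 't Hoff equation reads

\frac{d\ln K}{dT} = \frac{\Delta H^\circ}{R T^2},
\qquad
\ln\frac{K_2}{K_1} = -\frac{\Delta H^\circ}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right),

so that K decreases with increasing temperature when ΔH° is negative (exothermic) and increases when ΔH° is positive (endothermic), consistent with the statements above.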
Effect of change in pressure
The equilibrium concentrations of the products and reactants do not directly depend on the total pressure of the system. They may depend on the partial pressure of the products and reactants, but if the number of moles of gaseous reactants is equal to the number of moles of gaseous products, pressure has no effect on equilibrium.
Changing total pressure by adding an inert gas at constant volume does not affect the equilibrium concentrations (see ).
Changing total pressure by changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations (see below).
Effect of change in volume
Changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations. With a pressure increase due to a decrease in volume, the side of the equilibrium with fewer moles is more favorable and with a pressure decrease due to an increase in volume, the side with more moles is more favorable. There is no effect on a reaction where the number of moles of gas is the same on each side of the chemical equation.
Considering the reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Note the number of moles of gas on the left-hand side (four) and the number of moles of gas on the right-hand side (two). When the volume of the system is changed, the partial pressures of the gases change. If we were to decrease pressure by increasing volume, the equilibrium of the above reaction would shift to the left, because the reactant side has a greater number of moles than does the product side. The system tries to counteract the decrease in partial pressure of gas molecules by shifting to the side that exerts greater pressure. Similarly, if we were to increase pressure by decreasing volume, the equilibrium shifts to the right, counteracting the pressure increase by shifting to the side with fewer moles of gas, which exerts less pressure. If the volume is increased, then because there are more moles of gas on the reactant side, this change is more significant in the denominator of the equilibrium constant expression than in the numerator, causing a shift in equilibrium.
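The volume dependence can be sketched algebraically. Writing the equilibrium constant of the ammonia synthesis in terms of partial pressures, and each partial pressure as a mole fraction times the total pressure P, gives

K_p = \frac{p_{\mathrm{NH_3}}^2}{p_{\mathrm{N_2}}\, p_{\mathrm{H_2}}^3}
    = \frac{x_{\mathrm{NH_3}}^2}{x_{\mathrm{N_2}}\, x_{\mathrm{H_2}}^3}\, P^{-2}.

Since K_p is fixed at a given temperature, increasing P forces the mole-fraction ratio to increase, i.e. the equilibrium composition shifts toward ammonia, the side with fewer moles of gas.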
Effect of adding an inert gas
An inert gas (or noble gas), such as helium, is one that does not react with other elements or compounds. Adding an inert gas into a gas-phase equilibrium at constant volume does not result in a shift. This is because the addition of a non-reactive gas does not change the equilibrium equation, as the inert gas appears on both sides of the chemical reaction equation. For example, if A and B react to form C and D, but X does not participate in the reaction: a A + b B + x X ⇌ c C + d D + x X. While it is true that the total pressure of the system increases, the total pressure does not have any effect on the equilibrium constant; rather, it is a change in partial pressures that will cause a shift in the equilibrium. If, however, the volume is allowed to increase in the process, the partial pressures of all gases would be decreased resulting in a shift towards the side with the greater number of moles of gas. The shift will never occur on the side with fewer moles of gas. It is also known as Le Chatelier's postulate.
Effect of a catalyst
A catalyst increases the rate of a reaction without being consumed in the reaction. The use of a catalyst does not affect the position and composition of the equilibrium of a reaction, because both the forward and backward reactions are sped up by the same factor.
For example, consider the Haber process for the synthesis of ammonia (NH3):
N2 + 3 H2 ⇌ 2 NH3
In the above reaction, iron (Fe) and molybdenum (Mo) will function as catalysts if present. They will accelerate any reactions, but they do not affect the state of the equilibrium.
General statements
Thermodynamic equilibrium processes
Le Chatelier's principle refers to states of thermodynamic equilibrium. The latter are stable against perturbations that satisfy certain criteria; this is essential to the definition of thermodynamic equilibrium. Stated another way, changes in the temperature, pressure, volume, or concentration of a system will result in predictable and opposing changes in the system in order to achieve a new equilibrium state.
For this, a state of thermodynamic equilibrium is most conveniently described through a fundamental relation that specifies a cardinal function of state, of the energy kind, or of the entropy kind, as a function of state variables chosen to fit the thermodynamic operations through which a perturbation is to be applied.
In theory and, nearly, in some practical scenarios, a body can be in a stationary state with zero macroscopic flows and rates of chemical reaction (for example, when no suitable catalyst is present), yet not in thermodynamic equilibrium, because it is metastable or unstable; then Le Chatelier's principle does not necessarily apply.
Non-equilibrium processes
A simple body or a complex thermodynamic system can also be in a stationary state with non-zero rates of flow and chemical reaction; sometimes the word "equilibrium" is used in reference to such a state, though by definition it is not a thermodynamic equilibrium state. Sometimes, it is proposed to consider Le Chatelier's principle for such states. For this exercise, rates of flow and of chemical reaction must be considered. Such rates are not supplied by equilibrium thermodynamics. For such states, there are no simple statements that echo Le Chatelier's principle. Prigogine and Defay demonstrate that such a scenario may exhibit moderation, or may exhibit a measured amount of anti-moderation, though not a run-away anti-moderation that goes to completion. The example analysed by Prigogine and Defay is the Haber process.
This situation is clarified by considering two basic methods of analysis of a process. One is the classical approach of Gibbs, the other uses the near- or local equilibrium approach of De Donder. The Gibbs approach requires thermodynamic equilibrium. The Gibbs approach is reliable within its proper scope, thermodynamic equilibrium, though of course it does not cover non-equilibrium scenarios. The De Donder approach can cover equilibrium scenarios, but also covers non-equilibrium scenarios in which there is only local thermodynamic equilibrium, and not thermodynamic equilibrium proper. The De Donder approach allows state variables called extents of reaction to be independent variables, though in the Gibbs approach, such variables are not independent. Thermodynamic non-equilibrium scenarios can contradict an over-general statement of Le Chatelier's Principle.
Related system concepts
It is common to treat the principle as a more general observation about systems: roughly stated, a disturbance applied to a settled system provokes a response that tends to oppose or diminish the disturbance.
The concept of systemic maintenance of a stable steady state despite perturbations has a variety of names, and has been studied in a variety of contexts, chiefly in the natural sciences. In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase their yield. In pharmacology, the binding of ligands to receptors may shift the equilibrium according to Le Chatelier's principle, thereby explaining the diverse phenomena of receptor activation and desensitization. In biology, the concept of homeostasis is different from Le Chatelier's principle, in that homeostasis is generally maintained by processes of active character, as distinct from the passive or dissipative character of the processes described by Le Chatelier's principle in thermodynamics. In economics, even further from thermodynamics, allusion to the principle is sometimes regarded as helping explain the price equilibrium of efficient economic systems. In some dynamic systems, the end-state cannot be determined from the shock or perturbation.
Economics
In economics, a similar concept also named after Le Chatelier was introduced by American economist Paul Samuelson in 1947. There the generalized Le Chatelier principle is for a maximum condition of economic equilibrium: Where all unknowns of a function are independently variable, auxiliary constraints—"just-binding" in leaving initial equilibrium unchanged—reduce the response to a parameter change. Thus, factor-demand and commodity-supply elasticities are hypothesized to be lower in the short run than in the long run because of the fixed-cost constraint in the short run.
Since the change of the value of an objective function in a neighbourhood of the maximum position is described by the envelope theorem, Le Chatelier's principle can be shown to be a corollary thereof.
| Physical sciences | Reaction | Chemistry |
37431 | https://en.wikipedia.org/wiki/Solvent | Solvent | A solvent (from the Latin solvō, "loosen, untie, solve") is a substance that dissolves a solute, resulting in a solution. A solvent is usually a liquid but can also be a solid, a gas, or a supercritical fluid. Water is a solvent for polar molecules, and the most common solvent used by living things; all the ions and proteins in a cell are dissolved in water within the cell.
Major uses of solvents are in paints, paint removers, inks, and dry cleaning. Specific uses for organic solvents are in dry cleaning (e.g. tetrachloroethylene); as paint thinners (toluene, turpentine); as nail polish removers and solvents of glue (acetone, methyl acetate, ethyl acetate); in spot removers (hexane, petrol ether); in detergents (citrus terpenes); and in perfumes (ethanol). Solvents find various applications in chemical, pharmaceutical, oil, and gas industries, including in chemical syntheses and purification processes.
Solutions and solvation
When one substance is dissolved into another, a solution is formed. This is opposed to the situation when the compounds are insoluble like sand in water. In a solution, all of the ingredients are uniformly distributed at a molecular level and no residue remains. A solvent-solute mixture consists of a single phase with all solute molecules occurring as solvates (solvent-solute complexes), as opposed to separate continuous phases as in suspensions, emulsions and other types of non-solution mixtures. The ability of one compound to be dissolved in another is known as solubility; if this occurs in all proportions, it is called miscible.
In addition to mixing, the substances in a solution interact with each other at the molecular level. When something is dissolved, molecules of the solvent arrange around molecules of the solute. Heat transfer is involved and entropy is increased making the solution more thermodynamically stable than the solute and solvent separately. This arrangement is mediated by the respective chemical properties of the solvent and solute, such as hydrogen bonding, dipole moment and polarizability. Solvation does not cause a chemical reaction or chemical configuration changes in the solute. However, solvation resembles a coordination complex formation reaction, often with considerable energetics (heat of solvation and entropy of solvation) and is thus far from a neutral process.
When one substance dissolves into another, a solution is formed. A solution is a homogeneous mixture consisting of a solute dissolved into a solvent. The solute is the substance that is being dissolved, while the solvent is the dissolving medium. Solutions can be formed with many different types and forms of solutes and solvents.
Solvent classifications
Solvents can be broadly classified into two categories: polar and non-polar. A special case is elemental mercury, whose solutions are known as amalgams; also, other metal solutions exist which are liquid at room temperature.
Generally, the dielectric constant of the solvent provides a rough measure of a solvent's polarity. The strong polarity of water is indicated by its high dielectric constant of 88 (at 0 °C). Solvents with a dielectric constant of less than 15 are generally considered to be nonpolar.
The dielectric constant measures the solvent's tendency to partly cancel the field strength of the electric field of a charged particle immersed in it. This reduction is then compared to the field strength of the charged particle in a vacuum. Heuristically, the dielectric constant of a solvent can be thought of as its ability to reduce the solute's effective internal charge. Generally, the dielectric constant of a solvent is an acceptable predictor of the solvent's ability to dissolve common ionic compounds, such as salts.
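As a minimal illustration of this rule of thumb, the classification can be sketched in a few lines of Python; the dielectric-constant values used here are approximate room-temperature figures and, together with the function names, are given only for illustration.

# Rough polarity classification by relative static permittivity (dielectric constant).
# Values are approximate room-temperature figures, for illustration only.
approx_dielectric_constant = {
    "water": 80.1,
    "dimethyl sulfoxide": 46.7,
    "acetone": 20.7,
    "diethyl ether": 4.3,
    "toluene": 2.4,
    "hexane": 1.9,
}

def classify(epsilon, threshold=15.0):
    # Solvents below the threshold are treated as nonpolar (see text above).
    return "polar" if epsilon >= threshold else "nonpolar"

for name, eps in approx_dielectric_constant.items():
    print(f"{name}: epsilon ~ {eps} -> {classify(eps)}")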
Other polarity scales
Dielectric constants are not the only measure of polarity. Because solvents are used by chemists to carry out chemical reactions or observe chemical and biological phenomena, more specific measures of polarity are required. Most of these measures are sensitive to chemical structure.
The Grunwald–Winstein mY scale measures polarity in terms of solvent influence on buildup of positive charge of a solute during a chemical reaction.
Kosower's Z scale measures polarity in terms of the influence of the solvent on UV-absorption maxima of a salt, usually pyridinium iodide or the pyridinium zwitterion.
Donor number and donor acceptor scale measures polarity in terms of how a solvent interacts with specific substances, like a strong Lewis acid or a strong Lewis base.
The Hildebrand parameter is the square root of cohesive energy density. It can be used with nonpolar compounds, but cannot accommodate complex chemistry.
Reichardt's dye, a solvatochromic dye that changes color in response to polarity, gives a scale of ET(30) values. ET is the transition energy between the ground state and the lowest excited state in kcal/mol, and (30) identifies the dye. Another, roughly correlated scale (ET(33)) can be defined with Nile red.
Gregory's solvent ϸ parameter is a quantum chemically derived charge density parameter. This parameter seems to reproduce many of the experimental solvent parameters (especially the donor and acceptor numbers) using this charge decomposition analysis approach, with an electrostatic basis. The ϸ parameter was originally developed to quantify and explain the Hofmeister series by quantifying polyatomic ions and the monatomic ions in a united manner.
The polarity, dipole moment, polarizability and hydrogen bonding of a solvent determines what type of compounds it is able to dissolve and with what other solvents or liquid compounds it is miscible. Generally, polar solvents dissolve polar compounds best and non-polar solvents dissolve non-polar compounds best; hence "like dissolves like". Strongly polar compounds like sugars (e.g. sucrose) or ionic compounds, like inorganic salts (e.g. table salt) dissolve only in very polar solvents like water, while strongly non-polar compounds like oils or waxes dissolve only in very non-polar organic solvents like hexane. Similarly, water and hexane (or vinegar and vegetable oil) are not miscible with each other and will quickly separate into two layers even after being shaken well.
Polarity can be separated to different contributions. For example, the Kamlet-Taft parameters are dipolarity/polarizability (π*), hydrogen-bonding acidity (α) and hydrogen-bonding basicity (β). These can be calculated from the wavelength shifts of 3–6 different solvatochromic dyes in the solvent, usually including Reichardt's dye, nitroaniline and diethylnitroaniline. Another option, Hansen solubility parameters, separates the cohesive energy density into dispersion, polar, and hydrogen bonding contributions.
Polar protic and polar aprotic
Solvents with a dielectric constant (more accurately, relative static permittivity) greater than 15 (i.e. polar or polarizable) can be further divided into protic and aprotic. Protic solvents, such as water, solvate anions (negatively charged solutes) strongly via hydrogen bonding. Polar aprotic solvents, such as acetone or dichloromethane, tend to have large dipole moments (separation of partial positive and partial negative charges within the same molecule) and solvate positively charged species via their negative dipole. In chemical reactions the use of polar protic solvents favors the SN1 reaction mechanism, while polar aprotic solvents favor the SN2 reaction mechanism. These polar solvents are capable of forming hydrogen bonds with water to dissolve in water whereas non-polar solvents are not capable of strong hydrogen bonds.
Physical properties
Properties table of common solvents
The solvents are grouped into nonpolar, polar aprotic, and polar protic solvents, with each group ordered by increasing polarity. The properties of solvents which exceed those of water are bolded.
The ACS Green Chemistry Institute maintains a tool for the selection of solvents based on a principal component analysis of solvent properties.
Hansen solubility parameter values
The Hansen solubility parameter (HSP) values are based on dispersion bonds (δD), polar bonds (δP) and hydrogen bonds (δH). These contain information about the inter-molecular interactions with other solvents and also with polymers, pigments, nanoparticles, etc. This allows for rational formulations knowing, for example, that there is a good HSP match between a solvent and a polymer. Rational substitutions can also be made for "good" solvents (effective at dissolving the solute) that are "bad" (expensive or hazardous to health or the environment). The following table shows that the intuitions from "non-polar", "polar aprotic" and "polar protic" are put numerically – the "polar" molecules have higher levels of δP and the protic solvents have higher levels of δH. Because numerical values are used, comparisons can be made rationally by comparing numbers. For example, acetonitrile is much more polar than acetone but exhibits slightly less hydrogen bonding.
If, for environmental or other reasons, a solvent or solvent blend is required to replace another of equivalent solvency, the substitution can be made on the basis of the Hansen solubility parameters of each. The values for mixtures are taken as the weighted averages of the values for the neat solvents. This can be calculated by trial-and-error, a spreadsheet of values, or HSP software. A 1:1 mixture of toluene and 1,4 dioxane has δD, δP and δH values of 17.8, 1.6 and 5.5, comparable to those of chloroform at 17.8, 3.1 and 5.7 respectively. Because of the health hazards associated with toluene itself, other mixtures of solvents may be found using a full HSP dataset.
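The blend comparison described above can be sketched in a few lines of Python. The mixing rule is the weighted average mentioned in the text, the distance measure is the conventional Hansen distance Ra (with a factor of 4 on the dispersion term), and the numeric inputs are the blend and chloroform values quoted above; the function names are illustrative.

import math

def mix_hsp(hsp_a, hsp_b, fraction_a):
    # Weighted average of two solvents' (dD, dP, dH) parameters.
    # Pure-component values would be taken from published HSP tables.
    return tuple(fraction_a * a + (1.0 - fraction_a) * b for a, b in zip(hsp_a, hsp_b))

def hansen_distance(hsp_1, hsp_2):
    # Conventional Hansen distance Ra between two parameter sets.
    dD1, dP1, dH1 = hsp_1
    dD2, dP2, dH2 = hsp_2
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

# Values quoted in the text for a 1:1 toluene/1,4-dioxane blend and for chloroform.
blend = (17.8, 1.6, 5.5)
chloroform = (17.8, 3.1, 5.7)
print(round(hansen_distance(blend, chloroform), 2))  # small Ra, i.e. similar solvency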
Boiling point
The boiling point is an important property because it determines the speed of evaporation. Small amounts of low-boiling-point solvents like diethyl ether, dichloromethane, or acetone will evaporate in seconds at room temperature, while high-boiling-point solvents like water or dimethyl sulfoxide need higher temperatures, an air flow, or the application of vacuum for fast evaporation.
Low boilers: boiling point below 100 °C (boiling point of water)
Medium boilers: between 100 °C and 150 °C
High boilers: above 150 °C
Density
Most organic solvents have a lower density than water, which means they are lighter than water and will form a layer on top of it. Important exceptions are most of the halogenated solvents, like dichloromethane or chloroform, which will sink to the bottom of a container, leaving water as the top layer. This is crucial to remember when partitioning compounds between solvents and water in a separatory funnel during chemical syntheses.
Often, specific gravity is cited in place of density. Specific gravity is defined as the density of the solvent divided by the density of water at the same temperature. As such, specific gravity is a unitless value. It readily communicates whether a water-insoluble solvent will float (SG < 1.0) or sink (SG > 1.0) when mixed with water.
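As a worked example of the definition, using approximate handbook densities near room temperature:

\mathrm{SG} = \frac{\rho_{\text{solvent}}}{\rho_{\text{water}}}:\qquad
\text{chloroform: } \frac{1.49\ \mathrm{g/cm^3}}{1.00\ \mathrm{g/cm^3}} \approx 1.5\ (\text{sinks}),\qquad
\text{diethyl ether: } \frac{0.71\ \mathrm{g/cm^3}}{1.00\ \mathrm{g/cm^3}} \approx 0.7\ (\text{floats}).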
Multicomponent solvents
Multicomponent solvents appeared after World War II in the USSR, and continue to be used and produced in the post-Soviet states. These solvents may have one or more applications, but they are not universal preparations.
Solvents
Thinners
Safety
Fire
Most organic solvents are flammable or highly flammable, depending on their volatility. Exceptions are some chlorinated solvents like dichloromethane and chloroform. Mixtures of solvent vapors and air can explode. Solvent vapors are heavier than air; they will sink to the bottom and can travel large distances nearly undiluted. Solvent vapors can also be found in supposedly empty drums and cans, posing a flash fire hazard; hence empty containers of volatile solvents should be stored open and upside down.
Both diethyl ether and carbon disulfide have exceptionally low autoignition temperatures which increase greatly the fire risk associated with these solvents. The autoignition temperature of carbon disulfide is below 100 °C (212 °F), so objects such as steam pipes, light bulbs, hotplates, and recently extinguished bunsen burners are able to ignite its vapors.
In addition some solvents, such as methanol, can burn with a very hot flame which can be nearly invisible under some lighting conditions. This can delay or prevent the timely recognition of a dangerous fire, until flames spread to other materials.
Explosive peroxide formation
Ethers like diethyl ether and tetrahydrofuran (THF) can form highly explosive organic peroxides upon exposure to oxygen and light. THF is normally more likely to form such peroxides than diethyl ether. One of the most susceptible solvents is diisopropyl ether, but all ethers are considered to be potential peroxide sources.
The heteroatom (oxygen) stabilizes the formation of a free radical which is formed by the abstraction of a hydrogen atom by another free radical. The carbon-centered free radical thus formed is able to react with an oxygen molecule to form a peroxide compound. The process of peroxide formation is greatly accelerated by exposure to even low levels of light, but can proceed slowly even in dark conditions.
Unless a desiccant is used which can destroy the peroxides, they will concentrate during distillation, due to their higher boiling point. When sufficient peroxides have formed, they can form a crystalline, shock-sensitive solid precipitate at the mouth of a container or bottle. Minor mechanical disturbances, such as scraping the inside of a vessel, the dislodging of a deposit, or merely twisting the cap may provide sufficient energy for the peroxide to detonate or explode violently.
Peroxide formation is not a significant problem when fresh solvents are used up quickly; they are more of a problem in laboratories which may take years to finish a single bottle. Low-volume users should acquire only small amounts of peroxide-prone solvents, and dispose of old solvents on a regular periodic schedule.
To avoid explosive peroxide formation, ethers should be stored in an airtight container, away from light, because both light and air can encourage peroxide formation.
A number of tests can be used to detect the presence of a peroxide in an ether; one is to use a combination of iron(II) sulfate and potassium thiocyanate. The peroxide is able to oxidize the Fe2+ ion to an Fe3+ ion, which then forms a deep-red coordination complex with the thiocyanate.
Peroxides may be removed by washing with acidic iron(II) sulfate, filtering through alumina, or distilling from sodium/benzophenone. Alumina degrades the peroxides but some could remain intact in it, therefore it must be disposed of properly. The advantage of using sodium/benzophenone is that moisture and oxygen are removed as well.
Health effects
General health hazards associated with solvent exposure include toxicity to the nervous system, reproductive damage, liver and kidney damage, respiratory impairment, cancer, hearing loss, and dermatitis.
Acute exposure
Many solvents can lead to a sudden loss of consciousness if inhaled in large amounts. Solvents like diethyl ether and chloroform have been used in medicine as anesthetics, sedatives, and hypnotics for a long time. Many solvents (e.g. from gasoline or solvent-based glues) are abused recreationally in glue sniffing, often with harmful long-term health effects such as neurotoxicity or cancer. Fraudulent substitution of 1,5-pentanediol by the psychoactive 1,4-butanediol by a subcontractor caused the Bindeez product recall.
Ethanol (grain alcohol) is a widely used and abused psychoactive drug. If ingested, the so-called "toxic alcohols" (other than ethanol) such as methanol, 1-propanol, and ethylene glycol metabolize into toxic aldehydes and acids, which cause potentially fatal metabolic acidosis. The commonly available alcohol solvent methanol can cause permanent blindness or death if ingested. The solvent 2-butoxyethanol, used in fracking fluids, can cause hypotension and metabolic acidosis.
Chronic exposure
Chronic solvent exposures are often caused by the inhalation of solvent vapors, or the ingestion of diluted solvents, repeated over the course of an extended period.
Some solvents can damage internal organs like the liver, the kidneys, the nervous system, or the brain. The cumulative brain effects of long-term or repeated exposure to some solvents is called chronic solvent-induced encephalopathy (CSE).
Chronic exposure to organic solvents in the work environment can produce a range of adverse neuropsychiatric effects. For example, occupational exposure to organic solvents has been associated with higher numbers of painters suffering from alcoholism. Ethanol has a synergistic effect when taken in combination with many solvents; for instance, a combination of toluene/benzene and ethanol causes greater nausea/vomiting than either substance alone.
Some organic solvents are known or suspected to be cataractogenic. A mixture of aromatic hydrocarbons, aliphatic hydrocarbons, alcohols, esters, ketones, and terpenes were found to greatly increase the risk of developing cataracts in the lens of the eye.
Environmental contamination
A major pathway of induced health effects arises from spills or leaks of solvents, especially chlorinated solvents, that reach the underlying soil. Since solvents readily migrate substantial distances, the creation of widespread soil contamination is not uncommon; this is particularly a health risk if aquifers are affected. Vapor intrusion can occur from sites with extensive subsurface solvent contamination.
| Physical sciences | Mixture | Chemistry |
37438 | https://en.wikipedia.org/wiki/Complex%20system | Complex system | A complex system is a system composed of many components which may interact with each other. Examples of complex systems are Earth's global climate, organisms, the human brain, infrastructure such as power grid, transportation or communication systems, complex software and electronic systems, social and economic organizations (like cities), an ecosystem, a living cell, and, ultimately, for some authors, the entire universe.
Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links to their interactions.
The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment. The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.
As an interdisciplinary domain, complex systems draw contributions from many different fields, such as the study of self-organization and critical phenomena from physics, of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.
Key concepts
Adaptation
Complex adaptive systems are special cases of complex systems that are adaptive in that they have the capacity to change and learn from experience. Examples of complex adaptive systems include the stock market, social insect and ant colonies, the biosphere and the ecosystem, the brain and the immune system, the cell and the developing embryo, cities, manufacturing businesses and any human social group-based endeavor in a cultural and social system such as political parties or communities.
Features
Complex systems may have the following features:
Complex systems may be open
Complex systems are usually open systems – that is, they exist in a thermodynamic gradient and dissipate energy. In other words, complex systems are frequently far from energetic equilibrium: but despite this flux, there may be pattern stability, see synergetics.
Complex systems may exhibit critical transitions
Critical transitions are abrupt shifts in the state of ecosystems, the climate, financial and economic systems or other complex systems that may occur when changing conditions pass a critical or bifurcation point. The 'direction of critical slowing down' in a system's state space may be indicative of a system's future state after such transitions when delayed negative feedbacks leading to oscillatory or other complex dynamics are weak.
Complex systems may be nested
The components of a complex system may themselves be complex systems. For example, an economy is made up of organisations, which are made up of people, which are made up of cells – all of which are complex systems. The arrangement of interactions within complex bipartite networks may be nested as well. More specifically, bipartite ecological and organisational networks of mutually beneficial interactions were found to have a nested structure. This structure promotes indirect facilitation and a system's capacity to persist under increasingly harsh circumstances as well as the potential for large-scale systemic regime shifts.
Dynamic network of multiplicity
As well as coupling rules, the dynamic network of a complex system is important. Small-world or scale-free networks which have many local interactions and a smaller number of inter-area connections are often employed. Natural complex systems often exhibit such topologies. In the human cortex for example, we see dense local connectivity and a few very long axon projections between regions inside the cortex and to other brain regions.
May produce emergent phenomena
Complex systems may exhibit behaviors that are emergent, which is to say that while the results may be sufficiently determined by the activity of the systems' basic constituents, they may have properties that can only be studied at a higher level. For example, empirical food webs display regular, scale-invariant features across aquatic and terrestrial ecosystems when studied at the level of clustered 'trophic' species. Another example is offered by the termites in a mound which have physiology, biochemistry and biological development at one level of analysis, whereas their social behavior and mound building is a property that emerges from the collection of termites and needs to be analyzed at a different level.
Relationships are non-linear
In practical terms, this means a small perturbation may cause a large effect (see butterfly effect), a proportional effect, or even no effect at all. In linear systems, the effect is always directly proportional to the cause. See nonlinearity.
Relationships contain feedback loops
Both negative (damping) and positive (amplifying) feedback are always found in complex systems. The effects of an element's behavior are fed back in such a way that the element itself is altered.
History
In 1948, Dr. Warren Weaver published an essay on "Science and Complexity", exploring the diversity of problem types by contrasting problems of simplicity, disorganized complexity, and organized complexity. Weaver described problems of organized complexity as "problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole."
While the explicit study of complex systems dates at least to the 1970s, the first research institute focused on complex systems, the Santa Fe Institute, was founded in 1984. Early Santa Fe Institute participants included physics Nobel laureates Murray Gell-Mann and Philip Anderson, economics Nobel laureate Kenneth Arrow, and Manhattan Project scientists George Cowan and Herb Anderson. Today, there are over 50 institutes and research centers focusing on complex systems.
Since the late 1990s, the interest of mathematical physicists in researching economic phenomena has been on the rise. The proliferation of cross-disciplinary research with the application of solutions originated from the physics epistemology has entailed a gradual paradigm shift in the theoretical articulations and methodological approaches in economics, primarily in financial economics. The development has resulted in the emergence of a new branch of discipline, namely "econophysics", which is broadly defined as a cross-discipline that applies statistical physics methodologies which are mostly based on the complex systems theory and the chaos theory for economics analysis.
The 2021 Nobel Prize in Physics was awarded to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi for their work to understand complex systems. Their work was used to create more accurate computer models of the effect of global warming on the Earth's climate.
Applications
Complexity in practice
The traditional approach to dealing with complexity is to reduce or constrain it. Typically, this involves compartmentalization: dividing a large system into separate parts. Organizations, for instance, divide their work into departments that each deal with separate issues. Engineering systems are often designed using modular components. However, modular designs become susceptible to failure when issues arise that bridge the divisions.
Complexity of cities
Jane Jacobs described cities as being a problem in organized complexity in 1961, citing Dr. Weaver's 1948 essay. As an example, she explains how an abundance of factors interplay into how various urban spaces lead to a diversity of interactions, and how changing those factors can change how the space is used, and how well the space supports the functions of the city. She further illustrates how cities have been severely damaged when approached as a problem in simplicity by replacing organized complexity with simple and predictable spaces, such as Le Corbusier's "Radiant City" and Ebenezer Howard's "Garden City". Since then, others have written at length on the complexity of cities.
Complexity economics
Over the last decades, within the emerging field of complexity economics, new predictive tools have been developed to explain economic growth. Such is the case with the models built by the Santa Fe Institute in 1989 and the more recent economic complexity index (ECI), introduced by the MIT physicist Cesar A. Hidalgo and the Harvard economist Ricardo Hausmann.
Recurrence quantification analysis has been employed to detect the characteristics of business cycles and economic development. To this end, Orlando et al. developed the so-called recurrence quantification correlation index (RQCI) to test correlations of RQA on a sample signal and then investigated the application to business time series. The said index has been proven to detect hidden changes in time series. Further, Orlando et al., over an extensive dataset, showed that recurrence quantification analysis may help in anticipating transitions from laminar (i.e. regular) to turbulent (i.e. chaotic) phases such as USA GDP in 1949, 1953, etc. Last but not least, it has been demonstrated that recurrence quantification analysis can detect differences between macroeconomic variables and highlight hidden features of economic dynamics.
Complexity and education
Focusing on issues of student persistence with their studies, Forsman, Moll and Linder explore the "viability of using complexity science as a frame to extend methodological applications for physics education research", finding that "framing a social network analysis within a complexity science perspective offers a new and powerful applicability across a broad range of PER topics".
Complexity in healthcare research and practice
Healthcare systems are prime examples of complex systems, characterized by interactions among diverse stakeholders, such as patients, providers, policymakers, and researchers, across various sectors like health, government, community, and education. These systems demonstrate properties like non-linearity, emergence, adaptation, and feedback loops. Complexity science in healthcare frames knowledge translation as a dynamic and interconnected network of processes—problem identification, knowledge creation, synthesis, implementation, and evaluation—rather than a linear or cyclical sequence. Such approaches emphasize the importance of understanding and leveraging the interactions within and between these processes and stakeholders to optimize the creation and movement of knowledge. By acknowledging the complex, adaptive nature of healthcare systems, complexity science advocates for continuous stakeholder engagement, transdisciplinary collaboration, and flexible strategies to effectively translate research into practice.
Complexity and biology
Complexity science has been applied to living organisms, and in particular to biological systems. Within the emerging field of fractal physiology, bodily signals, such as heart rate or brain activity, are characterized using entropy or fractal indices. The goal is often to assess the state and the health of the underlying system, and diagnose potential disorders and illnesses.
Complexity and chaos theory
Complex systems theory is related to chaos theory, which in turn has its origins more than a century ago in the work of the French mathematician Henri Poincaré. Chaos is sometimes viewed as extremely complicated information, rather than as an absence of order. Chaotic systems remain deterministic, though their long-term behavior can be difficult to predict with any accuracy. With perfect knowledge of the initial conditions and of the relevant equations describing the chaotic system's behavior, one could in theory make perfectly accurate predictions of the system; in practice, however, this is impossible to do with arbitrary accuracy.
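A minimal sketch of this sensitivity to initial conditions, using the logistic map as an assumed toy example (not drawn from the text), iterates two almost identical starting values and prints how quickly they diverge:

def logistic_map(x, r=4.0):
    # One step of the logistic map, a standard textbook example of deterministic chaos.
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9   # two almost identical initial conditions
for step in range(1, 41):
    x, y = logistic_map(x), logistic_map(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.2e}")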
The emergence of complex systems theory shows a domain between deterministic order and randomness which is complex. This is referred to as the "edge of chaos".
When one analyzes complex systems, sensitivity to initial conditions is not as central an issue as it is in chaos theory, where it prevails. As stated by Colander, the study of complexity is the opposite of the study of chaos. Complexity is about how a huge number of extremely complicated and dynamic sets of relationships can generate some simple behavioral patterns, whereas chaotic behavior, in the sense of deterministic chaos, is the result of a relatively small number of non-linear interactions. For recent examples in economics and business, see Stoop et al., who discussed Android's market position; Orlando, who explained corporate dynamics in terms of mutual synchronization and chaos regularization of bursts in a group of chaotically bursting cells; and Orlando et al., who modelled financial data (Financial Stress Index, swap and equity, emerging and developed, corporate and government, short and long maturity) with a low-dimensional deterministic model.
Therefore, the main difference between chaotic systems and complex systems is their history. Chaotic systems do not rely on their history as complex ones do. Chaotic behavior pushes a system in equilibrium into chaotic order, which means, in other words, out of what we traditionally define as 'order'. On the other hand, complex systems evolve far from equilibrium at the edge of chaos. They evolve at a critical state built up by a history of irreversible and unexpected events, which physicist Murray Gell-Mann called "an accumulation of frozen accidents". In a sense chaotic systems can be regarded as a subset of complex systems distinguished precisely by this absence of historical dependence. Many real complex systems are, in practice and over long but finite periods, robust. However, they do possess the potential for radical qualitative change of kind whilst retaining systemic integrity. Metamorphosis serves as perhaps more than a metaphor for such transformations.
Complexity and network science
A complex system is usually composed of many components and their interactions. Such a system can be represented by a network where nodes represent the components and links represent their interactions. For example, the Internet can be represented as a network composed of nodes (computers) and links (direct connections between computers). Other examples of complex networks include social networks, financial institution interdependencies, airline networks, and biological networks.
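As a minimal sketch of this representation (illustrative only; the component names are invented), such a network can be stored as an adjacency list, from which simple structural quantities such as the number of interactions per component can be read off:

# Adjacency-list view of a small system: components are nodes, interactions are links.
network = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

for node, neighbours in sorted(network.items()):
    print(node, "interacts with", len(neighbours), "other components")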
Notable scholars
| Physical sciences | Science basics | Basics and measurement |
37441 | https://en.wikipedia.org/wiki/Nitrous%20oxide | Nitrous oxide | Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas, nitrous, or factitious air, among others, is a chemical compound, an oxide of nitrogen with the formula N2O. At room temperature, it is a colourless non-flammable gas, and has a slightly sweet scent and taste. At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen.
Nitrous oxide has significant medical uses, especially in surgery and dentistry, for its anaesthetic and pain-reducing effects, and it is on the World Health Organization's List of Essential Medicines. Its colloquial name, "laughing gas", coined by Humphry Davy, describes the euphoric effects upon inhaling it, which cause it to be used as a recreational drug inducing a brief "high". When abused chronically, it may cause neurological damage through inactivation of vitamin B12. It is also used as an oxidiser in rocket propellants and motor racing fuels, and as a frothing gas for whipped cream.
Nitrous oxide is also an atmospheric pollutant, with a concentration of 333 parts per billion (ppb) in 2020, increasing at 1 ppb annually. It is a major scavenger of stratospheric ozone, with an impact comparable to that of CFCs. About 40% of human-caused emissions are from agriculture, as nitrogen fertilisers are digested into nitrous oxide by soil micro-organisms. As the third most important greenhouse gas, nitrous oxide substantially contributes to global warming. Reduction of emissions is an important goal in the politics of climate change.
Discovery and early use
The gas was first synthesised in 1772 by English natural philosopher and chemist Joseph Priestley who called it dephlogisticated nitrous air (see phlogiston theory) or inflammable nitrous air. Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775), where he described how to produce the preparation of "nitrous air diminished", by heating iron filings dampened with nitric acid.
The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt, who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794). This book was important for two reasons. First, James Watt had invented a novel machine to produce "factitious airs" (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book also presented the new medical theories by Thomas Beddoes, that tuberculosis and other lung diseases could be treated by inhalation of "Factitious Airs".
The machine to produce "Factitious Airs" had three parts: a furnace to burn the needed material, a vessel with water where the produced gas passed through in a spiral pipe (for impurities to be "washed off"), and finally the gas cylinder with a gasometer where the gas produced, "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment being engineered and produced by 1794, the way was paved for clinical trials, which began in 1798 when Thomas Beddoes established the "Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells (Bristol). In the basement of the building, a large-scale machine was producing the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. The first important work of Davy was examination of the nitrous oxide, and the publication of his results in the book: Researches, Chemical and Philosophical (1800). In that publication, Davy notes the analgesic effect of nitrous oxide at page 465 and its potential to be used for surgical operations at page 556. Davy coined the name "laughing gas" for nitrous oxide.
Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person from pain, another 44 years elapsed before doctors attempted to use it for anaesthesia. The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class, became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter.
One of the earliest commercial producers in the U.S. was George Poe, cousin of the poet Edgar Allan Poe, who also was the first to liquefy the gas.
The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when dentist Horace Wells, with assistance by Gardner Quincy Colton and John Mankey Riggs, demonstrated insensitivity to pain from a dental extraction on 11 December 1844. In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut, and, according to his own record, only failed in two cases. In spite of these convincing results having been reported by Wells to the medical society in Boston in December 1844, this new method was not immediately adopted by other dentists. The reason for this was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics, that he had just established in New Haven and New York City. Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. Today, nitrous oxide is used in dentistry as an anxiolytic, as an adjunct to local anaesthetic.
Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether, being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became a common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide, and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture being controlled by the operator of the device. It remained in use by many hospitals until the 1930s. Although hospitals today use a more advanced anaesthetic machine, these machines still use the same principle launched with Clover's gas-ether inhaler, to initiate the anaesthesia with nitrous oxide, before the administration of a more powerful anaesthetic.
Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers, who touted it as a cure for consumption, scrofula, catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago.
Chemical properties and reactions
Nitrous oxide is a colourless gas with a faint, sweet odour.
Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint.
N2O is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with sodium amide (NaNH2) to give sodium azide (NaN3):
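The overall reaction, with the stoichiometry as usually given for this route, can be written as:
2 NaNH2 + N2O → NaN3 + NaOH + NH3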
This reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators.
Mechanism of action
The pharmacological mechanism of action of inhaled N2O is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels, which likely plays a major role. It moderately blocks NMDAR and β2-subunit-containing nACh channels, weakly inhibits AMPA, kainate, GABAC and 5-HT3 receptors, and slightly potentiates GABAA and glycine receptors. It also has been shown to activate two-pore-domain potassium channels. While N2O affects several ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused mainly via inhibition of NMDA receptor-mediated currents. In addition to its effects on ion channels, N2O may act similarly to nitric oxide (NO) in the central nervous system. Nitrous oxide is 30 to 40 times more soluble than nitrogen.
The effects of inhaling sub-anaesthetic doses of nitrous oxide may vary unpredictably with settings and individual differences; however, Jay (2008) suggests that it reliably induces the following states and sensations:
Intoxication
Euphoria/dysphoria
Spatial disorientation
Temporal disorientation
Reduced pain sensitivity
A minority of users also experience uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source.
Anxiolytic effect
In behavioural tests of anxiety, a low dose of N2O is an effective anxiolytic. This anti-anxiety effect is associated with enhanced activity of GABAA receptors, as it is partially reversed by benzodiazepine receptor antagonists. Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to N2O. Indeed, in humans given 30% N2O, benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance.
Analgesic effect
The analgesic effects of N2O are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of N2O. Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin) also blocks the antinociceptive effects of N2O. Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N2O. Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N2O, but these drugs have no effect when injected into the spinal cord.
Apart from this indirect action, nitrous oxide, like morphine, also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites.
Conversely, α2-adrenoceptor antagonists block the pain-reducing effects of N2O when given directly to the spinal cord, but not when applied directly to the brain. Indeed, α2-adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N2O. Apparently, N2O-induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. Exactly how N2O causes the release of endogenous opioid peptides remains uncertain.
Production
Various methods of producing nitrous oxide are used.
Industrial methods
Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate at about 250 °C, which decomposes into nitrous oxide and water vapour.
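Written as a balanced equation, this decomposition is:
NH4NO3 → N2O + 2 H2O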
The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation.
Laboratory methods
The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate:
Another method involves the reaction of urea, nitric acid and sulfuric acid:
Direct oxidation of ammonia with a manganese dioxide-bismuth oxide catalyst has been reported: cf. Ostwald process.
Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water. If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen also are formed:
Treating with and HCl also has been demonstrated:
Hyponitrous acid decomposes to N2O and water with a half-life of 16 days at 25 °C at pH 1–3.
Atmospheric occurrence
Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle. Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. The growth rate of about 1 ppb per year has also accelerated during recent decades. Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750.
Important atmospheric properties of N2O are summarized in the following table:
In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019."
Emissions by source
An estimated 17.0 (12.2 to 23.5) million tonnes of nitrogen in N2O was emitted per year on average in 2007–2016. About 40% of N2O emissions come from human activity and the rest are part of the natural nitrogen cycle. The N2O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide; for comparison, humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide.
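As a rough consistency check of these figures (an illustrative calculation, not taken from the cited inventories; it uses the roughly 40% anthropogenic share stated here, the 100-year global warming potential of about 265 quoted later in this article, and the 44/28 molar-mass ratio to convert nitrogen mass to N2O mass):

total_n_tg = 17.0      # Tg of nitrogen emitted as N2O per year, 2007-2016 average (from above)
human_share = 0.40     # roughly 40% of emissions are anthropogenic (from above)
gwp_100 = 265          # 100-year global warming potential of N2O (quoted later in this article)

n2o_tg = total_n_tg * human_share * (44.0 / 28.0)  # convert nitrogen mass to N2O mass
co2e_gt = n2o_tg * gwp_100 / 1000.0                # Tg of CO2-equivalent -> billions of tonnes

print(f"Anthropogenic N2O: about {n2o_tg:.1f} Tg/yr, or {co2e_gt:.1f} Gt CO2-equivalent/yr")
# Prints roughly 2.8 Gt, consistent with the "about 3 billion tonnes" figure quoted above.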
Most of the N2O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). Wetlands can also be emitters of nitrous oxide. Emissions from thawing permafrost may be significant, but as of 2022 this is not certain.
The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure as they vary markedly over time and space, and the majority of a year's emissions may occur when conditions are favorable during "hot moments" and/or at favorable locations known as "hotspots".
Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone.
Biological processes
Microbial processes that generate nitrous oxide may be classified as nitrification and denitrification. Specifically, they include:
aerobic autotrophic nitrification, the stepwise oxidation of ammonia (NH3) to nitrite (NO2−) and then to nitrate (NO3−)
anaerobic heterotrophic denitrification, the stepwise reduction of NO3− to NO2−, nitric oxide (NO), N2O and ultimately N2, in which facultative anaerobic bacteria use NO3− as an electron acceptor in the respiration of organic material under conditions of insufficient oxygen (O2)
nitrifier denitrification, which is carried out by autotrophic ammonia-oxidising bacteria and is the pathway whereby ammonia (NH3) is oxidised to nitrite (NO2−), followed by the reduction of NO2− to nitric oxide (NO), N2O and molecular nitrogen (N2)
heterotrophic nitrification
aerobic denitrification by the same heterotrophic nitrifiers
fungal denitrification
non-biological chemodenitrification
These processes are affected by soil chemical and physical properties such as the availability of mineral nitrogen and organic matter, acidity and soil type, as well as climate-related factors such as soil temperature and water content.
The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase.
Uses
Rocket motors
Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems.
In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It also is notably used in amateur and high power rocketry with various plastics as the fuel.
Nitrous oxide may also be used as a monopropellant. In the presence of a heated catalyst, N2O decomposes exothermically into nitrogen and oxygen. Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse (Isp) of up to 180 s. While noticeably less than the Isp available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide), the decreased toxicity makes nitrous oxide a worthwhile option.
The ignition of nitrous oxide depends critically on pressure. At a pressure of 309 psi (21 atmospheres), it deflagrates when heated sufficiently. At 600 psi, the required ignition energy is only 6 joules, whereas at 130 psi a 2,500-joule ignition energy input is insufficient.
Internal combustion engine
In vehicle racing, nitrous oxide (often called "nitrous") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure and temperature, but once heated sufficiently its breakdown delivers more oxygen than atmospheric air does. It often is mixed with another fuel that is easier to deflagrate.
Nitrous oxide is stored as a compressed liquid. In an engine intake manifold, the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems directly inject it just before the cylinder (direct port injection).
The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. Originally meant to provide the Luftwaffe standard aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It sometimes could be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities.
One of the major problems of nitrous oxide oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. It is important with nitrous oxide augmentation of petrol engines to maintain proper and evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder. The increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head.
Automotive-grade liquid nitrous oxide differs slightly from medical-grade. A small amount of sulfur dioxide (SO2) is added to prevent substance abuse.
Aerosol propellant for food
The gas is approved for use as a food additive (E number: E942), specifically as an aerosol spray propellant. It is commonly used in aerosol whipped cream canisters and cooking sprays.
The gas is extremely soluble in fatty compounds. In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. This produces whipped cream four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle".
Extra-frothed whipped cream produced with nitrous oxide is unstable, and will return to liquid within half an hour to one hour. Thus, it is not suitable for decorating food that will not be served immediately.
In December 2016, there was a shortage of aerosol whipped cream in the United States, with canned whipped cream use at its peak during the Christmas and holiday season, due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. The company prioritized the remaining supply of nitrous oxide to medical customers rather than to food manufacturing.
Also, cooking spray, made from various oils with lecithin emulsifier, may use nitrous oxide propellant, or alternatively food-grade alcohol or propane.
Medical
Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio.
Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but used as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting.
Dentists use a simpler machine that delivers only an N2O/O2 mixture for the patient to inhale while conscious. It must still be a recognised, purpose-designed relative analgesia flowmeter, delivering a minimum of 30% oxygen at all times and no more than 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist.
Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. Its use for acute coronary syndrome is of unknown benefit.
In Canada and the UK, Entonox and Nitronox are used commonly by ambulance crews (including unregistered practitioners) as a rapid and highly effective analgesic gas.
Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. The rapid reversibility of its effect also means it does not interfere with subsequent diagnosis.
Recreational
Recreational inhalation of nitrous oxide, to induce euphoria and slight hallucinations, began with the British upper class in 1799 in gatherings known as "laughing gas parties".
From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed for recreational use to greatly expand globally. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties.
Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals.
A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities.
Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export nitrous oxide for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971.
While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can cause permanent neurological damage. In Australia, recreation use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions. In the state of South Australia, legislation was passed in 2020 to restrict canister sales.
In 2024, under the street name "Galaxy Gas", nitrous oxide has exploded in popularity among young people for recreational use. Most of the popularity has been fostered through TikTok.
Safety
Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can intoxicate the clinic staff if the room is poorly ventilated, with potential chronic exposure. A continuous-flow fresh-air ventilation system or scavenger system may be needed to prevent waste-gas buildup. The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary operators. It set a recommended exposure limit (REL) of 25 ppm (46 mg/m3) to escaped anaesthetic.
Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, putting the user at risk of accidental injury.
Nitrous oxide is neurotoxic, and medium or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. It is believed that, like other NMDA receptor antagonists, N2O produces Olney's lesions in rodents upon prolonged (several-hour) exposure.
However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia. This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), which contain no oxygen gas.
In reports to poison control centers, heavy users (≥400 g or ≥200 L of gas in one session) or frequent users (regular, i.e., daily or weekly) have developed signs of peripheral neuropathy: ataxia (gait abnormalities) or paresthesia (perception of sensations such as tingling, numbness, or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity.
Nitrous oxide might have therapeutic use in treating stroke. In a rodent model, nitrous oxide at 75% by volume reduced ischemia-induced neuronal death induced by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca2+ influx in neuronal cell cultures, a cause of excitotoxicity.
Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. This correlation is dose-dependent and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage.
Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister or other inhalation system, or prolonged breath-holding.
Long-term exposure to nitrous oxide may cause vitamin B12 deficiency. This can cause serious neurotoxicity if the user has a preexisting vitamin B12 deficiency. It inactivates the cobalamin form of vitamin B12 by oxidation. Symptoms of vitamin B12 deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B12 deficiency. Symptoms are treated with high doses of vitamin B12, but recovery can be slow and incomplete. People with normal vitamin B12 levels have sufficient stores to make the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B12 levels should be checked in people with risk factors for vitamin B12 deficiency prior to using nitrous oxide anaesthesia.
Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus.
At room temperature the saturated vapour pressure is 50.525 bar, rising to 72.45 bar at the critical temperature. The pressure curve is thus unusually sensitive to temperature. As with many strong oxidisers, contamination of parts with fuels has been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to "water hammer"-like effects (sometimes called "dieseling": heating due to adiabatic compression of gases can reach decomposition temperatures). Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. There also have been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks.
Environmental impact
Global accounting of sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr (teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture.
Nitrous oxide has significant global warming potential as a greenhouse gas. On a per-molecule basis, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide (). However, because of its low concentration (less than 1/1,000 of that of ), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than methane. On the other hand, since about 40% of the entering the atmosphere is the result of human activity, control of nitrous oxide is part of efforts to curb greenhouse gas emissions.
Most human-caused nitrous oxide released into the atmosphere is a greenhouse gas emission from agriculture: it arises when farmers add nitrogen-based fertilizers to fields and through the breakdown of animal manure. Reduction of these emissions is an important topic in the politics of climate change.
Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilizers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide.
A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian-Turonian boundary event.
Nitrous oxide has also been implicated in thinning the ozone layer. A 2009 study suggested that N2O emission was the single most important ozone-depleting emission and that it was expected to remain the largest throughout the 21st century.
Legality
In India transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks is legal when intended for medical anaesthesia.
The Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offense under the Medicines Act. This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted.
In August 2015, the Council of the London Borough of Lambeth (UK) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine.
Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview. It is, however, regulated by the Food and Drug Administration under the Food Drug and Cosmetics Act; prosecution is possible under its "misbranding" clauses, prohibiting the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without special license. For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor.
| Physical sciences | Inorganic compounds | null |
37447 | https://en.wikipedia.org/wiki/Latin%20square | Latin square | In combinatorics and in experimental design, a Latin square is an n × n array filled with n different symbols, each occurring exactly once in each row and exactly once in each column. An example of a 3×3 Latin square is
A B C
C A B
B C A
The name "Latin square" was inspired by mathematical papers by Leonhard Euler (1707–1783), who used Latin characters as symbols, but any set of symbols can be used: in the above example, the alphabetic sequence A, B, C can be replaced by the integer sequence 1, 2, 3. Euler began the general theory of Latin squares.
History
The Korean mathematician Choi Seok-jeong was the first to publish an example of Latin squares of order nine, in order to construct a magic square in 1700, predating Leonhard Euler by 67 years.
Reduced form
A Latin square is said to be reduced (also, normalized or in standard form) if both its first row and its first column are in their natural order. For example, the Latin square above is not reduced because its first column is A, C, B rather than A, B, C.
Any Latin square can be reduced by permuting (that is, reordering) the rows and columns. Here switching the above matrix's second and third rows yields the following square:
A B C
B C A
C A B
This Latin square is reduced; both its first row and its first column are alphabetically ordered A, B, C.
Properties
Orthogonal array representation
If each entry of an n × n Latin square is written as a triple (r,c,s), where r is the row, c is the column, and s is the symbol, we obtain a set of n² triples called the orthogonal array representation of the square. For example, the orthogonal array representation of the Latin square
1 2 3
2 3 1
3 1 2
is
{ (1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 2), (2, 2, 3), (2, 3, 1), (3, 1, 3), (3, 2, 1), (3, 3, 2) },
where for example the triple (2, 3, 1) means that in row 2 and column 3 there is the symbol 1. Orthogonal arrays are usually written in array form where the triples are the rows, such as:
The definition of a Latin square can be written in terms of orthogonal arrays:
A Latin square is a set of n² triples (r, c, s), where 1 ≤ r, c, s ≤ n, such that all ordered pairs (r, c) are distinct, all ordered pairs (r, s) are distinct, and all ordered pairs (c, s) are distinct.
This means that the n² ordered pairs (r, c) are all the pairs (i, j) with 1 ≤ i, j ≤ n, once each. The same is true of the ordered pairs (r, s) and the ordered pairs (c, s).
The orthogonal array representation shows that rows, columns and symbols play rather similar roles, as will be made clear below.
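A minimal Python sketch of this representation (illustrative; the helper names to_triples and is_latin are assumptions rather than standard library functions) converts a square into its triples and tests the Latin property by checking that the three kinds of ordered pairs are all distinct:

from itertools import combinations

def to_triples(square):
    # Orthogonal array representation: one (row, column, symbol) triple per cell.
    return {(r + 1, c + 1, s) for r, row in enumerate(square) for c, s in enumerate(row)}

def is_latin(square):
    # A square is Latin iff the (r,c), (r,s) and (c,s) pairs are all distinct.
    triples = to_triples(square)
    n = len(square)
    for i, j in combinations(range(3), 2):
        pairs = {(t[i], t[j]) for t in triples}
        if len(pairs) != n * n:
            return False
    return True

square = [[1, 2, 3],
          [2, 3, 1],
          [3, 1, 2]]
print(sorted(to_triples(square)))   # the nine triples listed above
print(is_latin(square))             # True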
Equivalence classes of Latin squares
Many operations on a Latin square produce another Latin square (for example, turning it upside down).
If we permute the rows, permute the columns, or permute the names of the symbols of a Latin square, we obtain a new Latin square said to be isotopic to the first. Isotopism is an equivalence relation, so the set of all Latin squares is divided into subsets, called isotopy classes, such that two squares in the same class are isotopic and two squares in different classes are not isotopic.
Another type of operation is easiest to explain using the orthogonal array representation of the Latin square. If we systematically and consistently reorder the three items in each triple (that is, permute the three columns in the array form), another orthogonal array (and, thus, another Latin square) is obtained. For example, we can replace each triple (r,c,s) by (c,r,s) which corresponds to transposing the square (reflecting about its main diagonal), or we could replace each triple (r,c,s) by (c,s,r), which is a more complicated operation. Altogether there are 6 possibilities including "do nothing", giving us 6 Latin squares called the conjugates (also parastrophes) of the original square.
Finally, we can combine these two equivalence operations: two Latin squares are said to be paratopic, also main class isotopic, if one of them is isotopic to a conjugate of the other. This is again an equivalence relation, with the equivalence classes called main classes, species, or paratopy classes. Each main class contains up to six isotopy classes.
Number of Latin squares
There is no known easily computable formula for the number L_n of n × n Latin squares with symbols 1, 2, ..., n. The most accurate upper and lower bounds known for large n are far apart. One classic result is that
\prod_{k=1}^{n} (k!)^{n/k} \geq L_n \geq \frac{(n!)^{2n}}{n^{n^2}}.
A simple and explicit formula for the number of Latin squares was published in 1992, but it is still not easily computable due to the exponential increase in the number of terms. This formula for the number of Latin squares is
L_n = n!^{2n} \sum_{A \in B_n} (-1)^{\sigma_0(A)} \binom{\operatorname{per} A}{n},
where B_n is the set of all n × n {0, 1}-matrices, \sigma_0(A) is the number of zero entries in matrix A, and \operatorname{per} A is the permanent of matrix A.
Exact values are known only for small n (up to n = 11), and the numbers grow exceedingly quickly. For each n, the number of Latin squares altogether is n! (n − 1)! times the number of reduced Latin squares.
For each n, each isotopy class contains up to (n!)³ Latin squares (the exact number varies), while each main class contains either 1, 2, 3 or 6 isotopy classes.
The number of structurally distinct Latin squares (i.e., squares that cannot be made identical by rotation, reflection, and/or permutation of the symbols) for n = 1 up to 7 is 1, 1, 1, 12, 192, 145164, 1524901344, respectively.
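For small orders these counts can be verified by direct enumeration. The Python sketch below (illustrative and deliberately naive; the function name is made up) counts reduced Latin squares by row-by-row backtracking and multiplies by n!(n − 1)! to obtain the total; for example, it reports 4 reduced squares of order 4 (576 in total) and 56 of order 5 (161,280 in total):

from itertools import permutations
from math import factorial

def count_reduced(n):
    # Count reduced Latin squares of order n (first row and first column in natural order).
    def extend(rows):
        depth = len(rows)
        if depth == n:
            return 1
        total = 0
        # The next row must start with `depth` (reduced first column) and must not
        # repeat any symbol within a column.
        for perm in permutations(range(n)):
            if perm[0] != depth:
                continue
            if all(perm[c] != rows[r][c] for r in range(depth) for c in range(n)):
                total += extend(rows + [perm])
        return total
    return extend([tuple(range(n))])

for n in range(1, 6):
    reduced = count_reduced(n)
    total = reduced * factorial(n) * factorial(n - 1)
    print(n, reduced, total)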
Examples
We give one example of a Latin square from each main class up to order five.
They present, respectively, the multiplication tables of the following groups:
{0} – the trivial 1-element group
Z2 – the binary group
Z3 – the cyclic group of order 3
Z2 × Z2 – the Klein four-group
Z4 – the cyclic group of order 4
Z5 – the cyclic group of order 5
The last one is an example of a quasigroup, or rather a loop, which is not associative.
Transversals and rainbow matchings
A transversal in a Latin square is a choice of n cells, where each row contains one cell, each column contains one cell, and there is one cell containing each symbol.
One can consider a Latin square as a complete bipartite graph in which the rows are vertices of one part, the columns are vertices of the other part, each cell is an edge (between its row and its column), and the symbols are colors. The rules of the Latin squares imply that this is a proper edge coloring. With this definition, a Latin transversal is a matching in which each edge has a different color; such a matching is called a rainbow matching.
Therefore, many results on Latin squares/rectangles are contained in papers with the term "rainbow matching" in their title, and vice versa.
Some Latin squares have no transversal. For example, when n is even, an n-by-n Latin square in which the value of cell (i, j) is (i + j) mod n has no transversal. In 1967, H. J. Ryser conjectured that, when n is odd, every n-by-n Latin square has a transversal.
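The even-order example above can be checked directly for small n. The Python sketch below (an illustrative brute force over all permutations, so practical only for small n; the function names are made up) builds the cyclic square and reports whether any transversal exists:

from itertools import permutations

def cyclic_square(n):
    # Latin square whose cell (i, j) holds (i + j) mod n.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def has_transversal(square):
    # Brute-force search: choose one column per row so that all symbols are distinct.
    n = len(square)
    for cols in permutations(range(n)):            # cols[i] = column chosen in row i
        symbols = {square[i][cols[i]] for i in range(n)}
        if len(symbols) == n:
            return True
    return False

for n in (3, 4, 5, 6):
    print(n, has_transversal(cyclic_square(n)))    # True for odd n, False for even n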
In 1975, S. K. Stein and Brualdi conjectured that, when n is even, every n-by-n Latin square has a partial transversal of size n−1.
A more general conjecture of Stein is that a transversal of size n−1 exists not only in Latin squares but also in any n-by-n array of n symbols, as long as each symbol appears exactly n times.
Some weaker versions of these conjectures have been proved:
Every n-by-n Latin square has a partial transversal of size 2n/3.
Every n-by-n Latin square has a partial transversal of size n − sqrt(n).
Every n-by-n Latin square has a partial transversal of size n − 11 log(n).
Every n-by-n Latin square has a partial transversal of size n − O(log n/loglog n).
Every large enough n-by-n Latin square has a partial transversal of size n − 1 (Montgomery, R., "A proof of the Ryser-Brualdi-Stein conjecture for large even n", arXiv:2310.19779, 2023; preprint).
Algorithms
For small squares it is possible to generate permutations and test whether the Latin square property is met. For larger squares, Jacobson and Matthews' algorithm allows sampling from a uniform distribution over the space of n × n Latin squares.
Applications
Statistics and mathematics
In the design of experiments, Latin squares are a special case of row-column designs for two blocking factors.
In algebra, Latin squares are related to generalizations of groups; in particular, Latin squares are characterized as being the multiplication tables (Cayley tables) of quasigroups. A binary operation whose table of values forms a Latin square is said to obey the Latin square property.
Error correcting codes
Sets of Latin squares that are orthogonal to each other have found an application as error correcting codes in situations where communication is disturbed by more types of noise than simple white noise, such as when attempting to transmit broadband Internet over powerlines. (Euler's revolution, New Scientist, 24 March 2007, pp. 48–51.)
Firstly, the message is sent by using several frequencies, or channels, a common method that makes the signal less vulnerable to noise at any one specific frequency. A letter in the message to be sent is encoded by sending a series of signals at different frequencies at successive time intervals. In the example below, the letters A to L are encoded by sending signals at four different frequencies, in four time slots. The letter C, for instance, is encoded by first sending at frequency 3, then 4, 1 and 2.
The encodings of the twelve letters are formed from three Latin squares that are orthogonal to each other. Now imagine that there is added noise in channels 1 and 2 during the whole transmission. The letter A would then be picked up as:
In other words, in the first slot we receive signals from both frequency 1 and frequency 2; while the third slot has signals from frequencies 1, 2 and 3. Because of the noise, we can no longer tell if the first two slots were 1,1 or 1,2 or 2,1 or 2,2. But the 1,2 case is the only one that yields a sequence matching a letter in the above table, the letter A.
Similarly, we may imagine a burst of static over all frequencies in the third slot:
Again, we are able to infer from the table of encodings that it must have been the letter A being transmitted. The number of errors this code can spot is one less than the number of time slots. It has also been proven that if the number of frequencies is a prime or a power of a prime, the orthogonal Latin squares produce error detecting codes that are as efficient as possible.
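For a prime number p of frequencies, a standard construction gives p − 1 mutually orthogonal Latin squares directly; the Python sketch below (an illustrative construction and check, not the code behind the example above) builds them and verifies pairwise orthogonality:

from itertools import combinations

def mols(p):
    # For prime p, the squares L_k[i][j] = (k*i + j) mod p, k = 1..p-1,
    # are mutually orthogonal Latin squares of order p.
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)] for k in range(1, p)]

def orthogonal(a, b):
    # Two squares are orthogonal if superimposing them yields every ordered pair exactly once.
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

squares = mols(5)
print(all(orthogonal(a, b) for a, b in combinations(squares, 2)))   # True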
Mathematical puzzles
The problem of determining if a partially filled square can be completed to form a Latin square is NP-complete.
The popular Sudoku puzzles are a special case of Latin squares; any solution to a Sudoku puzzle is a Latin square. Sudoku imposes the additional restriction that nine particular 3×3 adjacent subsquares must also contain the digits 1–9 (in the standard version). | Mathematics | Combinatorics | null |
37461 | https://en.wikipedia.org/wiki/State%20of%20matter | State of matter | In physics, a state of matter is one of the distinct forms in which matter can exist. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Many intermediate states are known to exist, such as liquid crystal, and some states only exist under extreme conditions, such as Bose–Einstein condensates and Fermionic condensates (in extreme cold), neutron-degenerate matter (in extreme density), and quark–gluon plasma (at extremely high energy).
Historically, the distinction is based on qualitative differences in properties. Matter in the solid state maintains a fixed volume (assuming no change in temperature or air pressure) and shape, with component particles (atoms, molecules or ions) close together and fixed into place. Matter in the liquid state maintains a fixed volume (assuming no change in temperature or air pressure), but has a variable shape that adapts to fit its container. Its particles are still close together but move freely. Matter in the gaseous state has both variable volume and shape, adapting both to fit its container. Its particles are neither close together nor fixed in place. Matter in the plasma state has variable volume and shape, and contains neutral atoms as well as a significant number of ions and electrons, both of which can move around freely.
The term phase is sometimes used as a synonym for state of matter, but it is possible for a single compound to form different phases that are in the same state of matter. For example, ice is the solid state of water, but there are multiple phases of ice with different crystal structures, which are formed at different pressures and temperatures.
Four classical states
Solid
In a solid, constituent particles (ions, atoms, or molecules) are closely packed together. The forces between particles are so strong that the particles cannot move freely but can only vibrate. As a result, a solid has a stable, definite shape, and a definite volume. Solids can only change their shape by an outside force, as when broken or cut.
In crystalline solids, the particles (atoms, molecules, or ions) are packed in a regularly ordered, repeating pattern. There are various different crystal structures, and the same substance can have more than one structure (or solid phase). For example, iron has a body-centred cubic structure at temperatures below 912 °C, and a face-centred cubic structure between 912 and 1,394 °C. Ice has fifteen known crystal structures, or fifteen solid phases, which exist at various temperatures and pressures.
Glasses and other non-crystalline, amorphous solids without long-range order are not thermal equilibrium ground states; therefore they are described below as nonclassical states of matter.
Solids can be transformed into liquids by melting, and liquids can be transformed into solids by freezing. Solids can also change directly into gases through the process of sublimation, and gases can likewise change directly into solids through deposition.
Liquid
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a (nearly) constant volume independent of pressure. The volume is definite if the temperature and pressure are constant. When a solid is heated above its melting point, it becomes liquid, given that the pressure is higher than the triple point of the substance. Intermolecular (or interatomic or interionic) forces are still important, but the molecules have enough energy to move relative to each other and the structure is mobile. This means that the shape of a liquid is not definite but is determined by its container. The volume is usually greater than that of the corresponding solid, the best known exception being water, H2O. The highest temperature at which a given liquid can exist is its critical temperature.
Gas
A gas is a compressible fluid. Not only will a gas conform to the shape of its container but it will also expand to fill the container.
In a gas, the molecules have enough kinetic energy so that the effect of intermolecular forces is small (or zero for an ideal gas), and the typical distance between neighboring molecules is much greater than the molecular size. A gas has no definite shape or volume, but occupies the entire container in which it is confined. A liquid may be converted to a gas by heating at constant pressure to the boiling point, or else by reducing the pressure at constant temperature.
At temperatures below its critical temperature, a gas is also called a vapor, and can be liquefied by compression alone without cooling. A vapor can exist in equilibrium with a liquid (or solid), in which case the gas pressure equals the vapor pressure of the liquid (or solid).
A supercritical fluid (SCF) is a gas whose temperature and pressure are above the critical temperature and critical pressure respectively. In this state, the distinction between liquid and gas disappears. A supercritical fluid has the physical properties of a gas, but its high density confers solvent properties in some cases, which leads to useful applications. For example, supercritical carbon dioxide is used to extract caffeine in the manufacture of decaffeinated coffee.
Plasma
A gas is usually converted to a plasma in one of two ways, either from a huge voltage difference between two points, or by exposing it to extremely high temperatures. Heating matter to high temperatures causes electrons to leave the atoms, resulting in the presence of free electrons. This creates a so-called partially ionised plasma. At very high temperatures, such as those present in stars, it is assumed that essentially all electrons are "free", and that a very high-energy plasma is essentially bare nuclei swimming in a sea of electrons. This forms the so-called fully ionised plasma.
The plasma state is often misunderstood, and although not freely existing under normal conditions on Earth, it is quite commonly generated by either lightning, electric sparks, fluorescent lights, neon lights or in plasma televisions. The Sun's corona, some types of flame, and stars are all examples of illuminated matter in the plasma state. Plasma is by far the most abundant of the four fundamental states, as 99% of all ordinary matter in the universe is plasma, as it composes all stars.
Phase transitions
A state of matter is also characterized by phase transitions. A phase transition indicates a change in structure and can be recognized by an abrupt change in properties. A distinct state of matter can be defined as any set of states distinguished from any other set of states by a phase transition. Water can be said to have several distinct solid states. The appearance of superconductivity is associated with a phase transition, so there are superconductive states. Likewise, ferromagnetic states are demarcated by phase transitions and have distinctive properties.
When the change of state occurs in stages the intermediate steps are called mesophases. Such phases have been exploited by the introduction of liquid crystal technology.
The state or phase of a given set of matter can change depending on pressure and temperature conditions, transitioning to other phases as these conditions change to favor their existence; for example, solid transitions to liquid with an increase in temperature. Near absolute zero, a substance exists as a solid. As heat is added to this substance it melts into a liquid at its melting point, boils into a gas at its boiling point, and if heated high enough would enter a plasma state in which the electrons are so energized that they leave their parent atoms.
Forms of matter that are not composed of molecules and are organized by different forces can also be considered different states of matter. Superfluids (like Fermionic condensate) and the quark–gluon plasma are examples.
In a chemical equation, the state of matter of the chemicals may be shown as (s) for solid, (l) for liquid, and (g) for gas. An aqueous solution is denoted (aq).
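For example, the dissolution of table salt in water can be written with state symbols as:
NaCl(s) → Na+(aq) + Cl−(aq)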
Matter in the plasma state is seldom used (if at all) in chemical equations, so there is no standard symbol to denote it. In the rare equations in which plasma is used, it is symbolized as (p).
Non-classical states
Glass
Glass is a non-crystalline or amorphous solid material that exhibits a glass transition when heated towards the liquid state. Glasses can be made of quite different classes of materials: inorganic networks (such as window glass, made of silicate plus additives), metallic alloys, ionic melts, aqueous solutions, molecular liquids, and polymers.
Thermodynamically, a glass is in a metastable state with respect to its crystalline counterpart. The conversion rate, however, is practically zero.
Crystals with some degree of disorder
A plastic crystal is a molecular solid with long-range positional order but with constituent molecules retaining rotational freedom; in an orientational glass this degree of freedom is frozen in a quenched disordered state.
Similarly, in a spin glass magnetic disorder is frozen.
Liquid crystal states
Liquid crystal states have properties intermediate between mobile liquids and ordered solids. Generally, they are able to flow like a liquid but exhibit long-range order. For example, the nematic phase consists of long rod-like molecules such as para-azoxyanisole, which is nematic over a certain temperature range. In this state the molecules flow as in a liquid, but they all point in the same direction (within each domain) and cannot rotate freely. Like a crystalline solid, but unlike a liquid, liquid crystals react to polarized light.
Other types of liquid crystals are described in the main article on these states. Several types have technological importance, for example, in liquid crystal displays.
Microphase separation
Copolymers can undergo microphase separation to form a diverse array of periodic nanostructures, as exemplified by styrene-butadiene-styrene block copolymers. Microphase separation can be understood by analogy to the phase separation between oil and water. Due to chemical incompatibility between the blocks, block copolymers undergo a similar phase separation. However, because the blocks are covalently bonded to each other, they cannot demix macroscopically as water and oil can, and so instead the blocks form nanometre-sized structures. Depending on the relative lengths of each block and the overall block topology of the polymer, many morphologies can be obtained, each of which is its own phase of matter.
Ionic liquids also display microphase separation. The anion and cation are not necessarily compatible and would demix otherwise, but electric charge attraction prevents them from separating. Their anions and cations appear to diffuse within compartmentalized layers or micelles instead of freely as in a uniform liquid.
Magnetically ordered states
Transition metal atoms often have magnetic moments due to the net spin of electrons that remain unpaired and do not form chemical bonds. In some solids the magnetic moments on different atoms are ordered and can form a ferromagnet, an antiferromagnet or a ferrimagnet.
In a ferromagnet—for instance, solid iron—the magnetic moment on each atom is aligned in the same direction (within a magnetic domain). If the domains are also aligned, the solid is a permanent magnet, which is magnetic even in the absence of an external magnetic field. The magnetization disappears when the magnet is heated to the Curie point, which for iron is about 770 °C (1,043 K).
An antiferromagnet has two networks of equal and opposite magnetic moments, which cancel each other out so that the net magnetization is zero. For example, in nickel(II) oxide (NiO), half the nickel atoms have moments aligned in one direction and half in the opposite direction.
In a ferrimagnet, the two networks of magnetic moments are opposite but unequal, so that cancellation is incomplete and there is a non-zero net magnetization. An example is magnetite (Fe3O4), which contains Fe2+ and Fe3+ ions with different magnetic moments.
A quantum spin liquid (QSL) is a disordered state in a system of interacting quantum spins which preserves its disorder to very low temperatures, unlike other disordered states. It is not a liquid in a physical sense, but a solid whose magnetic order is inherently disordered. The name "liquid" is due to an analogy with the molecular disorder in a conventional liquid. A QSL is neither a ferromagnet, where magnetic domains are parallel, nor an antiferromagnet, where the magnetic domains are antiparallel; instead, the magnetic domains are randomly oriented. This can be realized e.g. by geometrically frustrated magnetic moments that cannot point uniformly parallel or antiparallel. When cooling down and settling to a state, the domain must "choose" an orientation, but if the possible states are similar in energy, one will be chosen randomly. Consequently, despite strong short-range order, there is no long-range magnetic order.
Superfluids and condensates
Superconductor
Superconductors are materials which have zero electrical resistivity, and therefore perfect conductivity. This is a distinct physical state which exists at low temperature, and the resistivity increases discontinuously to a finite value at a sharply-defined transition temperature for each superconductor.
A superconductor also excludes all magnetic fields from its interior, a phenomenon known as the Meissner effect or perfect diamagnetism. Superconducting magnets are used as electromagnets in magnetic resonance imaging machines.
The phenomenon of superconductivity was discovered in 1911, and for 75 years was known only in some metals and metallic alloys at temperatures below 30 K. In 1986, so-called high-temperature superconductivity was discovered in certain ceramic oxides, and has now been observed at temperatures as high as 164 K.
Superfluid
Close to absolute zero, some liquids form a second liquid state described as superfluid because it has zero viscosity (or infinite fluidity; i.e., flowing without friction). This was discovered in 1937 for helium, which forms a superfluid below the lambda temperature of about 2.17 K. In this state it will attempt to "climb" out of its container. It also has infinite thermal conductivity, so that no temperature gradient can form in a superfluid. Placing a superfluid in a spinning container results in quantized vortices.
These properties are explained by the theory that the common isotope helium-4 forms a Bose–Einstein condensate (see next section) in the superfluid state. More recently, fermionic condensate superfluids have been formed at even lower temperatures by the rare isotope helium-3 and by lithium-6.
Bose–Einstein condensate
In 1924, Albert Einstein and Satyendra Nath Bose predicted the "Bose–Einstein condensate" (BEC), sometimes referred to as the fifth state of matter. In a BEC, matter stops behaving as independent particles, and collapses into a single quantum state that can be described with a single, uniform wavefunction.
In the gas phase, the Bose–Einstein condensate remained an unverified theoretical prediction for many years. In 1995, the research groups of Eric Cornell and Carl Wieman, of JILA at the University of Colorado at Boulder, produced the first such condensate experimentally. A Bose–Einstein condensate is "colder" than a solid. It may occur when atoms have very similar (or the same) quantum levels, at temperatures very close to absolute zero (−273.15 °C).
Fermionic condensate
A fermionic condensate is similar to the Bose–Einstein condensate but composed of fermions. The Pauli exclusion principle prevents fermions from entering the same quantum state, but a pair of fermions can behave as a boson, and multiple such pairs can then enter the same quantum state without restriction.
High-energy states
Degenerate matter
Under extremely high pressure, as in the cores of dead stars, ordinary matter undergoes a transition to a series of exotic states of matter collectively known as degenerate matter, which are supported mainly by quantum mechanical effects. In physics, "degenerate" refers to two states that have the same energy and are thus interchangeable. Degenerate matter is supported by the Pauli exclusion principle, which prevents two fermionic particles from occupying the same quantum state. Unlike regular plasma, degenerate plasma expands little when heated, because there are simply no momentum states left. Consequently, degenerate stars collapse into very high densities. More massive degenerate stars are smaller, because the gravitational force increases, but pressure does not increase proportionally.
Electron-degenerate matter is found inside white dwarf stars. Electrons remain bound to atoms but are able to transfer to adjacent atoms. Neutron-degenerate matter is found in neutron stars. Vast gravitational pressure compresses atoms so strongly that the electrons are forced to combine with protons via inverse beta-decay, resulting in a superdense conglomeration of neutrons. Normally free neutrons outside an atomic nucleus will decay with a half life of approximately 10 minutes, but in a neutron star, the decay is overtaken by inverse decay. Cold degenerate matter is also present in planets such as Jupiter and in the even more massive brown dwarfs, which are expected to have a core with metallic hydrogen. Because of the degeneracy, more massive brown dwarfs are not significantly larger. In metals, the electrons can be modeled as a degenerate gas moving in a lattice of non-degenerate positive ions.
Quark matter
In regular cold matter, quarks, fundamental particles of nuclear matter, are confined by the strong force into hadrons that consist of 2–4 quarks, such as protons and neutrons. Quark matter or quantum chromodynamical (QCD) matter is a group of phases where the strong force is overcome and quarks are deconfined and free to move. Quark matter phases occur at extremely high densities or temperatures, and there are no known ways to produce them in equilibrium in the laboratory; in ordinary conditions, any quark matter formed immediately undergoes radioactive decay.
Strange matter is a type of quark matter that is suspected to exist inside some neutron stars close to the Tolman–Oppenheimer–Volkoff limit (approximately 2–3 solar masses), although there is no direct evidence of its existence. In strange matter, part of the energy available manifests as strange quarks, a heavier analogue of the common down quark. It may be stable at lower energy states once formed, although this is not known.
Quark–gluon plasma is a very high-temperature phase in which quarks become free and able to move independently, rather than being perpetually bound into particles, in a sea of gluons, subatomic particles that transmit the strong force that binds quarks together. This is analogous to the liberation of electrons from atoms in a plasma. This state is briefly attainable in extremely high-energy heavy ion collisions in particle accelerators, and allows scientists to observe the properties of individual quarks. Theories predicting the existence of quark–gluon plasma were developed in the late 1970s and early 1980s, and it was detected for the first time in the laboratory at CERN in the year 2000. Unlike plasma, which flows like a gas, interactions within QGP are strong and it flows like a liquid.
At high densities but relatively low temperatures, quarks are theorized to form a quark liquid whose nature is presently unknown. It forms a distinct color-flavor locked (CFL) phase at even higher densities. This phase is superconductive for color charge. These phases may occur in neutron stars but they are presently theoretical.
Color-glass condensate
Color-glass condensate is a type of matter theorized to exist in atomic nuclei traveling near the speed of light. According to Einstein's theory of relativity, a high-energy nucleus appears length contracted, or compressed, along its direction of motion. As a result, the gluons inside the nucleus appear to a stationary observer as a "gluonic wall" traveling near the speed of light. At very high energies, the density of the gluons in this wall is seen to increase greatly. Unlike the quark–gluon plasma produced in the collision of such walls, the color-glass condensate describes the walls themselves, and is an intrinsic property of the particles that can only be observed under high-energy conditions such as those at RHIC and possibly at the Large Hadron Collider as well.
Very high energy states
Various theories predict new states of matter at very high energies. An unknown state has created the baryon asymmetry in the universe, but little is known about it. In string theory, a Hagedorn temperature is predicted for superstrings at about 10^30 K, where superstrings are copiously produced. At the Planck temperature (10^32 K), gravity becomes a significant force between individual particles. No current theory can describe these states and they cannot be produced with any foreseeable experiment. However, these states are important in cosmology because the universe may have passed through these states in the Big Bang.
Other proposed states
Supersolid
A supersolid is a spatially ordered material (that is, a solid or crystal) with superfluid properties. Similar to a superfluid, a supersolid is able to move without friction but retains a rigid shape. Although a supersolid is a solid, it exhibits so many characteristic properties different from other solids that many argue it is another state of matter.
String-net liquid
In a string-net liquid, atoms have an apparently unstable arrangement, like a liquid, but are still consistent in overall pattern, like a solid. When in a normal solid state, the atoms of matter align themselves in a grid pattern, so that the spin of any electron is the opposite of the spin of all electrons touching it. But in a string-net liquid, atoms are arranged in some pattern that requires some electrons to have neighbors with the same spin. This gives rise to curious properties, as well as supporting some unusual proposals about the fundamental conditions of the universe itself.
Superglass
A superglass is a phase of matter characterized, at the same time, by superfluidity and a frozen amorphous structure.
Chain-melted state
Metals such as potassium in the chain-melted state appear to be in the liquid and solid states at the same time. This results from the metal being subjected to high temperature and pressure, which causes chains of potassium atoms to dissolve into a liquid while the crystalline framework remains solid.
Quantum Hall state
A quantum Hall state gives rise to a quantized Hall voltage measured in the direction perpendicular to the current flow. A quantum spin Hall state is a theoretical phase that may pave the way for the development of electronic devices that dissipate less energy and generate less heat. It is a derivative of the quantum Hall state of matter.
Photonic matter
Photonic matter is a phenomenon where photons interacting with a gas develop apparent mass, and can interact with each other, even forming photonic "molecules". The source of mass is the gas, which is massive. This is in contrast to photons moving in empty space, which have no rest mass, and cannot interact.
| Physical sciences | Physics | null |
37481 | https://en.wikipedia.org/wiki/Intranet | Intranet | An intranet is a computer network for sharing information, easier communication, collaboration tools, operational systems, and other computing services within an organization, usually to the exclusion of access by outsiders. The term is used in contrast to public networks, such as the Internet, but uses the same technology based on the Internet protocol suite.
An organization-wide intranet can constitute an important focal point of internal communication and collaboration, and provide a single starting point to access internal and external resources. In its simplest form, an intranet is established with the technologies for local area networks (LANs) and wide area networks (WANs). Many modern intranets have search engines, user profiles, blogs, mobile apps with notifications, and events planning within their infrastructure.
An intranet is sometimes contrasted to an extranet. While an intranet is generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (AAA protocol).
Uses
Intranets are increasingly being used to deliver tools, such as for collaboration (to facilitate working in groups and teleconferencing) or corporate directories, sales and customer relationship management, or project management. Intranets are also used as corporate culture-change platforms. For example, a large number of employees using an intranet forum application to host a discussion about key issues could come up with new ideas related to management, productivity, quality, and other corporate issues. In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness.
Larger businesses allow users within their intranet to access the public internet through firewall servers. They have the ability to screen incoming and outgoing messages, keeping security intact. When part of an intranet is made accessible to customers and others outside the business, it becomes part of an extranet. Businesses can send private messages through the public network using special encryption/decryption and other security safeguards to connect one part of their intranet to another.
Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.
Because of the scope and variety of content and the number of system interfaces, the intranets of many organizations are much more complex than their respective public websites. Intranets and the use of intranets are growing rapidly. According to the Intranet Design Annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.
Benefits
Intranets can help users locate and view information faster and use applications relevant to their roles and responsibilities. With a web browser interface, users can access data held in any database the organization wants to make available at any time and — subject to security provisions — from anywhere within company workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps improve services provided to users.
Using hypermedia and Web technology, Web publishing allows for the maintenance of and easy access to cumbersome corporate knowledge, such as employee manuals, benefits documents, company policies, business standards, news feeds, and even training, all of which can be accessed throughout a company using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet. Intranets are also used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
Intranets allow organizations to distribute information to employees on an as-needed basis; employees may link to relevant information at their convenience rather than being distracted indiscriminately by email. The intranet can also be linked to a company's management information system, such as a timekeeping system.
Information is easily accessible to all authorised users, enabling collaboration. Being able to communicate in real-time through integrated third-party tools, such as an instant messenger, promotes the sharing of ideas and removes blockages to communication to help boost a business's productivity.
Intranets can serve as powerful tools for communication within an organization (such as through chat, email and/or blogs) about strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what it is aiming to achieve, who is driving it, results achieved to date, and whom to speak to for more information. By providing this information on the intranet, staff can keep up-to-date with the strategic focus of their organization. For example, when Nestlé had a number of food processing plants in Scandinavia, their central support system had to deal with a number of queries every day. When Nestlé decided to invest in an intranet, they quickly realized the savings. Gerry McGovern says that the savings from the reduction in query calls were substantially greater than the investment in the intranet.
Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists, and requisition forms. This can potentially save the business money on printing, document duplication, and document maintenance overhead, and also benefits the environment. For example, the HRM company PeopleSoft "derived significant cost savings by shifting HR processes to the intranet". McGovern goes on to say the manual cost of enrolling in benefits was found to be US$109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was USD19 million".
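As a quick arithmetic check of the quoted saving (using only the two figures above):

```latex
\frac{109.48 - 21.79}{109.48} \approx 0.80 = 80\%
```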
Many companies dictate computer specifications which, in turn, may allow Intranet developers to write applications that only have to work on one browser such that there are no cross-browser compatibility issues. Being able to specifically address one's "viewer" is a great advantage. Since intranets are user-specific (requiring database/network authentication prior to access), users know exactly who they are interfacing with and can personalize their intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").
Since "involvement in decision making" is one of the main drivers of employee engagement, offering tools (like forums or surveys) that foster peer-to-peer collaboration and employee participation can make employees feel more valued and involved.
Planning and creation
Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Some of the planning would include topics such as determining the purpose and goals of the intranet, identifying persons or departments responsible for implementation and management, and devising functional plans, page layouts, and designs.
The appropriate staff would also ensure that implementation schedules and phase-out of existing systems were organized, while defining and implementing security of the intranet and ensuring it lies within legal boundaries and other constraints. In order to produce a high-value end product, systems planners should determine the level of interactivity (e.g. wikis, on-line forms) desired.
Planners may also consider whether the input of new data and updating of existing data is to be centrally controlled or devolved. These decisions sit alongside the hardware and software considerations (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported.
Intranets are often static sites; they are a shared drive, serving up centrally stored documents alongside internal articles or communications (often one-way communication). By leveraging firms which specialise in 'social' intranets, organisations are beginning to think of how their intranets can become a 'communication hub' for their entire team. The actual implementation would include steps such as securing senior management support and funding, conducting a business requirement analysis and identifying users' information needs.
From the technical perspective, there would need to be a coordinated installation of the web server and user access network, the required user/client applications and the creation of document framework (or template) for the content to be hosted.
The end-user should be involved in testing and promoting use of the company intranet, possibly through a parallel adoption methodology or pilot programme. In the long term, the company should carry out ongoing measurement and evaluation, including through benchmarking against other company services.
Maintenance
Some aspects of an intranet are not static and require ongoing maintenance.
Staying current
An intranet structure needs key personnel committed to maintaining the intranet and keeping content current. For feedback on the intranet, social networking can be done through a forum for users to indicate what they want and what they do not like.
Privacy protection
The European Union's General Data Protection Regulation went into effect in May 2018. Since then, the protection of the privacy of employees, customers, and other stakeholders (e.g. consultants) has become an increasingly significant concern for most companies, at least those with an interest in markets and countries where regulations are in place to protect privacy.
Enterprise private network
An enterprise private network is a computer network built by a business to interconnect its various company sites (such as production sites, offices and shops) in order to share computer resources.
Beginning with the digitalisation of telecommunication networks, which started in the 1970s in the US with AT&T, and propelled by the growth in computer systems availability and demand, enterprise networks were built for decades without the need to append the term "private" to them. The networks were operated over telecommunications networks and, as for voice communications, a certain amount of security and secrecy was expected and delivered.
But with the Internet in the 1990s came a new type of network, virtual private networks, built over this public infrastructure, using encryption to protect the data traffic from eavesdropping. Enterprise networks are therefore now commonly referred to as enterprise private networks in order to clarify that they are private networks, in contrast to public networks.
| Technology | Networks | null |
37506 | https://en.wikipedia.org/wiki/Garnet | Garnet | Garnets () are a group of silicate minerals that have been used since the Bronze Age as gemstones and abrasives.
All species of garnets possess similar physical properties and crystal forms, but differ in chemical composition. The different species are pyrope, almandine, spessartine, grossular (varieties of which are hessonite or cinnamon-stone and tsavorite), uvarovite and andradite. The garnets make up two solid solution series: pyrope-almandine-spessartine (pyralspite), with the composition range (Mg,Fe,Mn)3Al2(SiO4)3; and uvarovite-grossular-andradite (ugrandite), with the composition range Ca3(Cr,Al,Fe)2(SiO4)3.
Etymology
The word garnet comes from the 14th-century Middle English word gernet, meaning 'dark red'. It is borrowed from Old French grenate from Latin granatus, from granum ('grain, seed'). This is possibly a reference to mela granatum or even pomum granatum ('pomegranate', Punica granatum), a plant whose fruits contain abundant and vivid red seed covers (arils), which are similar in shape, size, and color to some garnet crystals. Hessonite garnet is also named 'gomed' in Indian literature and is one of the 9 jewels in Vedic astrology that compose the Navaratna.
Physical properties
Properties
Garnet species are found in every colour, with reddish shades most common. Blue garnets are the rarest and were first reported in the 1990s.
Garnet species' light transmission properties can range from the gemstone-quality transparent specimens to the opaque varieties used for industrial purposes as abrasives. The mineral's lustre is categorized as vitreous (glass-like) or resinous (amber-like).
Crystal structure
Garnets are nesosilicates having the general formula X3Y2(SiO4)3. The X site is usually occupied by divalent cations (Ca2+, Mg2+, Fe2+, Mn2+) and the Y site by trivalent cations (Al3+, Fe3+, Cr3+) in an octahedral/tetrahedral framework with [SiO4]4− occupying the tetrahedra. Garnets are most often found in the dodecahedral crystal habit, but are also commonly found in the trapezohedron habit as well as the hexoctahedral habit. They crystallize in the cubic system, having three axes that are all of equal length and perpendicular to each other, but are never actually cubic because, despite being isometric, the {100} and {111} families of planes are depleted. Garnets do not have any cleavage planes, so when they fracture under stress, sharp, irregular (conchoidal) pieces are formed.
Hardness
Because the chemical composition of garnet varies, the atomic bonds in some species are stronger than in others. As a result, this mineral group shows a range of hardness on the Mohs scale of about 6.0 to 7.5. The harder species like almandine are often used for abrasive purposes.
Magnetics used in garnet series identification
For gem identification purposes, a pick-up response to a strong neodymium magnet separates garnet from all other natural transparent gemstones commonly used in the jewelry trade. Magnetic susceptibility measurements in conjunction with refractive index can be used to distinguish garnet species and varieties, and determine the composition of garnets in terms of percentages of end-member species within an individual gem.
Garnet group end member species
Pyralspite garnets – aluminium in Y site
Almandine: Fe3Al2(SiO4)3
Pyrope: Mg3Al2(SiO4)3
Spessartine: Mn3Al2(SiO4)3
Almandine
Almandine, sometimes incorrectly called almandite, is the modern gem known as carbuncle (though originally almost any red gemstone was known by this name). The term "carbuncle" is derived from the Latin meaning "live coal" or burning charcoal. The name Almandine is a corruption of Alabanda, a region in Asia Minor where these stones were cut in ancient times. Chemically, almandine is an iron-aluminium garnet with the formula Fe3Al2(SiO4)3; the deep red transparent stones are often called precious garnet and are used as gemstones (being the most common of the gem garnets). Almandine occurs in metamorphic rocks like mica schists, associated with minerals such as staurolite, kyanite, andalusite, and others. Almandine has nicknames of Oriental garnet, almandine ruby, and carbuncle.
Pyrope
Pyrope (from the Greek pyrōpós meaning "firelike") is red in color and chemically an aluminium silicate with the formula Mg3Al2(SiO4)3, though the magnesium can be replaced in part by calcium and ferrous iron. The color of pyrope varies from deep red to black. Pyrope and spessartine gemstones have been recovered from the Sloan diamondiferous kimberlites in Colorado, from the Bishop Conglomerate and in a Tertiary age lamprophyre at Cedar Mountain in Wyoming.
A variety of pyrope from Macon County, North Carolina, is a violet-red shade and has been called rhodolite, from the Greek for "rose". In chemical composition it may be considered as essentially an isomorphous mixture of pyrope and almandine, in the proportion of two parts pyrope to one part almandine. Pyrope has trade names, some of which are misnomers: Cape ruby, Arizona ruby, California ruby, Rocky Mountain ruby, and Bohemian ruby from the Czech Republic.
Pyrope is an indicator mineral for high-pressure rocks. Mantle-derived rocks (peridotites and eclogites) commonly contain a pyrope variety.
Spessartine
Spessartine or spessartite is manganese aluminium garnet, Mn3Al2(SiO4)3. Its name is derived from Spessart in Bavaria. It occurs most often in skarns, granite pegmatite and allied rock types, and in certain low-grade metamorphic phyllites. Spessartine of an orange-yellow color is found in Madagascar. Violet-red spessartines are found in rhyolites in Colorado.
Pyrope–spessartine (blue garnet or color-change garnet)
Blue pyrope–spessartine garnets were discovered in the late 1990s in Bekily, Madagascar. This type has also been found in parts of the United States, Russia, Kenya, Tanzania, and Turkey. It changes color from blue-green to purple depending on the color temperature of viewing light, as a result of the relatively high amounts of vanadium (about 1 wt.% V2O3).
Other varieties of color-changing garnets exist. In daylight, their color ranges from shades of green, beige, brown, gray, and blue, but in incandescent light, they appear a reddish or purplish/pink color.
This is the rarest type of garnet. Because of its color-changing quality, this kind of garnet resembles alexandrite.
Ugrandite group – calcium in X site
Andradite: Ca3Fe2(SiO4)3
Grossular: Ca3Al2(SiO4)3
Uvarovite: Ca3Cr2(SiO4)3
Andradite
Andradite, a calcium-iron garnet with the formula Ca3Fe2(SiO4)3, is of variable composition and may be red, yellow, brown, green or black. The recognized varieties are demantoid (green), melanite (black), and topazolite (yellow or green); colophonite is a partially obsolete name for a red-brown translucent variety. Andradite is found in skarns and in deep-seated igneous rocks like syenite, as well as serpentines and greenschists. Demantoid is one of the most prized of garnet varieties.
Grossular
Grossular is a calcium-aluminium garnet with the formula Ca3Al2(SiO4)3, though the calcium may in part be replaced by ferrous iron and the aluminium by ferric iron. The name grossular is derived from the botanical name for the gooseberry, grossularia, in reference to the green garnet of this composition that is found in Siberia. Other shades include cinnamon brown (cinnamon stone variety), red, and yellow. Because of its inferior hardness to zircon, which the yellow crystals resemble, they have also been called hessonite from the Greek meaning inferior. Grossular is found in skarns, contact metamorphosed limestones with vesuvianite, diopside, wollastonite and wernerite.
Grossular garnet from Kenya and Tanzania has been called tsavorite. Tsavorite was first described in the 1960s in the Tsavo area of Kenya, from which the gem takes its name.
Uvarovite
Uvarovite is a calcium chromium garnet with the formula Ca3Cr2(SiO4)3. This is a rather rare garnet, bright green in color, usually found as small crystals associated with chromite in peridotite, serpentinite, and kimberlites. It is found in crystalline marbles and schists in the Ural Mountains of Russia and at Outokumpu, Finland. Uvarovite is named for Count Sergei Uvarov, a Russian imperial statesman.
Less common species
Calcium in X site
Goldmanite:
Kimzeyite:
Morimotoite:
Schorlomite:
Hydroxide bearing – calcium in X site
Hydrogrossular:
Hibschite: (where x is between 0.2 and 1.5)
Katoite: (where x is greater than 1.5)
Magnesium or manganese in X site
Knorringite:
Majorite:
Calderite:
Knorringite
Knorringite is a magnesium-chromium garnet species with the formula Mg3Cr2(SiO4)3. Pure endmember knorringite never occurs in nature. Pyrope rich in the knorringite component is only formed under high pressure and is often found in kimberlites. It is used as an indicator mineral in the search for diamonds.
Garnet structural group
Formula: X3Z2(TO4)3 (X = Ca, Fe, etc., Z = Al, Cr, etc., T = Si, As, V, Fe, Al)
All are cubic or strongly pseudocubic.
IMA/CNMNC – Nickel-Strunz – Mineral subclass: 09.A Nesosilicate
Nickel-Strunz classification: 09.AD.25
| Physical sciences | Silicate minerals | Earth science |
37508 | https://en.wikipedia.org/wiki/Magma | Magma | Magma () is the molten or semi-molten natural material from which all igneous rocks are formed. Magma (sometimes colloquially but incorrectly referred to as lava) is found beneath the surface of the Earth, and evidence of magmatism has also been discovered on other terrestrial planets and some natural satellites. Besides molten rock, magma may also contain suspended crystals and gas bubbles.
Magma is produced by melting of the mantle or the crust in various tectonic settings, which on Earth include subduction zones, continental rift zones, mid-ocean ridges and hotspots. Mantle and crustal melts migrate upwards through the crust where they are thought to be stored in magma chambers or trans-crustal crystal-rich mush zones. During magma's storage in the crust, its composition may be modified by fractional crystallization, contamination with crustal melts, magma mixing, and degassing. Following its ascent through the crust, magma may feed a volcano and be extruded as lava, or it may solidify underground to form an intrusion, such as a dike, a sill, a laccolith, a pluton, or a batholith.
While the study of magma has relied on observing magma after its transition into a lava flow, magma has been encountered in situ three times during geothermal drilling projects, twice in Iceland (see Use in energy production) and once in Hawaii.
Physical and chemical properties
Magma consists of liquid rock that usually contains suspended solid crystals. As magma approaches the surface and the overburden pressure drops, dissolved gases bubble out of the liquid, so that magma near the surface consists of materials in solid, liquid, and gas phases.
Composition
Most magma is rich in silica. Rare nonsilicate magma can form by local melting of nonsilicate mineral deposits or by separation of a magma into separate immiscible silicate and nonsilicate liquid phases.
Silicate magmas are molten mixtures dominated by oxygen and silicon, the most abundant chemical elements in the Earth's crust, with smaller quantities of aluminium, calcium, magnesium, iron, sodium, and potassium, and minor amounts of many other elements. Petrologists routinely express the composition of a silicate magma in terms of the weight or molar mass fraction of the oxides of the major elements (other than oxygen) present in the magma.
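As a minimal sketch of that bookkeeping (the oxide analysis below is hypothetical and the molar masses are standard reference values, not data from this article), the conversion from weight percent to mole percent of the major oxides can be written as:

```python
# Convert an oxide analysis from weight percent to mole percent.
# Molar masses in g/mol; standard values rounded to two decimals.
MOLAR_MASS = {
    "SiO2": 60.08, "Al2O3": 101.96, "FeO": 71.84, "MgO": 40.30,
    "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20, "TiO2": 79.87,
}

def wt_to_mol_percent(wt_percent):
    """Re-express an oxide analysis (wt%) as mole percent of each oxide."""
    moles = {ox: wt / MOLAR_MASS[ox] for ox, wt in wt_percent.items()}
    total = sum(moles.values())
    return {ox: 100.0 * n / total for ox, n in moles.items()}

# Hypothetical basalt-like analysis (illustrative numbers only):
basalt = {"SiO2": 50.0, "Al2O3": 15.0, "FeO": 10.0, "MgO": 8.0,
          "CaO": 11.0, "Na2O": 2.5, "K2O": 0.5, "TiO2": 1.5}
print(wt_to_mol_percent(basalt))
```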
Because many of the properties of a magma (such as its viscosity and temperature) are observed to correlate with silica content, silicate magmas are divided into four chemical types based on silica content: felsic, intermediate, mafic, and ultramafic.
Felsic magmas
Felsic or silicic magmas have a silica content greater than 63%. They include rhyolite and dacite magmas. With such a high silica content, these magmas are extremely viscous, ranging from 10^8 cP (10^5 Pa⋅s) for hot rhyolite magma to 10^11 cP (10^8 Pa⋅s) for cool rhyolite magma. For comparison, water has a viscosity of about 1 cP (0.001 Pa⋅s). Because of this very high viscosity, felsic lavas usually erupt explosively to produce pyroclastic (fragmental) deposits. However, rhyolite lavas occasionally erupt effusively to form lava spines, lava domes or "coulees" (which are thick, short lava flows). The lavas typically fragment as they extrude, producing block lava flows. These often contain obsidian.
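The paired viscosity units quoted in this article follow from the identity 1 cP = 10^-3 Pa⋅s; a trivial helper (illustrative only) makes the conversion explicit:

```python
def centipoise_to_pascal_seconds(viscosity_cp):
    """Convert a viscosity from centipoise (cP) to pascal-seconds (Pa·s)."""
    return viscosity_cp * 1e-3  # 1 cP = 1e-3 Pa·s

print(centipoise_to_pascal_seconds(1e8))   # hot rhyolite magma: ~1e5 Pa·s
print(centipoise_to_pascal_seconds(1e11))  # cool rhyolite magma: ~1e8 Pa·s
```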
Felsic lavas can erupt at temperatures as low as . Unusually hot (>950 °C; >1,740 °F) rhyolite lavas, however, may flow for distances of many tens of kilometres, such as in the Snake River Plain of the northwestern United States.
Intermediate magmas
Intermediate or andesitic magmas contain 52% to 63% silica, and are lower in aluminium and usually somewhat richer in magnesium and iron than felsic magmas. Intermediate lavas form andesite domes and block lavas, and may occur on steep composite volcanoes, such as in the Andes. They are also commonly hotter than felsic magmas. Because of their lower silica content and higher eruptive temperatures, they tend to be much less viscous, with a typical viscosity of 3.5 × 10^6 cP (3,500 Pa⋅s). This is slightly greater than the viscosity of smooth peanut butter. Intermediate magmas show a greater tendency to form phenocrysts. Higher iron and magnesium tends to manifest as a darker groundmass, including amphibole or pyroxene phenocrysts.
Mafic magmas
Mafic or basaltic magmas have a silica content of 52% to 45%. They are typified by their high ferromagnesian content, and generally erupt at temperatures of . Viscosities can be relatively low, around 10^4 to 10^5 cP (10 to 100 Pa⋅s), although this is still many orders of magnitude higher than water. This viscosity is similar to that of ketchup. Basalt lavas tend to produce low-profile shield volcanoes or flood basalts, because the fluidal lava flows for long distances from the vent. The thickness of a basalt lava, particularly on a low slope, may be much greater than the thickness of the moving lava flow at any one time, because basalt lavas may "inflate" by supply of lava beneath a solidified crust. Most basalt lavas are of ʻAʻā or pāhoehoe types, rather than block lavas. Underwater, they can form pillow lavas, which are rather similar to entrail-type pahoehoe lavas on land.
Ultramafic magmas
Ultramafic magmas, such as picritic basalt, komatiite, and highly magnesian magmas that form boninite, take the composition and temperatures to the extreme. All have a silica content under 45%. Komatiites contain over 18% magnesium oxide, and are thought to have erupted at temperatures of . At this temperature there is practically no polymerization of the mineral compounds, creating a highly mobile liquid. Viscosities of komatiite magmas are thought to have been as low as 100 to 1000 cP (0.1 to 1 Pa⋅s), similar to that of light motor oil. Most ultramafic lavas are no younger than the Proterozoic, with a few ultramafic magmas known from the Phanerozoic in Central America that are attributed to a hot mantle plume. No modern komatiite lavas are known, as the Earth's mantle has cooled too much to produce highly magnesian magmas.
Alkaline magmas
Some silicic magmas have an elevated content of alkali metal oxides (sodium and potassium), particularly in regions of continental rifting, areas overlying deeply subducted plates, or at intraplate hotspots. Their silica content can range from ultramafic (nephelinites, basanites and tephrites) to felsic (trachytes). They are more likely to be generated at greater depths in the mantle than subalkaline magmas. Olivine nephelinite magmas are both ultramafic and highly alkaline, and are thought to have come from much deeper in the mantle of the Earth than other magmas.
Non-silicate magmas
Some lavas of unusual composition have erupted onto the surface of the Earth. These include:
Carbonatite and natrocarbonatite lavas are known from Ol Doinyo Lengai volcano in Tanzania, which is the sole example of an active carbonatite volcano. Carbonatites in the geologic record are typically 75% carbonate minerals, with lesser amounts of silica-undersaturated silicate minerals (such as micas and olivine), apatite, magnetite, and pyrochlore. This may not reflect the original composition of the lava, which may have included sodium carbonate that was subsequently removed by hydrothermal activity, though laboratory experiments show that a calcite-rich magma is possible. Carbonatite lavas show stable isotope ratios indicating they are derived from the highly alkaline silicic lavas with which they are always associated, probably by separation of an immiscible phase. Natrocarbonatite lavas of Ol Doinyo Lengai are composed mostly of sodium carbonate, with about half as much calcium carbonate and half again as much potassium carbonate, and minor amounts of halides, fluorides, and sulphates. The lavas are extremely fluid, with viscosities only slightly greater than water, and are very cool, with measured temperatures of .
Iron oxide magmas are thought to be the source of the iron ore at Kiruna, Sweden which formed during the Proterozoic. Iron oxide lavas of Pliocene age occur at the El Laco volcanic complex on the Chile-Argentina border. Iron oxide lavas are thought to be the result of immiscible separation of iron oxide magma from a parental magma of calc-alkaline or alkaline composition. When erupted, the temperature of the molten iron oxide magma is about .
Sulfur lava flows up to long and wide occur at Lastarria volcano, Chile. They were formed by the melting of sulfur deposits at temperatures as low as .
Magmatic gases
The concentrations of different gases can vary considerably. Water vapor is typically the most abundant magmatic gas, followed by carbon dioxide and sulfur dioxide. Other principal magmatic gases include hydrogen sulfide, hydrogen chloride, and hydrogen fluoride.
The solubility of magmatic gases in magma depends on pressure, magma composition, and temperature. Magma that is extruded as lava is extremely dry, but magma at depth and under great pressure can contain a dissolved water content in excess of 10%. Water is somewhat less soluble in low-silica magma than in high-silica magma, so that at 1,100 °C and 0.5 GPa, a basaltic magma can dissolve 8% water while a granite pegmatite magma can dissolve 11% water. However, magmas are not necessarily saturated under typical conditions.
Carbon dioxide is much less soluble in magmas than water, and frequently separates into a distinct fluid phase even at great depth. This explains the presence of carbon dioxide fluid inclusions in crystals formed in magmas at great depth.
Rheology
Viscosity is a key melt property in understanding the behaviour of magmas. Whereas temperatures in common silicate lavas span only a few hundred degrees Celsius from felsic to mafic lavas, the viscosity of the same lavas ranges over seven orders of magnitude, from 10^4 cP (10 Pa⋅s) for mafic lava to 10^11 cP (10^8 Pa⋅s) for felsic magmas. The viscosity is mostly determined by composition but is also dependent on temperature. The tendency of felsic lava to be cooler than mafic lava increases the viscosity difference.
The silicon ion is small and highly charged, and so it has a strong tendency to coordinate with four oxygen ions, which form a tetrahedral arrangement around the much smaller silicon ion. This is called a silica tetrahedron. In a magma that is low in silicon, these silica tetrahedra are isolated, but as the silicon content increases, silica tetrahedra begin to partially polymerize, forming chains, sheets, and clumps of silica tetrahedra linked by bridging oxygen ions. These greatly increase the viscosity of the magma.
The tendency towards polymerization is expressed as NBO/T, where NBO is the number of non-bridging oxygen ions and T is the number of network-forming ions. Silicon is the main network-forming ion, but in magmas high in sodium, aluminium also acts as a network former, and ferric iron can act as a network former when other network formers are lacking. Most other metallic ions reduce the tendency to polymerize and are described as network modifiers. In a hypothetical magma formed entirely from melted silica, NBO/T would be 0, while in a hypothetical magma so low in network formers that no polymerization takes place, NBO/T would be 4. Neither extreme is common in nature, but basalt magmas typically have NBO/T between 0.6 and 0.9, andesitic magmas have NBO/T of 0.3 to 0.5, and rhyolitic magmas have NBO/T of 0.02 to 0.2. Water acts as a network modifier, and dissolved water drastically reduces melt viscosity. Carbon dioxide neutralizes network modifiers, so dissolved carbon dioxide increases the viscosity. Higher-temperature melts are less viscous, since more thermal energy is available to break bonds between oxygen and network formers.
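As a hedged illustration of this bookkeeping (the oxide list, the treatment of Si and Al as the only network formers, and the sample compositions are simplifying assumptions rather than data from this article), NBO/T can be estimated from a molar oxide composition by counting total oxygen ions and tetrahedral cations:

```python
# Estimate NBO/T (non-bridging oxygens per tetrahedral cation) from a
# molar oxide composition, using NBO = 2*O_total - 4*T.
OXYGENS = {"SiO2": 2, "Al2O3": 3, "FeO": 1, "MgO": 1, "CaO": 1,
           "Na2O": 1, "K2O": 1}
CATIONS = {"SiO2": 1, "Al2O3": 2, "FeO": 1, "MgO": 1, "CaO": 1,
           "Na2O": 2, "K2O": 2}
NETWORK_FORMERS = {"SiO2", "Al2O3"}  # simplifying assumption: Si and Al only

def nbo_per_t(mol_oxides):
    o_total = sum(n * OXYGENS[ox] for ox, n in mol_oxides.items())
    t = sum(n * CATIONS[ox] for ox, n in mol_oxides.items()
            if ox in NETWORK_FORMERS)
    return (2 * o_total - 4 * t) / t

# Illustrative limiting cases:
print(nbo_per_t({"SiO2": 1.0}))              # pure silica melt      -> 0.0
print(nbo_per_t({"SiO2": 1.0, "MgO": 2.0}))  # isolated tetrahedra   -> 4.0
```

The two test cases reproduce the limiting values mentioned above: 0 for a melt of pure silica and 4 for a melt of isolated tetrahedra with no polymerization.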
Most magmas contain solid crystals of various minerals, fragments of exotic rocks known as xenoliths and fragments of previously solidified magma. The crystal content of most magmas gives them thixotropic and shear thinning properties. In other words, most magmas do not behave like Newtonian fluids, in which the rate of flow is proportional to the shear stress. Instead, a typical magma is a Bingham fluid, which shows considerable resistance to flow until a stress threshold, called the yield stress, is crossed. This results in plug flow of partially crystalline magma. A familiar example of plug flow is toothpaste squeezed out of a toothpaste tube. The toothpaste comes out as a semisolid plug, because shear is concentrated in a thin layer in the toothpaste next to the tube, and only here does the toothpaste behave as a fluid. Thixotropic behavior also hinders crystals from settling out of the magma. Once the crystal content reaches about 60%, the magma ceases to behave like a fluid and begins to behave like a solid. Such a mixture of crystals with melted rock is sometimes described as crystal mush.
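The Bingham behaviour described above is conventionally summarized by the constitutive relation below, in which τ is the applied shear stress, τ0 is the yield stress, μ is the plastic viscosity and γ̇ is the shear rate (the notation is chosen here for illustration, not taken from the article):

```latex
\dot{\gamma} = 0 \quad \text{for } \tau < \tau_0,
\qquad
\tau = \tau_0 + \mu\,\dot{\gamma} \quad \text{for } \tau \ge \tau_0
```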
Magma is typically also viscoelastic, meaning it flows like a liquid under low stresses, but once the applied stress exceeds a critical value, the melt cannot dissipate the stress fast enough through relaxation alone, resulting in transient fracture propagation. Once stresses are reduced below the critical threshold, the melt viscously relaxes once more and heals the fracture.
Temperature
Temperatures of molten lava, which is magma extruded onto the surface, are almost all in the range , but very rare carbonatite magmas may be as cool as , and komatiite magmas may have been as hot as . Magma has occasionally been encountered during drilling in geothermal fields, including drilling in Hawaii that penetrated a dacitic magma body at a depth of . The temperature of this magma was estimated at . Temperatures of deeper magmas must be inferred from theoretical computations and the geothermal gradient.
Most magmas contain some solid crystals suspended in the liquid phase. This indicates that the temperature of the magma lies between the solidus, which is defined as the temperature at which the magma completely solidifies, and the liquidus, defined as the temperature at which the magma is completely liquid. Calculations of solidus temperatures at likely depths suggest that magma generated beneath areas of rifting starts at a temperature of about . Magma generated from mantle plumes may be as hot as . The temperature of magma generated in subduction zones, where water vapor lowers the melting temperature, may be as low as .
Density
Magma densities depend mostly on composition, iron content being the most important parameter.
Magma expands slightly at lower pressure or higher temperature. When magma approaches the surface, its dissolved gases begin to bubble out of the liquid. These bubbles had significantly reduced the density of the magma at depth and helped drive it toward the surface in the first place.
Origins
The temperature within the interior of the earth is described by the geothermal gradient, which is the rate of temperature change with depth. The geothermal gradient is established by the balance between heating through radioactive decay in the Earth's interior and heat loss from the surface of the earth. The geothermal gradient averages about 25 °C/km in the Earth's upper crust, but this varies widely by region, from a low of 5–10 °C/km within oceanic trenches and subduction zones to 30–80 °C/km along mid-ocean ridges or near mantle plumes. The gradient becomes less steep with depth, dropping to just 0.25 to 0.3 °C/km in the mantle, where slow convection efficiently transports heat. The average geothermal gradient is not normally steep enough to bring rocks to their melting point anywhere in the crust or upper mantle, so magma is produced only where the geothermal gradient is unusually steep or the melting point of the rock is unusually low. However, the ascent of magma towards the surface in such settings is the most important process for transporting heat through the crust of the Earth.
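As a rough worked example (the 15 °C surface temperature and the use of a simple linear extrapolation are assumptions made here for illustration), the average upper-crustal gradient gives, at a depth of 10 km,

```latex
T(z) \approx T_{\mathrm{surface}} + \frac{dT}{dz}\,z
     = 15\,^{\circ}\mathrm{C} + \left(25\,^{\circ}\mathrm{C\,km^{-1}}\right)\left(10\ \mathrm{km}\right)
     = 265\,^{\circ}\mathrm{C},
```

which is far below the solidus of any common silicate rock, consistent with the statement that the average gradient alone does not generate magma.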
Rocks may melt in response to a decrease in pressure, to a change in composition (such as an addition of water), to an increase in temperature, or to a combination of these processes. Other mechanisms, such as melting from a meteorite impact, are less important today, but impacts during the accretion of the Earth led to extensive melting, and the outer several hundred kilometers of the early Earth was probably a magma ocean. Impacts of large meteorites in the last few hundred million years have been proposed as one mechanism responsible for the extensive basalt magmatism of several large igneous provinces.
Decompression
Decompression melting occurs because of a decrease in pressure. It is the most important mechanism for producing magma from the upper mantle.
The solidus temperatures of most rocks (the temperatures below which they are completely solid) increase with increasing pressure in the absence of water. Peridotite at depth in the Earth's mantle may be hotter than its solidus temperature at some shallower level. If such rock rises during the convection of solid mantle, it will cool slightly as it expands in an adiabatic process, but the cooling is only about 0.3 °C per kilometer. Experimental studies of appropriate peridotite samples document that the solidus temperatures increase by 3 °C to 4 °C per kilometer. If the rock rises far enough, it will begin to melt. Melt droplets can coalesce into larger volumes and be intruded upwards. This process of melting from the upward movement of solid mantle is critical in the evolution of the Earth.
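A back-of-the-envelope comparison of the two rates quoted above shows why ascent leads to melting: for each kilometre of rise the rock cools by only about 0.3 °C while its solidus falls by 3 °C to 4 °C, so the gap between the rock's temperature and its solidus closes at roughly 3 °C per kilometre. For a parcel assumed to start 100 °C below its local solidus (an illustrative figure, not from the text), the ascent needed before melting begins is approximately

```latex
\Delta z \approx \frac{\Delta T}{\dfrac{dT_{\mathrm{sol}}}{dz} - \dfrac{dT_{\mathrm{ad}}}{dz}}
        = \frac{100\,^{\circ}\mathrm{C}}{(3.5 - 0.3)\,^{\circ}\mathrm{C\,km^{-1}}}
        \approx 31\ \mathrm{km}.
```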
Decompression melting creates the ocean crust at mid-ocean ridges, making it by far the most important source of magma on Earth. It also causes volcanism in intraplate regions, such as Europe, Africa and the Pacific sea floor. Intraplate volcanism is attributed to the rise of mantle plumes or to intraplate extension, with the importance of each mechanism being a topic of continuing research.
Effects of water and carbon dioxide
The change of rock composition most responsible for the creation of magma is the addition of water. Water lowers the solidus temperature of rocks at a given pressure. For example, at a depth of about 100 kilometers, peridotite begins to melt near 800 °C in the presence of excess water, but near 1,500 °C in the absence of water. Water is driven out of the oceanic lithosphere in subduction zones, and it causes melting in the overlying mantle. Hydrous magmas with the composition of basalt or andesite are produced directly and indirectly as results of dehydration during the subduction process. Such magmas, and those derived from them, build up island arcs such as those in the Pacific Ring of Fire. These magmas form rocks of the calc-alkaline series, an important part of the continental crust. With low density and viscosity, hydrous magmas are highly buoyant and will move upwards in Earth's mantle.
The addition of carbon dioxide is relatively a much less important cause of magma formation than the addition of water, but genesis of some silica-undersaturated magmas has been attributed to the dominance of carbon dioxide over water in their mantle source regions. In the presence of carbon dioxide, experiments document that the peridotite solidus temperature decreases by about 200 °C in a narrow pressure interval at pressures corresponding to a depth of about 70 km. At greater depths, carbon dioxide can have more effect: at depths to about 200 km, the temperatures of initial melting of a carbonated peridotite composition were determined to be 450 °C to 600 °C lower than for the same composition with no carbon dioxide. Magmas of rock types such as nephelinite, carbonatite, and kimberlite are among those that may be generated following an influx of carbon dioxide into mantle at depths greater than about 70 km.
Temperature increase
Increase in temperature is the most typical mechanism for formation of magma within continental crust. Such temperature increases can occur because of the upward intrusion of magma from the mantle. Temperatures can also exceed the solidus of a crustal rock in continental crust thickened by compression at a plate boundary. The plate boundary between the Indian and Asian continental masses provides a well-studied example, as the Tibetan Plateau just north of the boundary has crust about 80 kilometers thick, roughly twice the thickness of normal continental crust. Studies of electrical resistivity deduced from magnetotelluric data have detected a layer that appears to contain silicate melt and that stretches for at least 1,000 kilometers within the middle crust along the southern margin of the Tibetan Plateau. Granite and rhyolite are types of igneous rock commonly interpreted as products of the melting of continental crust because of increases in temperature. Temperature increases also may contribute to the melting of lithosphere dragged down in a subduction zone.
The melting process
When rocks melt, they do so over a range of temperature, because most rocks are made of several minerals, which all have different melting points. The temperature at which the first melt appears (the solidus) is lower than the melting temperature of any one of the pure minerals. This is similar to the lowering of the melting point of ice when it is mixed with salt. The first melt is called the eutectic and has a composition that depends on the combination of minerals present.
For example, a mixture of anorthite and diopside, which are two of the predominant minerals in basalt, begins to melt at about 1274 °C. This is well below the melting temperatures of 1392 °C for pure diopside and 1553 °C for pure anorthite. The resulting melt is composed of about 43 wt% anorthite. As additional heat is added to the rock, the temperature remains at 1274 °C until either the anorthite or the diopside is fully melted. The temperature then rises as the remaining mineral continues to melt, which shifts the melt composition away from the eutectic. For example, if the content of anorthite is greater than 43%, the entire supply of diopside will melt at 1274 °C, along with enough of the anorthite to keep the melt at the eutectic composition. Further heating causes the temperature to rise slowly as the remaining anorthite gradually melts and the melt becomes increasingly rich in anorthite. If the mixture has only a slight excess of anorthite, this will melt before the temperature rises much above 1274 °C. If the mixture is almost all anorthite, the temperature will reach nearly the melting point of pure anorthite before all the anorthite is melted. If the anorthite content of the mixture is less than 43%, then all the anorthite will melt at the eutectic temperature, along with part of the diopside, and the remaining diopside will then gradually melt as the temperature continues to rise.
Because of eutectic melting, the composition of the melt can be quite different from that of the source rock. For example, a mixture of 10% anorthite with diopside could experience about 23% partial melting before the melt deviated from the eutectic, which has a composition of about 43% anorthite. This effect of partial melting is reflected in the compositions of different magmas. A low degree of partial melting of the upper mantle (2% to 4%) can produce highly alkaline magmas such as melilitites, while a greater degree of partial melting (8% to 11%) can produce alkali olivine basalt. Oceanic magmas likely result from partial melting of 3% to 15% of the source rock. Some calc-alkaline granitoids may be produced by a high degree of partial melting, as much as 15% to 30%.
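The arithmetic behind these figures follows from a simple lever-rule argument in the binary anorthite–diopside system: melting at the eutectic consumes the two minerals in the eutectic proportions until the less abundant one (relative to those proportions) is exhausted. The following Python sketch illustrates that reasoning using the approximate eutectic composition quoted above (43 wt% anorthite); the numbers are illustrative, not experimental data.

```python
# Lever-rule sketch for eutectic melting in the binary anorthite-diopside system.
# Assumes the eutectic composition quoted above (~43 wt% anorthite); values are
# illustrative, not measured data.

EUTECTIC_AN = 0.43  # weight fraction of anorthite in the eutectic melt

def melt_fraction_at_eutectic(bulk_an: float) -> float:
    """Fraction of the rock that can melt while the melt stays at the eutectic.

    Melting at the eutectic consumes anorthite and diopside in the eutectic
    ratio, so it continues until the mineral that is scarce relative to that
    ratio is used up.
    """
    if bulk_an <= EUTECTIC_AN:
        # Anorthite runs out first: all of it ends up in the eutectic melt.
        return bulk_an / EUTECTIC_AN
    # Diopside runs out first.
    return (1.0 - bulk_an) / (1.0 - EUTECTIC_AN)

# A rock with only 10 wt% anorthite melts about 23% before leaving the eutectic.
print(f"{melt_fraction_at_eutectic(0.10):.0%}")   # ~23%
# A 50/50 mixture can melt much further at the eutectic temperature.
print(f"{melt_fraction_at_eutectic(0.50):.0%}")   # ~88%
```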
High-magnesium magmas, such as komatiite and picrite, may also be the products of a high degree of partial melting of mantle rock.
Certain chemical elements, called incompatible elements, have a combination of ionic radius and ionic charge that is unlike that of the more abundant elements in the source rock. The ions of these elements fit rather poorly in the structure of the minerals making up the source rock, and readily leave the solid minerals to become highly concentrated in melts produced by a low degree of partial melting. Incompatible elements commonly include potassium, barium, caesium, and rubidium, which are large and weakly charged (the large-ion lithophile elements, or LILEs), as well as elements whose ions carry a high charge (the high-field-strength elements, or HFSEs), which include such elements as zirconium, niobium, hafnium, tantalum, the rare-earth elements, and the actinides. Potassium can become so enriched in melt produced by a very low degree of partial melting that, when the magma subsequently cools and solidifies, it forms unusual potassic rock such as lamprophyre, lamproite, or kimberlite.
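The dependence of this enrichment on the degree of melting is commonly expressed with the standard batch-melting relation of trace-element geochemistry, Cl/C0 = 1/(D + F(1 − D)), where F is the melt fraction and D the bulk solid/melt partition coefficient. The sketch below is a generic illustration of that relation, not a calculation taken from this article; the partition coefficients are placeholder values.

```python
# Batch (equilibrium) melting relation for a trace element:
#   C_liquid / C_source = 1 / (D + F * (1 - D))
# where F is the melt fraction and D the bulk solid/melt partition coefficient.
# The D values below are placeholders chosen only to contrast incompatible
# (D << 1) and compatible (D > 1) behaviour.

def batch_melt_enrichment(F: float, D: float) -> float:
    """Concentration in the melt relative to the source rock."""
    return 1.0 / (D + F * (1.0 - D))

for D in (0.001, 0.1, 2.0):
    for F in (0.01, 0.05, 0.20):
        print(f"D={D:<5} F={F:.2f}  C_l/C_0 = {batch_melt_enrichment(F, D):6.1f}")

# An element with D ~ 0.001 is enriched roughly 90x in a 1% melt but only
# about 5x in a 20% melt, which is why very low degrees of partial melting
# produce strongly potassic, incompatible-element-rich magmas.
```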
When enough rock is melted, the small globules of melt (generally occurring between mineral grains) link up and soften the rock. Under pressure within the earth, as little as a fraction of a percent of partial melting may be sufficient to cause melt to be squeezed from its source. Melt rapidly separates from its source rock once the degree of partial melting exceeds 30%. However, usually much less than 30% of a magma source rock is melted before the heat supply is exhausted.
Pegmatite may be produced by low degrees of partial melting of the crust. Some granite-composition magmas are eutectic (or cotectic) melts, and they may be produced by low to high degrees of partial melting of the crust, as well as by fractional crystallization.
Evolution of magmas
Most magmas are fully melted only for small parts of their histories. More typically, they are mixes of melt and crystals, and sometimes also of gas bubbles. Melt, crystals, and bubbles usually have different densities, and so they can separate as magmas evolve.
As magma cools, minerals typically crystallize from the melt at different temperatures. This resembles the original melting process in reverse. However, because the melt has usually separated from its original source rock and moved to a shallower depth, the reverse process of crystallization is not precisely identical. For example, if a melt was 50% each of diopside and anorthite, then anorthite would begin crystallizing from the melt at a temperature somewhat higher than the eutectic temperature of 1274 °C. This shifts the remaining melt towards its eutectic composition of 43% anorthite. The eutectic is reached at 1274 °C, the temperature at which diopside and anorthite begin crystallizing together. If the melt was 90% diopside, the diopside would begin crystallizing first until the eutectic was reached.
If the crystals remained suspended in the melt, the crystallization process would not change the overall composition of the melt plus solid minerals. This situation is described as equilibrium crystallization. However, in a series of experiments culminating in his 1915 paper, Crystallization-differentiation in silicate liquids, Norman L. Bowen demonstrated that crystals of olivine and diopside that crystallized out of a cooling melt of forsterite, diopside, and silica would sink through the melt on geologically relevant time scales. Geologists subsequently found considerable field evidence of such fractional crystallization.
When crystals separate from a magma, the residual magma will differ in composition from the parent magma. For instance, a magma of gabbroic composition can produce a residual melt of granitic composition if early formed crystals are separated from the magma. Gabbro may have a liquidus temperature near 1,200 °C, and the derivative granite-composition melt may have a liquidus temperature as low as about 700 °C. Incompatible elements are concentrated in the last residues of magma during fractional crystallization and in the first melts produced during partial melting: either process can form the magma that crystallizes to pegmatite, a rock type commonly enriched in incompatible elements. Bowen's reaction series is important for understanding the idealized sequence of fractional crystallization of a magma.
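The progressive concentration of incompatible elements in the residual melt during fractional crystallization is usually described with the Rayleigh fractionation law, Cl = C0·F^(D−1), where F is the fraction of melt remaining and D the bulk partition coefficient. The following sketch illustrates that relation with placeholder partition coefficients; it is not drawn from this article.

```python
# Rayleigh fractional crystallization of a trace element:
#   C_liquid = C_0 * F**(D - 1)
# F is the fraction of melt remaining, D the bulk partition coefficient.
# Partition coefficients below are placeholder values for illustration only.

def rayleigh_residual_melt(C0: float, F: float, D: float) -> float:
    """Trace-element concentration in the residual melt after crystallization."""
    return C0 * F ** (D - 1.0)

C0 = 1.0  # normalized starting concentration
for F in (0.5, 0.1, 0.01):
    incompatible = rayleigh_residual_melt(C0, F, D=0.01)
    compatible = rayleigh_residual_melt(C0, F, D=3.0)
    print(f"melt remaining {F:>4.0%}:  incompatible x{incompatible:7.1f}, "
          f"compatible x{compatible:.4f}")

# With only 1% of the melt left, an element with D ~ 0.01 is about 95x
# enriched, which is consistent with late residual liquids crystallizing
# incompatible-element-rich rocks such as pegmatite.
```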
Magma composition can be determined by processes other than partial melting and fractional crystallization. For instance, magmas commonly interact with rocks they intrude, both by melting those rocks and by reacting with them. Assimilation near the roof of a magma chamber and fractional crystallization near its base can even take place simultaneously. Magmas of different compositions can mix with one another. In rare cases, melts can separate into two immiscible melts of contrasting compositions.
Primary magmas
When rock melts, the liquid is a primary magma. Primary magmas have not undergone any differentiation and represent the starting composition of a magma. In practice, it is difficult to unambiguously identify primary magmas, though it has been suggested that boninite is a variety of andesite crystallized from a primary magma. The Great Dyke of Zimbabwe has also been interpreted as rock crystallized from a primary magma. The interpretation of leucosomes of migmatites as primary magmas is contradicted by zircon data, which suggests leucosomes are a residue (a cumulate rock) left by extraction of a primary magma.
Parental magma
When it is impossible to find the primitive or primary magma composition, it is often useful to attempt to identify a parental magma. A parental magma is a magma composition from which the observed range of magma chemistries has been derived by the processes of igneous differentiation. It need not be a primitive melt.
For instance, a series of basalt flows may be assumed to be related to one another. A composition from which they could reasonably be produced by fractional crystallization is termed a parental magma, and fractional crystallization models can be constructed to test the hypothesis that they share a common parental magma.
Migration and solidification
Magma develops within the mantle or crust where the temperature and pressure conditions favor the molten state. After its formation, magma buoyantly rises toward the Earth's surface because it is less dense than its source rock. As it migrates through the crust, magma may collect and reside in magma chambers (though recent work suggests that magma may be stored in trans-crustal crystal-rich mush zones rather than in dominantly liquid magma chambers). Magma can remain in a chamber until it cools and crystallizes to form intrusive rock, erupts as a volcano, or moves into another magma chamber.
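The buoyancy that drives this ascent can be illustrated with a simple hydrostatic estimate: the overpressure at the top of a connected magma column scales with the density contrast and the column height, roughly Δρ·g·h. The densities and column height in the sketch below are assumed, order-of-magnitude values, not measurements.

```python
# Order-of-magnitude estimate of the buoyant overpressure at the top of a
# connected magma column. Densities and column height are assumed,
# illustrative values only.

G = 9.81            # gravitational acceleration, m/s^2
RHO_ROCK = 2900.0   # density of surrounding crustal rock, kg/m^3 (assumed)
RHO_MAGMA = 2600.0  # density of basaltic magma, kg/m^3 (assumed)

def buoyant_overpressure(column_height_m: float) -> float:
    """Excess pressure (Pa) at the top of a magma column of the given height."""
    return (RHO_ROCK - RHO_MAGMA) * G * column_height_m

# A 10 km column with a 300 kg/m^3 density contrast gives roughly 29 MPa of
# overpressure with these assumed numbers.
print(f"{buoyant_overpressure(10_000) / 1e6:.0f} MPa")
```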
Plutonism
When magma cools it begins to form solid mineral phases. Some of these settle at the bottom of the magma chamber, forming cumulates that may develop into mafic layered intrusions. Magma that cools slowly within a magma chamber usually ends up forming bodies of plutonic rocks such as gabbro, diorite and granite, depending upon the composition of the magma. Alternatively, if the magma is erupted it forms volcanic rocks such as basalt, andesite and rhyolite (the extrusive equivalents of gabbro, diorite and granite, respectively).
Volcanism
Magma that is extruded onto the surface during a volcanic eruption is called lava. Lava cools and solidifies relatively quickly compared to underground bodies of magma. This fast cooling does not allow crystals to grow large, and a part of the melt does not crystallize at all, becoming glass. Rocks largely composed of volcanic glass include obsidian, scoria and pumice.
Before and during volcanic eruptions, volatiles such as CO2 and H2O partially leave the melt through a process known as exsolution. As water is lost, the magma becomes increasingly viscous. If massive exsolution occurs as magma rises during a volcanic eruption, the resulting eruption is usually explosive.
Use in energy production
The Iceland Deep Drilling Project, while drilling several 5,000 m holes in an attempt to harness the heat in the volcanic bedrock below the surface of Iceland, struck a pocket of magma at 2,100 m in 2009. Because this was only the third time in recorded history that magma had been reached, IDDP decided to invest in the hole, naming it IDDP-1.
A cemented steel casing was constructed in the hole, with a perforation at the bottom close to the magma. The high temperature and pressure of the steam heated by the magma were used to generate 36 MW of power, making IDDP-1 the world's first magma-enhanced geothermal system.