Low surface brightness galaxy
https://en.wikipedia.org/wiki/Low%20surface%20brightness%20galaxy

A low-surface-brightness galaxy, or LSB galaxy, is a diffuse galaxy with a surface brightness that, when viewed from Earth, is at least one magnitude lower than that of the ambient night sky.
Most LSBs are dwarf galaxies, and most of their baryonic matter is in the form of neutral gaseous hydrogen rather than stars. They appear to have over 95% of their mass as non-baryonic dark matter. There appears to be little supernova (SN) activity in these galaxies, although the LSB galaxy IC 217 hosted SN 2014cl.
Rotation curve measurements indicate an extremely high mass-to-light ratio, meaning that stars and luminous gas contribute very little to the overall mass balance of an LSB. The centers of LSBs show no large stellar overdensities, unlike, e.g., the bulges of normal spiral galaxies. They therefore seem to be dark-matter-dominated even at their centers, which makes them excellent laboratories for the study of dark matter.
Compared with high-surface-brightness galaxies, LSBs are mainly isolated field galaxies, found in regions devoid of other galaxies. In their past they had fewer tidal interactions or mergers with other galaxies, events that could have triggered enhanced star formation. This may explain their small stellar content.
LSB galaxies were theorized to exist in 1976 by Mike Disney.
Giant low-surface-brightness galaxies
Giant low surface brightness (GLSB) galaxies are among the most massive known spiral galaxies in the Universe. They have very faint stellar disks that are very rich in neutral hydrogen but low in star formation and thus low in surface brightness. Such galaxies often have bright bulges that can host low luminosity active galactic nuclei. GLSB galaxies are usually isolated systems that rarely interact with other galaxies. The first LSB galaxy verified to exist was Malin 1, discovered in 1986. As such, it was also the first giant LSB galaxy identified. At the time of its discovery, it was the largest spiral galaxy known (by scale-length measurement).
UGC 1382 was previously thought to be an elliptical galaxy, but low-brightness spiral arms were later detected. UGC 1382 is much closer to Earth than Malin 1.
Examples
Andromeda V
Pegasus Dwarf Spheroidal Galaxy
IC 10
NGC 45
Eridanus II
Malin 1
Malin 2
Phoenix Dwarf
Sagittarius Dwarf Irregular Galaxy (SagDIG)
Sextans A
Sextans B
Wolf–Lundmark–Melotte galaxy (WLM)
UGC 477
Slimehead
https://en.wikipedia.org/wiki/Slimehead

Slimeheads, also known as roughies and redfish, are mostly small, exceptionally long-lived, deep-sea beryciform fish constituting the family Trachichthyidae (derived from the Greek trachys – "rough" – and ichthys – "fish"). Found in temperate to tropical waters of the Atlantic, Indian, and Pacific Oceans, the family comprises about 50 species in eight genera. Slimeheads are named for the network of muciferous canals riddling their heads.
The larger species – namely the orange roughy (Hoplostethus atlanticus) and Darwin's slimehead (Gephyroberyx darwinii) – are the target of extensive commercial fisheries off Australia and New Zealand. Many populations have already crashed, while others are showing signs of severe overfishing; due to slimeheads' slow rate of reproduction, the future viability of these fisheries has been put into question. Orange roughies are food fish and are marketed fresh and frozen, whereas Darwin's slimeheads are used for their oil and made into fishmeal.
Description
With a typically deep-bodied, laterally compressed form, slimeheads are conspicuous for their large, titular heads, large eyes, and (in some species) bright colours. The head is especially notable for its network of mucus-filled canals, which constitute the cranial portion of the lateral line system. Similar cranial networks are found in the beryciform fangtooths (Anoplogastridae) and the stephanoberyciform ridgeheads (Melamphaidae). The trachichthyid head is typically blunt with a large and oblique mouth; the snout may project slightly in front of the upper jaw. A short, sharp spine is present on the preoperculum and/or operculum and post-temporal bone, the latter spine directed posteriorly. Species of the genera Optivus, Paratrachichthys, and Sorosichthys differ in form from other members of the family; their bodies are more elongated.
All fins are spinous (excluding the low-slung pectoral fins) and rounded: the single dorsal fin has three to eight spines and 10–19 soft rays; the pelvic fins are thoracic with one spine and six or seven soft rays; the anal fin has two or three spines and eight to 12 soft rays; and even the forked caudal fin possesses four to seven procurrent spines on each lobe. The scales of slimeheads are ctenoid, but vary interspecifically; they range from deciduous to adherent. In most species, the ventral scales between the pelvic fin and anus have been modified into a median ridge of large, bony scutes. The lateral line is uninterrupted and fairly obvious; its pores are largely obscured by the scales' well-developed spinules or ctenii.
Slimeheads range from a bright brick red with identically shaded fins, to dusky grey or silver, to black with dusky grey to transparent fins. The reds quickly fade to orange following death. Some species (e.g., Aulotrachichthys latus) are reported to be bioluminescent, probably by symbiotic bacteria as is found in other beryciform fish. The largest species is the orange roughy at a maximum standard length (SL; a measurement excluding the caudal fin) of 75 cm and a weight of 7 kg; however, most slimeheads are well under 30 cm SL.
Life history
Most slimeheads are sluggish and demersal, spending most of their time near the bottom of continental slopes. Cold, moderate benthopelagic depths (about 100 – 1,500 m) with usually hard, rocky substrates are frequented. The most elongate species are typically the most active and frequent the shallowest depths; for example, the slender roughy (Optivus elongatus) is found in photic coastal waters and is associated with rocky reefs. This species is nocturnal and hides in crevices during the day. Trachichthys australis is of the same habitus, but is rather deep-bodied and resembles a soldierfish. Both young and adult slimeheads feed primarily upon zooplankton such as mysid shrimp, amphipods, euphausiids, prawns, and other crustaceans, as well as larval fish. Slimeheads store energy as extracellular wax esters, which aid the fish in maintaining neutral buoyancy.
Slimehead behaviour is not well studied, but some species sporadically form dense aggregations. In the case of the orange roughy, these aggregations (possibly segregated according to sex) may reach a population density of 2.5/m². The aggregations form in and around geologic structures, such as undersea canyons and seamounts, likely where water movement and mixing is high, ensuring dense concentrations of prey items. The aggregations do not necessarily form for the purpose of spawning; it is thought that the fish cycle through metabolic phases (feeding and resting) and seek areas with ideal hydrologic conditions to congregate during their inactive and active phases. Observations of orange roughy aggregations during submersible dives have also shown the fish lose almost all pigmentation while inactive, during which time they are very approachable. The orange roughy's metabolic phases are thought to be related to seasonal variations in the fish's prey concentrations, with the inactive phase being a means to conserve energy during lean periods.
Slimeheads are pelagic spawners; that is, spawning aggregations are formed and the fish release eggs and sperm en masse directly into the water. Evidence of oceanodromy (seasonal migration) is seen in some species. The fertilized eggs (and later the larvae) are planktonic, floating with the currents until the larvae develop the strength to determine their own way. Only the economically important species have had their reproduction studied in any detail; the larvae and juveniles of Darwin's slimehead are pelagic and frequent rather shallow waters near the coast, whereas in orange roughy, the early life stages are apparently confined to deeper water (around 200 m). Slimeheads are very slow-growing and long-lived fish; the orange roughy ranks among the longest-lived animals known, with a maximum reported age of 149 years (however, this age is disputed). Predators of slimeheads are not well known, but include large deep-roving sharks, cutthroat eels, merluccid hakes, and snake mackerels.
Portable media player
https://en.wikipedia.org/wiki/Portable%20media%20player

A portable media player (PMP) or digital audio player (DAP) is a portable consumer electronics device capable of storing and playing digital media such as audio, images, and video files. The data is typically stored on a compact disc (CD), Digital Versatile Disc (DVD), Blu-ray Disc (BD), flash memory, a microdrive, SD cards, or a hard disk drive; most earlier PMPs used physical media, but modern players mostly use flash memory. In contrast, analogue portable audio players play music from analogue media such as cassette tapes or vinyl records.
Digital audio players (DAPs) were often marketed as MP3 players even if they also supported other file formats and media types. The PMP term was introduced later for devices that had additional capabilities such as video playback. Generally speaking, they are portable, employing internal or replaceable batteries, and equipped with a 3.5 mm headphone jack that can be used with headphones or to connect to a boombox, shelf stereo system, car audio, or home stereo, either wired or via a wireless connection such as Bluetooth. Some players also include radio tuners, voice recording and other features.
DAPs appeared in the late 1990s following the creation of the MP3 codec in Germany. MP3-playing devices were mostly pioneered by South Korean startups, which by 2002 controlled the majority of global sales. However, the industry would eventually be defined by the popular Apple iPod. In 2006, 20% of Americans owned a PMP, a figure strongly driven by the young; more than half (54%) of American teens owned one, as did 30% of young adults aged 18 to 34. In 2007, 210 million PMPs were sold worldwide, worth US$19.5 billion. In 2008, video-enabled players overtook audio-only players. Increasing sales of smartphones and tablet computers have led to a decline in sales of PMPs, and most devices have been phased out, such as the iPod Touch on May 10, 2022, though certain flagship devices like the Sony Walkman are still in production. Portable DVD and BD players are still manufactured.
Types
Digital audio players are generally categorised by storage media:
Flash-based players: These are non-mechanical solid state devices that hold digital audio files on internal flash memory, removable flash memory cards or a USB flash drive. Due to technological advances in flash memory, these originally low-capacity storage devices are now available commercially ranging up to 128 GB. Because they are solid state and do not have moving parts they require less battery power, will not skip during playback, and may be more resilient to hazards such as mechanical shock or fragmentation than hard disk drive-based players.
Hard-disk-drive-based players: Devices that read digital audio files from a hard disk drive. These players have higher capacities, ranging up to 500 GB. At typical encoding rates, this means that tens of thousands of songs can be stored on one player. The disadvantage of these units is that a hard drive consumes more power, is larger and heavier, and is inherently more fragile than solid-state storage.
MP3 CD/DVD players: Portable CD players that can decode and play MP3 audio files stored on CDs. Such players were typically a less expensive alternative to either hard-drive or flash-based players when they were first released, and the blank CD-R media they use is inexpensive. These devices can also play standard audio CDs. A disadvantage is that, due to their low rotational disk speed, they are even more susceptible to skipping or other misreads when subjected to acceleration (shaking) during playback. Since a CD can typically hold only around 700 megabytes of data, a large library can require multiple discs (a rough capacity estimate appears after this list). However, some higher-end units can also read and play back files stored on larger-capacity DVDs, and some can play video content, such as movies. An additional consideration is the relatively large width of these devices, since they must be able to fit a CD.
Networked audio players: Players that connect via (Wi-Fi) network to receive and play audio. These types of units typically do not have any local storage of their own and must rely on a server, typically a personal computer also on the same network, to provide the audio files for playback.
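The capacity estimate referenced above is simple arithmetic. A minimal sketch, assuming a 700 MB disc, a typical 128 kbit/s MP3 rate, and four-minute songs (decimal megabytes, audio only):

```python
# Rough estimate: how much MP3 audio fits on a 700 MB data CD.
CD_MB = 700                  # nominal data CD capacity
BITRATE_KBPS = 128           # a typical MP3 encoding rate
mb_per_min = BITRATE_KBPS * 60 / 8 / 1000    # kilobits/s -> megabytes/min
minutes = CD_MB / mb_per_min
print(f"~{minutes:.0f} minutes (~{minutes / 60:.1f} hours, "
      f"~{minutes / 4:.0f} four-minute songs)")
# -> ~729 minutes (~12.2 hours, ~182 four-minute songs)
```

By the same arithmetic, a library of a few thousand songs spans ten or more discs, which is the inconvenience noted above.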
Some MP3 players can encode to MP3 or other digital audio formats directly from a line-level audio signal (radio, voice, etc.). Devices such as CD players can be connected to the MP3 player (using the USB port) in order to play music directly from the memory of the player without the use of a computer.
Modular MP3 keydrive players are composed of two detachable parts: the head (or reader/writer) and the body (the memory). Each part can be obtained and upgraded independently; for example, the body can be swapped to add more memory.
History
Today, every smartphone also serves as a portable media player; however, prior to the rise of smartphones in the 2007–2012 time frame, a variety of handheld players were available to store and play music. The immediate predecessor to the portable media player was the portable CD player and, prior to that, the personal stereo. In particular, Sony's Walkman and Discman are the ancestors of digital audio players such as the Apple iPod.
There are several types of MP3 players:
Devices that play CDs. Often, they can be used to play both audio CDs and homemade data CDs containing MP3 or other digital audio files.
Pocket devices. These are solid-state devices that hold digital audio files on internal or external media, such as memory cards. These are generally low-storage devices, typically ranging from 128 MB to 1 GB, which can often be extended with additional memory. As they are solid state and do not have moving parts, they can be very resilient. Such players may be integrated into USB flash drives.
Devices that read digital audio files from a hard drive. These players have higher capacities, ranging from 1.5 to 100 GB, depending on the hard drive technology. At typical encoding rates, this means that thousands of songs—perhaps an entire music collection—can be stored in one MP3 player. Apple's popular iPod player is the best-known example.
Early digital audio players
British scientist Kane Kramer invented the first digital audio player, which he called the IXI. His 1979 prototypes were capable of up to one hour of audio playback but did not enter commercial production. His UK patent application was not filed until 1981, and the patent was issued in 1985 in the UK and 1987 in the US. However, in 1988 Kramer's failure to raise the £60,000 required to renew the patent meant that it entered the public domain. Apple Inc. hired Kramer as a consultant and presented his work as an example of prior art in the field of digital audio players during its litigation with Burst.com almost two decades later. In 2008, Apple acknowledged Kramer as the inventor of the digital audio player.
The Listen Up Player was released in 1996 by Audio Highway, an American company led by Nathan Schulhof. It could store up to an hour of music, but despite getting an award at CES 1997 only 25 of the devices were made. That same year AT&T developed the FlashPAC digital audio player which initially used AT&T's Perceptual Audio Coder (PAC) for music compression, but in 1997 switched to AAC. At about the same time AT&T also developed an internal Web-based music streaming service that had the ability to download music to FlashPAC. AAC and such music downloading services later formed the foundation for the Apple iPod and iTunes.
The first production-volume portable digital audio player was the Audible Player (also known as the MobilePlayer, or Digital Words To Go) from Audible.com, available for sale in January 1998 for $200. It only supported playback of digital audio in Audible's proprietary, low-bitrate format, which was developed for spoken-word recordings. Capacity was limited to 4 MB of internal flash memory, or about 2 hours of play, using a custom rechargeable battery pack. The unit had no display and rudimentary controls.
The MP3 standard
MP3 was introduced as an audio coding standard in 1992. It was based on several audio data compression techniques, including the modified discrete cosine transform (MDCT), the FFT and psychoacoustic methods. MP3 became a popular standard format, and as a result most subsequent digital audio players supported it and hence were often called MP3 players.
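For reference, the MDCT at the heart of MP3 maps a window of $2N$ overlapping time samples $x_0, \ldots, x_{2N-1}$ to $N$ frequency coefficients:

$$X_k = \sum_{n=0}^{2N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \frac{1}{2} + \frac{N}{2}\right)\left(k + \frac{1}{2}\right)\right], \qquad k = 0, \ldots, N-1,$$

and the psychoacoustic model then decides how coarsely each coefficient can be quantised before the loss becomes audible.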
While popularly being called MP3 players at the time, most players could play more than just the MP3 file format. Players also sometimes supported Windows Media Audio (WMA), Advanced Audio Coding (AAC), Vorbis, FLAC, Speex and Ogg.
First portable MP3 player
The first portable MP3 player was launched in 1997 by SaeHan Information Systems, which sold its MPMan F10 player in South Korea in spring 1998. In mid-1998, the South Korean company licensed the players for North American distribution to Eiger Labs, which rebranded them as the EigerMan F10 and F20. The flash-based players were available in 32 MB or 64 MB (6 or 12 songs) storage capacities and had an LCD screen showing the song currently playing.
The first car audio hard drive-based MP3 player was also released in 1997 by MP32Go and was called the MP32Go Player. It consisted of a 3 GB IBM 2.5" hard drive that was housed in a trunk-mounted enclosure connected to the car's radio system. It retailed for $599 and was a commercial failure.
The Rio PMP300 from Diamond Multimedia was introduced in September 1998, a few months after the MPMan, and also featured a 32 MB storage capacity. It was a success during the holiday season, with sales exceeding expectations. Interest and investment in digital music were subsequently spurred by it. The RIAA soon filed a lawsuit alleging that the device abetted illegal copying of music, but Diamond won a legal victory on the shoulders of Sony Corp. of America v. Universal City Studios, Inc., and MP3 players were ruled legal devices. Because of the player's notoriety as the target of a major lawsuit, the Rio is often erroneously assumed to be the first digital audio player.
Eiger Labs and Diamond went on to establish a new segment in the portable audio player market, and the following year saw several new manufacturers enter it. The PMP300 was the start of the Rio line of players. Notably, major technology companies did not catch on to the new technology; instead, young startups came to dominate the early era of MP3 players.
Other early MP3 portables
Other early MP3 portables included the Creative Labs Nomad and the RCA Lyra. These portables were small and light, but had only enough memory to hold around 7 to 20 songs at normal 128 kbit/s compression rates. They also used slower parallel port connections to transfer files from PC to player, necessary as most PCs then used the Windows 95 and NT operating systems, which did not have native support for USB connections.
Emergence of hard-drive-based players
In 1999 the first hard-drive-based DAP appeared: the Personal Jukebox (PJB-100), designed by Compaq and released by Hango Electronics Co. Built around a 2.5" laptop drive with 4.8 GB of storage, it held about 1,200 songs and pioneered what would be called the jukebox segment of digital music portables. This segment eventually became the dominant type of digital music player.
Also at the end of 1999 the first in-dash MP3 player appeared. The Empeg Car offered players in several capacities ranging from 5 to 28 GB. The unit did not catch on and was discontinued in the fall of 2001.
Rise of South Korean companies
For the next couple of years, the market saw offerings from South Korean companies, namely the startups iRiver (a brand of Reigncom), Mpio (a brand of DigitalWay) and Cowon. At their peak, these Korean makers held as much as 40% of the world market in MP3 players. After 2004, however, these manufacturers lost ground as they failed to compete with newer iPods. By 2006 they had also been overtaken by the South Korean giant Samsung Electronics.
Sony's entry in the market
Sony entered the digital audio player market in 1999 with the Vaio Music Clip and Memory Stick Walkman; however, these were technically not MP3 players, as they did not support the MP3 format, instead using Sony's own ATRAC format and WMA. The company's first MP3-supporting Walkman player did not arrive until 2004. Over the years, various hard-drive-based and flash-based DAPs and PMPs have been released under the Walkman range.
Samsung's YEPP line and Creative's NOMAD Jukebox
The Samsung YEPP line was first released in 1999 with the aim of making the smallest music players on the market. In 2000, Creative released the 6 GB hard-drive-based Creative NOMAD Jukebox. The name borrowed the jukebox metaphor popularised by Remote Solution, also used by Archos. Later players in the Creative NOMAD range used microdrives rather than laptop drives. In October 2000, South Korean software company Cowon Systems released its first MP3 player, the CW100, under the brand name iAUDIO. In December 2000, some months after Creative's NOMAD Jukebox, Archos released its Jukebox 6000 with a 6 GB hard drive. Philips also released a player called the Rush.
Growth of market
On 23 October 2001, Apple unveiled the first generation iPod, a 5 GB hard drive based DAP with a 1.8" hard drive and a 2" monochrome display. With the development of a spartan user interface and a smaller form factor, the iPod was initially popular within the Macintosh community. In July 2002, Apple introduced the second generation update to the iPod, which was compatible with Windows computers through Musicmatch Jukebox. iPods quickly became the most popular DAP product and led the fast growth of this market during the early and mid 2000s.
In 2002, Archos released the first PMP, the Archos Jukebox Multimedia, with a small 1.5" colour screen. The next year, Archos released another multimedia jukebox, the AV300, with a 3.8" screen and a 20 GB hard drive. In the same year, Toshiba released the first Gigabeat. In 2003, Dell launched a line of portable digital music players called the Dell DJ; they were discontinued by 2006.
The name MP4 player was a marketing term for inexpensive portable media players, usually from little-known or generic device manufacturers. The name itself is a misnomer, since most MP4 players through 2007 were incompatible with the MPEG-4 Part 14 or the .mp4 container format. Instead, the term refers to their ability to play more file types than just MP3. In this sense, in some markets like Brazil, any new function added to a given media player is followed by an increase in the number, for example an MP5 or MP12 Player, despite there being no such corresponding MPEG standards.
iRiver of South Korea originally made portable CD players and then started making digital audio players and portable media players in 2002. Creative also introduced the ZEN line. Both of these attained high popularity in some regions.
In 2004, Microsoft attempted to take advantage of the growing PMP market by launching the Portable Media Center (PMC) platform. It was introduced at the 2004 Consumer Electronics Show with the announcement of the Zen Portable Media Center, which was co-developed by Creative. The Microsoft Zune series would later be based on the Gigabeat S, one of the PMC-implemented players.
In May 2005, flash memory maker SanDisk entered the PMP market with the Sansa line of players, starting with the e100 series, and then following up with the m200 series, and c100 series.
In 2007, Apple introduced the iPod Touch, the first iPod with a multi-touch screen. Some similar products existed before such as the iRiver Clix in 2006. In South Korea, sales of MP3 players peaked in 2006, but started declining afterwards. This was driven partly by the launch of mobile television services (DMB), which along with increased demand of movies on the go led to a transition away from music-only players to PMPs. By 2008, more video-enabled PMPs were sold than audio-only players.
Brands and popularity throughout the world
By the mid-2000s and the years after, Apple's iPod was the best-selling DAP or PMP by a significant margin, with one out of four sold worldwide being an iPod. It was especially dominant in the United States, where it had over 70% of sales at different points in time, its nearest competitor in 2006 being SanDisk. Apple also led in Japan over homegrown makers Sony and Panasonic during this time, although the gap between Apple and Sony had closed by about 2010. In South Korea, the market was led by local brands iRiver, Samsung and Cowon as of 2005.
European buying patterns differed; while Apple was in a particularly strong position in the United Kingdom, continental Western Europe generally preferred cheaper, often Chinese-made players rebranded under local names such as Grundig. Meanwhile, in Eastern Europe, including Russia, higher-priced players with improved design or functionality were preferred instead. In South Korea, makers like iRiver and Samsung were particularly popular, as were OEM models sold under local brands. Creative was the top-selling maker in its home country of Singapore. In China, local brands Newman, DEC and Aigo were noted as the top vendors as of 2006.
PMPs in other categories
The Samsung SPH-M2100, the first mobile phone with a built-in MP3 player, was produced in South Korea in August 1999. The Samsung SPH-M100 (UpRoar), launched in 2000, was the first mobile phone with MP3 music capabilities in the US market. The innovation spread rapidly across the globe, and by 2005 more than half of all music sold in South Korea was sold directly to mobile phones, and all major handset makers in the world had released MP3-playing phones. By 2006, more MP3-playing mobile phones were sold than all stand-alone MP3 players put together. The rapid rise of the media player in phones was quoted by Apple as a primary reason for developing the iPhone. In 2007, the number of phones that could play media was over 1 billion. Some companies have created music-centric sub-brands for mobile phones, for example the former Sony Ericsson's Walkman range or Nokia's XpressMusic range, which have extra emphasis on music playback and typically have features such as dedicated music buttons.
Mobile phones with PMP functionalities such as video playback also started appearing in the 2000s. Other non-phone products such as the PlayStation Portable and PlayStation Vita have also been considered to be PMPs.
Decline and contemporary
DAPs and PMPs have declined in popularity since the late 2000s due to increasing worldwide adoption of smartphones that already come with PMP functionalities. Sales peaked in 2007 and market revenue (worth $21.6 billion) peaked in 2008, although mobile phones that could play music were already outselling DAPs by almost three to one as of 2007.
In the EU, demand for MP3 players peaked in 2007 with 43.5 million devices sold, totalling 3.8 billion euros. Both sales and revenue experienced double-digit shrinkage for the first time in 2010. In India, sales of PMPs decreased for the first time in 2012, a few years after developed economies. The market was led by Apple with a share of about 50%, while Sony and Philips were the other major brands.
Meanwhile, sales of Apple's best selling product, the iPod, were eclipsed by the iPhone in 2011.
DAPs continue to be made in lower volumes by manufacturers such as SanDisk, Sony, iRiver, Philips, Cowon, and a range of Chinese manufacturers such as Aigo, Newsmy, PYLE and ONDA. They often have specific selling points in the smartphone era, such as portability (for small players) or high-quality sound suited to audiophiles.
Typical features
PMPs are capable of playing digital audio, images, and/or video. Usually a colour liquid crystal display (LCD) or organic light-emitting diode (OLED) screen serves as the display on PMPs that have one. Various players include the ability to record video, usually with the aid of optional accessories or cables, and audio, with a built-in microphone or from a line-out cable or FM tuner. Some players include readers for memory cards, which are advertised as providing extra storage or a means of transferring media. Some players emulate the features of a personal organiser or support video games, like the iRiver Clix (through compatibility with Adobe Flash Lite) or the PlayStation Portable. Only mid-range to high-end players support resuming a song or video in progress after power-off, similar to tape-based media.
Audio playback
Nearly all players are compatible with the MP3 audio format, and many others support Windows Media Audio (WMA), Advanced Audio Coding (AAC) and WAV. Some players are compatible with open-source formats like Ogg Vorbis and the Free Lossless Audio Codec (FLAC). Audio files purchased from online stores may include digital rights management (DRM) copy protection, which many modern players support.
Image viewing
The JPEG format is widely supported by players. Some players, like the iPod series, provide compatibility to display additional file formats like GIF, PNG, and TIFF, while others are bundled with conversion software.
Video playback
Most newer players support the MPEG-4 Part 2 video format, and many other players are compatible with Windows Media Video (WMV) and AVI. Software included with the players may be able to convert video files into a compatible format.
Recording
Many players have a built-in electret microphone which allows recording. Usually recording quality is poor, suitable for speech but not music. There are also professional-quality recorders suitable for high-quality music recording with external microphones, at prices starting at a few hundred dollars.
Radio
Some DAPs have FM radio tuners built in. Many also have an option to change the band from the usual 87.5 – 108.0 MHz to the Japanese band of 76.0 – 90.0 MHz. DAPs typically lack an AM band, and even HD Radio, since such features would be cost-prohibitive for the application and AM reception is sensitive to interference.
Internet access
Newer portable media players now come with Internet access via Wi-Fi. Examples of such devices are Android OS devices by various manufacturers, and iOS devices on Apple products like the iPhone, iPod Touch, and iPad. Internet access has enabled people to use the Internet as an underlying communications layer for their choice of music, from automated music randomisation services like Pandora to on-demand video access (which also has music available) such as YouTube. This technology has enabled casual and hobbyist DJs to cue their tracks from a smaller package over an Internet connection; sometimes they will use two identical devices on a crossfade mixer. Many such devices also tend to be smartphones.
Last position memory
Many mobile digital media players have last-position memory: when the player is powered off, the user does not have to start at the first track again, or hear repeats of other songs when a playlist, album, or whole library is cued for shuffle play (shuffle play being a common feature as well). Among earlier playback devices, tape-based media had a form of "last position memory" that predated solid-state digital playback, but tapes had to be rewound; disc-based media had no native last-position memory unless the disc player itself provided one. Solid-state flash players (and hard drive players, despite some moving parts) thus offer somewhat the "best of both worlds".
Miscellaneous
Media players' firmware may be equipped with a basic file manager and a text reader.
Common audio formats
There are three categories of audio formats:
Uncompressed PCM audio: Most players can also play uncompressed PCM in a container such as WAV or AIFF.
Lossless audio formats: These formats preserve the full fidelity of the original recording, such as that of a CD. Lossless formats include Apple Lossless and FLAC.
Lossy compression formats: Most audio formats use lossy compression to produce the smallest possible file at the desired sound quality. There is a trade-off between the size and sound quality of lossily compressed files; most formats allow a range of bitrates, e.g., MP3 files may use between 32 (worst), 128 (reasonable) and 320 (best) kilobits per second.
There are also royalty-free lossy formats like Vorbis for general music, and Speex and Opus for voice recordings. When "ripping" music from CDs, many people recommend using lossless audio formats to preserve CD quality in the audio files on a desktop, and transcoding to lossy compression formats when copying the music to a portable player. The formats supported by a particular audio player depend upon its firmware; sometimes a firmware update adds more formats. MP3 and AAC are the dominant formats, and are almost universally supported.
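A quick worked example of the size/quality trade-off at the MP3 bitrates mentioned above (decimal megabytes, audio only):

```python
# Approximate audio size per minute at common MP3 bitrates.
for kbps in (32, 128, 320):
    mb_per_min = kbps * 60 / 8 / 1000        # kilobits/s -> megabytes/min
    print(f"{kbps:>3} kbit/s: {mb_per_min:.2f} MB/min, "
          f"about {1000 / mb_per_min:.0f} minutes per GB")
#  32 kbit/s: 0.24 MB/min, about 4167 minutes per GB
# 128 kbit/s: 0.96 MB/min, about 1042 minutes per GB
# 320 kbit/s: 2.40 MB/min, about 417 minutes per GB
```

A lossless rip, by comparison, typically runs several hundred megabytes per CD, which is why transcoding to a lossy format before copying to a small flash player was common advice.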
Software
PMPs were earlier packaged with an installation CD/DVD that installs device drivers (and, for some players, software capable of seamlessly transferring files between the player and the computer). For later players, however, these are usually available online via the manufacturers' websites, or the device is increasingly recognised natively by the operating system through USB Mass Storage (UMS) or the Media Transfer Protocol (MTP).
Hardware
Storage
As with DAPs, PMPs come in either flash or hard disk storage. Storage capacities have reached up to 64 GB for flash memory based PMPs, first reached by the 3rd Generation iPod Touch, and up to 1 TB for hard disk drive PMPs, first achieved by the Archos 5 Internet Tablet.
A number of players support memory card slots, including CompactFlash (CF), Secure Digital (SD), and Memory Sticks. They are used to directly transfer content from external devices, and expand the storage capacity of PMPs.
Interface
A standard PMP uses a 5-way D-pad to navigate. Many alternatives have been used, most notably the wheel and touch mechanisms seen on players from the iPod and Sansa series. Another popular mechanism is the swipe-pad, or 'squircle', first seen on the Zune. Additional buttons are commonly seen for features such as volume control.
Screen
Sizes range all the way up to 7 inches (18 cm). Resolutions also vary, going up to WVGA. Most screens come with a colour depth of 16 bits, but higher-quality video-oriented devices may range all the way to 24 bits, otherwise known as true colour, with the ability to display 16.7 million distinct colours. Screens commonly have a matte finish but may also come in glossy to increase colour intensity and contrast. More and more devices now also come with a touch screen as a primary or alternate input, whether for convenience or aesthetic purposes. Certain devices, on the other hand, have no screen whatsoever, reducing costs at the expense of ease of browsing the media library.
Radio
Some portable media players include a radio receiver, most frequently receiving FM. Features for receiving signals from FM stations on MP3 players are common on more premium models.
Other features
Some portable media players have recently added features such as a simple camera, built-in game emulation (playing Nintendo Entertainment System or other game formats from ROM images), and simple text readers and editors. Newer PMPs can tell the time, and even adjust it automatically according to radio reception; some devices, like the 6th-generation iPod Nano, even have wristwatch bands available.
Modern MP4 players can play video in a multitude of video formats without the need to pre-convert or downsize it before playing. Some MP4 players possess USB ports that allow users to connect them to a personal computer to sideload files. Some models also have memory card slots to expand the memory of the player instead of storing files in the built-in memory.
Chipsets
Chipsets and file formats that are particular to some PMPs:
Anyka is a chipset used by many MP4 players. It supports the same formats as Rockchip.
Fuzhou Rockchip Electronics's video-processing Rockchip chipset has been incorporated into many MP4 players. It supports AVI files containing MPEG-4 Part 2 video (not Part 14) with no B-frames, with MP2 audio compression. The clip must be padded out, if necessary, to fit the resolution of the display (a sketch of this padding calculation appears after this list). Any slight deviation from the supported format results in a "Format Not Supported" error message.
Some players, like the Onda VX979+, have started to use chipsets from Ingenic, which are capable of supporting RealNetworks's video formats. Also, players with SigmaTel-based technology are compatible with SMV (SigmaTel Video).
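Since the Rockchip firmware expects the clip to be padded to the display's resolution, the conversion step amounts to a scale-and-letterbox calculation. A minimal sketch, assuming a hypothetical 320 × 240 display (the actual resolution varies by player):

```python
# Scale a source video to fit an assumed 320x240 display, preserving aspect
# ratio, then compute the black-bar padding needed on each side.
def scale_and_pad(src_w, src_h, disp_w=320, disp_h=240):
    scale = min(disp_w / src_w, disp_h / src_h)   # largest scale that still fits
    out_w, out_h = int(src_w * scale), int(src_h * scale)
    pad_x, pad_y = (disp_w - out_w) // 2, (disp_h - out_h) // 2
    return out_w, out_h, pad_x, pad_y

# A 640x352 widescreen clip becomes 320x176 with 32-pixel bars top and bottom:
print(scale_and_pad(640, 352))   # -> (320, 176, 0, 32)
```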
AMV
The image compression algorithm of this format is inefficient by modern standards (about 4 pixels per byte, compared with over 10 pixels per byte for MPEG-2). A fixed range of resolutions (96 × 96 to 208 × 176 pixels) and frame rates (12 or 16 frames per second) is available. However, it can be used with limited hardware. A 30-minute video at 160 × 120 resolution has a file size of approximately 100 MB, as the estimate below shows.
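The quoted figure can be checked with simple arithmetic (audio overhead ignored, decimal megabytes):

```python
# Sanity check: 30 minutes of AMV video at 160x120, 12 fps, ~4 pixels/byte.
pixels_per_frame = 160 * 120               # 19,200 pixels
bytes_per_frame = pixels_per_frame / 4     # ~4 pixels per byte -> 4,800 bytes
size_mb = bytes_per_frame * 12 * 30 * 60 / 1e6
print(f"~{size_mb:.0f} MB")                # -> ~104 MB, close to the ~100 MB quoted
```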
MTV
The MTV video format (no relation to the cable network) consists of a 512-byte file header followed by a series of raw image frames that are displayed during MP3 playback. During playback, audio frames are passed to the chipset's decoder while the display hardware's memory pointer is advanced to the next image in the video stream. This method does not require additional hardware for decoding, but it consumes far more storage; the effective capacity of an MP4 player that uses MTV files is therefore less than that of a player that decompresses files on the fly.
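The storage cost of raw frames is easy to see with illustrative numbers (the resolution and colour depth here are assumptions for the sake of the arithmetic, not MTV specifications):

```python
# Data rate of uncompressed 16-bit frames at an assumed 128x128 resolution, 12 fps.
w, h, bytes_per_px, fps = 128, 128, 2, 12
video_bytes_per_sec = w * h * bytes_per_px * fps
print(f"{video_bytes_per_sec / 1e6:.2f} MB/s of video alone")   # -> 0.39 MB/s
print(f"~{video_bytes_per_sec * 60 / 1e6:.0f} MB per minute")   # -> ~24 MB per minute
```

At such rates, even a short clip dwarfs a compressed MP3 stream, which is the capacity penalty described above.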
Operation
Digital sampling is used to convert an audio wave to a sequence of binary numbers that can be stored in a digital format, such as MP3. Common features of all MP3 players are a memory storage device, such as flash memory or a miniature hard disk drive, an embedded processor, and an audio codec microchip to convert the compressed file into an analogue sound signal. During playback, audio files are read from storage into a RAM-based memory buffer, and then streamed through an audio codec to produce decoded PCM audio. Audio formats typically decode at double to more than 20 times real speed on portable processors, so the codec output must be stored until the DAC can play it. To save power, portable devices may spend much or nearly all of their time in a low-power idle state, waiting for the DAC to deplete the output PCM buffer before briefly powering up to decode additional audio.
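A minimal sketch of that duty-cycle arithmetic (the decode speed and buffer size below are assumed, illustrative values):

```python
# If the codec decodes 20x faster than real time, refilling a 4-second PCM
# buffer takes 0.2 s; the processor can then idle while the DAC drains it.
decode_speed = 20            # assumed decode speed relative to real time
buffer_seconds = 4           # assumed PCM buffer size, in seconds of audio
active = buffer_seconds / decode_speed
print(f"awake {active:.2f} s out of every {buffer_seconds} s "
      f"({active / buffer_seconds:.0%} duty cycle)")
# -> awake 0.20 s out of every 4 s (5% duty cycle)
```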
Most DAPs are powered by rechargeable batteries, some of which are not user-replaceable. They have a 3.5 mm stereo jack; music can be listened to with earbuds or headphones, or played via an external amplifier and speakers. Some devices also contain internal speakers, through which music can be listened to, although these built-in speakers are typically of very low quality.
Nearly all DAPs consist of some kind of display screen (with exceptions such as the iPod Shuffle) and a set of controls with which the user can browse the library of music on the device, select a track, and play it back. The display, if the unit has one, can be anything from a simple one- or two-line monochrome LCD, similar to those found on typical pocket calculators, to a large, high-resolution, full-colour display capable of showing photographs or video content. The controls range from simple buttons like those on most CD players, for skipping tracks or stopping and starting playback, to full touch-screen controls such as those on the iPod Touch or the Zune HD. One of the more common methods of control is some type of scroll wheel with associated buttons; this method was first introduced with the Apple iPod, and many other manufacturers have created variants of it for their devices.
Content is typically placed on DAPs through a process called "syncing": connecting the device to a personal computer, typically via USB, and running any special software that is often provided with the DAP on a CD-ROM included with the device, or downloaded from the manufacturer's website. Some devices simply appear as an additional disk drive on the host computer, to which music files are simply copied like any other type of file. Other devices, most notably the Apple iPod and Microsoft Zune, require the use of special management software, such as iTunes or the Zune Software, respectively. The music, or other content such as TV episodes or movies, is added to the software to create a "library". The library is then "synced" to the DAP via the software. The software typically provides options for managing situations when the library is too large to fit on the device being synced to, such as allowing manual syncing, in which the user "drags and drops" the desired tracks to the device, or allowing the creation of playlists. In addition to the USB connection, some of the more advanced units now allow syncing through a wireless connection, such as Wi-Fi or Bluetooth.
Content can also be obtained and placed on some DAPs, such as the iPod Touch or Zune HD by allowing access to a "store" or "marketplace", most notably the iTunes Store or Zune Marketplace, from which content, such as music and video, and even games, can be purchased and downloaded directly to the device.
Digital signal processing
A growing number of portable media players include audio processing chips that allow digital effects like 3D audio, dynamic range compression and equalisation of the frequency response. Some devices adjust loudness based on Fletcher–Munson curves. Some media players are used with noise-cancelling headphones that use active noise reduction to remove background noise.
De-noise mode
De-noise mode is an alternative to active noise reduction. It provides for relatively noise-free listening to audio in a noisy environment. In this mode, audio intelligibility is improved through selective gain reduction of the ambient noise. The method splits external signals into frequency components with a filterbank (matched to the peculiarities of human perception of specific frequencies) and processes them using adaptive audio compressors. Operating thresholds in the adaptive compressors (in contrast to "ordinary" compressors) are regulated according to the ambient noise level in each specific band. The processed signal is then reassembled from the adaptive compressor outputs in a synthesis filterbank. This method improves the intelligibility of speech signals and music. The best effect is obtained when listening in environments with constant noise (trains, automobiles, planes) or with a fluctuating noise level (e.g. on a metro). Improving signal intelligibility under ambient noise lets users hear audio well while preserving their hearing, in contrast to simply amplifying the volume.
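A minimal sketch of the idea, under strong simplifying assumptions (a handful of fixed bands and a crude gain rule standing in for a true adaptive compressor):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def denoise(audio, noise_ref, fs, bands=((100, 500), (500, 2000), (2000, 8000))):
    """Split the signal into bands and attenuate each band according to the
    noise level estimated for it from a reference noise recording."""
    out = np.zeros_like(audio)
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        noise_rms = np.sqrt(np.mean(sosfilt(sos, noise_ref) ** 2))
        band_rms = np.sqrt(np.mean(band ** 2)) + 1e-12
        # Crude stand-in for an adaptive compressor: reduce the band's gain
        # as the estimated noise approaches the signal level in that band.
        gain = band_rms / (band_rms + noise_rms)
        out += gain * band
    return out   # the per-band sum plays the role of the synthesis filterbank
```

A real implementation would use a perceptually spaced filterbank and time-varying compressor thresholds, but the structure (analysis filterbank, per-band gain, synthesis) is the one described above.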
Natural mode
Natural mode is characterised by a subjective balance of sounds at different frequencies, regardless of the distortion introduced by the reproduction device and regardless of the user's personal ability to perceive specific frequencies (excluding obvious hearing loss). The effect is obtained through a sound-processing algorithm (a "formula of subjective equalisation of the frequency response function"). Its principle is to assess the frequency response function (FRF) of the media player or other sound reproduction device against the user's subjective threshold of audibility in silence, and to apply a corrective gain factor. The factor is determined with an integrated hearing test: the program generates tone signals across the audible range (from a minimum of about 30–45 Hz up to approximately 16 kHz), and the user assesses their subjective audibility. The principle is similar to in-situ audiometry, used in medicine when fitting a hearing aid. However, the test results apply only to a limited extent, since the FRF of a sound device depends on the reproduction volume; the correction coefficient should therefore be determined several times, for various signal strengths, which is not a particular problem in practice.
Sound around mode
Sound around mode allows for real time overlapping of music and the sounds surrounding the listener in their environment, which are captured by a microphone and mixed into the audio signal. As a result, the user may hear playing music and external sounds of the environment at the same time. This can increase user safety (especially in big cities and busy streets), as a user can hear a mugger following them or hear an oncoming car.
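A sketch of the mixing step, with the microphone gain as a user-chosen assumption rather than a documented parameter:

```python
import numpy as np

def sound_around(music, mic, mic_gain=0.5):
    """Mix the microphone feed into the music so ambient sounds stay audible."""
    mixed = music + mic_gain * mic
    return np.clip(mixed, -1.0, 1.0)   # keep the result within full scale
```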
Controversy
Although digital audio players themselves are not usually controversial, some related issues are matters of continuing controversy and litigation, including but not limited to content distribution and protection, and digital rights management (DRM).
Lawsuit with RIAA
The Recording Industry Association of America (RIAA) filed a lawsuit in late 1998 against Diamond Multimedia for its Rio players, alleging that the device encouraged illegal copying of music. But Diamond won a legal victory on the shoulders of the Sony Corp. v. Universal City Studios case, and DAPs were ruled to be legal devices.
Risk of hearing damage
According to the Scientific Committee on Emerging and Newly Identified Health Risks, the risk of hearing damage from digital audio players depends on both sound level and listening time. The listening habits of most users are unlikely to cause hearing loss, but some people are putting their hearing at risk, because they set the volume control very high or listen to music at high levels for many hours per day. Such listening habits may result in temporary or permanent hearing loss, tinnitus, and difficulties understanding speech in noisy environments.
The World Health Organization warns that increasing use of headphones and earphones puts 1.1 billion teenagers and young adults at risk of hearing loss due to unsafe use of personal audio devices. Many smartphones and personal media players are sold with earphones that do a poor job of blocking ambient noise, leading some users to turn up the volume to the maximum level to drown out street noise. People listening to their media players on crowded commutes sometimes play music at high volumes to feel a sense of separation, freedom and escape from their surroundings.
The World Health Organization recommends that "the highest permissible level of noise exposure in the workplace is 85 dB up to a maximum of eight hours per day" and time in "nightclubs, bars and sporting events" should be limited because they can expose patrons to noise levels of 100 dB. The report states
The report also recommends that governments raise awareness of hearing loss, and to recommend people visit a hearing specialist if they experience symptoms of hearing loss, which include pain, ringing or buzzing in the ears.
A study by the National Institute for Occupational Safety and Health found that employees at bars, nightclubs and other music venues were exposed to noise levels above the internationally recommended limits of 82–85 dBA per eight hours. This growing phenomenon has led to the coining of the term music-induced hearing loss, which includes hearing loss resulting from overexposure to music on personal media players.
FCC issues
Some MP3 players have electromagnetic transmitters as well as receivers. Many MP3 players have built-in FM radios, but personal FM transmitters are not usually built in, owing to the liability of transmitter feedback from simultaneous transmission and reception of FM. Also, certain features like Wi-Fi and Bluetooth can interfere with professional-grade communications systems, such as those of aircraft at airports.
Fossil fuel power station
https://en.wikipedia.org/wiki/Fossil%20fuel%20power%20station

A fossil fuel power station is a thermal power station which burns a fossil fuel, such as coal, oil, or natural gas, to produce electricity. Fossil fuel power stations have machinery to convert the heat energy of combustion into mechanical energy, which then operates an electrical generator. The prime mover may be a steam turbine, a gas turbine or, in small plants, a reciprocating gas engine. All plants use the energy extracted from the expansion of a hot gas, either steam or combustion gases. Although different energy conversion methods exist, all thermal power station conversion methods have their efficiency limited by the Carnot efficiency and therefore produce waste heat.
Fossil fuel power stations provide most of the electrical energy used in the world. Some fossil-fired power stations are designed for continuous operation as baseload power plants, while others are used as peaker plants. However, starting from the 2010s, in many countries plants designed for baseload supply are being operated as dispatchable generation to balance increasing generation by variable renewable energy.
By-products of fossil fuel power plant operation must be considered in their design and operation. Flue gas from combustion of the fossil fuels contains carbon dioxide and water vapor, as well as pollutants such as nitrogen oxides (NOx), sulfur oxides (SOx), and, for coal-fired plants, mercury, traces of other metals, and fly ash. Usually all of the carbon dioxide and some of the other pollution is discharged to the air. Solid waste ash from coal-fired boilers must also be removed.
Fossil fueled power stations are major emitters of carbon dioxide (CO2), a greenhouse gas which is a major contributor to global warming.
One study found that the net income available to shareholders of large companies could be significantly reduced by the greenhouse gas emissions liability of a single coal-fired power plant, considering only natural disasters in the United States. However, as of 2015, no such cases had awarded damages in the United States.
Per unit of electric energy, brown coal emits nearly twice as much CO2 as natural gas, and black coal emits somewhat less than brown.
Carbon capture and storage of emissions is not yet economically viable for fossil fuel power stations, and keeping global warming below 1.5 °C is still possible, but only if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation.
Basic concepts: heat into mechanical energy
In a fossil fuel power plant the chemical energy stored in fossil fuels such as coal, fuel oil, natural gas or oil shale and oxygen of the air is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Each fossil fuel power plant is a complex, custom-designed system. Multiple generating units may be built at a single site for more efficient use of land, natural resources and labor. Most thermal power stations in the world use fossil fuel, outnumbering nuclear, geothermal, biomass, or concentrated solar power plants.
The second law of thermodynamics states that any closed-loop cycle can only convert a fraction of the heat produced during combustion into mechanical work. The rest of the heat, called waste heat, must be released into a cooler environment during the return portion of the cycle. The fraction of heat released into the cooler medium must be equal to or larger than the ratio of the absolute temperatures of the cooling system (environment) and the heat source (combustion furnace). Raising the furnace temperature improves the efficiency but complicates the design, primarily through the selection of alloys used for construction, making the furnace more expensive. The waste heat cannot be converted into mechanical energy without an even cooler system to reject it to. However, it may be used in cogeneration plants to heat buildings, produce hot water, or heat materials on an industrial scale, as in some oil refineries and chemical synthesis plants.
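Formally, for a heat engine taking heat from a source at absolute temperature $T_h$ and rejecting waste heat at $T_c$, the Carnot limit on efficiency is

$$\eta_{\max} = 1 - \frac{T_c}{T_h}.$$

For example, a furnace at 800 K rejecting heat to an environment at 300 K can convert at most $1 - 300/800 = 62.5\%$ of the heat into work; real plants fall well short of this ideal.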
Typical thermal efficiency for utility-scale electrical generators is around 37% for coal- and oil-fired plants, and 56–60% (LHV) for combined-cycle gas-fired plants. Plants designed to achieve peak efficiency while operating at capacity will be less efficient when operating off-design (i.e., at lower temperatures).
Practical fossil fuels stations operating as heat engines cannot exceed the Carnot cycle limit for conversion of heat energy into useful work. Fuel cells do not have the same thermodynamic limits as they are not heat engines.
The efficiency of a fossil fuel plant may also be expressed as its heat rate, in BTU per kilowatt-hour or megajoules per kilowatt-hour.
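Since 1 kWh of electricity is equivalent to about 3,412 BTU (3.6 MJ) of heat, the heat rate is simply the inverse of efficiency in mixed units:

$$\text{heat rate} = \frac{3412\ \text{BTU/kWh}}{\eta}.$$

A 37% efficient coal plant, for instance, has a heat rate of about $3412/0.37 \approx 9{,}200$ BTU/kWh (roughly 9.7 MJ/kWh).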
Plant types
Steam
In a steam turbine power plant, fuel is burned in a furnace and the hot gasses flow through a boiler. Water is converted to steam in the boiler; additional heating stages may be included to superheat the steam. The hot steam is sent through controlling valves to a turbine. As the steam expands and cools, its energy is transferred to the turbine blades which turn a generator. The spent steam has very low pressure and energy content; this water vapor is fed through a condenser, which removes heat from the steam. The condensed water is then pumped into the boiler to repeat the cycle.
Emissions from the boiler include carbon dioxide, oxides of sulfur, and, in the case of coal, fly ash from non-combustible substances in the fuel. Waste heat from the condenser is transferred either to the air or sometimes to a cooling pond, lake or river.
Gas turbine and combined gas/steam
One type of fossil fuel power plant uses a gas turbine in conjunction with a heat recovery steam generator (HRSG). It is referred to as a combined cycle power plant because it combines the Brayton cycle of the gas turbine with the Rankine cycle of the HRSG. The turbines are fueled either with natural gas or fuel oil.
Reciprocating engines
Diesel engine generator sets are often used for prime power in communities not connected to a widespread power grid. Emergency (standby) power systems may use reciprocating internal combustion engines operated by fuel oil or natural gas. Standby generators may serve as emergency power for a factory or data center, or may also be operated in parallel with the local utility system to reduce peak power demand charges from the utility. Diesel engines can produce strong torque at relatively low rotational speeds, which is generally desirable when driving an alternator, but diesel fuel in long-term storage can be subject to problems resulting from water accumulation and chemical decomposition. Rarely used generator sets may correspondingly be installed to run on natural gas or LPG to minimise fuel system maintenance requirements.
Spark-ignition internal combustion engines operating on gasoline (petrol), propane, or LPG are commonly used as portable temporary power sources for construction work, emergency power, or recreational uses.
Reciprocating external combustion engines such as the Stirling engine can be run on a variety of fossil fuels, as well as renewable fuels or industrial waste heat. Installations of Stirling engines for power production are relatively uncommon.
Historically, the first central stations used reciprocating steam engines to drive generators. As the size of the electrical load to be served grew, reciprocating units became too large and cumbersome to install economically. The steam turbine rapidly displaced all reciprocating engines in central station service.
Fuels
Coal
Coal is the most abundant fossil fuel on the planet, is widely used as the source of energy in thermal power stations, and is a relatively cheap fuel. Coal is an impure fuel and produces more greenhouse gas and pollution than an equivalent amount of petroleum or natural gas. For instance, the operation of a 1000-MWe coal-fired power plant results in a nuclear radiation dose of 490 person-rem/year, compared to 136 person-rem/year for an equivalent nuclear power plant, including uranium mining, reactor operation and waste disposal.
Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations adjacent to a mine may receive coal by conveyor belt or massive diesel-electric-drive trucks.
Coal is usually prepared for use by crushing the rough coal into small pieces.
Natural gas
Gas is a very common fuel and has mostly replaced coal in countries where gas was found in the late 20th century or early 21st century, such as the US and UK. Sometimes coal-fired steam plants are refitted to use natural gas to reduce net carbon dioxide emissions. Oil-fuelled plants may be converted to natural gas to lower operating cost.
Oil
Heavy fuel oil was once a significant source of energy for electric power generation. After oil price increases of the 1970s, oil was displaced by coal and later natural gas. Distillate oil is still important as the fuel source for diesel engine power plants used especially in isolated communities not interconnected to a grid. Liquid fuels may also be used by gas turbine power plants, especially for peaking or emergency service. Of the three fossil fuel sources, oil has the advantages of easier transportation and handling than solid coal, and easier on-site storage than natural gas.
Combined heat and power
Combined heat and power (CHP), also known as cogeneration, is the use of a thermal power station to provide both electric power and heat (the latter being used, for example, for district heating purposes). This technology is practiced not only for domestic heating (low temperature) but also for industrial process heat, which is often high-temperature heat. Calculations show that combined heat and power district heating (CHPDH) is the cheapest method of reducing (but not eliminating) carbon emissions, as long as conventional fossil fuels continue to be burned.
Environmental impacts
Thermal power plants are one of the main artificial sources of toxic gases and particulate matter. Fossil fuel power plants cause the emission of pollutants such as nitrogen oxides (NOx), sulfur oxides (SOx), carbon dioxide, carbon monoxide (CO), particulate matter (PM), organic gases and polycyclic aromatic hydrocarbons. World organizations and international agencies, like the IEA, are concerned about the environmental impact of burning fossil fuels, and coal in particular. The combustion of coal contributes the most to acid rain and air pollution, and has been connected with global warming. Due to the chemical composition of coal there are difficulties in removing impurities from the solid fuel prior to its combustion. Modern-day coal power plants pollute less than older designs due to new "scrubber" technologies that filter the exhaust air in smoke stacks. However, emission levels of various pollutants are still on average several times greater than those of natural gas power plants, and the scrubbers transfer the captured pollutants to wastewater, which still requires treatment in order to avoid pollution of receiving water bodies. In these modern designs, pollution from coal-fired power plants comes from the emission of gases such as carbon dioxide, nitrogen oxides, and sulfur dioxide into the air, as well as a significant volume of wastewater which may contain lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites).
Acid rain is caused by the emission of nitrogen oxides and sulfur dioxide. These gases may be only mildly acidic themselves, yet when they react with the atmosphere, they create acidic compounds such as sulfurous acid, nitric acid and sulfuric acid, which fall as rain, hence the term acid rain. In Europe and the US, stricter emission laws and the decline of heavy industry have reduced the environmental hazards associated with this problem, leading to lower emissions after their peak in the 1960s.
In 2008, the European Environment Agency (EEA) documented fuel-dependent emission factors based on actual emissions from power plants in the European Union.
Carbon dioxide
Electricity generation using carbon-based fuels is responsible for a large fraction of carbon dioxide (CO2) emissions worldwide and for 34% of U.S. man-made carbon dioxide emissions in 2010. In the U.S. 70% of electricity is generated by combustion of fossil fuels.
Coal contains more carbon than oil or natural gas fossil fuels, resulting in greater volumes of carbon dioxide emissions per unit of electricity generated. In 2010, coal contributed about 81% of CO2 emissions from generation and contributed about 45% of the electricity generated in the United States. In 2000, the carbon intensity (CO2 emissions) of U.S. coal thermal combustion was 2249 lbs/MWh (1,029 kg/MWh) while the carbon intensity of U.S. oil thermal generation was 1672 lb/MWh (758 kg/MWh or 211 kg/GJ) and the carbon intensity of U.S. natural gas thermal production was 1135 lb/MWh (515 kg/MWh or 143 kg/GJ).
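The parenthetical unit conversions above can be checked directly; this small sketch assumes only the standard factors 1 lb = 0.4536 kg and 1 MWh = 3.6 GJ of electrical output:

```python
LB_TO_KG = 0.4536   # pounds to kilograms
MWH_TO_GJ = 3.6     # megawatt-hours to gigajoules

def convert(lb_per_mwh):
    """Convert a carbon intensity from lb/MWh to (kg/MWh, kg/GJ)."""
    kg_per_mwh = lb_per_mwh * LB_TO_KG
    kg_per_gj = kg_per_mwh / MWH_TO_GJ
    return kg_per_mwh, kg_per_gj

print(convert(1672))  # oil: ≈ (758 kg/MWh, 211 kg/GJ)
print(convert(1135))  # natural gas: ≈ (515 kg/MWh, 143 kg/GJ)
```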
The Intergovernmental Panel on Climate Change (IPCC) reports that increased quantities of the greenhouse gas carbon dioxide within the atmosphere will "very likely" lead to higher average temperatures on a global scale (global warming). Concerns regarding the potential for such warming to change the global climate prompted IPCC recommendations calling for large cuts to CO2 emissions worldwide.
Emissions can be reduced with higher combustion temperatures, yielding more efficient production of electricity within the cycle. However, the price of emitting CO2 to the atmosphere has so far been much lower than the cost of adding carbon capture and storage (CCS) to fossil fuel power stations, so owners have not done so.
Estimation of carbon dioxide emissions
The CO2 emissions from a fossil fuel power station can be estimated with the following formula:
CO2 emissions = capacity x capacity factor x heat rate x emission intensity x time
where "capacity" is the "nameplate capacity" or the maximum allowed output of the plant, "capacity factor" or "load factor" is a measure of the amount of power that a plant produces compared with the amount it would produce if operated at its rated capacity nonstop, heat rate is the thermal energy input per unit of electrical energy output, and emission intensity (also called emission factor) is the CO2 emitted per unit of heat generated for a particular fuel.
As an example, a new 1500 MW supercritical lignite-fueled power station running on average at half its capacity might have annual CO2 emissions estimated as:
= 1500 MW x 0.5 x 100/40 x 101,000 kg/TJ x 1 year
= 1500 MJ/s x 0.5 x 2.5 x 0.101 kg/MJ x (365 x 24 x 60 x 60) s
= 1.5x10^3 x 5x10^-1 x 2.5 x 1.01x10^-1 x 3.1536x10^7 kg
= 59.7x10^(3-1-1+7) kg
= 5.97x10^9 kg = 5.97 Mt
Thus the example power station is estimated to emit about 6 megatonnes of carbon dioxide each year.
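The same estimate can be written as a short program. This is a minimal sketch using the worked example's assumptions (a 40% efficient plant and a lignite emission factor of 101,000 kg CO2 per TJ), not a general-purpose tool:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 3.1536e7 s

def annual_co2_megatonnes(capacity_mw, capacity_factor,
                          efficiency, emission_factor_kg_per_tj):
    """Annual CO2 emissions following the formula above (1 MW = 1 MJ/s)."""
    heat_rate = 1.0 / efficiency                          # thermal in / electrical out
    electrical_mj = capacity_mw * capacity_factor * SECONDS_PER_YEAR
    thermal_tj = electrical_mj * heat_rate / 1e6          # 1 TJ = 1e6 MJ
    return thermal_tj * emission_factor_kg_per_tj / 1e9   # kg -> Mt

print(annual_co2_megatonnes(1500, 0.5, 0.40, 101_000))    # ≈ 5.97 Mt
```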
The results of similar estimations are mapped by organisations such as Global Energy Monitor, Carbon Tracker and ElectricityMap.
Alternatively it may be possible to measure emissions (perhaps indirectly via another gas) from satellite observations.
Particulate matter
Another problem related to coal combustion is the emission of particulates that have a serious impact on public health. Power plants remove particulates from the flue gas with the use of a baghouse or electrostatic precipitator. Several newer plants that burn coal use a different process, integrated gasification combined cycle (IGCC), in which synthesis gas is made by a reaction between coal and water. The synthesis gas is processed to remove most pollutants and is then used initially to power gas turbines. The hot exhaust gases from the gas turbines are then used to generate steam to power a steam turbine. The pollution levels of such plants are drastically lower than those of "classic" coal power plants.
Particulate matter from coal-fired plants can be harmful and have negative health impacts. Studies have shown that exposure to particulate matter is related to an increase of respiratory and cardiac mortality. Particulate matter can irritate small airways in the lungs, which can lead to increased problems with asthma, chronic bronchitis, airway obstruction, and gas exchange.
There are different types of particulate matter, depending on the chemical composition and size. The dominant form of particulate matter from coal-fired plants is coal fly ash, but secondary sulfate and nitrate also comprise a major portion of the particulate matter from coal-fired plants. Coal fly ash is what remains after the coal has been combusted, so it consists of the incombustible materials that are found in the coal.
The size and chemical composition of these particles affect their impact on human health. Coarse particles (diameter greater than 2.5 μm) and fine particles (diameter between 0.1 μm and 2.5 μm) are currently regulated, but ultrafine particles (diameter less than 0.1 μm) are not, even though they pose many dangers. Unfortunately, much is still unknown as to which kinds of particulate matter pose the most harm, which makes it difficult to come up with adequate legislation for regulating particulate matter.
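For illustration, the size bands defined above can be expressed as a small (hypothetical) classification helper:

```python
def pm_size_class(diameter_um):
    """Classify a particle by the size bands given above (diameter in micrometres)."""
    if diameter_um < 0.1:
        return "ultrafine (currently unregulated)"
    if diameter_um <= 2.5:
        return "fine (regulated)"
    return "coarse (regulated)"

print(pm_size_class(0.05))  # ultrafine
print(pm_size_class(1.0))   # fine
print(pm_size_class(10.0))  # coarse
```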
There are several methods of helping to reduce the particulate matter emissions from coal-fired plants. Roughly 80% of the ash falls into an ash hopper, but the rest is carried into the atmosphere to become coal fly ash. Methods of reducing these emissions of particulate matter include:
a baghouse
an electrostatic precipitator (ESP)
cyclone collector
The baghouse has a fine filter that collects the ash particles, electrostatic precipitators use an electric field to trap ash particles on high-voltage plates, and cyclone collectors use centrifugal force to trap particles against the walls. A recent study indicates that sulfur emissions from fossil-fueled power stations in China may have caused a 10-year lull in global warming (1998–2008).
Wastewater
Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Wastewater streams include flue-gas desulfurization, fly ash, bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream.
Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. In 2015 the EPA published a regulation pursuant to the Clean Water Act that requires US power plants to use one or more of these technologies. Technological advancements in ion exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet the updated EPA discharge limits.
Radioactive trace elements
Coal is a sedimentary rock formed primarily from accumulated plant matter, and it includes many inorganic minerals and elements which were deposited along with organic material during its formation. Like the rest of the Earth's crust, coal contains low levels of uranium, thorium, and other naturally occurring radioactive isotopes whose release into the environment leads to radioactive contamination. While these substances are present as very small trace impurities, enough coal is burned that significant amounts of them are released. A 1,000 MW coal-burning power plant could have an uncontrolled release of as much as 5.2 metric tons per year of uranium and 12.8 metric tons per year of thorium. In comparison, a 1,000 MW nuclear plant will generate about 30 metric tons of high-level radioactive solid packed waste per year. It is estimated that during 1982, US coal burning released 155 times as much uncontrolled radioactivity into the atmosphere as the Three Mile Island incident. The collective radioactivity resulting from all coal burning worldwide between 1937 and 2040 is estimated at 2,700,000 curies or 0.101 EBq. During normal operation, the effective dose equivalent from coal plants is 100 times that from nuclear plants. Normal operation, however, is a deceptive baseline for comparison: the Chernobyl nuclear disaster released, in iodine-131 alone, an estimated 1.76 EBq of radioactivity, a value one order of magnitude above the total for all coal burned within a century, although iodine-131, the major radioactive substance released in accident situations, has a half-life of just 8 days.
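A quick arithmetic check of two of the figures quoted above, assuming only the standard conversion 1 Ci = 3.7×10^10 Bq:

```python
CI_TO_BQ = 3.7e10   # 1 curie in becquerel

coal_total_ebq = 2_700_000 * CI_TO_BQ / 1e18   # curies -> exabecquerel
print(coal_total_ebq)                          # ≈ 0.0999 EBq, matching the 0.101 EBq quoted

chernobyl_i131_ebq = 1.76
print(chernobyl_i131_ebq / coal_total_ebq)     # ≈ 17.6x, i.e. about one order of magnitude
```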
Water and air contamination by coal ash
A study released in August 2010 by the organizations Environmental Integrity Project, the Sierra Club and Earthjustice, which examined state pollution data in the United States, found that coal ash produced by coal-fired power plants and dumped at sites across 21 U.S. states has contaminated ground water with toxic elements. The contaminants include the poisons arsenic and lead. The study concluded that the problem of coal-ash-caused water contamination is even more extensive in the United States than had been estimated. The study brought to 137 the number of ground water sites across the United States that are contaminated by power-plant-produced coal ash.
Arsenic has been shown to cause skin cancer, bladder cancer and lung cancer, and lead damages the nervous system. Coal ash contaminants are also linked to respiratory diseases and other health and developmental problems, and have disrupted local aquatic life. Coal ash also releases a variety of toxic contaminants into nearby air, posing a health threat to those who breathe in fugitive coal dust.
Mercury contamination
U.S. government scientists tested fish in 291 streams around the country for mercury contamination. According to the study by the U.S. Department of the Interior, they found mercury in every fish tested, even in fish from isolated rural waterways. Twenty-five percent of the fish tested had mercury levels above the safety levels determined by the United States Environmental Protection Agency (EPA) for people who eat fish regularly. The largest source of mercury contamination in the United States is coal-fueled power plant emissions.
Conversion of fossil fuel power plants
Several methods exist to reduce pollution and to reduce or eliminate the carbon emissions of fossil fuel power plants. A frequently used and cost-efficient method is to convert a plant to run on a different fuel. This includes conversions of coal power plants to energy crops/biomass or waste, and conversions of natural gas power plants to biogas or hydrogen. Conversions of coal-powered plants to waste-fired power plants have the extra benefit of reducing landfilling. In addition, waste-fired power plants can be equipped with material recovery, which is also beneficial to the environment. In some instances, torrefaction of biomass may benefit the power plant if energy crops/biomass is the material the converted fossil fuel power plant will use. Also, when using energy crops as the fuel and implementing biochar production, the thermal power plant can even become carbon negative rather than just carbon neutral. Improving the energy efficiency of a coal-fired power plant can also reduce emissions.
Besides simply converting to run on a different fuel, some companies also offer the possibility of converting existing fossil-fuel power stations into grid energy storage systems that use electric thermal energy storage (ETES).
Coal pollution mitigation
Coal pollution mitigation is a process whereby coal is chemically washed of minerals and impurities, sometimes gasified, burned and the resulting flue gases treated with steam, with the purpose of removing sulfur dioxide, and reburned so as to make the carbon dioxide in the flue gas economically recoverable, and storable underground (the latter of which is called "carbon capture and storage"). The coal industry uses the term "clean coal" to describe technologies designed to enhance both the efficiency and the environmental acceptability of coal extraction, preparation and use, but has provided no specific quantitative limits on any emissions, particularly carbon dioxide. Whereas contaminants like sulfur or mercury can be removed from coal, carbon cannot be effectively removed while still leaving a usable fuel, and clean coal plants without carbon sequestration and storage do not significantly reduce carbon dioxide emissions. James Hansen in an open letter to then U.S. President Barack Obama advocated a "moratorium and phase-out of coal plants that do not capture and store CO2". In his book Storms of My Grandchildren, similarly, Hansen discusses his Declaration of Stewardship, the first principle of which requires "a moratorium on coal-fired power plants that do not capture and sequester carbon dioxide".
Running the power station on hydrogen converted from natural gas
Gas-fired power plants can also be modified to run on hydrogen.
Hydrogen can at first be created from natural gas through steam reforming, as a step towards a hydrogen economy, thus eventually reducing carbon emissions.
Since 2013, the conversion process has been improved by scientists at Karlsruhe Liquid-metal Laboratory (KALLA), using a process called methane pyrolysis.
They succeeded in making the soot easy to remove (soot is a byproduct of the process and previously damaged the working parts, most notably the nickel-iron-cobalt catalyst). The soot (which contains the carbon) can then be stored underground and is not released into the atmosphere.
Phase out of fossil fuel power plants
It has been estimated that there is still a chance of keeping global warming below 1.5 °C if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation.
Alternatives to fossil fuel power plants include nuclear power, solar power, geothermal power, wind power, hydropower, biomass power plants and other renewable energies (see non-carbon economy). Most of these are proven technologies on an industrial scale, but others are still in prototype form.
Some countries only include the cost to produce the electrical energy, and do not take into account the social cost of carbon or the indirect costs associated with the many pollutants created by burning coal (e.g. increased hospital admissions due to respiratory diseases caused by fine smoke particles).
Relative cost by generation source
When comparing power plant costs, it is customary to start by calculating the cost of power at the generator terminals by considering several main factors. External costs, such as connection costs and the effect of each plant on the distribution grid, are considered separately as an additional cost to the calculated power cost at the terminals.
Initial factors considered are:
Capital costs, including waste disposal and decommissioning costs for nuclear energy.
Operating and maintenance costs.
Fuel costs for fossil fuel and biomass sources, which may be negative for wastes.
Likely hours run per year, or load factor, which may be as low as 30% for wind energy, or as high as 90% for nuclear energy.
Offset sales of heat, for example in combined heat and power district heating (CHP/DH).
These costs occur over the 30–50 year life of the fossil fuel power plants, using discounted cash flows.
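As an illustrative sketch of such a discounted-cash-flow comparison (every input below is a placeholder, not data for any real plant):

```python
def levelized_cost_per_mwh(capital, annual_costs, annual_mwh, years, rate):
    """Discounted lifetime costs divided by discounted lifetime output."""
    discount = [(1 + rate) ** -y for y in range(1, years + 1)]
    total_cost = capital + annual_costs * sum(discount)
    total_output = annual_mwh * sum(discount)
    return total_cost / total_output

# Hypothetical 1,000 MW plant at an 80% load factor over a 40-year life,
# with a 7% discount rate and placeholder capital and running costs.
annual_mwh = 1000 * 0.80 * 8760
print(levelized_cost_per_mwh(2.0e9, 2.5e8, annual_mwh, 40, 0.07))  # ≈ 57 $/MWh
```

The same framework lets the factors listed above (load factor, fuel cost, heat offsets) be varied one at a time when comparing generation sources.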
| Technology | Power generation | null |
14553266 | https://en.wikipedia.org/wiki/Allotropes%20of%20oxygen | Allotropes of oxygen | There are several known allotropes of oxygen. The most familiar is molecular oxygen (O2), present at significant levels in Earth's atmosphere and also known as dioxygen or triplet oxygen. Another is the highly reactive ozone (O3). Others are:
Atomic oxygen (O), a free radical.
Singlet oxygen (¹O2), one of two metastable states of molecular oxygen.
Tetraoxygen (O4), another metastable form.
Solid oxygen, existing in six variously colored phases, of which one is octaoxygen (O8, red oxygen) and another one metallic (ζ-oxygen).
Atomic oxygen
Atomic oxygen, denoted O or O1, is very reactive, as the individual atoms of oxygen tend to quickly bond with nearby molecules. Its lowest-energy electronic state is a spin triplet, designated by the term symbol 3P. On Earth's surface, it exists naturally for a very short time. In outer space, the presence of ample ultraviolet radiation results in a low Earth orbit atmosphere in which 96% of the oxygen occurs in atomic form.
Atomic oxygen has been detected on Mars by Mariner, Viking, and the SOFIA observatory.
Dioxygen
The common allotrope of elemental oxygen on Earth, O2, is generally known as oxygen, but may be called dioxygen, diatomic oxygen, molecular oxygen, dioxidene or oxygen gas to distinguish it from the element itself and from the triatomic allotrope ozone, O3. As a major component (about 21% by volume) of Earth's atmosphere, elemental oxygen is most commonly encountered in the diatomic form. Aerobic organisms use atmospheric dioxygen as the terminal oxidant in cellular respiration in order to obtain chemical energy. The ground state of dioxygen is known as triplet oxygen, ³O2, because it has two unpaired electrons. The first excited state, singlet oxygen, ¹O2, has no unpaired electrons and is metastable. The doublet state requires an odd number of electrons, and so cannot occur in dioxygen without gaining or losing electrons, such as in the superoxide ion (O2−) or the dioxygenyl ion (O2+).
The ground state of O2 has a bond length of 121 pm and a bond energy of 498 kJ/mol. It is a colourless gas with a boiling point of −183 °C. It can be condensed from air by cooling with liquid nitrogen, which has a boiling point of −196 °C. Liquid oxygen is pale blue in colour, and is quite markedly paramagnetic due to the unpaired electrons; liquid oxygen contained in a flask suspended by a string is attracted to a magnet.
Singlet oxygen
Singlet oxygen is the common name used for the two metastable states of molecular oxygen (O2) with higher energy than the ground state triplet oxygen. Because of the differences in their electron shells, singlet oxygen has different chemical and physical properties than triplet oxygen, including absorbing and emitting light at different wavelengths. It can be generated in a photosensitized process by energy transfer from dye molecules such as rose bengal, methylene blue or porphyrins, or by chemical processes such as spontaneous decomposition of hydrogen trioxide in water or the reaction of hydrogen peroxide with hypochlorite.
Ozone
Triatomic oxygen (ozone, O3) is a very reactive allotrope of oxygen that is a pale blue gas at standard temperature and pressure. Liquid and solid O3 have a deeper blue color than ordinary O2, and they are unstable and explosive. In its gas phase, ozone is destructive to materials like rubber and fabric and is damaging to lung tissue. Traces of it can be detected as a pungent, chlorine-like smell coming from electric motors, laser printers, and photocopiers, as it is formed whenever air is subjected to an electrical discharge. It was named "ozon" in 1840 by Christian Friedrich Schönbein, from ancient Greek ὄζειν (ozein: "to smell") plus the suffix -on, commonly used at the time to designate a derived compound and anglicized as -one.
Ozone is thermodynamically unstable and tends to react toward the more common dioxygen form. It is formed by reaction of intact O2 with atomic oxygen produced when UV radiation in the upper atmosphere splits O2. Ozone absorbs strongly in the ultraviolet and in the stratosphere functions as a shield for the biosphere against mutagenic and other damaging effects of solar UV radiation (see ozone layer). Tropospheric ozone is formed near the Earth's surface by the photochemical disintegration of nitrogen dioxide in the exhaust of automobiles. Ground-level ozone is an air pollutant that is especially harmful for senior citizens, children, and people with heart and lung conditions such as emphysema, bronchitis, and asthma. The immune system produces ozone as an antimicrobial (see below).
Cyclic ozone
Cyclic ozone is a theoretically predicted molecule in which its three atoms of oxygen bond in an equilateral triangle instead of an open angle.
Tetraoxygen
Tetraoxygen had been suspected to exist since the early 1900s, when it was known as oxozone. It was identified in 2001 by a team led by Fulvio Cacace at the University of Rome. The molecule was thought to be in one of the phases of solid oxygen later identified as O8. Cacace's team suggested that O4 probably consists of two dumbbell-like O2 molecules loosely held together by induced dipole dispersion forces.
Phases of solid oxygen
There are six known distinct phases of solid oxygen. One of them is a dark-red O8 cluster. When oxygen is subjected to a pressure of 96 GPa, it becomes metallic, in a similar manner to hydrogen, and becomes more similar to the heavier chalcogens, such as selenium (exhibiting a pink-red color in its elemental state), tellurium and polonium, both of which show significant metallic character. At very low temperatures, this phase also becomes superconducting.
| Physical sciences | Group 16 | Chemistry |
14563934 | https://en.wikipedia.org/wiki/Flour%20beetle | Flour beetle | Flour beetles are members of several darkling beetle genera including Tribolium and Tenebrio. They are pests of cereal silos and are widely used as laboratory animals, as they are easy to keep. The flour beetles consume wheat and other grains, are adapted to survive in very dry environments, and can withstand even higher amounts of radiation than cockroaches.
Red flour beetles infest multiple different types of products such as grains, cereals, spices, seeds, and even cake mixes. They are also very resistant to many insecticides, which makes their damage very impactful on the economy of milling industries.
The larvae of T. molitor, when full-grown, are known as mealworms; small specimens and the larvae of the other species are called mini mealworms.
Female reproduction is distributed over the adult life-span, which lasts about a year. Flour beetles also display pre-mating discrimination among potential mates. Female flour beetles, specifically of T. castaneum, can mate with different males and may choose more attractive males over the course of their adult life-span.
Description
Flour beetles are reddish-brown, oval-shaped insects with clubbed antennae on their heads. They range from around 1/8 to 3/16 inch (roughly 3 to 5 mm) in length. Tribolium castaneum, more commonly known as the red flour beetle, is known to fly; other species of flour beetles crawl.
Selected species
Aphanotus brevicornis – North American flour beetle
Tribolium castaneum – red flour beetle
Tribolium confusum – confused flour beetle
Tribolium destructor – destructive flour beetle
Tenebrio molitor – yellow mealworm beetle
Tenebrio obscurus – dark mealworm beetle
Gnatocerus cornutus – broad horned flour beetle
Diet
Flour beetles feed on a range of foods: many grain products, cereal, chocolate, and a number of powdered foods, including flour, spices, powdered milk mix, and pancake and cake mix.
Flour beetles also consume their own kind, engaging in cannibalism, though it is not an obligatory biological characteristic. It has been suggested that they partake in cannibalism because it raises the fitness of flour beetles living in habitats of weak sustainability. Additionally, it can be a form of parental care: some species produce trophic eggs for their offspring to eat. Those that engage in cannibalism are normally adults or larvae that consume pupae or eggs. Eggs and pupae fall prey to older flour beetles because, being so young, they have no defense mechanisms. Furthermore, eggs and pupae are easily digestible, making them susceptible to becoming prey.
Distribution and habitat
Today, Tribolium are most commonly found in stored food products, but originally they lived under the bark of trees or in rotting wood. Exactly when flour beetles made the switch from bark to food products is unknown, but for as long as humans have created grain piles, flour beetles have been using them as habitats. Tribolium confusum stems from Africa or Ethiopia, while Tribolium castaneum originated in India. Flour beetles are now dispersed worldwide and are not confined to any specific country.
Sexual selection and reproduction
Tribolium use chemical signals, more specifically the pheromone 4,8-dimethyldecanal (DMD), to attract mates. DMD attracts both females and males and has been isolated from Tribolium castaneum, Tribolium confusum, Tribolium freemani, and Tribolium madens. Tribolium practice polyandry and continually lay eggs. Female Tribolium employ cryptic choice and accept or reject male spermatophores. Females also adjust the number of spermatophores they accept based on male phenotypes. More specifically, Tribolium castaneum females are more inclined to accept spermatophores from recurring mates.
Competition
An experiment by Zane Holditch and Aaron D. Smith found that while there is competition among Tribolium species, a species' success may depend on its timing of arrival and the resources available. The results demonstrate that when the species were introduced simultaneously, Tribolium castaneum was competitively dominant, growing larger populations than competitors that were added later. Moreover, Tribolium castaneum benefited competitively from early arrival in comparison to Tribolium confusum.
Research
In 2008, the Tribolium castaneum genome was sequenced by the Tribolium Genome Sequencing Consortium.
Tribolium are easy to use for research because they have a high growth rate and thrive very well in a simple flour culture.
Evolutionary and ecological research
Tribolium beetles have contributed to research for a long period of time.
Tribolium experiments demonstrate that a multitude of factors determine success in colonization for any population. Experiments show that frequency and size, genetic and demographic processes, and individuals' relative fitness play a role in the success of colonizing populations.
Tribolium have also allowed researchers to gain a better understanding on the dynamics of population size.
| Biology and health sciences | Beetles (Coleoptera) | Animals |
5550252 | https://en.wikipedia.org/wiki/Stupendemys | Stupendemys | Stupendemys is an extinct genus of freshwater side-necked turtle, belonging to the family Podocnemididae. It is the largest freshwater turtle known to have existed, with a carapace over 2 meters long. Its fossils have been found in northern South America, in rocks dating from the Middle Miocene to the very start of the Pliocene, about 13 to 5 million years ago. Male specimens are known to have possessed bony horns growing from the front edges of the shell and the discovery of the fossil of a young adult shows that the carapace of these turtles flattens with age. A fossil skull described in 2021 indicates that Stupendemys was a generalist feeder.
History and naming
Stupendemys was first named in 1976 by Roger C. Wood based on specimen MCNC-244, the medial portion of a large-sized carapace with an associated left femur, scapulacoracoid and a cervical vertebra. Wood also described several other specimens he referred to Stupendemys, which includes MCZ(P)-4376. This specimen preserves much of the carapace alongside a fragmented plastron and various other bones. The fossils were unearthed by a paleontological excavation of the Harvard University in Venezuela in 1972. In 2006 a second species, Stupendemys souzai, was described by Bocquentin and Melo based on material from the Solimões Formation in Acre State in Brazil, also home to the giant Caninemys.
In February 2020, Cadena and colleagues published a paper describing material discovered during the routine excavations in the Urumaco Formation, which have been ongoing since 1994. The material includes a relatively complete carapace that set a new maximum size for the genus and was designated as the allotype, meaning the specimen is of the opposite sex of the holotype. Venezuela also yielded fossils of a lower jaw, which was used to lump Caninemys into Stupendemys in the 2020 study. The authors likewise considered S. souzai to be synonymous with S. geographica. However, more fossils were discovered in the Colombian Tatacoa Desert and formally described by Cadena and colleagues in 2021, including the first definitive skull remains as well as the first remains of a juvenile or early adult specimen (carapace length under 1 meter). The La Victoria Formation also yielded the remains of an adult female as well as more fossils of Caninemys. With definitive skull remains of Stupendemys known in association with a carapace and new fossils of Caninemys, the referral of the Caninemys skull to Stupendemys was contested and the former was re-established as a valid genus.
The name Stupendemys is a combination of "stupendous", meaning extremely impressive, and the Ancient Greek word "emys" for freshwater turtle. The species name meanwhile honors the National Geographic Society. However, the name Stupendemys geographicus, as coined by Wood, is grammatically incorrect, as Stupendemys constitutes a feminine generic name. The name was eventually corrected to Stupendemys geographica in 2021 in accordance with the International Code of Zoological Nomenclature (ICZN).
Description
The skull of Stupendemys is roughly triangular in top view and the edges of the jaws converge at the front of the snout in a straight edge. The skull is dorsally extremely inflated by the prefrontals that make up a large area of the front region of the skull, forming a vertical wall above the bony nostril. Following the prefrontals and orbits the skull slopes down drastically before ascending again through the parietals. The orbits are relatively small and oriented to the sides. When viewed from below the premaxillae bear a deep concavity at their center. In this view the premaxillae form most of the anteromedial edge of the skull, meeting each other towards the middle of the skull and narrowing just before the deep concavity. In front view, the premaxillae form the bottom margin of the bony nostrils, tapering as they move down.
The carapace of adult Stupendemys can reach a straight midline length of greater than 2 meters with a low-arched profile. The nodular contours on the surface are irregular and the frontal margin of the shell is characterized by a deep notch flanked by large horns in male specimens. These horns are deeply grooved, suggesting that they were covered by a keratinous sheath. In addition to these horns, the front margin of the nuchal-peripheral bones is notably thickened and upturned. The surface is smooth to striated or lightly pitted. The margins of the posterior peripheral bones are moderately scalloped. The costal scutes of the carapace are relatively thin. In overall shape the carapace of Stupendemys is longer than it is wide.
Size
Stupendemys is the largest freshwater turtle currently known to science, with several specimens reaching a carapace length exceeding 2 meters. The largest specimen of Stupendemys is CIAAP-2002-01, an almost complete carapace with a parasagittal length of 2.86 meters. This exceeds the size of the Vienna specimen of the Cretaceous sea turtle Archelon (carapace length 2.20 meters), otherwise the largest known turtle.
The weight of Stupendemys was estimated based on the straight carapace length, with calculations indicating a weight of 871 kg for CIAAP-2002-01 and 744 kg for MCZ(P)-4376, previously the largest known specimen of Stupendemys. However, these estimates do not compensate for the large embayment present at the front of the shell. A more precise body mass estimate might be achieved by calculating the average of the weight estimates based on midline length and parasagittal length. Applying this method yields a weight of 1,145 kg for the largest Stupendemys specimen.
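As a hedged sketch of that averaging (the 871 kg and 1,145 kg figures are from the text; the parasagittal-based value below is back-calculated and therefore an assumption, not a published number):

```python
# The midline-based estimate is taken here to be the 871 kg figure above;
# the parasagittal-based estimate is inferred (hypothetical), chosen so
# that the average reproduces the published 1,145 kg.
midline_kg = 871
parasagittal_kg = 1419  # inferred: 2 * 1145 - 871

print((midline_kg + parasagittal_kg) / 2)  # 1145.0
```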
The evolution of such an enormous size may have been multifaceted, caused by a combination of factors including pressure from predators, habitat size and favorable climatic conditions, although Stupendemys' temporal range indicates that it managed to survive through times of global cooling following the middle Miocene climatic transition (MMCT). Lastly, the giant size could have a phylogenetic link and be ancestral to Stupendemys, with several other related forms being known to have possessed gigantic proportions.
Phylogeny
Although initially believed to be a pelomedusid by Wood, later studies consistently recovered Stupendemys as a podocnemidid turtle instead. In 2020 Stupendemys was recovered as a basal member of Erymnochelyinae. However, this position was influenced by the inclusion of material belonging to Caninemys. In their 2021 publication, Cadena and colleagues again attempted to determine the relationship between Stupendemys and other pan-pleurodiran turtles using the morphological characters established previously by Joyce and colleagues (2021), 268 characters across 104 species of turtles. The analysis was run once with all taxa and once with a focus on Podocnemidoidae, removing all other taxa save for Proganochelys quenstedti, Notoemys laticentralis, and Platychelys oberndorferi. The single most parsimonious tree resulting from the second analysis recovered Stupendemys as an early branching member of a clade with Peltocephalus dumerilianus at its base. Caninemys, now recognized as a distinct taxon, nested at the base of Erymnochelyinae. Similar results were later recovered in the 2024 description of Peltocephalus maturin.
Paleobiology
Paleoecology
Following the 2021 research of Cadena and colleagues, the Pebas Mega-Wetlands housed at least two species of giant side-necked turtles: Stupendemys and Caninemys. Despite their similar size (both sporting a carapace length greater than 2 meters), they vary greatly in skull morphology, with Caninemys proposed to have employed a vacuum feeding strategy combined with a strong bite supported by tooth-like structures of the maxilla, while a more durophagous-omnivorous diet has been suggested for Stupendemys. This difference in diet and feeding strategy would be in accordance with Gause's Law, by which two species competing for the same ecological niche cannot coexist with one another for a long period of time without either differentiating or one dominating over the other in the long run. In addition to the different skull morphology, the two taxa may have also been able to coexist due to the sheer size of the Pebas Mega-Wetlands they inhabited, as this ecosystem stretched over most of northern South America during the Middle Miocene. This may also have prevented the two taxa from being in direct competition over nesting grounds and basking spaces.
The diet of Stupendemys may have been very diverse and broad, possibly including molluscs and other hard-shelled prey as well as vertebrate prey, as suggested by Meylan and colleagues for Caninemys. At its size it would have been easily capable of consuming various fish, snakes and small crocodilians. A broad dietary width would have helped Stupendemys in maintaining its large body size. Furthermore, Cadena and colleagues also highlight the role of turtles as seed-dispersers in modern-day Amazonia, consuming fruit of palms for example (Arecaceae), seasonally sometimes in great quantities, even if they are not typically part of their standard diet. With its wide gape, Stupendemys would qualify as a megafaunal frugivore and seed disperser.
Sexual dimorphism
The absence of horns on most Stupendemys specimens indicates that they were not used as a defense mechanism. However, their forward-facing position on the carapace may indicate that they were used in intraspecific combat. Cadena and colleagues hypothesize that the horns may have been a sexually dimorphic trait exclusively found in males, suggesting they were used similarly to the horns and antlers found in artiodactyls. Among extant turtles similar behavior can be found in snapping turtles, some of the largest freshwater turtles alive, which are known to fight for dominance in overlapping territories. This hypothesis is supported by the presence of a deep, elongated scar along the left horn of CIAAP-2002-01, which could have been left by the horn of a rival male that engaged it in combat. The authors further suggest that in Stupendemys the males may have been the larger sex, similar to the condition seen in modern podocnemids. However, other traditionally sexually dimorphic traits of the turtle shell, such as a deeper anal notch or a xiphiplastral concavity, have not yet been observed in Stupendemys fossils.
Ontogeny
Prior to the 2021 study of Cadena and colleagues, only adult specimens of Stupendemys had been described. The discovery of a specimen with a carapace length smaller than 1 meter gives an insight into the changes the animal undergoes while reaching maturity. In addition to its small size, the animal is identified as a juvenile to young adult based on the absence of large horns and a shallow anal notch. The inner nuchal notch, anterior expansion of the peripherals 1 and 2, irregular nodular contours, inner contact between the 7th and 8th costals and the relative size of the plastral lobes and their arrangement (except for the pectorals) remain relatively consistent with size.
One of the most significant changes of the carapace of Stupendemys is its height. With age the shell of the turtle grows significantly flatter, while the nuchal region develops a pronounced upturn of its anterior margin and peripheral 1, creating a wide and deep anteromedial embayment of the carapace. The 2nd and 3rd vertebral scutes grow narrower as the animal matures from juvenile to adult, similar to the extant Podocnemis, Erymnochelys and Peltocephalus. The 5th vertebral scute meanwhile becomes the longest and widest of the series in adults while keeping its trapezoidal shape. This ontogenetic change of the vertebral scutes means that phylogenetic coding using the width of the vertebral scutes in relation to the pleural scutes should be treated with care due to the variable nature of these features, as shown by Stupendemys.
Paleoenvironment
During the Middle Miocene, the area inhabited by Stupendemys was part of an interconnected series of lakes, rivers, swamps and marshes that drained into the Caribbean, known as the Pebas Mega-Wetlands, which included the Colombian La Victoria Formation. The wetlands provided favorable conditions to the native reptilian fauna, with several lineages of crocodilians reaching enormous sizes during the Mid to Late Miocene and also diversifying in ecology. Some of the enormous crocodilians that coexisted with Stupendemys included the enormous caiman Purussaurus, the bizarre Mourasuchus and large-bodied gharials of the genus Gryposuchus, some species of which reached lengths of over 10 meters. Some of these crocodilians may have played a role in the evolution of Stupendemys' large body size, putting pressure on the animal through predation. Bite marks have been found on Colombian and Venezuelan specimens and an isolated tooth was found attached to the ventral surface of CIAAP-2002-01.
As the Pebas System began to disappear with the onset of the transcontinental Amazon Drainage, Stupendemys persisted in the wetlands of the northern Urumaco Formation and the Solimões Formation in Acre State, Brazil, into the Late Miocene before eventually dying out during the Early Pliocene like much of the large crocodilian fauna of the Miocene wetlands. Besides the aforementioned reptiles the waterways of Late Miocene South America were also inhabited by fish, including catfish such as Phractocephalus and Callichthyidae, characids such as Acregoliath rancii and the tambaqui (Colossoma macropomum), the South American lungfish (Lepidosiren paradoxa), trahiras (e.g. Paleohoplias assisbrasiliensis) and freshwater rays and sharks. Other turtles and tortoises found in the same deposits are Chelus columbiana (a fossil relative of the mata mata) and Chelonoidis. Further aquatic vertebrates included river dolphins and the large darter "Anhinga" fraileyi. At least within the Solimões Formation Stupendemys would have inhabited a floodplain or lacustrine environment with savannahs and gallery forests.
| Biology and health sciences | Prehistoric turtles | Animals |
5550318 | https://en.wikipedia.org/wiki/Meiolania | Meiolania | Meiolania is an extinct genus of meiolaniid stem-turtle native to Australasia throughout much of the Cenozoic. Meiolania was a large turtle. Four species are currently recognized, although the validity of two of them is disputed. Meiolania was first described as a species of lizard related to Megalania by Richard Owen towards the end of the 19th century, before the continued discovery of additional fossils solidified its placement as a kind of turtle.
The best known species is M. platyceps, known from hundreds of specimens collected in Pleistocene strata of Lord Howe Island. The oldest known species is M. brevicollis from the Miocene of mainland Australia. Other species include M. mackayi from Pleistocene New Caledonia, which may be synonymous with M. platyceps, ? M. damelipi from Holocene Vanuatu, which may represent a non-meiolaniid turtle, and the Wyandotte species, an unnamed form from Pleistocene Australia tentatively identified as M. cf. platyceps by meiolaniid researcher Eugene S. Gaffney. Additional fossil remains indicate the presence of Meiolania or a close relative in multiple localities across Australia, New Caledonia and Fiji.
Meiolania was a well-armored animal with a somewhat raised carapace with spiky edges, osteoderm-covered forelimbs, a head adorned by massive cow-like horns and a tail encased by spiked tail rings and tipped by a large bony club. It has been hypothesized that many of these features could have been used either in self-defense or in intraspecific combat during the mating season. Furthermore, the horns could have served a role during foraging, helping the animal brush aside foliage while grazing. The discovery of fossil nests and certain adaptations against sand entering its nasal cavity indicate that they spent at least some time in arid regions or on the beaches of the islands they inhabited.
Neither the dispersal nor the extinction of Meiolania is fully understood. Several hypotheses have been proposed, ranging from it spreading across the now-submerged continent of Zealandia to it swimming between islands (the latter of which is now considered unlikely based on its heavy build and lack of aquatic adaptations). The extinction of this turtle was most likely a multifaceted process with ties to climate change, reduction of its native territory by rising sea levels, predation from invasive livestock and possibly hunting by humans. However, some of the youngest records are uncertain, with the roughly 3,000-year-old ?M. damelipi possibly being another type of turtle and the even younger, ca. 2,000–1,500-year-old, Pindai Cave meiolaniid being indeterminate at a genus level.
History and naming
Early research
Perhaps the first recorded discovery of meiolaniid remains stems from John Foulis, a doctor who lived on Lord Howe Island halfway through the 19th century. Foulis mentioned that he discovered the bones of a turtle when describing the island's geology, with later authors claiming that he sent a skull to an unspecified museum. While records of his writing exist, the latter claim could not be verified and remains questionable. More scientists arrived on the island around 1869 on the ship Thetis following the murder of a resident. Among these scientists was botanist and poet Robert D. FitzGerald, who according to Clarke (1875) discovered fossil turtles. However, this claim too could not be verified by later research, nor could the claim that another collection was made by a Mr. Leggatt, despite the fact that these remains were supposedly sent to British paleontologist Richard Owen. Nevertheless, there are records of FitzGerald writing to Owen regarding later discoveries on Lord Howe Island and notes on turtle remains recorded by zoologist Edward P. Ramsay. Another early record tells of meiolaniid remains being collected by geologist H. Wilkinson during yet another Thetis expedition to the island in 1882.
In 1884 turtle fossils once again found their way into the possession of FitzGerald, who proceeded to send the remains to Owen in London. It is noted that the collection of Meiolania remains seemingly reached a highpoint around this time, yet was poorly recorded in contrast to the well known history of less prolific expeditions. Multiple researchers appear to have been actively collecting turtle fossils at the time, including FitzGerald and Wilkinson, who were seemingly unaware of each other's efforts despite working for the same institution. In London meanwhile, Owen noted that the fossils of Lord Howe Island were similar to a skull discovered several years prior in Queensland, which he had attributed to the giant squamate Megalania (at the time thought to be a giant relative of the thorny devil rather than a monitor lizard). What has been noted as strange about Owen's conclusion is that he identified the fossils as those of a lizard despite the fact that multiple researchers in correspondence with him had already recognized them as belonging to a turtle. Owen saw the discovery of the Lord Howe Island material as evidence for a smaller relative of Megalania, which he subsequently named Meiolania. BMNH R675, an incomplete and damaged skull embedded in hardened calcarenite, was chosen to serve as the holotype for the genus, and although its precise age and location are not known, Eugene S. Gaffney suggests it may have come from the 100,000 to 120,000 year old rocks of Ned's Beach. The fossils were originally assigned to two distinct species, M. platyceps and M. minor, though the latter has since been sunk into M. platyceps.
Not long after Owen named Meiolania, more and more material was published, including fossils much better preserved than the holotype, which led to revisions regarding Meiolania's classification. In 1887 Thomas Henry Huxley agreed with collectors that Meiolania was not a lizard but a type of turtle, which he named Ceratochelys sthenurus. In addition to erecting Ceratochelys, Huxley also referred the Queensland skull to this new genus. Meanwhile, upon receiving some additional fossil remains collected by Wilkinson (including a fully preserved skull), Owen came to believe that Meiolania was related to both lizards and turtles and thus placed the animal in a group he called Ceratosauria (a name already occupied by a clade of dinosaurs). When the fossils were examined by George Albert Boulenger, he sided with Huxley, but placed the animal in Pleurodira (side-necked turtles) rather than Cryptodira. This was the first of a long series of differing opinions on the relationship between meiolaniids and modern turtles. Arthur Smith Woodward on the other hand conducted further research on the continental remains and recognized that Owen's composite Megalania further contained the fossils of a marsupial in addition to the monitor lizard and turtle remains. Although he too agreed with Huxley's conclusion that the fossils were those of a turtle, both he and Boulenger pointed out that Meiolania took precedence over Ceratochelys and would thus be the correct name. He also concluded that the Queensland skull was clearly distinct from the material collected on Lord Howe Island and thus coined the name Meiolania oweni for the continental material in 1888.
Additional material was then described in 1889 from Gulgong, New South Wales, and in 1893 from Coolah, New South Wales. These instances were thought to correspond to a Pliocene and Pleistocene age respectively and were recorded by Robert Etheridge, Junior. Etheridge initially intended to write a detailed description of Meiolania after becoming personally invested and ensuring the continued collection of material. However, his focus eventually shifted and his work was instead continued by Charles Anderson. Around the same time the remains of a meiolaniid turtle were discovered in Argentina, named Niolamia argentina by Florentino Ameghino. While these remains were briefly considered to be a species of Meiolania and may in fact originally have been intended to be named as a species of the genus, they would eventually be found to be distinct enough to retain the original name.
Additional finds on Lord Howe Island
Another important contributor in the research history of Meiolania was William Nichols, a local who served as a guide and collector for the Australian Museum. According to Gaffney, Nichols' contribution practically doubled the amount of known Meiolania specimens while also finding the first significant shell remains of this genus.
After a brief period of little to no new discoveries, Anderson's work with the turtle remains eventually led to the creation of another species in 1925, when he described the horns and limb bones of a meiolaniid turtle discovered on Walpole Island south of New Caledonia. Originally these bones were discovered by A. C. Mackay, an engineer working for the Australian guano company. These remains were named Meiolania mackayi by Anderson, although later reviews of the material argued that the species was not diagnostic enough to have warranted this distinction. In addition to describing a new species, Anderson further supervised the creation of a skull reconstruction of Meiolania and described a great many remains collected towards the end of the 19th century on Lord Howe Island. With this Anderson finished the project started by Etheridge years prior. Anderson furthermore was the first to map the distribution of Meiolania across Lord Howe Island, even though the map was largely based on information given to him by Allan Riverstone McCulloch, as Anderson had not visited the island himself.
Parallel to Anderson publishing on the Etheridge collection, William Nichols' son-in-law Reginald V. Hines and a schoolteacher named Max Nicholls worked together to continue excavations, uncovering an additional 200 specimens, which they sold to the Australian Museum. Among their most important finds were a plastron and an articulated hindlimb, which dispelled Anderson's notion that Meiolania was a sea turtle. By the 1940s the locality where both Nichols and Hines had recovered their specimens became less relevant, with other areas across the island gaining importance. Among the most important finds from these new localities was a carapace with articulated vertebrae and limb bones, but no skull. The discovery at Ned's Beach in 1959 came about rather coincidentally, following a joking challenge made by Elizabeth Carrington Pope to Ray Missen, a local meteorologist. While photos were taken during recovery, a collapse of the excavation area nearly destroyed the shell. Eggs were found soon thereafter, and during the excavation of a pool the most complete skeleton to date was found. Although uncovered with the use of a jackhammer, the specimen could be pieced together and would eventually serve as the basis for later reconstructions of Meiolania.
The best known carapace of Meiolania was discovered in 1977 and, much like the remains found by Pope and Missen, had been a coincidental find. Alex Ritchie, who worked at the Australian Museum, failed to participate in the recovery of the "swimming pool" skeleton (as it is named by Gaffney). When he was informed of yet another find, he traveled to Lord Howe Island only to conclude that the remains were relatively insignificant. During his stay, however, he discovered the aforementioned shell and an associated skull on Old Settlement Beach.
Work by Gaffney and beyond
The next important expedition to Lord Howe Island was a joint project between the Australian Museum and the American Museum of Natural History in 1980, with the latter returning two years later for a second dig. These expeditions were the basis for a series of major publications by American researcher Eugene S. Gaffney, now considered an expert on meiolaniids. Gaffney's work consisted of complete and detailed descriptions of all known body parts of Meiolania and their significance for the phylogenetic position of the group. Gaffney published three papers on the subject: the first dealing with the history of Meiolania's discovery and its skull, the second with the vertebrae and tail club, and the final publication covering the shell and limbs while also reviewing Meiolaniidae as a whole. These papers were published in 1983, 1985, and 1996, respectively.
In part due to Gaffney's work, meiolaniids had become much better understood by 1992. These advances led Gaffney to re-examine the material of M. oweni, finding that it was sufficiently distinct from all other species to warrant being placed in its own genus, Ninjemys. While this removed one species from the genus, another was added that same year when Dirk Megirian described Meiolania brevicollis based on Miocene remains from the Camfield Beds (Northern Territory) of mainland Australia. Megirian had previously mentioned the Camfield material in a brief report in 1989, but was at that time unable to identify it at species level.
The most recently described species was published in 2010 by White and colleagues, based on limb material from Vanuatu. However, because this species is not known from skulls or tail elements, it is uncertain whether it actually represents a species of Meiolania, and it is thus typically referred to as ?Meiolania damelipi by both the original team and subsequent authors.
Although the taxonomy of this genus is still not fully clarified, especially given the abundance of isolated remains and of species named from poor or possibly non-meiolaniid material, papers from the 2010s onward have largely focussed on aspects of the animal's paleobiology, aided by multiple papers reexamining the South American taxa. An egg clutch assigned to Meiolania was analyzed in 2016; in 2017 the braincase of Meiolania was studied, illuminating some aspects of its lifestyle; and in 2019 Brown and Moll published an extensive review of the dispersal, ecology and lifestyle of the animal.
Etymology
The meaning of the name Meiolania has been subject to some debate, as Owen did not offer a detailed etymology in the type description. This has led to two primary hypotheses, both agreeing that the name has the same origin as that of Megalania and that the first element derives from the Ancient Greek "meion", meaning "lesser". The origin of the second part is less conclusive. Gaffney argues that the suffix derives from the Latin word "lanius", which means "butcher"; in this way Meiolania ("lesser butcher") would be complementary to Megalania ("great butcher"). Juliana Sterli and colleagues meanwhile translate the name as "lesser roamer", from the Ancient Greek "ἠλαίνω" meaning "to roam about". This argument is supported by the work of Richard Owen himself: although Owen never gave an etymology for Meiolania, he did provide one for Megalania, and contrary to Gaffney's reading he translates Megalania as "great roamer" rather than "great butcher".
Species
M. brevicollis
While most species of Meiolania were recovered from islands, M. brevicollis is the only named species from mainland Australia. Its fossils were discovered at the "Blast Site" near Camfield Station in the Northern Territory. It is thought to be part of the Bullock Creek Local Fauna, dating to the middle to late Miocene and thus making it the oldest named Meiolania species. M. brevicollis is among the better known Meiolania species and is known from partial skulls, neck vertebrae, horn cores, osteoderms, and shell remains. While this is not as much as is known for the type species, it still allows for comparison beyond the size and shape of the horns. The species name brevicollis was chosen in reference to the neck, which is shorter than that of M. platyceps.
?M. damelipi
The fossils of ?Meiolania damelipi could represent the youngest record of the genus and of meiolaniids as a whole, as they were discovered at the archaeological site of Teouma on Efate, Vanuatu. These bones date to 2,890 to 2,760 BP (ca. 940–810 BC) and were found alongside those of sea turtles. Although the remains of ?M. damelipi are numerous, accounting for 405 bones, their assignment to Meiolania is uncertain, as critical components such as tail rings, extensive carapace remains and, most importantly, skulls and horn cores are absent from the site. Due to this, ?M. damelipi (named for Willie Damelip) is only tentatively assigned to Meiolania and could in fact represent a different terrestrial turtle. Later excavations have revealed additional bones, but still failed to recover horn cores. It has also been noted that ?M. damelipi bears resemblance to fossil turtles from Fiji.
M. mackayi
M. mackayi, from the Pleistocene of Walpole Island, is known primarily from isolated horn core remains as well as a few limb bones. This species generally resembles M. platyceps, but was described as having narrower horns and more gracile limbs. However, according to Gaffney the fossil remains are not adequate to support a new species. While Gaffney concedes that it could represent a distinct biological species due to the isolation of Walpole Island, it does not show many physical differences from the better known species. Sterli follows this assessment, reasoning that the two populations could have become genetically distinct through being unable to maintain gene flow over such a large distance. However, as the material had already been named, Gaffney retained the species in his 1996 review, since the name has some value in making discussion easier. It was named after A. C. Mackay, the engineer who discovered the first fossils.
M. platyceps
The type species of the genus, M. platyceps is also the best known of the recognized species and is represented by several hundred individual bones as well as a few articulated skeletons. While the extensive number of fossil remains establishes M. platyceps as the de facto species used in comparisons with other turtle groups, it also serves to highlight the shortcomings of the less complete species, as M. platyceps specimens show a great range of intraspecific variation. Meiolania platyceps was endemic to Lord Howe Island, the remnant of a former volcano located between Australia and New Zealand, and lived during the Pleistocene. Meiolania platyceps also includes specimens previously named Meiolania minor by Owen.
Wyandotte species
A large-bodied Meiolania from Wyandotte Creek on Australia's mainland. The bones of the Wyandotte species were recovered from the Late Pleistocene (between 45,000 and 200,000 BP) of northern Queensland, with a potential second locality at Darling Downs. As the material of this "species" consists only of isolated horn cores and tail vertebrae, it has not been named and is simply referred to by the name of the locality it was found at. Gaffney cautions against establishing species based on variable features such as horn cores and tentatively assigns the Wyandotte remains to Meiolania cf. platyceps. He does, however, note the great size of the animal, its unique geography compared to other Pleistocene Meiolania species and the fact that the horns do not fully match those of the other established species.
Other indeterminate Meiolania or meiolaniid fossils have been found across several South Pacific islands including Tiga Island, the Pindai Caves of Grande Terre and Viti Levu (Fiji). While these occurrences are sometimes listed as examples of Meiolania, they are too fragmentary to properly assign to the genus and are listed as indeterminate meiolaniids by Gaffney. This highlights one particular issue with the fossil record of Meiolania: the lack of material. Although M. platyceps is known from a large amount of material and M. brevicollis can be morphologically distinguished from it, other members of the genus are mostly known from isolated remains and separated largely on body size, horn thickness and geography. This renders M. platyceps and M. brevicollis the only well understood Meiolania species, with the others being possible synonyms or perhaps not belonging to Meiolania at all.
Description
Skull and horns
The skull of Meiolania is robust with a rounded snout and a series of horns ornamenting the back. The nasal bones are fused into a single element that protrudes from the skull, though how far is subject to variation; in some specimens it is the front-most part of the skull, while in others the premaxilla extends further. The nares are divided in two by a bony internarial septum, a rarity among turtles. This appears to be a derived trait evolved by the lineage leading up to Meiolania: basal meiolaniids feature only a single narial opening, while those of Ninjemys are only partially divided. Meiolania platyceps specimens show both states, with some possessing partially and others fully divided nares. This can be attributed either to individual variation or to different growth stages of this part of the skull specifically; it is, however, not consistently indicative of overall age, as the difference in size between these specimens does not line up with that hypothesis. The tympanic cavity of Meiolania is large, comparable to that of modern testudinids. The triturating surface of the maxilla, used to grind and chew, shows a distinct second accessory ridge that is not present in more basal taxa like Niolamia and Ninjemys. The best known skull besides that of the type species is that of M. brevicollis. Although it overall resembles the more recent form, the head of this species appears to have been flatter with more elongated eye sockets. Furthermore, the cheeks behind the eyes are flat in M. brevicollis and there is a small postorbital crest behind the eyes.
Like other meiolaniids, Meiolania is easily identifiable by the size and shape of the various scale areas. In the absence of many of the actual bone sutures, it is the margins of these scales that are used to diagnose meiolaniid turtles. The areas are referred to as scales, scutes, scale areas or horns should they feature prominent protuberances. The C scales are highly variable convex scutes located right above the tympanic cavity. In some individuals they form simply a rounded area, while in others they feature a notable elevation. In the most extreme cases the C scales can form horns similar to the more consistently prominent B scales, also referred to as horns or horn cores. It is these B horns that are most characteristic of Meiolania, being typically large elements resembling the horns of a cow. The size of these horns varies among the different species and even among individuals of a single form. In Meiolania platyceps the B scales range from fully formed horns to structures no bigger than the preceding C scale. These stark differences within a single species are, however, not related to sexual dimorphism: a study comparing 50 horn cores of M. platyceps found no evidence for bimodality, meaning the distribution in horn size is more complex than a simple split between short-horned and long-horned individuals; instead the results skew towards the long-horned form. Among the different species, the horn cores of M. brevicollis are the narrowest, followed by those of M. mackayi, whose upper limit overlaps with those of the more robust Wyandotte species and M. platyceps. The B horns of the Wyandotte species show some of the strongest curvature among the species and are also among the longest proportionally; at least in relative length, the narrow horns of M. brevicollis manage to rival them. The angle at which these horns project may also differ between species, as those of M. brevicollis emerge at a notably lower angle than those of M. platyceps. Regardless of these potential interspecific differences, they are very much unlike the sideways-projecting horns of Ninjemys and Niolamia or the small ridge of hornlets seen in Warkalania. Additionally, while in these taxa the A, B and C scale areas form a single continuous shelf at the back of the head, this is not the case in Meiolania. The A scale area, which overlays the rear-most surface of the skull, forms only relatively small horns, differing greatly from the massive, shield-like structure seen in Ninjemys and Niolamia. This is most noticeable in M. brevicollis, in which the A horns are practically vestigial.
The scale areas of Meiolania are described as consistently showing at least slightly raised centers, so while they may not all form horns they are thicker in the middle than towards the edges. The X scale, which is a singular scale in the center of the skull roof, is small and rhomboid in shape. Furthermore, it is concave, meaning it forms somewhat of an indentation in the skull in profile view. While this scale projects in between the D and G scales, it does not prevent either of those pairs from contacting each other along the midline. Although the D scales have been described as slightly convex, said convexity is only poorly developed compared to other meiolaniids and thus appears much lower, setting Meiolania apart from Ninjemys and Niolamia. The G scales are much smaller and almost fully separated by the X scale. Like the D scales, they are not especially convex and appear flat. The Y and Z scales, covering the front-most areas of the skull from between the eyes to the tip of the snout, are both large and unpaired. Scale Z is the more anterior of the two, covering the very tip of Meiolania's snout and the nasal area. Although the margins between the Z and Y scale are raised, towards the bottom of the jaw where the Z scale connects to the F scales this ridge becomes a trough. Scale Y meanwhile covers much of the area between the two eye sockets and is larger than scale Z. This scale features a very prominent convexity, meaning it bulges out giving it a somewhat domed appearance. Although the Y scale is also slightly raised in Ninjemys, it is not as extreme as in Meiolania.
The F scales cover the entire area between the eye sockets and the dorsal Y and Z scutes, which effectively amounts to half of the area surrounding each eye. They meet the H scales towards the back of the skull. The H scales are somewhat shaped like a pentagon and cover both the sides as well as the top of the skull. Scute K is the posterior-most scale on the side of the skull, covering the area behind the tympanic cavity. Some individuals feature a prominent convexity in this region. Below the tympanic cavity the K scale area meets the J scale area, with the two only being separated by a shallow trough. It is not entirely clear how this might affect the soft tissue of the skull. On the one hand, this shallow, poorly defined contact may indicate that these two areas were not covered by scales but by skin as in many modern turtles (the same would also have to apply to Ninjemys). The prominent boss on the K scale area of some other individuals meanwhile favors the interpretation that the covering would have primarily consisted of scales like the remainder of the skull. Scale E is a minor element located between the tympanic cavity and the eye sockets, while scale I covers the area that effectively forms the beak. Scale area I is presumably covered in an extension of the rhamphotheca which also covers the chewing surface of the jaws. It is thought that the rhamphotheca extends approximately until the connection between scales I and J.
Postcrania
Limbs
The shoulder girdle is typically chelonian, showing the three prongs formed by the scapula and coracoid also seen in other turtles. As with most of Meiolania's anatomy, the shoulder girdle is more robust than in most turtles, bearing similarity to terrestrial tortoises. The dorsal process of the shoulder blade is nearly cylindrical, ending in a rounded surface that articulates with the shell. The second process of the shoulder blade, known as the acromial process, diverges from the dorsal process at a 120° angle. The angle between these two processes has been used as an indicator of shell depth, as they join at a right angle of 90° in shallow-shelled aquatic turtles and at more obtuse angles in high-shelled turtles like tortoises and snapping turtles. This is confirmed in Meiolania by the shell remains themselves. The final of the three processes of the shoulder blade is formed by the coracoid, which is a short and fan-shaped element as in tortoises.
No complete hip is known for Meiolania, but numerous partial remains have been found. The ilium extends nearly straight from the hip joint and takes on a spool shape, both characters also seen in snapping turtles and tortoises. Where Meiolania diverges from the typical bauplan is in how the ilium ends: among cryptodires, the ilium either fans out (as in baenids) or curves back (as in most testudinoids and snapping turtles). The fact that the ilium is mostly straight, as in testudinids, might hint at yet another trait related to lifestyle rather than phylogeny. The ischium possesses well developed lateral processes, and the contact between this bone and the lower half of the shell (the plastron) extends to the midline of the hip. The area where ischium and pubis meet is massive and forms a large plate of bone with thyroid fenestrae, something typically considered a basal condition for turtles that was lost and redeveloped across the clade. The pubis also forms multiple processes: two lateral processes that connect to the plastron and a single ossified epipubis that extends about as far as the lateral processes.
The front limbs of Meiolania were short and stout, especially in M. platyceps, resembling those of terrestrial tortoises and thus dispelling early notions researchers had about Meiolania being marine. Humerus bones are well sampled and consequently show a lot of variation; some, for instance, are noted by Gaffney for their unusually rugose surface texture. The carpal bones consist of seven elements, three proximal bones articulating with the radius and ulna while the remaining four connect to the fingers. The presence of three proximal carpals stands out, as most other turtles have four. While at least some modern species also possess three, this is due to two of them having fused, creating a single elongated bone. This does not appear to be the case in Meiolania, and it is instead hypothesized that the missing bone was either simply absent or consisted purely of cartilage, preventing it from fossilizing alongside the others. Something similar could be the case with the distal carpal bones, which like the proximal ones lack one bone compared to other turtles. Five fingers were present on each of Meiolania's forelimbs. The first finger was short and wide, followed by three fingers of roughly equal length and a fifth, shortened finger. Each finger is tipped by blunt and flat unguals resembling those of gopher tortoises, rather than the narrow and curved claws seen in terrapins and snapping turtles. The limbs of ?Meiolania damelipi meanwhile were described as relatively more gracile.
The femur shows the same overall traits as the humerus, with a great amount of variability between specimens, especially in rugosity and in the ossification of the articular surface edge. It is a stout element, matching the robust build of Meiolania and broadly resembling the femora of cryptodire turtles. The femoral head is very much rounded like that of tortoises, rather than elongated, yet another feature associated with terrestrial locomotion or at least bottom walking rather than swimming. Like the femur, the tibia and fibula are stout, robust bones, even more so than those of other terrestrial turtles. As in other turtles, the astragalus and calcaneum are fused into a single element simply referred to as the astragalocalcaneum. However, where the two bones can still be differentiated in most other turtles based on the presence of a suture zone, in Meiolania no such distinction can be made, as they are completely fused to one another. The distal tarsal bones are only poorly known, although it is assumed that Meiolania possessed the same number as modern turtles. Meiolania had four toes with similar proportions to the fingers, meaning they were short with broad, flattened and uncurved unguals.
Osteoderms or dermal ossifications are known from both M. platyceps and M. brevicollis and are clearly different from those of crocodilians. Three types can be distinguished. The first is a smooth sesamoid bone embedded in the flexor tendon musculature of the third finger, facing the ground. The second type corresponds to more traditional protective osteoderms and takes the form of disc- and cone-shaped bones that likely covered the limbs in a manner similar to the dermal ossicles of modern tortoises. Similar structures are common in modern turtles and are partially exposed, with scales covering the outer surface. The third type is a porous and rough-surfaced ossification of unclear function found near the toes. Unlike the exposed cone-like osteoderms, this third type was likely fully embedded in the skin of the turtle.
Neck
The neck vertebrae are known from two species: M. platyceps and M. brevicollis. The former had vertebrae that were longer, wider and lower, while those of the latter were shorter, narrower and taller. The neural spine of the axis is short in M. platyceps and high in M. brevicollis. Furthermore, the underside of the 5th and 6th cervicals possessed a keel in M. brevicollis that is missing in the Lord Howe Island species. This essentially means that M. platyceps had a neck notably longer than that of M. brevicollis. One of the most distinctive features of the neck vertebrae of Meiolania is the presence of well-developed cervical ribs, which stand out because they are absent in nearly all other turtles. Meiolania has at least five pairs of free cervical ribs, a possibly fused sixth pair and what may be a small rib element of the atlas. The first well-developed rib belongs to the second cervical and is, together with those of the third and fourth, among the largest of the cervical ribs.
Tail and tail club
As no fully articulated tails are available, it is not known how many vertebrae formed the caudal series. Gaffney suggests, based on comparison with modern turtles, that a minimum of 10 vertebrae formed the tail; it is possible that there were more, but unlikely that there were fewer. Following this interpretation, the tail of Meiolania would have been proportionally long, similar to those of modern snapping turtles. The tail of Meiolania was protected by a series of armored rings. These rings do not fully enclose the tail around its circumference, as the bottom of each segment is open and the ring thus incomplete. This is different from the condition in more basal species, in which the rings are closed. Based on specimen AMF:9051, these rings articulated with one another and correspond directly with the vertebrae, meaning each ring surrounds a single tail vertebra. It is unclear exactly how much of the tail was covered by rings. The best preserved fossils primarily show those towards the end of the tail; however, Gaffney proposes that the entire length would have been covered. Evidence to support Gaffney's claim can be found in some isolated material initially described as a "sternal arch" by Owen. Based on this fossil, the rings begin as much wider elements, corresponding with a broader tail closer to the torso. These earlier rings are comparably thin and do not immediately articulate with one another, meaning they do not overlap as the posterior ones do. Along the length of the tail, two pairs of ridges situated on the surface of the rings gradually grow and form two distinct pairs of thornlike spikes towards the tail's end. The larger pair is placed atop the rings and curves up and backwards, while the second pair projects from the sides of the rings and is notably smaller.
The final segment of the tail is covered by what is generally called a tail club or tail sheath, superficially resembling those seen in certain species of glyptodonts and ankylosaurid dinosaurs. This club consists of four spiked segments similar to the rings and the conical tip of the tail. It is not entirely clear whether the club is derived from the tail rings. While the overall structure appears similar, with the fused club showing the same spike arrangement, one individual shows sutures that do not fit this interpretation; however, it is also possible that said specimen is an outlier that does not represent how the club typically formed. Unlike the preceding rings, the fused club surrounds the entirety of the vertebrae, lacking the ventral opening seen before. The surface of the club is covered in a multitude of pits and foramina that give it a rugose texture and likely served as pathways for nutrient vessels, indicating that the club itself was covered in a cornified layer of scales. The spikes of the club show a much greater level of variability than those on the preceding rings. Although still appearing in the form of a dorsolateral and a lateral pair, both of which taper to form backwards-directed points, some Meiolania specimens preserve spikes on their club that are much blunter. The distance between the spikes can vary, and even the size progression is subject to intraspecific variation: typically the spikes peak in size at the beginning of the club and grow gradually smaller, while in some individuals they reach their greatest size around the second pair. There are signs of abrasion in some fossils, indicating that the bottom layer of scales on the club was gradually damaged by the animal dragging its tail over the ground. The tip of the club consists of nearly solid bone, with the final vertebra being barely separated from the surrounding ring. The tip of the club, like the spikes, may be pointed or blunt depending on the specimen.
Shell
The shell of Meiolania is roughly ovoid with parallel edges and a protruding front. It lacks the cephalic notch, an indentation in the front of the shell, as well as the caudal notch towards the back. The very back of the carapace is serrated, with the scutes forming a spiked edge. There are, however, questions regarding the precise shape of the carapace that are not fully resolved. The most complete specimens all preserve it as flattened, but these clearly suffered from distortion and collapse of the shell during fossilisation and thus do not reflect its form in life. Analysis of individual shell elements indicates at least some degree of vaulting, suggesting that the overall carapace was raised or arched, though probably not nearly as pronounced as the vaulting seen in modern giant tortoises. Instead, the shell of Meiolania is thought to have resembled that of gopher tortoises.
Size estimates
Size estimates differ between all recognized species. M. mackayi was among the smaller species, 30% smaller than M. platyceps, which in turn was 10 to 20% smaller than M. brevicollis and only half the size of the Wyandotte species. Put into numbers, M. mackayi is considered to have reached a carapace length of approximately and M. platyceps was estimated to reach a carapace length of . Rhodin notes that the largest complete M. platyceps carapace measures , and that the size of larger specimens was estimated based on the proportions of this individual. Another method used to determine the size of Meiolania was to compare the size of fossil eggs with those of modern tortoises. Based on this method, Lawver and Jackson estimated that one individual M. platyceps responsible for a clutch of 10 eggs must have had a carapace length of around . Furthermore, some sources claim that Meiolania platyceps could reach lengths of more than including the tail, neck and head. The Wyandotte species stands out as the largest member of Meiolania, with the horn cores indicating an animal similar in size to Ninjemys and a carapace length of perhaps up to . ?M. damelipi was described as being of a similar size to M. platyceps, with Rhodin and colleagues estimating a length of based on limb material.
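The percentages given above can be combined into a rough relative chain (taking the carapace length of M. platyceps as L; this normalization is purely illustrative and not from the source):

L_{\mathrm{mackayi}} \approx 0.7\,L, \qquad L_{\mathrm{brevicollis}} \approx 1.1\text{--}1.25\,L, \qquad L_{\mathrm{Wyandotte}} \approx 2\,L.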
Phylogeny
Unlike the relationship between meiolaniids and other turtles, the internal structure of Meiolaniidae is well understood, with research consistently recovering the same results. Within its family, Meiolania is placed as one of the most derived members, displaying several features that are observed to have changed between the earliest meiolaniids and Meiolania itself. The second accessory chewing surface, divided nares and small X scale first appeared at the beginning of the Australasian group that unites all meiolaniids other than Niolamia. Furthermore, Ninjemys is excluded from the clade formed by Warkalania and Meiolania in part based on the anatomy of the D scales, which are high in the former and low in the latter. Finally, the recurved cow-like horns and the absence of a continuous shelf of scales at the back of the skull separate Meiolania from its closest relative, Warkalania. Throughout this family tree one can also observe a gradual decrease in the size of the A scales, which begin as a large shield-like structure in basal taxa like Niolamia and Ninjemys and are comparably reduced in Meiolania. The B horns change orientation, protruding sideways in early forms and curving back in later species, though here Warkalania represents an outlier with its highly reduced horns. One wildcard is Gaffneylania, the most recently described meiolaniid, which has been recovered in several different positions within the family; this is largely due to its very fragmentary nature, which leaves several important traits ambiguous.
Within Meiolania itself, relationships between the species are less certain, primarily due to how fragmentary or even undiagnostic some of them are. It is hypothetically possible to find clades within the genus based on the width of the B horns; however, as Gaffney notes, this is not a consistent feature and varies greatly once a larger sample size is considered.
Paleobiogeography
While the geographic ranges of early meiolaniids and the continental Meiolania brevicollis are easily explained through the breakup of Gondwana, other means of dispersal must have been necessary to account for the many remains found on offshore islands.
A commonly proposed but somewhat controversial hypothesis for the appearance of Meiolania is direct dispersal across water, which may range from drifting, floating, walking, and wading to active swimming. While some authors simply suggest that Meiolania was a terrestrial animal capable of swimming, others propose an aquatic lifestyle altogether. Mittermeier even goes as far as to suggest that meiolaniids had marine ancestors, a hypothesis not favored by modern phylogenetic analysis.
Brown and Moll (2019) deem active travel through saltwater unlikely, even outright impossible. According to them the head size, large horns, fused bones, heavy ossification and inability to retract the head, combined with the animal's neck flexibility being primarily adapted for downward movement while grazing, would have made it difficult to keep the head above water while swimming or floating. The limbs are proportionally short and similar to those of tortoises, differing clearly from the flippers of sea turtles and the limbs of other aquatic turtles, making them inefficient tools for active swimming. Adding to that is the likelihood that they were covered in osteoderms, bony scutes, based on basal meiolaniids from South America. The shell shape is another factor against active swimming: while White mentions the highly buoyant shell of modern turtles, the carapace of Meiolania is shaped in a way that traps much less air and thus does not provide the lift in water seen in modern species. The highly armored tail, meanwhile, is likened to an anchor, further decreasing mobility in water. All of these combined would likely lead to the animal drowning if it found itself in open water, especially with the additional factors of exposure and currents working against it. This conclusion is corroborated by the work of Lichtig and Lucas, who consider Meiolania negatively buoyant (although their work is otherwise criticized by Brown and Moll).
While they may not have been actively swimming to settle new islands, another possibility is dispersal through rafting. This mode of dispersal could allow them to arrive on isolated pieces of land on natural platforms such as tree stumps or masses of debris. This is considered more likely by Brown and Moll, though it comes with its own issues: adults would have needed large rafts, while juveniles would have faced increased predation upon arriving at their destination while also needing more time to reach sexual maturity. Still, rafting is a valid hypothesis, as it is well documented in modern tortoises. Rafting has been used to explain the current existence of giant tortoises on isolated island chains within the Indian Ocean, with Aldabra giant tortoises having been found floating in the open ocean. In 2004 a female Aldabra giant tortoise came ashore at Kimbiji, Tanzania, after having travelled from its home on Grande Terre, although it was found emaciated and covered in gooseneck barnacles. Another example of rafting giant tortoises was recorded in 1936, when two adult Galápagos tortoises were found adrift off the coast of Florida, likely after being swept away from a captive colony by a hurricane. At the same time, the retractable neck, center of gravity and greater buoyancy may all be factors making modern tortoises better suited to surviving rafting than meiolaniids.
Yet another hypothesis stems from Bauer and Vindum (1990), who suggest that Meiolania could have arrived on New Caledonia not through natural dispersal but because it was introduced there by humans. Although there are known examples of living giant tortoises being transported on ships as a source of food, this hypothesis is nonetheless considered unlikely by Brown and Moll, who point out multiple problems that render human introduction an impractical and unlikely scenario. The transport of mature Meiolania would have come with several issues, in particular related to the size and armored, possibly very defensive nature of these animals. Even if humans had been able to subdue and transport large adults over great distances, they would have had to introduce a great number of them in order to sustain the population, given the slow reproduction cycles of tortoises. Finally, studies have shown that adult tortoises introduced to new areas are very likely to disperse soon after release if not contained, an inconvenience if the turtle was meant to serve as a food source. The introduction of juveniles would be easier logistically due to their smaller size and the increased likelihood of them staying in the area they were released in. However, they too make for a poorly sustainable source of meat due to the long time tortoises take to grow and reproduce, again causing issues with establishing a stable population. Brown and Moll further point out that there is no evidence that the Lapita kept or even domesticated Meiolania.
The final hypotheses for the presence of meiolaniids on remote islands depend more on geological events and terrestrial movement than on trans-oceanic travel. One explanation can be found in what is termed the "escalator hopscotch" model. According to this model, an island chain may undergo a continuous process in which land emerges on one end of the chain and submerges on the other, allowing an animal such as Meiolania either to travel over directly connected islands or to swim through narrow waterways. This means that even if one particular island has only emerged recently, the fauna on it could have arrived from a nearby island that is no longer above sea level. This scenario can be applied to Meiolania platyceps on Lord Howe Island, for instance, as the island is simply the latest in a series of volcanic islands formed by the Lord Howe Rise. Brown and Moll meanwhile favor the hypothesis that Zealandia could have played an important role in the dispersal of Meiolania. According to them, many of the islands Meiolania was native to are in fact parts of a now largely submerged Zealandia, which could indicate that these turtles were more widespread in the past, only to be "stranded" on the remote islands after the continent was flooded.
Paleobiology
Lifestyle
The idea that Meiolania was a marine animal has been suggested a multitude of times since its initial discovery. Allan Riverstone McCulloch for instance believed that the fossils on Lord Howe Island were preserved when turtles came ashore to lay their eggs, only to die in the process. Anderson meanwhile, who described a series of limb elements of what he considered to be two different species, likened Meiolania to marsh and river turtles, noting that its limbs were far less specialised than those of true sea turtles. He proposed that Meiolania was more semi-aquatic, inhabiting the shore and estuaries while still being capable of traveling the ocean to disperse to other islands.
Among the most recent publications suggesting a more aquatic way of life was that of Lichtig and Lucas (2018). Their analysis primarily focused on shell dimension ratios and limb anatomy in comparison to modern turtle species, leading them to propose that Meiolania was a bottom walker, which they liken to modern members of the Chelydridae, the snapping turtles. However, this hypothesis was rejected by Brown and Moll the following year. Besides the questionable nature of the ratio-based approach, the two authors point out that Lichtig and Lucas based their conclusion on a single juvenile specimen which was a composite reconstruction, thus not only providing an extremely poor sample size but also not reflecting the real proportions of the animal, much less those of an adult. Yet another problem pointed out by Brown and Moll is that the aquatic nature proposed in the Lichtig and Lucas study results from only one of the two calculations they offered.
Most other authors, historical as well as contemporary, support the idea that Meiolania was terrestrial. One study with major implications for this hypothesis was published by Paulina-Carabajal et al. in 2017. In this study, the endocasts of the meiolaniids Meiolania, Gaffneylania and Niolamia were scanned and analyzed, with particular focus on the nasal cavity and inner ear. One particularly noteworthy feature in this regard is the elongated vestibulum of the nose. This is comparable to two groups of animals: on the one hand some turtles with snorkel-like noses (softshell turtles, mata matas and pig-nosed turtles), and on the other lizards living in dry and arid conditions. In the case of the latter, the elongated vestibulum helps keep sand out of the animal's nose, which is particularly useful in deserts or other sandy environments. This matches the suggestion by some researchers that Meiolania platyceps was a beachgoing animal, spending at least parts of its life in sandy areas, even if only to lay eggs. The size of the nasal cavity (cavum nasi proprium) itself is also indicative of a terrestrial lifestyle. Generally, aquatic turtles possess the smallest nasal cavities among living testudines, while those of terrestrial tortoises are notably larger and allow for a better sense of smell; the nasal cavity of meiolaniids is greater still. A similar relation can be seen in the inner ear anatomy, especially the wide angle between the anterior and posterior semicircular canals. This angle measures only between 80 and 95° in aquatic turtles, 100° in tortoises and 115° in meiolaniids. The inner ear anatomy of tortoises and meiolaniids is generally associated with stabilizing the head while walking, while that of aquatic turtles is built to deal with rolling during swimming. Various aspects of its general morphology also draw closer comparison to terrestrial tortoises than aquatic turtles, such as the robustness of the limbs, the rounded head of the femur and the shape of the shoulder girdle.
Diet
Multiple aspects of Meiolania's anatomy have been used to infer its diet. The neck's flexibility, largely limited to side-to-side movement by the projection of the carapace and the weight of the horns, is thought to indicate that Meiolania was a terrestrial grazer. It likely fed on a variety of plant material including various herbaceous plants, ferns and perhaps even the fallen fruit of palm trees. Although this may have been its preferred way of feeding, it is not entirely impossible that Meiolania could have occasionally browsed on low-hanging vegetation. The mild climate of Lord Howe Island would have provided the turtle with a consistent supply of food, and it is possible that Meiolania periodically wandered across the island in search of food, appearing seasonally in certain regions. Foraging could have made use of the enhanced sense of smell proposed by Paulina-Carabajal and colleagues. While it is possible that ?Meiolania damelipi was not a meiolaniid, isotopic analysis does line up with the ecology of Meiolania platyceps, suggesting an herbivorous to omnivorous diet.
Reproduction
Although nothing concrete is known about the mating behavior of Meiolania, a widespread hypothesis suggests that the highly ornamented skull, armored limbs and tail club could have served a function in intraspecific combat between rival males during the mating season. Combat may have been instigated through chemical cues produced by one of the many scent glands found in modern turtles (musk glands, cloacal secretions and mental glands). While such glands are not preserved in fossils, the enlarged nasal cavity and inferred heightened sense of smell support this interpretation. Modern tortoises engage with rivals primarily through maneuvers performed with the shell, including knocking, ramming and twisting, among others. Combat between Meiolania individuals could further have involved the use of the armored limbs, the spiked tails covered in bony rings and the large horns situated atop the animal's head. This may also explain the great sideways mobility that has been inferred for Meiolania's neck.
Meiolania platyceps is thought to have at least occasionally traversed the beaches of Lord Howe Island, at the very least while laying eggs. This is evident through the discovery of egg clutches on Lord Howe Island, which are placed in the oogenus Testudoolithus lordhowensis. These eggs have been assigned to Meiolania in the absence of other turtles from the island, as well as the close association between eggs and Meiolania fossils in some localities. Furthermore, the eggs are rigid and thus differ clearly from the more pliable eggs that would be produced by sea turtles. Based on these fossils, Meiolania laid large, spherical eggs that measured across, likely weighed and were slightly higher than wide (1.2:1). This makes them the largest fossil turtle eggs known, smaller only than the eggs laid by modern Galápagos and Aldabra giant tortoises. Study of the gas conductance of the fossil eggshells allowed for comparison with modern turtles in order to determine the potential nesting strategy. Analysis showed that the eggs were highly conductive, lacking the adaptations against water evaporation seen in bird eggs, and were thus likely laid in a hole nest in a high-moisture environment, for example a sandy beach. This confirms that the beach was likely an area intentionally sought out by these animals. The clutch that serves as the holotype for T. lordhowensis consists of a minimum of 10 eggs, laid across two layers within a single nest.
Extinction
Due to the great range of Meiolania, which covered many ecosystems entirely independent of each other, the genus' extinction is thought to have been a multifaceted process caused by various factors not directly tied to one another.
On Lord Howe, the rising sea levels following the end of the last ice age greatly decreased the land area of the island, which may have led to the demise of the turtles there. This does not account for extinctions on other islands that retained a much more significant land area. It is unclear just how long Meiolania survived across most of its range and whether these populations ever came in contact with humans.
However, there are potential exceptions to this. Among the most recent records of Meiolania may be that of ?M. damelipi from Vanuatu (assuming the remains are actually those of a meiolaniid). These remains were initially found in archaeological layers above those of a human graveyard, with the remains dating to roughly 2,800 BP. Later studies identified similar sites across Vanuatu and Fiji, greatly extending the area across which humans and megafaunal turtles came in contact with each other. White and colleagues propose that the turtles at the Vanuatu site may have been butchered for their meat based on the fact that only limb bones were present in abundance, i.e. parts of the turtle that would have been much fleshier than the diagnostic skulls and tails. The bones recovered at the site show clear signs of being cut up and consumed, showing marks left by cutting tools, burns and fractures in addition to all bones being found in association with human settlements. Although hunting of turtles by the Lapita people may have been a contributing factor, the Pleistocene overkill hypothesis remains a controversial idea among researchers. Perhaps a more important factor could have been the introduction of invasive species to the fragile island ecosystem, namely pigs, which would have fed on eggs and juvenile turtles. Whatever the precise combination of factors, ?Meiolania damelipi appears to have disappeared from the island only 300 years after the first humans settled there. A similar pattern can be observed with regards to the extinction of island crocodiles and birds across the western Pacific.
Another, even younger, instance of a meiolaniid surviving into human times stems from New Caledonia. The remains from Pindai Cave have been dated to 1720 ± 70 years BP (160–300 AD) via uncalibrated radiocarbon dating and 1820–1419 years BP (130–531 AD) through calibrated 14C dating. While it is unclear whether or not these remains belong to Meiolania itself like those from nearby Walpole Island, it is at least confirmed to actually represent a meiolaniid, unlike the uncertainty regarding ?M. damelipi.
| Biology and health sciences | Prehistoric turtles | Animals |
5552783 | https://en.wikipedia.org/wiki/Basal%20shoot | Basal shoot | Basal shoots, root sprouts, adventitious shoots, and suckers are words for various kinds of shoots that grow from adventitious buds on the base of a tree or shrub, or from adventitious buds on its roots. Shoots that grow from buds on the base of a tree or shrub are called basal shoots; these are distinguished from shoots that grow from adventitious buds on the roots of a tree or shrub, which may be called root sprouts or suckers. A plant that produces root sprouts or runners is described as surculose. Water sprouts produced by adventitious buds may occur on the above-ground stem, branches or both of trees and shrubs. Suckers are shoots arising underground from the roots some distance from the base of a tree or shrub.
In botany and ecology
In botany, a root sprout or sucker is a severable plant that grows not from a seed but from the meristem of a root at the base of or a certain distance from the original tree or shrub. Root sprouts may emerge a substantial distance from the base of the originating plant, are a form of vegetative dispersal, and may form a patch that constitutes a habitat in which that surculose plant is the dominant species. Root sprouts also may grow from the roots of trees that have been felled. Tree roots ordinarily grow outward from their trunks a distance of 1.5 to 2 times their heights, and therefore root sprouts can emerge a substantial distance from the trunk.
This is a phenomenon of natural "asexual reproduction", also denominated "vegetative reproduction". It is a strategy of plant propagation. The complex of clonal individuals and the originating plant comprise a single genetic individual, i.e., a genet. The individual root sprouts are clones of the original plant, and each has a genome that is identical to that of the originating plant from which it grew. Many species of plants reproduce through vegetative reproduction, e.g. Canada thistle, cherry, apple, guava, privet, hazel, lilac, tree of heaven, and Asimina triloba.
The root sprout is a form of dispersal vector that allows plants to spread to habitats that favor their survival and growth. Some species, such as poplars and blackthorn, produce root sprouts that can spread rapidly, and they can form thick mats of roots that can reclaim areas that have been cleared of vegetation by logging, erosion, or pasturing. The giant aspen "Pando" is a dramatic example. These plants could be considered invasive, but they are cultivated or permitted to grow to stabilize soils and then be naturally replaced by non-pioneer species, in locations such as those developed for public works and along the channels of flood-prone waterways and reservoirs. These plants form shaded areas wherein new species may grow and gradually replace them.
Stolons are stems that grow on the surface of the soil or immediately below it and form adventitious roots at their nodes, and new clonal plants from the buds. Not all horizontal plant stems are stolons. Plants with stolons are described as "stoloniferous". Stolons, especially those above the surface of the soil are often denominated "runners". Rhizomes, in contrast, are root-like stems that may either grow horizontally on the surface of the soil or in other orientations underground.
In horticulture
Root sprouts and basal shoots can be used to propagate woody plants. Root sprouts can be dug or severed with some of the roots still attached. As for basal shoots, stool beds involve cutting a juvenile plant proximate to the surface of the soil and heaping soil over the cut so that basal shoots will form adventitious roots and later can be severed to form multiple, rooted, new plants. The technique is used especially for vegetative propagation of rootstocks for apple trees.
| Technology | Horticulture | null |
5555728 | https://en.wikipedia.org/wiki/Transition%20zone%20%28Earth%29 | Transition zone (Earth) | The transition zone is the part of Earth's mantle that is located between the lower and the upper mantle, most strictly between the seismic-discontinuity depths of about , but more broadly defined as the zone encompassing those discontinuities, i.e., between about depth. Earth's solid, rocky mantle, including the mantle transition zone (often abbreviated as MTZ), consists primarily of peridotite, an ultramafic igneous rock.
The mantle was divided into the upper mantle, transition zone, and lower mantle as a result of sudden seismic-velocity discontinuities at depths of . This is thought to occur as a result of rearrangement of grains in olivine (which constitutes a large portion of peridotite) at a depth of , to form a denser crystal structure as a result of the increase in pressure with increasing depth. Below a depth of , evidence suggests that, due to pressure changes, ringwoodite transforms into two new denser phases, bridgmanite and periclase. This can be seen using body waves from earthquakes, which are converted, reflected or refracted at the boundary, and predicted from mineral physics, as the phase changes are temperature- and density-dependent and hence depth-dependent.
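The breakdown of ringwoodite into bridgmanite and periclase mentioned above is commonly written as a simple phase reaction; the idealized magnesium end-member compositions shown here are a standard simplification, as natural mantle phases also incorporate iron:

\mathrm{Mg_2SiO_4\ (ringwoodite)} \rightarrow \mathrm{MgSiO_3\ (bridgmanite)} + \mathrm{MgO\ (periclase)}

Both product phases are denser than ringwoodite, which is what produces the seismic-velocity jump at the base of the transition zone.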
410 km discontinuity – phase transition
A peak is seen in seismological data at about as is predicted by the transition from α- to β-Mg2SiO4 (olivine to wadsleyite). From the Clapeyron slope, this change is predicted to occur at shallower depths in cold regions, such as where subducting slabs penetrate into the transition zone, and at greater depths in warmer regions, such as where mantle plumes pass through the transition zone. Therefore, the exact depth of the "410 km discontinuity" can vary.
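The size of this depth variation can be sketched with the Clausius–Clapeyron relation; the slope, density and gravity values below are representative order-of-magnitude figures from the mineral-physics literature, not numbers taken from this article:

\frac{dP}{dT} = \frac{\Delta S}{\Delta V} \approx +3\ \mathrm{MPa\,K^{-1}} \quad (\text{olivine} \rightarrow \text{wadsleyite}),

\Delta z \approx \frac{(dP/dT)\,\Delta T}{\rho g} \approx \frac{(3\times10^{6}\ \mathrm{Pa\,K^{-1}})(-300\ \mathrm{K})}{(3500\ \mathrm{kg\,m^{-3}})(9.9\ \mathrm{m\,s^{-2}})} \approx -26\ \mathrm{km}.

Under these assumptions, a slab some 300 K colder than ambient mantle raises the "410" by roughly 25–30 km; because the 660 km transition has a negative Clapeyron slope, the sign of the shift reverses there and that discontinuity deepens in cold regions.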
660 km discontinuity – phase transition
The 660 km discontinuity appears in PP precursors (a wave which reflects off the discontinuity once) only in certain regions but is always apparent in SS precursors. It is seen as single and double reflections in receiver functions for P to S conversions over a broad range of depths (). The Clapeyron slope predicts a deeper discontinuity in cold regions and a shallower discontinuity in hot regions. This discontinuity is generally linked to the transition from ringwoodite to bridgmanite and periclase. This is thermodynamically an endothermic reaction and creates a viscosity jump. Both characteristics cause this phase transition to play an important role in geodynamical models. Cold downwelling material might pond on this transition.
Other discontinuities
There is another major phase transition predicted at for the transition of olivine (β to γ) and garnet in the pyrolite mantle. This one has only sporadically been observed in seismological data.
Other non-global phase transitions have been suggested at a range of depths.
| Physical sciences | Tectonics | Earth science |
5557857 | https://en.wikipedia.org/wiki/Pre-main-sequence%20star | Pre-main-sequence star | A pre-main-sequence star (also known as a PMS star and PMS object) is a star in the stage when it has not yet reached the main sequence. Earlier in its life, the object is a protostar that grows by acquiring mass from its surrounding envelope of interstellar dust and gas. After the protostar blows away this envelope, it is optically visible, and appears on the stellar birthline in the Hertzsprung-Russell diagram. At this point, the star has acquired nearly all of its mass but has not yet started hydrogen burning (i.e. nuclear fusion of hydrogen). The star continues to contract, its internal temperature rising until it begins hydrogen burning on the zero age main sequence. This period of contraction is the pre-main sequence stage. An observed PMS object can either be a T Tauri star, if it has fewer than 2 solar masses (), or else a Herbig Ae/Be star, if it has 2 to 8 . Yet more massive stars have no pre-main-sequence stage because they contract too quickly as protostars. By the time they become visible, the hydrogen in their centers is already fusing and they are main-sequence objects.
The energy source of PMS objects is gravitational contraction, as opposed to hydrogen burning in main-sequence stars. In the Hertzsprung–Russell diagram, pre-main-sequence stars with more than 0.5 first move vertically downward along Hayashi tracks, then leftward and horizontally along Henyey tracks, until they finally halt at the main sequence. Pre-main-sequence stars with less than 0.5 contract vertically along the Hayashi track for their entire evolution.
PMS stars can be differentiated empirically from main-sequence stars by using stellar spectra to measure their surface gravity. A PMS object has a larger radius than a main-sequence star with the same stellar mass and thus has a lower surface gravity. Although they are optically visible, PMS objects are rare relative to those on the main sequence, because their contraction lasts for only 1 percent of the time required for hydrogen fusion. During the early portion of the PMS stage, most stars have circumstellar disks, which are the sites of planet formation.
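The rarity of PMS objects can be illustrated with the Kelvin–Helmholtz timescale, the standard order-of-magnitude estimate for how long gravitational contraction can power a star; the solar values substituted below are illustrative:

t_{\mathrm{KH}} \sim \frac{GM^{2}}{RL} \approx \frac{(6.67\times10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}})(2\times10^{30}\ \mathrm{kg})^{2}}{(7\times10^{8}\ \mathrm{m})(3.8\times10^{26}\ \mathrm{W})} \approx 10^{15}\ \mathrm{s} \approx 3\times10^{7}\ \mathrm{yr}.

A few tens of millions of years of contraction, set against roughly ten billion years of hydrogen burning for a Sun-like star, gives the percent-level ratio quoted above.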
| Physical sciences | Stellar astronomy | Astronomy |
8741245 | https://en.wikipedia.org/wiki/Debris%20disk | Debris disk | A debris disk (American English), or debris disc (Commonwealth English), is a circumstellar disk of dust and debris in orbit around a star. Sometimes these disks contain prominent rings, as seen in the image of Fomalhaut on the right. Debris disks are found around stars with mature planetary systems, including at least one debris disk in orbit around an evolved neutron star. Debris disks can also be produced and maintained as the remnants of collisions between planetesimals, otherwise known as asteroids and comets.
As of 2001, more than 900 candidate stars had been found to possess a debris disk. They are usually discovered by examining the star system in infrared light and looking for an excess of radiation beyond that emitted by the star. This excess is inferred to be radiation from the star that has been absorbed by the dust in the disk, then re-radiated away as infrared energy.
Debris disks are often described as massive analogs to the debris in the Solar System. Most known debris disks have radii of 10–100 astronomical units (AU); they resemble the Kuiper belt in the Solar System, although the Kuiper belt does not have a high enough dust mass to be detected around even the nearest stars. Some debris disks contain a component of warmer dust located within 10 AU from the central star. This dust is sometimes called exozodiacal dust by analogy to zodiacal dust in the Solar System.
Observation history
In 1984 a debris disk was detected around the star Vega using the IRAS satellite. Initially this was believed to be a protoplanetary disk, but it is now known to be a debris disk due to the lack of gas in the disk and the age of the star. The first four debris disks discovered with IRAS are known as the "fabulous four": Vega, Beta Pictoris, Fomalhaut, and Epsilon Eridani. Subsequently, direct images of the Beta Pictoris disk showed irregularities in the dust, which were attributed to gravitational perturbations by an unseen exoplanet. That explanation was confirmed with the 2008 discovery of the exoplanet Beta Pictoris b.
Other exoplanet-hosting stars, including the first discovered by direct imaging (HR 8799), are known to also host debris disks. The nearby star 55 Cancri, a system known to contain five planets, was also reported to have a debris disk, but that detection could not be confirmed.
Structures in the debris disk around Epsilon Eridani suggest perturbations by a planetary body in orbit around that star, which may be used to constrain the mass and orbit of the planet.
On 24 April 2014, NASA reported detecting debris disks in archival images of several young stars, HD 141943 and HD 191089, first viewed between 1999 and 2006 with the Hubble Space Telescope, by using newly improved imaging processes.
In 2021, observations of a star, VVV-WIT-08, that became obscured for a period of 200 days may have been the result of a debris disk passing between the star and observers on Earth. Two other stars, Epsilon Aurigae and TYC 2505-672-1, are eclipsed regularly, and it has been determined that the phenomenon results from disks orbiting them with various periods; this suggests that VVV-WIT-08 may be a similar system with a much longer orbital period, only part of which has yet been observed from Earth. VVV-WIT-08, located in the constellation Sagittarius, is ten times the size of the Sun.
Origin
During the formation of a Sun-like star, the object passes through the T-Tauri phase during which it is surrounded by a gas-rich, disk-shaped nebula. Out of this material are formed planetesimals, which can continue accreting other planetesimals and disk material to form planets. The nebula continues to orbit the pre-main-sequence star for a period of several million years until it is cleared out by radiation pressure and other processes. Second-generation dust may then be generated about the star by collisions between the planetesimals, which forms a disk out of the resulting debris. At some point during their lifetime, at least 45% of these stars are surrounded by a debris disk, which then can be detected by the thermal emission of the dust using an infrared telescope. Repeated collisions may cause a disk to persist for much of the lifetime of a star.
Typical debris disks contain small grains 1–100 μm in size. Collisions will grind down these grains to sub-micrometre sizes, which will be removed from the system by radiation pressure from the host star. In very tenuous disks such as the ones in the Solar System, the Poynting–Robertson effect can cause particles to spiral inward instead. Both processes limit the lifetime of the disk to 10 Myr or less. Thus, for a disk to remain intact, a process is needed to continually replenish the disk. This can occur, for example, by means of collisions between larger bodies, followed by a cascade that grinds down the objects to the observed small grains.
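As a rough numerical sketch of these grain-removal timescales, one can use the textbook Poynting–Robertson scaling t_PR ≈ 400 yr × (a/AU)²/β for a Sun-like star, where β is the grain's ratio of radiation pressure to gravity (both the scaling and the example values below are standard approximations assumed here, not figures from this article):

```python
def pr_decay_time_years(a_au, beta):
    """Approximate Poynting-Robertson spiral-in time for a dust grain around
    a Sun-like star, using the common scaling t_PR ~ 400 yr * a^2 / beta,
    where beta = F_rad / F_grav for the grain."""
    return 400.0 * a_au**2 / beta

# A hypothetical grain at 50 AU (Kuiper-belt-like distance) with beta = 0.1:
print(pr_decay_time_years(50.0, 0.1))   # ~1e7 years, of order the 10 Myr limit above
```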
For collisions to occur in a debris disk, the bodies must be gravitationally perturbed sufficiently to create relatively large collisional velocities. A planetary system around the star can cause such perturbations, as can a binary star companion or the close approach of another star. The presence of a debris disk may indicate a high likelihood of exoplanets orbiting the star. Furthermore, many debris disks also show structures within the dust (for example, clumps and warps or asymmetries) that point to the presence of one or more exoplanets within the disk. Whether such asymmetries exist in our own trans-Neptunian belt remains controversial, although they might exist.
Extreme debris disks
A sub-type of debris disk is the so-called "extreme debris disk" (EDD). This type is defined as exceeding 1% of the luminosity of the star in the infrared. An EDD is surrounded by warm dust (200–600 K) that orbits the star within a few astronomical units; in other words, the dust is present in the region where terrestrial planets form. EDDs are rare, and around 24 were known as of 2024. Infrared spectra taken with Spitzer have shown that the dust is dominated by small silicate particles with sizes between sub-μm and a few μm. EDDs are interpreted to have formed from one or more giant collisions between large planetesimals or planetary bodies. This is different from most debris disks, which are sustained by smaller collisions. EDDs are often transient events: the dust produced in the event lasts only years around the star before radiation pressure blows the small particles away. 2MASS J08090250-4858172 was one of the first such systems with observed infrared variability, showing two giant impact events in 2012 and 2014. In rare cases the dust cloud can orbit in front of the star, causing dips of brightness in the optical; one such system is HD 166191, which shows a star-sized dust cloud transiting in front of the star. Giant impacts are more common in young systems, becoming less frequent after around 300 Myr. A few relatively old EDDs are also known, however, with ages up to 5.5 Gyr; these old EDDs often have a wide, eccentric companion, which might help trigger such giant impact events. Giant impacts might not always be detectable as EDDs. Such disks are made up of two types of dust: vapor condensates produced immediately in the event, and dust created by the grinding down of boulders produced in the event. Simulations have shown that the boulder-derived dust is more important for a disk to be classified as extreme.
Known belts
Belts of dust or debris have been detected around many stars, including the Sun.
The orbital distance of the belt is an estimated mean distance or range, based either on direct measurement from imaging or derived from the temperature of the belt. The Earth has an average distance from the Sun of 1 AU.
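A minimal sketch of the temperature-to-distance conversion mentioned above, assuming blackbody grains in radiative equilibrium (the 278 K normalization at 1 AU for a Sun-like star is a standard approximation, and the 50 K example is hypothetical):

```python
def belt_radius_au(dust_temp_k, luminosity_lsun=1.0):
    """Orbital radius (AU) of blackbody grains with equilibrium temperature T,
    inverting T = 278 K * L^(1/4) * r^(-1/2) (L in solar units, r in AU).
    Real grains are often warmer than blackbodies, so this can underestimate r."""
    return luminosity_lsun**0.5 * (278.0 / dust_temp_k) ** 2

# 50 K dust around a Sun-like star sits near ~31 AU, i.e. Kuiper-belt-like scales:
print(belt_radius_au(50.0))
```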
| Physical sciences | Stellar astronomy | Astronomy |
11885926 | https://en.wikipedia.org/wiki/Flow%20velocity | Flow velocity | In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the flow velocity vector is a scalar, the flow speed.
It is also called velocity field; when evaluated along a line, it is called a velocity profile (as in, e.g., law of the wall).
Definition
The flow velocity u of a fluid is a vector field
$$\mathbf{u} = \mathbf{u}(\mathbf{x}, t),$$
which gives the velocity of an element of fluid at a position $\mathbf{x}$ and time $t$.
The flow speed q is the length of the flow velocity vector,
$$q = \| \mathbf{u} \|,$$
and is a scalar field.
Uses
The flow velocity of a fluid effectively describes everything about the motion of a fluid. Many physical properties of a fluid can be expressed mathematically in terms of the flow velocity. Some common examples follow:
Steady flow
The flow of a fluid is said to be steady if $\mathbf{u}$ does not vary with time, that is, if
$$\frac{\partial \mathbf{u}}{\partial t} = 0.$$
Incompressible flow
If a fluid is incompressible the divergence of $\mathbf{u}$ is zero:
$$\nabla \cdot \mathbf{u} = 0.$$
That is, $\mathbf{u}$ is a solenoidal vector field.
Irrotational flow
A flow is irrotational if the curl of $\mathbf{u}$ is zero:
$$\nabla \times \mathbf{u} = 0.$$
That is, $\mathbf{u}$ is an irrotational vector field.
A flow in a simply-connected domain which is irrotational can be described as a potential flow, through the use of a velocity potential $\Phi$, with $\mathbf{u} = \nabla \Phi$. If the flow is both irrotational and incompressible, the Laplacian of the velocity potential must be zero:
$$\Delta \Phi = 0.$$
Vorticity
The vorticity, $\boldsymbol{\omega}$, of a flow can be defined in terms of its flow velocity by
$$\boldsymbol{\omega} = \nabla \times \mathbf{u}.$$
If the vorticity is zero, the flow is irrotational.
The velocity potential
If an irrotational flow occupies a simply-connected fluid region then there exists a scalar field $\Phi$ such that
$$\mathbf{u} = \nabla \Phi.$$
The scalar field $\Phi$ is called the velocity potential for the flow. (See Irrotational vector field.)
Bulk velocity
In many engineering applications the local flow velocity vector field is not known at every point, and the only accessible velocity is the bulk velocity or average flow velocity $\bar{u}$ (with the usual dimension of length per time), defined as the quotient between the volume flow rate $\dot{V}$ (with dimension of cubed length per time) and the cross-sectional area $A$ (with dimension of square length):
$$\bar{u} = \frac{\dot{V}}{A}.$$
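The definitions above are straightforward to check numerically. The sketch below (Python with NumPy; the rigid-rotation field u = (−y, x) is an arbitrary illustrative example) samples a 2D velocity field on a grid and evaluates its speed, divergence and vorticity:

```python
import numpy as np

# Example field: rigid-body rotation u = (-y, x), which is incompressible
# (div u = 0) but rotational (curl u = 2 everywhere), hence not a potential flow.
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="xy")
ux, uy = -Y, X

dx = x[1] - x[0]
dy = y[1] - y[0]

speed = np.hypot(ux, uy)              # q = |u|, a scalar field

dux_dx = np.gradient(ux, dx, axis=1)  # with 'xy' indexing, axis 1 varies x
duy_dy = np.gradient(uy, dy, axis=0)
divergence = dux_dx + duy_dy          # ~0 everywhere: incompressible

duy_dx = np.gradient(uy, dx, axis=1)
dux_dy = np.gradient(ux, dy, axis=0)
vorticity = duy_dx - dux_dy           # z-component of curl u

print(np.abs(divergence).max())       # ~0
print(vorticity.mean())               # ~2: nonzero vorticity, so not irrotational
```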
| Physical sciences | Fluid mechanics | Physics |
11887250 | https://en.wikipedia.org/wiki/Stellar%20magnetic%20field | Stellar magnetic field | A stellar magnetic field is a magnetic field generated by the motion of conductive plasma inside a star. This motion is created through convection, which is a form of energy transport involving the physical movement of material. A localized magnetic field exerts a force on the plasma, effectively increasing the pressure without a comparable gain in density. As a result, the magnetized region rises relative to the remainder of the plasma, until it reaches the star's photosphere. This creates starspots on the surface, and the related phenomenon of coronal loops.
Measurement
A star's magnetic field can be measured using the Zeeman effect. Normally the atoms in a star's atmosphere will absorb certain frequencies of energy in the electromagnetic spectrum, producing characteristic dark absorption lines in the spectrum. However, when the atoms are within a magnetic field, these lines become split into multiple, closely spaced lines. The energy also becomes polarized with an orientation that depends on the orientation of the magnetic field. Thus the strength and direction of the star's magnetic field can be determined by examination of the Zeeman effect lines.
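As an order-of-magnitude sketch of why such precise spectro-polarimetric measurements are needed, the classical (normal) Zeeman splitting Δλ = g·e·λ²·B/(4π·mₑ·c) can be evaluated directly; the 500 nm line, 1 kG field, and Landé factor g = 1 below are assumed illustrative values, not parameters from this article:

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
M_E = 9.109e-31        # electron mass, kg
C_LIGHT = 2.998e8      # speed of light, m/s

def zeeman_split_m(wavelength_m, b_tesla, lande_g=1.0):
    """Wavelength separation of the normal Zeeman components:
    delta_lambda = g * e * lambda^2 * B / (4 pi m_e c)."""
    return (lande_g * E_CHARGE * wavelength_m**2 * b_tesla
            / (4.0 * math.pi * M_E * C_LIGHT))

# A 500 nm line in a 0.1 T (1 kG) field splits by ~1.2e-12 m (~0.012 Angstrom),
# far narrower than typical stellar line widths -- hence the use of polarization.
print(zeeman_split_m(500e-9, 0.1))
```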
A stellar spectropolarimeter is used to measure the magnetic field of a star. This instrument consists of a spectrograph combined with a polarimeter. The first instrument to be dedicated to the study of stellar magnetic fields was NARVAL, which was mounted on the Bernard Lyot Telescope at the Pic du Midi de Bigorre in the French Pyrenees mountains.
Various measurements—including magnetometer measurements over the last 150 years; 14C in tree rings; and 10Be in ice cores—have established substantial magnetic variability of the Sun on decadal, centennial and millennial time scales.
Field generation
Stellar magnetic fields, according to solar dynamo theory, are generated within the convective zone of the star. The convective circulation of the conducting plasma functions like a dynamo. This activity destroys the star's primordial magnetic field, then generates a dipolar magnetic field. As the star undergoes differential rotation—rotating at different rates for various latitudes—the magnetism is wound into a toroidal field of "flux ropes" that become wrapped around the star. The fields can become highly concentrated, producing activity when they emerge on the surface.
The magnetic field of a rotating body of conductive gas or liquid develops self-amplifying electric currents, and thus a self-generated magnetic field, due to a combination of differential rotation (different angular velocity of different parts of body), Coriolis forces and induction. The distribution of currents can be quite complicated, with numerous open and closed loops, and thus the magnetic field of these currents in their immediate vicinity is also quite twisted. At large distances, however, the magnetic fields of currents flowing in opposite directions cancel out and only a net dipole field survives, slowly diminishing with distance. Because the major currents flow in the direction of conductive mass motion (equatorial currents), the major component of the generated magnetic field is the dipole field of the equatorial current loop, thus producing magnetic poles near the geographic poles of a rotating body.
The magnetic fields of celestial bodies are often aligned with the axis of rotation, with notable exceptions such as certain pulsars.
Periodic field reversal
Another feature of this dynamo model is that the currents are AC rather than DC. Their direction, and thus the direction of the magnetic field they generate, alternates more or less periodically, changing amplitude and reversing direction, although still more or less aligned with the axis of rotation.
The Sun's major component of magnetic field reverses direction every 11 years (so the period is about 22 years), resulting in a diminished magnitude of the magnetic field near reversal time. During this dormancy, sunspot activity is at maximum (because of the lack of magnetic braking on plasma) and, as a result, massive ejection of high-energy plasma into the solar corona and interplanetary space takes place. Collisions of neighboring sunspots with oppositely directed magnetic fields result in the generation of strong electric fields near rapidly disappearing magnetic field regions. This electric field accelerates electrons and protons to high energies (kiloelectronvolts), which results in jets of extremely hot plasma leaving the Sun's surface and heating coronal plasma to high temperatures (millions of kelvins).
If the gas or liquid is very viscous (resulting in turbulent differential motion), the reversal of the magnetic field may not be very periodic. This is the case with the Earth's magnetic field, which is generated by turbulent currents in a viscous outer core.
Surface activity
Starspots are regions of intense magnetic activity on the surface of a star. (On the Sun they are termed sunspots.) These form a visible component of magnetic flux tubes that are formed within a star's convection zone. Due to the differential rotation of the star, the tube becomes curled up and stretched, inhibiting convection and producing zones of lower than normal temperature. Coronal loops often form above starspots, forming from magnetic field lines that stretch out into the stellar corona. These in turn serve to heat the corona to temperatures over a million kelvins.
The magnetic fields linked to starspots and coronal loops are linked to flare activity, and the associated coronal mass ejection. The plasma is heated to tens of millions of kelvins, and the particles are accelerated away from the star's surface at extreme velocities.
Surface activity appears to be related to the age and rotation rate of main-sequence stars. Young stars with a rapid rate of rotation exhibit strong activity. By contrast middle-aged, Sun-like stars with a slow rate of rotation show low levels of activity that varies in cycles. Some older stars display almost no activity, which may mean they have entered a lull that is comparable to the Sun's Maunder minimum. Measurements of the time variation in stellar activity can be useful for determining the differential rotation rates of a star.
Magnetosphere
A star with a magnetic field will generate a magnetosphere that extends outward into the surrounding space. Field lines from this field originate at one magnetic pole on the star then end at the other pole, forming a closed loop. The magnetosphere contains charged particles that are trapped from the stellar wind, which then move along these field lines. As the star rotates, the magnetosphere rotates with it, dragging along the charged particles.
As stars emit matter with a stellar wind from the photosphere, the magnetosphere creates a torque on the ejected matter. This results in a transfer of angular momentum from the star to the surrounding space, causing a slowing of the stellar rotation rate. Rapidly rotating stars have a higher mass loss rate, resulting in a faster loss of momentum. As the rotation rate slows, so too does the angular deceleration. By this means, a star will gradually approach, but never quite reach, the state of zero rotation.
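The "approaches but never reaches zero rotation" behavior can be seen in a toy spin-down model. A common parameterization of magnetic braking (an illustrative assumption, not the article's model) is dΩ/dt = −kΩ³, which gives the Skumanich-like decay Ω ∝ t^(−1/2):

```python
# Toy magnetic-braking model: dOmega/dt = -k * Omega**3.
# k, Omega0 and dt are arbitrary illustrative values in nondimensional units.
k = 1.0e-3
omega = 1.0
dt = 0.1

for step in range(1, 100001):
    omega -= k * omega**3 * dt        # forward-Euler step
    if step in (10, 1000, 100000):
        # Omega keeps decreasing (~t**-0.5 at late times) but never hits zero.
        print(step * dt, omega)
```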
Magnetic stars
A T Tauri star is a type of pre-main-sequence star that is being heated through gravitational contraction and has not yet begun to burn hydrogen at its core. They are variable stars that are magnetically active. The magnetic fields of these stars are thought to interact with their strong stellar winds, transferring angular momentum to the surrounding protoplanetary disk. This allows the star to brake its rotation rate as it collapses.
Small, M-class stars (with 0.1–0.6 solar masses) that exhibit rapid, irregular variability are known as flare stars. These fluctuations are hypothesized to be caused by flares, although the activity is much stronger relative to the size of the star. The flares on this class of stars can extend up to 20% of the circumference, and radiate much of their energy in the blue and ultraviolet portion of the spectrum.
Straddling the boundary between stars that undergo nuclear fusion in their cores and non-hydrogen-fusing brown dwarfs are the ultracool dwarfs. These objects can emit radio waves due to their strong magnetic fields. Approximately 5–10% of these objects have had their magnetic fields measured. The coolest of these, 2MASS J10475385+2124234, with a temperature of 800–900 K, retains a magnetic field stronger than 1.7 kG, making it some 3000 times stronger than the Earth's magnetic field. Radio observations also suggest that their magnetic fields periodically change their orientation, similar to the Sun during the solar cycle.
Planetary nebulae are created when a red giant star ejects its outer envelope, forming an expanding shell of gas. However, it remains a mystery why these shells are not always spherically symmetrical: 80% of planetary nebulae do not have a spherical shape, instead forming bipolar or elliptical nebulae. One hypothesis for the formation of a non-spherical shape is the effect of the star's magnetic field. Instead of expanding evenly in all directions, the ejected plasma tends to leave by way of the magnetic poles. Observations of the central stars in at least four planetary nebulae have confirmed that they do indeed possess powerful magnetic fields.
After some massive stars have ceased thermonuclear fusion, a portion of their mass collapses into a compact body of neutrons called a neutron star. These bodies retain a significant magnetic field from the original star, but the collapse in size causes the strength of this field to increase dramatically. The rapid rotation of these collapsed neutron stars results in a pulsar, which emits a narrow beam of energy that can periodically point toward an observer.
Compact and fast-rotating astronomical objects (white dwarfs, neutron stars and black holes) have extremely strong magnetic fields. The magnetic field of a newly born fast-spinning neutron star is so strong (up to 108 teslas) that it electromagnetically radiates enough energy to quickly (in a matter of few million years) damp down the star rotation by 100 to 1000 times. Matter falling on a neutron star also has to follow the magnetic field lines, resulting in two hot spots on the surface where it can reach and collide with the star's surface. These spots are literally a few feet (about a metre) across but tremendously bright. Their periodic eclipsing during star rotation is hypothesized to be the source of pulsating radiation (see pulsars).
An extreme form of a magnetized neutron star is the magnetar. These are formed as the result of a core-collapse supernova. The existence of such stars was confirmed in 1998 with the measurement of the star SGR 1806-20. The magnetic field of this star has increased the surface temperature to 18 million K and it releases enormous amounts of energy in gamma ray bursts.
Jets of relativistic plasma are often observed along the direction of the magnetic poles of active black holes in the centers of very young galaxies.
Star-planet interaction controversy
In 2008, a team of astronomers first described how as the exoplanet orbiting HD 189733 A reaches a certain place in its orbit, it causes increased stellar flaring. In 2010, a different team found that every time they observe the exoplanet at a certain position in its orbit, they also detected X-ray flares. Theoretical research since 2000 suggested that an exoplanet very near to the star that it orbits may cause increased flaring due to the interaction of their magnetic fields, or because of tidal forces. In 2019, astronomers combined data from Arecibo Observatory, MOST, and the Automated Photoelectric Telescope, in addition to historical observations of the star at radio, optical, ultraviolet, and X-ray wavelengths to examine these claims. Their analysis found that the previous claims were exaggerated and the host star failed to display many of the brightness and spectral characteristics associated with stellar flaring and solar active regions, including sunspots. They also found that the claims did not stand up to statistical analysis, given that many stellar flares are seen regardless of the position of the exoplanet, therefore debunking the earlier claims. The magnetic fields of the host star and exoplanet do not interact, and this system is no longer believed to have a "star-planet interaction."
| Physical sciences | Stellar astronomy | null |
11890785 | https://en.wikipedia.org/wiki/Summer%20solstice | Summer solstice | The summer solstice or estival solstice occurs when one of Earth's poles has its maximum tilt toward the Sun. It happens twice yearly, once in each hemisphere (Northern and Southern). The summer solstice is the day with the longest period of daylight and shortest night of the year in that hemisphere, when the sun is at its highest position in the sky. At either pole there is continuous daylight at the time of its summer solstice. The opposite event is the winter solstice.
The summer solstice occurs during the hemisphere's summer. In the Northern Hemisphere, this is the June solstice (20, 21 or 22 June) and in the Southern Hemisphere, this is the December solstice (20, 21, 22 or 23 of December). Since prehistory, the summer solstice has been a significant time of year in many cultures, and has been marked by festivals and rituals. Traditionally, in temperate regions (especially Europe), the summer solstice is seen as the middle of summer and referred to as midsummer; although today in some countries and calendars it is seen as the beginning of summer.
On the summer solstice, Earth's maximum axial tilt toward the Sun is 23.44°. Likewise, the Sun's declination from the celestial equator is 23.44°. In areas outside the tropics, the sun reaches its highest elevation angle at solar noon on the summer solstice.
Although the summer solstice is the longest day of the year for that hemisphere, the dates of earliest sunrise and latest sunset vary by a few days. This is because Earth orbits the Sun in an ellipse, and its orbital speed varies slightly during the year.
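A minimal sketch of that noon-elevation rule (atmospheric refraction is ignored; the sample latitudes are arbitrary examples):

```python
def noon_sun_elevation_deg(latitude_deg, declination_deg=23.44):
    """Solar elevation at solar noon: 90 degrees minus the angular distance
    between the observer's latitude and the Sun's declination.
    +23.44 deg corresponds to the June solstice; use -23.44 for December."""
    return 90.0 - abs(latitude_deg - declination_deg)

# June solstice examples: equator, Tropic of Cancer, mid-latitudes, Arctic Circle, pole.
for lat in (0.0, 23.44, 51.5, 66.56, 90.0):
    print(lat, noon_sun_elevation_deg(lat))
# At 23.44 N the Sun passes directly overhead (90 deg);
# at the North Pole it stands at 23.44 deg elevation all day.
```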
Culture
There is evidence that the summer solstice has been culturally important since the Neolithic era. Many ancient monuments in Europe especially, as well as parts of the Middle East, Asia and the Americas, are aligned with the sunrise or sunset on the summer solstice (see archaeoastronomy). The significance of the summer solstice has varied among cultures, but most recognize the event in some way with holidays, festivals, and rituals around that time with themes of fertility. In the Roman Empire, the traditional date of the summer solstice was 24 June. In Germanic-speaking cultures, the time around the summer solstice is called 'midsummer'. Traditionally in northern Europe midsummer was reckoned as the night of 23–24 June, with summer beginning on May Day. The summer solstice continues to be seen as the middle of summer in many European cultures, but in some cultures or calendars it is seen as summer's beginning. In Sweden, midsummer is one of the year's major holidays when the country closes down as much as during Christmas.
Observances
Traditional festivals
Saint John's Eve (Europe), including:
Juhannus (Finland)
Jaanipäev (Estonia)
Jāņi (Latvia)
Joninės (Lithuania)
Jónsmessa (Iceland)
Golowan (Cornwall)
Kupala Night (Slavic peoples)
Yhyakh (Yakuts)
Tiregān (Iran)
Xiazhi (China)
Shën Gjini–Shën Gjoni, Festa e Malit/Bjeshkës, Festa e Blegtorisë, etc. (Albanians)
Modern observances
National Indigenous Peoples Day (Canada)
Day of Private Reflection (Northern Ireland)
Fremont Solstice Parade (Fremont, Seattle, Washington, United States)
Santa Barbara Summer Solstice Parade (Santa Barbara, California, United States)
International Yoga Day
Fête de la Musique, also known as World Music Day
In folk music
"Oh at Ivan, oh at Kupala" (Ukr. Ой на Івана, ой на Купала) - Ukrainian folk song.
"Kupalinka" - (Belar. Купалінка) - Belarusian folk song
"There is a lake behind the hill" (Lith. Už kalnelio ežerėlis) - Lithuanian folk song.
Length of the day on northern summer solstice
In June, around the northern summer solstice, the length of day increases from the equator toward the North Pole; in the Southern Hemisphere, where it is then the winter solstice, day length decreases toward the South Pole.
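A minimal sketch of this latitude dependence, using the standard sunrise equation cos H = −tan(latitude)·tan(declination) (refraction and the Sun's angular size are ignored; the latitudes are arbitrary examples):

```python
import math

def daylight_hours(latitude_deg, declination_deg=23.44):
    """Approximate day length from the sunrise equation; H is the half-day
    hour angle in degrees, and 15 degrees of hour angle equal one hour.
    Returns 24 h for midnight sun and 0 h for polar night."""
    cos_h = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination_deg))
    if cos_h <= -1.0:
        return 24.0   # Sun never sets
    if cos_h >= 1.0:
        return 0.0    # Sun never rises
    return 2.0 * math.degrees(math.acos(cos_h)) / 15.0

# June solstice: day length grows toward the North Pole...
for lat in (0, 30, 60, 70):
    print(lat, round(daylight_hours(lat), 1))   # 12.0, 13.9, 18.5, 24.0
# ...while the same southern latitude has its shortest day:
print(-30, round(daylight_hours(-30), 1))       # ~10.1
```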
| Physical sciences | Celestial sphere: General | Astronomy |
2202301 | https://en.wikipedia.org/wiki/Thermal%20decomposition | Thermal decomposition | Thermal decomposition, or thermolysis, is a chemical decomposition of a substance caused by heat; heat acts as a reactant in the reaction. The decomposition temperature of a substance is the temperature at which the substance chemically decomposes. The reaction is usually endothermic, as thermal energy is required to break the chemical bonds in the compound undergoing decomposition. If a decomposition is instead sufficiently exothermic, a positive feedback loop is created, producing thermal runaway and possibly an explosion or other chemical reaction.
Decomposition temperature definition
A simple substance (like water) may exist in equilibrium with its thermal decomposition products, effectively halting the decomposition. The equilibrium fraction of decomposed molecules increases with the temperature.
Since thermal decomposition is a kinetic process, the observed temperature of its beginning in most instances will be a function of the experimental conditions and sensitivity of the experimental setup. For a rigorous depiction of the process, the use of thermokinetic modeling is recommended.
Examples
Calcium carbonate (limestone or chalk) decomposes into calcium oxide and carbon dioxide when heated. The chemical reaction is as follows:
CaCO3 → CaO + CO2
The reaction is used to make quicklime, which is an industrially important product.
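As a quick stoichiometric illustration of the reaction above (molar masses are standard rounded values; the 1 kg input is an arbitrary example):

```python
# Molar masses in g/mol (standard rounded values).
M_CACO3 = 100.09
M_CAO = 56.08
M_CO2 = 44.01

def lime_yield_kg(limestone_kg):
    """Masses of CaO and CO2 from complete decomposition of pure CaCO3,
    following the 1:1:1 stoichiometry of CaCO3 -> CaO + CO2."""
    moles = limestone_kg * 1000.0 / M_CACO3
    return moles * M_CAO / 1000.0, moles * M_CO2 / 1000.0

cao_kg, co2_kg = lime_yield_kg(1.0)
print(cao_kg, co2_kg)   # ~0.560 kg CaO and ~0.440 kg CO2 per kg of limestone
```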
Another example of thermal decomposition is 2Pb(NO3)2 → 2PbO + O2 + 4NO2.
Some oxides, especially of weakly electropositive metals, decompose when heated to a high enough temperature. A classical example is the decomposition of mercuric oxide to give oxygen and mercury metal. The reaction was used by Joseph Priestley to prepare samples of gaseous oxygen for the first time.
When water is heated to well over 2000 °C, a small percentage of it will decompose into OH, monatomic oxygen, monatomic hydrogen, O2, and H2.
The compound with the highest known decomposition temperature is carbon monoxide at ≈3870 °C (≈7000 °F).
Decomposition of nitrates, nitrites and ammonium compounds
Ammonium dichromate on heating yields nitrogen, water and chromium(III) oxide.
Ammonium nitrate on strong heating yields dinitrogen oxide ("laughing gas") and water.
Ammonium nitrite on heating yields nitrogen gas and water.
Barium azide, Ba(N3)2, on heating yields barium metal and nitrogen gas.
Sodium azide on heating to about 300 °C violently decomposes to nitrogen and metallic sodium.
Sodium nitrate on heating yields sodium nitrite and oxygen gas.
Organic compounds such as quaternary ammonium hydroxides on heating undergo Hofmann elimination, yielding tertiary amines and alkenes.
Ease of decomposition
When metals are near the bottom of the reactivity series, their compounds generally decompose easily at high temperatures. This is because stronger bonds form between atoms towards the top of the reactivity series, and strong bonds are difficult to break. For example, copper is near the bottom of the reactivity series, and copper sulfate (CuSO4) begins to decompose on heating, with the rate increasing rapidly at higher temperatures. In contrast, potassium is near the top of the reactivity series, and potassium sulfate (K2SO4) does not decompose at its melting point of about 1,070 °C, nor even at its boiling point.
Practical applications
Many scenarios in the real world are affected by thermal degradation. One of the things affected is fingerprints. When anyone touches something, residue is left from the fingers. If the fingers are sweaty or oily, the residue contains many chemicals. De Paoli and her colleagues conducted a study testing the thermal degradation of certain components found in fingerprints. On heat exposure, the amino acid, urea, and lactic acid samples each began to decompose at characteristic temperatures. These components are necessary for further testing, so the decomposition of fingerprints is significant in the forensics discipline.
| Physical sciences | Other reactions | Chemistry |
2202422 | https://en.wikipedia.org/wiki/Ligand%20%28biochemistry%29 | Ligand (biochemistry) | In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. The etymology stems from Latin ligare, which means 'to bind'. In protein-ligand binding, the ligand is usually a molecule which produces a signal by binding to a site on a target protein. The binding typically results in a change of conformational isomerism (conformation) of the target protein. In DNA-ligand binding studies, the ligand can be a small molecule, ion, or protein which binds to the DNA double helix. The relationship between ligand and binding partner is a function of charge, hydrophobicity, and molecular structure.
Binding occurs by intermolecular forces, such as ionic bonds, hydrogen bonds and Van der Waals forces. The association or docking is actually reversible through dissociation. Measurably irreversible covalent bonding between a ligand and target molecule is atypical in biological systems. In contrast to the definition of ligand in metalorganic and inorganic chemistry, in biochemistry it is ambiguous whether the ligand generally binds at a metal site, as is the case in hemoglobin. In general, the interpretation of ligand is contextual with regards to what sort of binding has been observed.
Ligand binding to a receptor protein alters the conformation by affecting the three-dimensional shape orientation. The conformation of a receptor protein composes the functional state. Ligands include substrates, inhibitors, activators, signaling lipids, and neurotransmitters. The tendency or strength of binding is called affinity. Binding affinity is actualized not only by host–guest interactions, but also by solvent effects that can play a dominant, steric role which drives non-covalent binding in solution. The solvent provides a chemical environment for the ligand and receptor to adapt, and thus accept or reject each other as partners.
Radioligands are radioisotope labeled compounds used in vivo as tracers in PET studies and for in vitro binding studies.
Receptor/ligand binding affinity
The interaction of ligands with their binding sites can be characterized in terms of a binding affinity. In general, high-affinity ligand binding results from greater attractive forces between the ligand and its receptor while low-affinity ligand binding involves less attractive force. In general, high-affinity binding results in a higher occupancy of the receptor by its ligand than is the case for low-affinity binding; the residence time (lifetime of the receptor-ligand complex) does not correlate. High-affinity binding of ligands to receptors is often physiologically important when some of the binding energy can be used to cause a conformational change in the receptor, resulting in altered behavior for example of an associated ion channel or enzyme.
A ligand that can bind to and alter the function of the receptor that triggers a physiological response is called a receptor agonist. Ligands that bind to a receptor but fail to activate the physiological response are receptor antagonists.
Agonist binding to a receptor can be characterized both in terms of how much physiological response can be triggered (that is, the efficacy) and in terms of the concentration of the agonist that is required to produce the physiological response (often measured as EC50, the concentration required to produce the half-maximal response). High-affinity ligand binding implies that a relatively low concentration of a ligand is adequate to maximally occupy a ligand-binding site and trigger a physiological response. Receptor affinity is measured by an inhibition constant or Ki value, the concentration required to occupy 50% of the receptor. Ligand affinities are most often measured indirectly as an IC50 value from a competition binding experiment where the concentration of a ligand required to displace 50% of a fixed concentration of reference ligand is determined. The Ki value can be estimated from IC50 through the Cheng–Prusoff equation. Ligand affinities can also be measured directly as a dissociation constant (Kd) using methods such as fluorescence quenching, isothermal titration calorimetry or surface plasmon resonance.
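A minimal sketch of the Cheng–Prusoff correction (the nanomolar values are hypothetical, chosen only to show the arithmetic):

```python
def cheng_prusoff_ki(ic50, ref_ligand_conc, ref_ligand_kd):
    """Estimate Ki from a competition-binding IC50 via the Cheng-Prusoff
    equation Ki = IC50 / (1 + [L]/Kd), where [L] is the concentration of the
    labeled reference ligand and Kd its dissociation constant."""
    return ic50 / (1.0 + ref_ligand_conc / ref_ligand_kd)

# Hypothetical example: IC50 = 10 nM measured against 2 nM of a radioligand
# with Kd = 1 nM gives Ki ~ 3.3 nM (lower than the raw IC50).
print(cheng_prusoff_ki(10e-9, 2e-9, 1e-9))
```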
Low-affinity binding (high Ki level) implies that a relatively high concentration of a ligand is required before the binding site is maximally occupied and the maximum physiological response to the ligand is achieved. Consider, for example, two different ligands that bind to the same receptor binding site. Only one of the two agonists can maximally stimulate the receptor and, thus, can be defined as a full agonist. An agonist that can only partially activate the physiological response is called a partial agonist. In this example, the concentration at which the full agonist can half-maximally activate the receptor is about 5 × 10−9 molar (5 nM).
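A small sketch of such concentration–response curves using the common Hill (logistic) model; the Hill coefficient of 1 and the partial agonist's maximal response of 0.4 are assumptions for illustration, while the 5 nM EC50 matches the example above:

```python
import numpy as np

def response(conc, ec50, e_max=1.0, hill=1.0):
    """Fractional response from the Hill equation:
    E = E_max * C**n / (EC50**n + C**n)."""
    return e_max * conc**hill / (ec50**hill + conc**hill)

conc = np.logspace(-11, -6, 6)            # 10 pM up to 1 uM
print(response(conc, 5e-9))               # full agonist, EC50 = 5 nM
print(response(conc, 5e-9, e_max=0.4))    # hypothetical partial agonist
# Both curves are half-maximal relative to their own E_max at 5 nM,
# but the partial agonist plateaus at 40% of the full response.
```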
Binding affinity is most commonly determined using a radiolabeled ligand, known as a tagged ligand. Homologous competitive binding experiments involve binding competition between a tagged ligand and an untagged ligand.
Real-time based methods, which are often label-free, such as surface plasmon resonance, dual-polarization interferometry and multi-parametric surface plasmon resonance (MP-SPR), can quantify affinity not only from concentration-based assays but also from the kinetics of association and dissociation and, in the latter cases, the conformational change induced upon binding. MP-SPR also enables measurements in high-saline dissociation buffers thanks to a unique optical setup. Microscale thermophoresis (MST), an immobilization-free method, was also developed. This method allows the determination of binding affinity without any limitation on the ligand's molecular weight.
For the use of statistical mechanics in a quantitative study of ligand–receptor binding affinity, see the comprehensive article on the configurational partition function.
Drug or hormone binding potency
Binding affinity data alone does not determine the overall potency of a drug or a naturally produced (biosynthesized) hormone.
Potency is a result of the complex interplay of both the binding affinity and the ligand efficacy.
Drug or hormone binding efficacy
Ligand efficacy refers to the ability of the ligand to produce a biological response upon binding to the target receptor and the quantitative magnitude of this response. This response may be as an agonist, antagonist, or inverse agonist, depending on the physiological response produced.
Selective and non-selective
Selective ligands have a tendency to bind to very limited kinds of receptor, whereas non-selective ligands bind to several types of receptors. This plays an important role in pharmacology, where drugs that are non-selective tend to have more adverse effects, because they bind to several other receptors in addition to the one generating the desired effect.
Hydrophobic ligands
For hydrophobic ligands (e.g. PIP2) in complex with a hydrophobic protein (e.g. lipid-gated ion channels) determining the affinity is complicated by non-specific hydrophobic interactions. Non-specific hydrophobic interactions can be overcome when the affinity of the ligand is high. For example, PIP2 binds with high affinity to PIP2 gated ion channels.
Bivalent ligand
Bivalent ligands consist of two drug-like molecules (pharmacophores or ligands) connected by an inert linker. There are various kinds of bivalent ligands, and they are often classified based on what the pharmacophores target. Homobivalent ligands target two of the same receptor types. Heterobivalent ligands target two different receptor types. Bitopic ligands target an orthosteric binding site and an allosteric binding site on the same receptor.
In scientific research, bivalent ligands have been used to study receptor dimers and to investigate their properties. This class of ligands was pioneered by Philip S. Portoghese and coworkers while studying the opioid receptor system. Bivalent ligands were also reported early on by Michael Conn and coworkers for the gonadotropin-releasing hormone receptor. Since these early reports, there have been many bivalent ligands reported for various G protein-coupled receptor (GPCR) systems including cannabinoid, serotonin, oxytocin, and melanocortin receptor systems, and for GPCR-LIC systems (D2 and nACh receptors).
Bivalent ligands usually tend to be larger than their monovalent counterparts and are therefore not 'drug-like' under Lipinski's rule of five. Many believe this limits their applicability in clinical settings. In spite of these beliefs, many bivalent ligands have shown success in pre-clinical animal studies. Given that some bivalent ligands can have many advantages compared to their monovalent counterparts (such as tissue selectivity, increased binding affinity, and increased potency or efficacy), bivalents may offer some clinical advantages as well.
Mono- and polydesmic ligands
Ligands of proteins can be characterized also by the number of protein chains they bind. "Monodesmic" ligands (μόνος: single, δεσμός: binding) are ligands that bind a single protein chain, while "polydesmic" ligands (πολλοί: many) are frequent in protein complexes, and are ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that the type of ligands and binding site structure has profound consequences for the evolution, function, allostery and folding of protein complexes.
Privileged scaffold
A privileged scaffold is a molecular framework or chemical moiety that is statistically recurrent among known drugs or among a specific array of biologically active compounds. These privileged elements can be used as a basis for designing new active biological compounds or compound libraries.
Methods used to study binding
The main methods to study protein–ligand interactions are hydrodynamic and calorimetric techniques, together with spectroscopic and structural methods such as
Fourier transform spectroscopy
Raman spectroscopy
Fluorescence spectroscopy
Circular dichroism
Nuclear magnetic resonance
Mass spectrometry
Atomic force microscope
Paramagnetic probes
Dual polarisation interferometry
Multi-parametric surface plasmon resonance
Ligand binding assay and radioligand binding assay
Other techniques include:
fluorescence intensity,
bimolecular fluorescence complementation,
FRET (fluorescent resonance energy transfer) / FRET quenching
surface plasmon resonance,
bio-layer interferometry,
co-immunoprecipitation,
indirect ELISA,
equilibrium dialysis,
gel electrophoresis,
far western blot,
fluorescence polarization anisotropy,
electron paramagnetic resonance,
microscale thermophoresis,
switchSENSE.
The dramatically increased computing power of supercomputers and personal computers has made it possible to study protein–ligand interactions also by means of computational chemistry. For example, a worldwide grid of well over a million ordinary PCs was harnessed for cancer research in the project grid.org, which ended in April 2007. Grid.org has been succeeded by similar projects such as World Community Grid, Human Proteome Folding Project, Compute Against Cancer and Folding@Home.
| Biology and health sciences | Cell processes | Biology |
2202860 | https://en.wikipedia.org/wiki/Magnesium%20sulfide | Magnesium sulfide | Magnesium sulfide is an inorganic compound with the formula MgS. It is a white crystalline material but often is encountered in an impure form that is brown and non-crystalline powder. It is generated industrially in the production of metallic iron.
Preparation and general properties
MgS is formed by the reaction of sulfur or hydrogen sulfide with magnesium. It crystallizes in the rock salt structure as its most stable phase; its zinc blende and wurtzite structures can be prepared by molecular beam epitaxy. The chemical properties of MgS resemble those of related ionic sulfides such as those of sodium, barium, or calcium. It reacts with oxygen to form the corresponding sulfate, magnesium sulfate. MgS reacts with water to give hydrogen sulfide and magnesium hydroxide.
Applications
In the BOS steelmaking process, sulfur is the first element to be removed. Sulfur is removed from the impure blast furnace iron by the addition of several hundred kilograms of magnesium powder injected through a lance. Magnesium sulfide is formed, which then floats on the molten iron and is removed.
MgS is a wide-band-gap direct semiconductor of interest as a blue-green emitter, a property that has been known since the early 1900s. The wide band gap also allows the use of MgS as a photodetector for short-wavelength ultraviolet light.
Occurrence
Aside from being a component of some slags, MgS occurs as the rare nonterrestrial mineral niningerite, detected in some meteorites. It is also a solid-solution component along with CaS and FeS in oldhamite. MgS is also found in the circumstellar envelopes of certain evolved carbon stars, i.e., those with C/O > 1.
Safety
MgS evolves hydrogen sulfide upon contact with moisture.
| Physical sciences | Sulfide salts | Chemistry |
2203131 | https://en.wikipedia.org/wiki/Geomagnetic%20reversal | Geomagnetic reversal | A geomagnetic reversal is a change in a planet's dipole magnetic field such that the positions of magnetic north and magnetic south are interchanged (not to be confused with geographic north and geographic south). The Earth's magnetic field has alternated between periods of normal polarity, in which the predominant direction of the field was the same as the present direction, and reverse polarity, in which it was the opposite. These periods are called chrons.
Reversal occurrences are statistically random. There have been at least 183 reversals over the last 83 million years (on average once every ~450,000 years). The latest, the Brunhes–Matuyama reversal, occurred 780,000 years ago with widely varying estimates of how quickly it happened. Other sources estimate that the time that it takes for a reversal to complete is on average around 7,000 years for the four most recent reversals. Clement (2004) suggests that this duration is dependent on latitude, with shorter durations at low latitudes and longer durations at mid and high latitudes. The duration of a full reversal varies between 2,000 and 12,000 years.
Although there have been periods in which the field reversed globally (such as the Laschamp excursion) for several hundred years, these events are classified as excursions rather than full geomagnetic reversals. Stable polarity chrons often show large, rapid directional excursions, which occur more often than reversals, and could be seen as failed reversals. During such an excursion, the field reverses in the liquid outer core but not in the solid inner core. Diffusion in the outer core is on timescales of 500 years or less while that of the inner core is longer, around 3,000 years.
History
In the early 20th century, geologists such as Bernard Brunhes first noticed that some volcanic rocks were magnetized opposite to the direction of the local Earth's field. The first systematic evidence for and time-scale estimate of the magnetic reversals were made by Motonori Matuyama in the late 1920s; he observed that rocks with reversed fields were all of early Pleistocene age or older. At the time, the Earth's polarity was poorly understood, and the possibility of reversal aroused little interest.
Three decades later, when Earth's magnetic field was better understood, theories were advanced suggesting that the Earth's field might have reversed in the remote past. Most paleomagnetic research in the late 1950s included an examination of the wandering of the poles and continental drift. Although it was discovered that some rocks would reverse their magnetic field while cooling, it became apparent that most magnetized volcanic rocks preserved traces of the Earth's magnetic field at the time the rocks had cooled. In the absence of reliable methods for obtaining absolute ages for rocks, it was thought that reversals occurred approximately every million years.
The next major advance in understanding reversals came when techniques for radiometric dating were improved in the 1950s. Allan Cox and Richard Doell, at the United States Geological Survey, wanted to know whether reversals occurred at regular intervals, and they invited geochronologist Brent Dalrymple to join their group. They produced the first magnetic-polarity time scale in 1959. As they accumulated data, they continued to refine this scale in competition with Don Tarling and Ian McDougall at the Australian National University. A group led by Neil Opdyke at the Lamont–Doherty Earth Observatory showed that the same pattern of reversals was recorded in sediments from deep-sea cores.
During the 1950s and 1960s information about variations in the Earth's magnetic field was gathered largely by means of research vessels, but the complex routes of ocean cruises rendered the association of navigational data with magnetometer readings difficult. Only when data were plotted on a map did it become apparent that remarkably regular and continuous magnetic stripes appeared on the ocean floors.
In 1963, Frederick Vine and Drummond Matthews provided a simple explanation by combining the seafloor spreading theory of Harry Hess with the known time scale of reversals: sea floor rock is magnetized in the direction of the field when it is formed. Thus, sea floor spreading from a central ridge will produce pairs of magnetic stripes parallel to the ridge. Canadian L. W. Morley independently proposed a similar explanation in January 1963, but his work was rejected by the scientific journals Nature and Journal of Geophysical Research, and remained unpublished until 1967, when it appeared in the literary magazine Saturday Review. The Morley–Vine–Matthews hypothesis was the first key scientific test of the seafloor spreading theory of continental drift.
Past field reversals are recorded in the solidified ferrimagnetic minerals of consolidated sedimentary deposits or cooled volcanic flows on land. Beginning in 1966, Lamont–Doherty Geological Observatory scientists found that the magnetic profiles across the Pacific-Antarctic Ridge were symmetrical and matched the pattern in the north Atlantic's Reykjanes ridge. The same magnetic anomalies were found over most of the world's oceans, which permitted estimates for when most of the oceanic crust had developed.
Observing past fields
Because no existing unsubducted sea floor (or sea floor thrust onto continental plates) is more than about 180 million years (Ma) old, other methods are necessary for detecting older reversals. Most sedimentary rocks incorporate minute amounts of iron-rich minerals, whose orientation is influenced by the ambient magnetic field at the time at which they formed. These rocks can preserve a record of the field if it is not later erased by chemical, physical or biological change.
Because Earth's magnetic field is a global phenomenon, similar patterns of magnetic variations at different sites may be used to help calculate age in different locations. The past four decades of paleomagnetic data about seafloor ages have been useful in estimating the age of geologic sections elsewhere. It is not an independent dating method, as it depends on "absolute" age dating methods like radioisotopic systems to derive numeric ages. It has become especially useful when studying metamorphic and igneous rock formations where index fossils are seldom available.
Geomagnetic polarity time scale
Through analysis of seafloor magnetic anomalies and dating of reversal sequences on land, paleomagnetists have been developing a Geomagnetic Polarity Time Scale. The current time scale contains 184 polarity intervals in the last 83 million years (and therefore 183 reversals).
Changing frequency over time
The rate of reversals in the Earth's magnetic field has varied widely over time. Around 72 million years ago (Ma), the field reversed 5 times in a million years. In a 4-million-year period centered on 54 Ma, there were 10 reversals; at around 42 Ma, 17 reversals took place in the span of 3 million years. In a period of 3 million years centering on 24 Ma, 13 reversals occurred. No fewer than 51 reversals occurred in a 12-million-year period, centering on 15 Ma. Two reversals occurred during a span of 50,000 years. These eras of frequent reversals have been counterbalanced by a few "superchrons": long periods when no reversals took place.
Superchrons
A superchron is a polarity interval lasting at least 10 million years. There are two well-established superchrons, the Cretaceous Normal and the Kiaman. A third candidate, the Moyero, is more controversial. The Jurassic Quiet Zone in ocean magnetic anomalies was once thought to represent a superchron but is now attributed to other causes.
The Cretaceous Normal (also called the Cretaceous Superchron or C34) lasted for almost 40 million years, from about 120 to 83 million years ago, including stages of the Cretaceous period from the Aptian through the Santonian. The frequency of magnetic reversals steadily decreased prior to this superchron, reaching its low point (no reversals) during it. Between the Cretaceous Normal and the present, the frequency has generally increased slowly.
The Kiaman Reverse Superchron lasted from approximately the late Carboniferous to the late Permian, for more than 50 million years. The magnetic field had reversed polarity. The name "Kiaman" derives from the Australian town of Kiama, where some of the first geological evidence of the superchron was found in 1925.
The Ordovician is suspected to have hosted another superchron, called the Moyero Reverse Superchron, lasting more than 20 million years (485 to 463 million years ago). Thus far, this possible superchron has only been found in the Moyero river section north of the polar circle in Siberia. Moreover, the best data from elsewhere in the world do not show evidence for this superchron.
Certain regions of ocean floor of Jurassic age have low-amplitude magnetic anomalies that are hard to interpret. They are found off the east coast of North America, the northwest coast of Africa, and the western Pacific. They were once thought to represent a superchron called the Jurassic Quiet Zone, but magnetic anomalies are found on land during this period. The geomagnetic field is known to have had low intensity during part of this era, and these sections of ocean floor are especially deep, causing the geomagnetic signal to be attenuated between the seabed and the surface.
Statistical properties
Several studies have analyzed the statistical properties of reversals in the hope of learning something about their underlying mechanism. The discriminating power of statistical tests is limited by the small number of polarity intervals. Nevertheless, some general features are well established. In particular, the pattern of reversals is random. There is no correlation between the lengths of polarity intervals. There is no preference for either normal or reversed polarity, and no statistical difference between the distributions of these polarities. This lack of bias is also a robust prediction of dynamo theory.
Reversals are statistically random, with no single characteristic rate. The randomness of the reversals is inconsistent with periodicity, but several authors have claimed to find periodicity. However, these results are probably artifacts of an analysis using sliding windows to attempt to determine reversal rates.
Most statistical models of reversals have analyzed them in terms of a Poisson process or other kinds of renewal process. A Poisson process would have, on average, a constant reversal rate, so it is common to use a non-stationary Poisson process. However, compared to a Poisson process, there is a reduced probability of reversal for tens of thousands of years after a reversal. This could be due to an inhibition in the underlying mechanism, or it could just mean that some shorter polarity intervals have been missed. A random reversal pattern with inhibition can be represented by a gamma process. In 2006, a team of physicists at the University of Calabria found that the reversals also conform to a Lévy distribution, which describes stochastic processes with long-ranging correlations between events in time. The data are also consistent with a deterministic, but chaotic, process.
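A small simulation sketch contrasting the two model families (the gamma shape parameter, the mean interval, and the 50 kyr threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

mean_interval_kyr = 450.0   # roughly 183 reversals in 83 Myr
n = 183

# Poisson process: exponentially distributed intervals between reversals.
poisson_intervals = rng.exponential(mean_interval_kyr, n)

# Gamma renewal process with shape > 1: very short intervals are suppressed,
# mimicking the post-reversal inhibition described above.
shape = 2.0
gamma_intervals = rng.gamma(shape, mean_interval_kyr / shape, n)

for name, intervals in (("poisson", poisson_intervals), ("gamma", gamma_intervals)):
    frac_short = np.mean(intervals < 50.0)   # fraction of intervals under 50 kyr
    print(name, round(intervals.mean()), round(frac_short, 2))
# The gamma model yields far fewer very short polarity intervals (~2% vs ~10%).
```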
Character of transitions
Duration
Most estimates for the duration of a polarity transition are between 1,000 and 10,000 years, but some estimates are as quick as a human lifetime. During a transition, the magnetic field will not vanish completely, but many poles might form chaotically in different places during reversal, until it stabilizes again.
Studies of 16.7-million-year-old lava flows on Steens Mountain, Oregon, indicate that the Earth's magnetic field is capable of shifting at a rate of up to 6 degrees per day. This was initially met with skepticism from paleomagnetists. Even if changes occur that quickly in the core, the mantle—which is a semiconductor—is thought to remove variations with periods less than a few months. A variety of possible rock magnetic mechanisms were proposed that would lead to a false signal. That said, paleomagnetic studies of other sections from the same region (the Oregon Plateau flood basalts) give consistent results. It appears that the reversed-to-normal polarity transition that marks the end of Chron C5Cr (about 16.7 million years ago) contains a series of reversals and excursions. In addition, geologists Scott Bogue of Occidental College and Jonathan Glen of the US Geological Survey, sampling lava flows in Battle Mountain, Nevada, found evidence for a brief, several-year-long interval during a reversal when the field direction changed by over 50 degrees. The reversal was dated to approximately 15 million years ago. In 2018, researchers reported a reversal lasting only 200 years. A 2019 paper estimates that the most recent reversal, 780,000 years ago, lasted 22,000 years.
Causes
The magnetic field of the Earth, and of other planets that have magnetic fields, is generated by dynamo action in which convection of molten iron in the planetary core generates electric currents which in turn give rise to magnetic fields. In simulations of planetary dynamos, reversals often emerge spontaneously from the underlying dynamics. For example, Gary Glatzmaier and collaborator Paul Roberts of UCLA ran a numerical model of the coupling between electromagnetism and fluid dynamics in the Earth's interior. Their simulation reproduced key features of the magnetic field over more than 40,000 years of simulated time, and the computer-generated field reversed itself. Global field reversals at irregular intervals have also been observed in the laboratory liquid metal experiment "VKS2".
In some simulations, the dynamics of the simulated dynamo lead to an instability in which the magnetic field spontaneously flips over into the opposite orientation. This scenario is supported by observations of the solar magnetic field, which undergoes spontaneous reversals every 9–12 years. However, the solar magnetic intensity is observed to increase greatly during a solar reversal, whereas reversals on Earth seem to occur during periods of low field strength.
Some scientists, such as Richard A. Muller, think that geomagnetic reversals are not spontaneous processes but rather are triggered by external events that directly disrupt the flow in the Earth's core. Proposals include impact events, or internal events such as the arrival of continental slabs carried down into the mantle by the action of plate tectonics at subduction zones, or the initiation of new mantle plumes from the core-mantle boundary. Supporters of this hypothesis hold that any of these events could lead to a large-scale disruption of the dynamo, effectively turning off the geomagnetic field. Because the magnetic field is stable in either the present north–south orientation or a reversed orientation, they propose that when the field recovers from such a disruption it spontaneously chooses one state or the other, such that half the recoveries become reversals. This proposed mechanism does not appear to work in a quantitative model, and the evidence from stratigraphy for a correlation between reversals and impact events is weak. There is no evidence for a reversal connected with the impact event that caused the Cretaceous–Paleogene extinction event.
Effects on biosphere
Shortly after the first geomagnetic polarity time scales were produced, scientists began exploring the possibility that reversals could be linked to extinction events. Many such arguments were based on an apparent periodicity in the rate of reversals, but more careful analyses show that the reversal record is not periodic. It may be that the ends of superchrons have caused vigorous convection leading to widespread volcanism, and that the subsequent airborne ash caused extinctions. Tests of correlations between extinctions and reversals are difficult for several reasons. Larger animals are too scarce in the fossil record for good statistics, so paleontologists have analyzed microfossil extinctions. Even microfossil data can be unreliable if there are hiatuses in the fossil record. It can appear that the extinction occurs at the end of a polarity interval when the rest of that polarity interval was simply eroded away. Statistical analysis shows no evidence for a correlation between reversals and extinctions.
Most proposals tying reversals to extinction events assume that the Earth's magnetic field would be much weaker during reversals. Possibly the first such hypothesis was that high-energy particles trapped in the Van Allen radiation belt could be liberated and bombard the Earth. Detailed calculations confirm that if the Earth's dipole field disappeared entirely (leaving the quadrupole and higher components), most of the atmosphere would become accessible to high-energy particles, but the atmosphere itself would act as a barrier shielding the surface, and cosmic-ray collisions would produce secondary radiation such as beryllium-10 or chlorine-36. A 2012 German study of Greenland ice cores showed a peak of beryllium-10 during a brief complete reversal 41,000 years ago, consistent with the magnetic field strength dropping to an estimated 5% of normal during the reversal. There is evidence that such beryllium-10 peaks occur both during secular variation and during reversals.
A hypothesis by McCormac and Evans assumes that the Earth's field disappears entirely during reversals. They argue that the atmosphere of Mars may have been eroded away by the solar wind because it had no magnetic field to protect it. They predict that ions would be stripped away from Earth's atmosphere above 100 km. Paleointensity measurements show that the magnetic field has not disappeared during reversals. Based on paleointensity data for the last 800,000 years, the magnetopause is still estimated to have been at about three Earth radii during the Brunhes–Matuyama reversal. Even if the internal magnetic field did disappear, the solar wind can induce a magnetic field in the Earth's ionosphere sufficient to shield the surface from energetic particles.
| Physical sciences | Geophysics | Earth science |
2204566 | https://en.wikipedia.org/wiki/Preemption%20%28computing%29 | Preemption (computing) | In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. This interrupt is done by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of a processor are known as context switching.
User mode and kernel mode
In any given system design, some operations performed by the system may not be preemptable. This usually applies to kernel functions and service interrupts which, if not permitted to run to completion, would tend to produce race conditions resulting in deadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense of system responsiveness. The distinction between user mode and kernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems have preemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems are Solaris 2.0/SunOS 5.0, Windows NT, Linux kernel (2.5.4 and newer), AIX and some BSD systems (NetBSD, since version 5).
Preemptive multitasking
The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system wherein processes or tasks must be explicitly programmed to yield when they do not need system resources.
In simple terms, preemptive multitasking uses an interrupt mechanism to suspend the currently executing process and invoke a scheduler, which determines which process should execute next. As a result, every process is guaranteed some share of CPU time over any given interval.

In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When a higher-priority task seizes the CPU from the currently running task, this is known as preemptive scheduling.
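The following toy simulation (a sketch, not any real kernel's code) shows the essence of round-robin preemption: each task runs until its time slice expires and is then forced back to the end of the ready queue.

```python
# Toy round-robin preemptive scheduler: tasks are forcibly switched out
# when the time slice expires, without any cooperation from the task.
from collections import deque

def run_preemptive(tasks, time_slice):
    """tasks: dict mapping task name -> remaining work units."""
    ready = deque(tasks.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()          # scheduler picks the next task
        ran = min(time_slice, remaining)           # task is preempted after the slice
        timeline.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # back of the ready queue
    return timeline

print(run_preemptive({"A": 5, "B": 2, "C": 4}, time_slice=2))
# -> [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]
```

Every task makes progress in every pass over the queue, which is the guarantee described above that each process gets some share of CPU time.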
The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known as time-shared scheduling, or time-sharing.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-user control systems (like those in robotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
Time slice
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum. The scheduler is run once every time slice to choose the next process to run. The length of each time slice is critical to balancing system performance against process responsiveness: if the time slice is too short, the scheduler will consume too much processing time, but if it is too long, processes will take longer to respond to input.
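A back-of-the-envelope estimate of this trade-off, using a purely hypothetical context-switch cost:

```python
# Assumed figure: estimate the fraction of CPU time lost to context
# switching for several candidate time-slice lengths.
switch_cost_us = 5.0   # hypothetical cost of one context switch, microseconds

for slice_us in (100.0, 1_000.0, 10_000.0):
    overhead = switch_cost_us / (slice_us + switch_cost_us)
    print(f"time slice {slice_us:8.0f} us -> scheduling overhead {overhead:.2%}")
# Shorter slices give snappier response but spend a larger fraction of
# CPU time on switching; longer slices do the opposite.
```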
An interrupt is scheduled to allow the operating system kernel to switch between processes when their time slices expire, effectively allowing the processor's time to be shared among a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system.
System support
Today, nearly all operating systems support preemptive multitasking, including the current versions of Windows, macOS, Linux (including Android), iOS and iPadOS.
An early microcomputer operating system providing preemptive multitasking was Microware's OS-9, available for computers based on the Motorola 6809, including home computers such as the TRS-80 Color Computer 2 when configured with disk drives, with the operating system supplied by Tandy as an upgrade. Sinclair QDOS and AmigaOS on the Amiga were also microcomputer operating systems offering preemptive multitasking as a core feature. These both ran on Motorola 68000-family microprocessors without memory management. AmigaOS used dynamic loading of relocatable code blocks ("hunks" in Amiga jargon) to preemptively multitask all processes in the same flat address space.
Early operating systems for IBM PC compatibles, such as MS-DOS and PC DOS, did not support multitasking at all. However, alternative operating systems such as MP/M-86 (1981) and Concurrent CP/M-86 did support preemptive multitasking, and Unix-like systems including MINIX and Coherent provided preemptive multitasking on 1980s-era personal computers.
Later MS-DOS compatible systems natively supporting preemptive multitasking/multithreading include Concurrent DOS, Multiuser DOS, Novell DOS (later called Caldera OpenDOS and DR-DOS 7.02 and higher). Since Concurrent DOS 386, they could also run multiple DOS programs concurrently in virtual DOS machines.
The earliest version of Windows to support a limited form of preemptive multitasking was Windows/386 2.0, which used the Intel 80386's Virtual 8086 mode to run DOS applications in virtual 8086 machines, commonly known as "DOS boxes", which could be preempted. In Windows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility. In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space.
Preemptive multitasking has always been supported by Windows NT (all versions), OS/2 (native applications), Unix and Unix-like systems (such as Linux, BSD and macOS), VMS, OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets.
Early versions of the classic Mac OS did not support multitasking at all, with cooperative multitasking becoming available via MultiFinder in System Software 5 and then standard in System 7. Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist in Mac OS 9, although in a limited sense), these were abandoned in favor of Mac OS X (now called macOS) that, as a hybrid of the old Mac System style and NeXTSTEP, is an operating system based on the Mach kernel and derived in part from BSD, which had always provided Unix-like preemptive multitasking.
| Technology | Operating systems | null |
13420465 | https://en.wikipedia.org/wiki/Acherontia%20lachesis | Acherontia lachesis | Acherontia lachesis, the greater death's head hawkmoth or bee robber, is a large (up to 13 cm wingspan) sphingid moth found in India, Sri Lanka and much of the East Asian region. It is one of the three species of the death's-head hawkmoth genus, Acherontia. The species was first described by Johan Christian Fabricius in 1798. It is nocturnal and very fond of honey; it can mimic the scent of honey bees so that it can enter a hive unharmed to get honey. Its tongue, which is stout and very strong, enables it to pierce the wax cells and suck the honey out. This species occurs throughout almost the entire Oriental region, from India, Pakistan and Nepal to the Philippines, and from southern Japan and the southern Russian Far East to Indonesia, where it attacks colonies of several different honey bee species. It has recently become established on the Hawaiian Islands.
Description
A. lachesis is much larger than Acherontia styx. The segmental bands and grey stripe occupy so much of the abdomen that only small patches of yellow are left. The hindwing has a large black patch at the base. The medial and post-medial bands are so broad that only narrow bands of yellow remain. The ventral side of the abdomen is banded with black, and the wings are banded with black with a spot in the cell of each. The larva differs from that of A. styx in having blue streaks above the yellow ones; before pupating it turns brown and the oblique streaks disappear.
Life history
Eggs are laid on a variety of host plants belonging to the families Solanaceae, Verbenaceae, Fabaceae, Oleaceae, Bignoniaceae, and others. Mature larvae can attain a length of 125 mm and occur in green, yellow and brownish-grey colour forms (most commonly grey), with oblique body stripes and a prickly tail horn that is curled at the extreme tip. When molested the caterpillar throws the head and anterior segments of the body from side to side, at the same time making a rapidly repeated clicking noise, which appears to be produced by the mandibles. The larva pupates by pushing its head into the earth, burying itself, and making an ovoid chamber about 15 cm below the surface in which it sheds its skin.
The larvae are often parasitised by tachinid flies.
Subspecies
Acherontia lachesis lachesis
Acherontia lachesis diehli Eitschberger, 2003
Ecology
The moth rests with the wings folded with the abdomen completely covered. When disturbed it raises its body from the surface on which it is sitting and partially opens and raises its wings and emits a squealing note. Notable predators are mostly parasitoids such as Amblyjoppa cognatoria, Quandrus pepsoides and Drino atropivora.
Host plants
In the countries where it occurs, caterpillars are found on a variety of plants such as Campsis grandiflora, Jasminum, Solanum tuberosum, Nicotiana tabacum, Tectona grandis, Datura, Ipomoea batatas, Clerodendrum kaempferi, Erythrina speciosa, Clerodendrum quadriloculare, Lantana camara, Sesamum indicum, Solanum melongena, Solanum verbascifolium, Stachytarpheta indica, Tithonia diversifolia, Solanum torvum, Spathodea campanulata, Vitex pinnata, Psilogramma menophron and Clerodendrum inerme.
A. lachesis is not the species of death's head used in the promotional posters for The Silence of the Lambs. That is Acherontia styx.
| Biology and health sciences | Lepidoptera | Animals |
3039253 | https://en.wikipedia.org/wiki/Torsion%20%28mechanics%29 | Torsion (mechanics) | In the field of solid mechanics, torsion is the twisting of an object due to an applied torque. Torsion could be defined as strain or angular deformation, and is measured by the angle a chosen section is rotated from its equilibrium position. The resulting stress (torsional shear stress) is expressed in either the pascal (Pa), an SI unit for newtons per square metre, or in pounds per square inch (psi) while torque is expressed in newton metres (N·m) or foot-pound force (ft·lbf). In sections perpendicular to the torque axis, the resultant shear stress in this section is perpendicular to the radius.
In non-circular cross-sections, twisting is accompanied by a distortion called warping, in which transverse sections do not remain plane. For shafts of uniform cross-section unrestrained against warping, the torsion-related physical properties are expressed as:

$$\frac{T}{J_T} = \frac{\tau}{r} = \frac{G\varphi}{\ell}$$
where:
T is the applied torque or moment of torsion in N·m.
τ (tau) is the maximum shear stress at the outer surface
JT is the torsion constant for the section. For circular rods, and tubes with constant wall thickness, it is equal to the polar moment of inertia of the section, but for other shapes, or split sections, it can be much less. For more accuracy, finite element analysis (FEA) is the best method. Other calculation methods include membrane analogy and shear flow approximation.
r is the perpendicular distance between the rotational axis and the farthest point in the section (at the outer surface).
ℓ is the length of the object to or over which the torque is being applied.
φ (phi) is the angle of twist in radians.
G is the shear modulus, also called the modulus of rigidity, and is usually given in gigapascals (GPa), lbf/in² (psi), or lbf/ft², or in ISO units N/mm².
The product JTG is called the torsional rigidity wT.
Properties
The shear stress at a point within a shaft is:

$$\tau_{\rho} = \frac{T\rho}{J_T}$$

where ρ is the radial distance from the axis of rotation to the point.
Note that the highest shear stress occurs on the surface of the shaft, where the radius is maximum. High stresses at the surface may be compounded by stress concentrations such as rough spots. Thus, shafts for use in high torsion are polished to a fine surface finish to reduce the maximum stress in the shaft and increase their service life.
The angle of twist can be found by using:

$$\varphi = \frac{T\ell}{J_T\,G}$$
Sample calculation
Calculation of the steam turbine shaft radius for a turboset:
Assumptions:
Power carried by the shaft is 1000 MW; this is typical for a large nuclear power plant.
Yield stress of the steel used to make the shaft (τyield) is: 250 × 10⁶ N/m².
Electricity has a frequency of 50 Hz; this is the typical frequency in Europe. In North America, the frequency is 60 Hz.
The angular frequency can be calculated with the following formula:

$$\omega = 2\pi f$$
The torque carried by the shaft is related to the power by the following equation:

$$P = T\omega \quad\Longrightarrow\quad T = \frac{P}{\omega}$$
The angular frequency is therefore 314.16 rad/s and the torque 3.1831 × 10⁶ N·m.
The maximal torque is:

$$T_{\max} = \frac{\tau_{\text{yield}}\,J_T}{r}$$
After substitution of the torsion constant for a solid circular shaft, $J_T = \frac{\pi r^4}{2}$, the following expression is obtained:

$$r = \sqrt[3]{\frac{2\,T_{\max}}{\pi\,\tau_{\text{yield}}}}$$
The diameter is 40 cm. If one adds a factor of safety of 5 and re-calculates the radius with the maximum stress equal to the yield stress/5, the result is a diameter of 69 cm, the approximate size of a turboset shaft in a nuclear power plant.
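The arithmetic above can be verified with a short script (a sketch assuming a solid circular shaft, so that r = (2T/πτ)^(1/3) as derived above):

```python
# Numerical check of the turbine-shaft sample calculation.
from math import pi

P = 1000e6          # W, power carried by the shaft
f = 50.0            # Hz, grid frequency in Europe
tau_yield = 250e6   # N/m^2, yield stress of the shaft steel

omega = 2 * pi * f  # rad/s
T = P / omega       # N*m

def diameter_m(torque, tau_max):
    """Diameter of a solid circular shaft stressed to tau_max by torque."""
    r = (2 * torque / (pi * tau_max)) ** (1 / 3)
    return 2 * r

print(f"omega = {omega:.2f} rad/s, T = {T:.4e} N*m")
print(f"diameter at yield stress:      {diameter_m(T, tau_yield):.2f} m")      # ~0.40 m
print(f"diameter with safety factor 5: {diameter_m(T, tau_yield / 5):.2f} m")  # ~0.69 m
```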
Failure mode
The shear stress in the shaft may be resolved into principal stresses via Mohr's circle. If the shaft is loaded only in torsion, then one of the principal stresses will be in tension and the other in compression. These stresses are oriented at a 45-degree helical angle around the shaft. If the shaft is made of brittle material, then the shaft will fail by a crack initiating at the surface and propagating through to the core of the shaft, fracturing in a 45-degree angle helical shape. This is often demonstrated by twisting a piece of blackboard chalk between one's fingers.
In the case of thin hollow shafts, a twisting buckling mode can result from excessive torsional load, with wrinkles forming at 45° to the shaft axis.
| Physical sciences | Solid mechanics | Physics |
3041515 | https://en.wikipedia.org/wiki/Kick%20scooter | Kick scooter | A kick scooter (also referred to as a push-scooter or scooter) is a human-powered street vehicle with a handlebar, deck, and wheels, propelled by a rider pushing off the ground with their leg. Today the most common scooters are made of aluminum, titanium, or steel. Some kick scooters made for younger children have three or four wheels (though the most common models have two) and are made of plastic and do not fold. High-performance kickbikes are also made. The company behind the Razor scooter revitalized the design in the mid-1990s and early 2000s. Three-wheel models where the frame forks into two decks are known as Y scooters or trikkes.
Motorized scooters, historically powered by internal combustion engines, and more recently electric motors, are self-propelled kick scooters capable of speeds sometimes exceeding .
Models and history
Early scooters
Kick scooters have been handmade in industrial urban areas in Europe and the United States since the 1920s or earlier, often as toys made for children to roam the streets. One common home-made version is made by attaching roller skate wheelsets to a board with some kind of handle, usually an old box. To turn, riders can lean or use a second board connected by a crude pivot. The construction was all wood, with 3–4 inch (75–100 mm) wheels containing steel ball bearings. An additional advantage of this construction was a loud noise, like from a "real" vehicle. An alternative construction consists of one steel clamp on a roller skate divided into front and rear parts and attached to a wood beam.
Photographs in the German Bundesarchiv catalogued under "Roller" (scooter) document that both homemade and manufactured children's scooters were used, and even raced, in Paris, Berlin and Leipzig in 1930, 1948 and 1951. They are similar to later designs.
The short movie "A Trip Through the Streets of Amsterdam" from 1922 shows several children on scooters.
Kick scooter
In 1974, the Honda company made the Kick 'n Go, a scooter driven by a pedal on a lever. While it seemed to be as much effort to "kick" as a regular scooter, the novelty of it caught on and it became popular nevertheless.
Pneumatic tires
Before bicycles became popular among children, steel scooters with two small bicycle wheels were more common. Around 1987, many BMX manufacturers produced BMX-like scooters, such as Scoot. Those manufacturers discontinued their scooters, but some scooter manufacturers were established in later years and remain in business. These scooters are still used in dense urban areas for utility purposes, since they are faster than a folding scooter. Some are made for off-road use and are described as mountain scooters. In addition to commuting, sports competition, and off-road use, large wheel scooters are a favorite for dog scootering, an activity in which single or team dogs, such as huskies, pull a scooter and its rider in the same way that a sled is pulled across snow. Some Amish do not want to ride bicycles, so they ride scooters instead. Today, variations on the kicksled with scooter design features are also available, such as the Kickspark.
Kickbike
The development of the kickbike in Finland in 1994 changed the way scooters are viewed. The Kickbike has a large standard size bicycle front wheel and a much smaller rear wheel, which allows for a much faster ride. The Footbike Eurocup has been held since 2001.
Folding scooters
In 1990, a foldable aluminium scooter with inline skates wheels was created by Wim Ouboter of Micro Mobility Systems in Switzerland. The scooter was sold as "Micro Skate Scooter," "Razor," and "JDBUG/JDRAZOR MS-130A". The Razor was introduced to Japan in 1999, with many early adopters being young Japanese who used it for portable transport. It later became a worldwide fad and these small scooters also became popular toys for children. TurboAnt's folding scooters are known for their detachable battery designs. Its electric scooters have a range of between 18 and 30 miles.
Pro scooters
Kick scooters used for extreme sport are called pro scooters. They are specially made to withstand damage as the rider performs stunts and tricks. Numerous brands specialize in stunt scooters and accessories including lightweight and high strength parts, helmets, pads, ramps, grind wax, griptape, grips, bearings and clothing.
Three wheels
Three-wheeled scooters similar to tricycles have been produced for little children.
In 1999, Micro Mobility Systems and K2 Sports produced a reverse-three-wheeled scooter called "Kickboard". Micro also produced the Kickboard-like children's scooters "Mini Micro" and "Maxi Micro". The reverse design inherently provides greater stability than the standard layout: a standing rider tends to stand at the front of a scooter rather than at the back. However, the steering geometry is inherently weak and requires design adaptation to improve its response. An example is the Mini Micro, which uses a spring-loaded system to translate lateral force on the handlebars (a leaning child) into turning motion on the wheels, referred to by its makers as "lean and steer".
Four wheels
The early scooters, which were made with roller skates, were four-wheeled like skateboards.
Around 2000, a Swiss company produced a four-wheeled scooter called the "Wetzer Stickboard". The Wetzer Stickboard was a narrow skateboard with a foldable pole on the nose.
In 2006, a company called Nextsport started producing a line of four-wheeled scooters, known as Fuzions. The scooters are typically bigger and heavier than Razor and Micro models. The early Fuzion models come with large, wide wheels, and an oversized deck for stability. Later scooters such as the Fuzion NX included smaller and harder wheels. It also included 360 degree handlebar spinning capabilities, unlike its predecessors.
Electric kick scooters
Electric models achieved popularity over their gas-powered counterparts in the early 2000s. They are often manufactured for fleet rentals, such as Lime e-scooters. Shared electric kick scooters have also made a certain contribution to environmental protection, because they do not emit greenhouse gases, reduce traffic congestion and reduce the need for public transportation; however, they are most sustainable when they are "replacing personalized individual transport". Electric scooters are also available for personal use, with manufacturers such as TurboAnt offering models designed for individual ownership.
Safety
Care must be taken when going up or down curbs, as this can cause the scooter to come to a sudden stop, sending the rider onto the ground.
| Technology | Human-powered transport | null |
3043551 | https://en.wikipedia.org/wiki/Modern%20valence%20bond%20theory | Modern valence bond theory | Modern valence bond theory is the application of valence bond theory (VBT) with computer programs that are competitive in accuracy and economy, with programs for the Hartree–Fock or post-Hartree-Fock methods. The latter methods dominated quantum chemistry from the advent of digital computers because they were easier to program. The early popularity of valence bond methods thus declined. It is only recently that the programming of valence bond methods has improved. These developments are due to and described by Gerratt, Cooper, Karadakov and Raimondi (1997); Li and McWeeny (2002); Joop H. van Lenthe and co-workers (2002); Song, Mo, Zhang and Wu (2005); and Shaik and Hiberty (2004)
While molecular orbital theory (MOT) describes the electronic wavefunction as a linear combination of basis functions that are centered on the various atoms in a species (linear combination of atomic orbitals), VBT describes the electronic wavefunction as a linear combination of several valence bond structures. Each of these valence bond structures can be described using linear combinations of either atomic orbitals, delocalized atomic orbitals (Coulson-Fischer theory), or even molecular orbital fragments. Although this is often overlooked, MOT and VBT are equally valid ways of describing the electronic wavefunction, and are actually related by a unitary transformation. Assuming MOT and VBT are applied at the same level of theory, this relationship ensures that they will describe the same wavefunction, but will do so in different forms.
Theory
Bonding in H2
Heitler and London's original work on VBT attempts to approximate the electronic wavefunction as a covalent combination of localized basis functions on the bonding atoms. In VBT, wavefunctions are described as the sums and differences of VB determinants, which enforce the antisymmetric properties required by the Pauli exclusion principle. Taking H2 as an example, the VB determinant is

$$|a\bar{b}| = N\left[a(1)\alpha(1)\,b(2)\beta(2) \;-\; b(1)\beta(1)\,a(2)\alpha(2)\right]$$
In this expression, N is a normalization constant, and a and b are basis functions that are localized on the two hydrogen atoms, often considered simply to be 1s atomic orbitals. The numbers are an index to describe the electron (i.e. a(1) represents the concept of 'electron 1' residing in orbital a). α and β describe the spin of the electron. The bar over b in $|a\bar{b}|$ indicates that the electron associated with orbital b has β spin (in the first term, electron 2 is in orbital b, and thus electron 2 has β spin). By itself, a single VB determinant is not a proper spin-eigenfunction, and thus cannot describe the true wavefunction. However, by taking the sum and difference (linear combinations) of VB determinants, two approximate wavefunctions can be obtained:

$$\Phi_{HL} = |a\bar{b}| - |\bar{a}b| \qquad\text{and}\qquad \Phi_{T} = |a\bar{b}| + |\bar{a}b|$$
ΦHL is the wavefunction as originally described by Heitler and London, and describes the covalent bonding between orbitals a and b in which the spins are paired, as expected for a chemical bond. ΦT represents the state in which the electron spins are parallel, resulting in a triplet state. This is a highly repulsive interaction, so this description of the bonding does not play a major role in determining the wave function.
Other ways of describing the wavefunction can also be constructed. Specifically, instead of considering a covalent interaction, the ionic interactions can be considered, resulting in the wavefunction

$$\Phi_{I} = |a\bar{a}| + |b\bar{b}|$$
This wavefunction describes the bonding in H2 as the ionic interaction between an H+ and an H−.
Since neither of these wavefunctions, ΦHL (covalent bonding) or ΦI (ionic bonding), perfectly approximates the true wavefunction, a combination of the two can be used to describe the total wavefunction

$$\Phi_{VBT} = \lambda\,\Phi_{HL} + \mu\,\Phi_{I}$$
where λ and μ are coefficients that can vary from 0 to 1. In determining the lowest energy wavefunction, these coefficients can be varied until a minimum energy is reached. λ will be larger in bonds that have more covalency, while μ will be larger in bonds that are more ionic. In the specific case of H2, λ ≈ 0.75 and μ ≈ 0.25; a toy numerical version of this variational step is sketched below.
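As an illustration, the covalent–ionic mixing can be cast as a 2×2 generalized eigenvalue problem. The Hamiltonian and overlap matrix elements below are hypothetical placeholder numbers, not computed integrals for H2; the procedure, not the values, is the point.

```python
# Sketch: variational mixing of covalent (HL) and ionic VB structures.
# All matrix elements are made-up illustrative numbers (in hartree).
import numpy as np
from scipy.linalg import eigh

H = np.array([[-1.80, -1.20],   # <HL|H|HL>, <HL|H|I>  (hypothetical)
              [-1.20, -1.30]])  # <I|H|HL>,  <I|H|I>
S = np.array([[1.00, 0.50],     # structure overlaps (hypothetical)
              [0.50, 1.00]])

energies, coeffs = eigh(H, S)    # solves H c = E S c
ground = coeffs[:, 0]            # lowest-energy combination
lam, mu = ground / ground.sum()  # scale so the coefficients sum to 1
print(f"E0 = {energies[0]:.3f} hartree, lambda = {lam:.2f}, mu = {mu:.2f}")
# -> lambda ~ 0.71, mu ~ 0.29 for these made-up matrix elements
```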
The orbitals that are used as the basis (a and b) do not necessarily have to be localized on the atoms involved in bonding. Orbitals that are partially delocalized onto the other atom involved in bonding can also be used, as in the Coulson-Fischer theory. Even the molecular orbitals associated with a portion of a molecule can be used as a basis set, a process referred to as using fragment orbitals.
For more complicated molecules, ΦVBT could consider several possible structures that all contribute to various degrees (there would be several coefficients, not just λ and μ). An example of this is the Kekulé and Dewar structures used in describing benzene.
Note that all normalization constants were ignored in the discussion above for simplicity.
Relationship to molecular orbital theory
History
The application of VBT and MOT to computations that attempt to approximate the Schrödinger equation began near the middle of the 20th century, but MOT quickly became the preferred approach between the two. The relative computational ease of doing calculations with non-overlapping orbitals in MOT is said to have contributed to its popularity. In addition, the successful explanation of π-systems, pericyclic reactions, and extended solids further cemented MOT as the preeminent approach. Despite this, the two theories are just two different ways of representing the same wavefunction. As shown below, at the same level of theory, the two methods lead to the same results.
H2 - molecular orbital vs valence bond theory
The relationship between MOT and VBT can be made clearer by directly comparing the results of the two theories for the hydrogen molecule, H2. Using MOT, the same basis orbitals (a and b) can be used to describe the bonding. Combining them in a constructive and destructive manner gives two molecular orbitals:

$$\sigma = N(a + b) \qquad\text{and}\qquad \sigma^{*} = N'(a - b)$$
The ground state wavefunction of H2 would be that where the σ orbital is doubly occupied, which is expressed as the following Slater determinant (as required by MOT):

$$\Psi_{MO} = |\sigma\bar{\sigma}|$$
This expression for the wavefunction can be shown to be equivalent to the following wavefunction:

$$\Psi_{MO} = \left(|a\bar{b}| - |\bar{a}b|\right) + \left(|a\bar{a}| + |b\bar{b}|\right)$$
which is now expressed in terms of VB determinants. This transformation does not alter the wavefunction in any way, only the way that the wavefunction is represented. This process of going from an MO description to a VB description can be referred to as ‘mapping MO wavefunctions onto VB wavefunctions’, and is fundamentally the same process as that used to generate localized molecular orbitals.
Rewriting the VB wavefunction derived above, we can clearly see the relationship between MOT and VBT:

$$\Psi_{MO} = \Phi_{HL} + \Phi_{I}$$
Thus, at its simplest level, MOT is just VBT, where the covalent and ionic contributions (the first and second terms, respectively) are equal. This is the basis of the claim that MOT does not correctly predict the dissociation of molecules. When MOT includes configuration interaction (MO-CI), this allows the relative contributions of the covalent and ionic contributions to be altered. This leads to the same description of bonding for both VBT and MO-CI. In conclusion, the two theories, when brought to a high enough level of theory, will converge. Their distinction is in the way they are built up to that description.
Note that in all of the aforementioned discussions, as with the derivation of H2 for VBT, normalization constants were ignored for simplicity.
'Failures' of valence bond theory
When describing the relationship between MOT and VBT, there are a few examples that are commonly cited as ‘failures’ of VBT. However, these often arise from an incomplete or inaccurate use of VBT.
Triplet ground state of oxygen
It is known that O2 has a triplet ground state, but a classic Lewis structure depiction of oxygen would not indicate that any unpaired electrons exist. Perhaps because Lewis structures and VBT often depict the same structure as the most stable state, this misinterpretation has persisted. However, as has been consistently demonstrated with VBT calculations, the lowest energy state is the one with two three-electron π-bonds, which is the triplet state.
Ionization energy of methane
The photoelectron spectrum (PES) of methane is commonly used as an argument for why MO theory is superior to VBT. From an MO calculation (or even just a qualitative MOT diagram), it can be seen that the HOMO is a triply degenerate level (t2), while the HOMO−1 is a singly degenerate level (a1). By invoking Koopmans' theorem, one can predict two distinct peaks in the ionization spectrum of methane, corresponding to removing an electron from the t2 orbitals or from the a1 orbital, with a 3:1 intensity ratio. This is corroborated by experiment. However, when one examines the VB description of CH4, it is clear that there are 4 equivalent bonds between C and H. If one were to invoke Koopmans' theorem (which is implicitly done when claiming that VBT is inadequate to describe PES), a single ionization energy peak would be predicted. However, Koopmans' theorem cannot be applied to orbitals that are not the canonical molecular orbitals, and thus a different approach is required to understand the ionization potentials of methane from VBT. To do this, the ionized product, CH4+, must be analyzed. The VB wavefunction of CH4+ is an equal combination of 4 structures, each having 3 two-electron bonds and 1 one-electron bond. Based on group theory arguments, these structures must give rise to a triply degenerate T2 state and a singly degenerate A1 state, so there exist two distinct transitions from the CH4 ground state, with its 4 equivalent bonds, to the two CH4+ states.
Valence bond theory methods
Listed below are a few notable VBT methods that are applied in modern computational software packages.
Generalized VBT (GVB)
This was one of the first ab initio computational methods developed that utilized VBT. Using Coulson-Fischer type basis orbitals, this method uses singly-occupied, instead of doubly-occupied, orbitals as the basis set. This allows the distance between paired electrons to increase during variational optimization, lowering the resultant energy. The total wavefunction is described by a single set of orbitals, rather than a linear combination of multiple VB structures. GVB is considered to be a user-friendly method for new practitioners.
Spin-coupled generalized valence bond theory (SCGVB, or sometimes SCVB/full GVB)
SCGVB is an extension of GVB that still uses delocalized orbitals, whose delocalization can adjust with molecular structure. In addition, the electronic wavefunction is still a single product of orbitals. The difference is that the spin functions are allowed to adjust simultaneously with the orbitals during energy minimization procedures. This is considered to be one of the best VB descriptions of the wavefunction that relies on only a single configuration.
Complete active space valence bond method (CASVB)
This is a method that often gets confused with a traditional VB method. Instead, it is a localization procedure that maps the complete active space self-consistent field (CASSCF) wavefunction (a full configuration interaction within the chosen active space) onto valence bond structures.
Spin-coupled theory
There are a large number of different valence bond methods. Most use n valence bond orbitals for n electrons. If a single set of these orbitals is combined with all linear independent combinations of the spin functions, we have spin-coupled valence bond theory. The total wave function is optimized using the variational method by varying the coefficients of the basis functions in the valence bond orbitals and the coefficients of the different spin functions. In other cases only a sub-set of all possible spin functions is used. Many valence bond methods use several sets of the valence bond orbitals. It is important to note here that different authors use different names for these different valence bond methods.
Valence bond programs
Several groups have produced computer programs for modern valence bond calculations that are freely available.
| Physical sciences | Bond structure | Chemistry |
3043836 | https://en.wikipedia.org/wiki/Nuclear%20binding%20energy | Nuclear binding energy | Nuclear binding energy in experimental physics is the minimum energy that is required to disassemble the nucleus of an atom into its constituent protons and neutrons, known collectively as nucleons. The binding energy for stable nuclei is always a positive number, as the nucleus must gain energy for the nucleons to move apart from each other. Nucleons are attracted to each other by the strong nuclear force. In theoretical nuclear physics, the nuclear binding energy is considered a negative number. In this context it represents the energy of the nucleus relative to the energy of the constituent nucleons when they are infinitely far apart. Both the experimental and theoretical views are equivalent, with slightly different emphasis on what the binding energy means.
The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons. The difference in mass can be calculated by the Einstein equation, E = mc², where E is the nuclear binding energy, c is the speed of light, and m is the difference in mass. This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed.
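As a worked example, the mass defect and binding energy of helium-4 follow from tabulated particle masses (values rounded here):

```python
# Mass defect of helium-4 from (rounded) particle masses.
M_PROTON  = 1.007276   # u
M_NEUTRON = 1.008665   # u
M_HE4     = 4.001506   # u, bare helium-4 nucleus
U_TO_MEV  = 931.494    # energy equivalent of 1 u, in MeV

mass_parts  = 2 * M_PROTON + 2 * M_NEUTRON
defect_u    = mass_parts - M_HE4      # the 'missing mass'
binding_mev = defect_u * U_TO_MEV     # E = mc^2, with c^2 folded into the unit

print(f"mass defect:    {defect_u:.6f} u ({defect_u / mass_parts:.2%} of the parts)")
print(f"binding energy: {binding_mev:.1f} MeV total, {binding_mev / 4:.2f} MeV per nucleon")
# -> about 0.030 u (~0.75%), 28.3 MeV, ~7.07 MeV per nucleon
```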
The term "nuclear binding energy" may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. If new binding energy is available when light nuclei fuse (nuclear fusion), or when heavy nuclei split (nuclear fission), either process can result in release of this binding energy. This energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as gamma rays and the kinetic energy of various ejected particles (nuclear fission products).
These nuclear binding energies and forces are on the order of one million times greater than the electron binding energies of light atoms like hydrogen.
Introduction
Nuclear energy
An absorption or release of nuclear energy occurs in nuclear reactions or radioactive decay; those that absorb energy are called endothermic reactions and those that release energy are exothermic reactions. Energy is consumed or released because of differences in the nuclear binding energy between the incoming and outgoing products of the nuclear transmutation.
The best-known classes of exothermic nuclear transmutations are nuclear fission and nuclear fusion. Nuclear energy may be released by fission, when heavy atomic nuclei (like uranium and plutonium) are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is later radiated from the surface, a type of stellar nucleosynthesis. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy, emitted as heat.
In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation.
The nuclear force
Electrons and nuclei are kept together by electrostatic attraction (negative attracts positive). Furthermore, electrons are sometimes shared by neighboring atoms or transferred to them (by processes of quantum physics); this link between atoms is referred to as a chemical bond and is responsible for the formation of all chemical compounds.
The electric force does not hold nuclei together, because all protons carry a positive charge and repel each other. If two protons were touching, their repulsion force would be almost 40 newtons. Because each of the neutrons carries total charge zero, a proton could electrically attract a neutron if the proton could induce the neutron to become electrically polarized. However, having the neutron between two protons (so their mutual repulsion decreases to 10 N) would attract the neutron only for an electric quadrupole arrangement. Higher multipoles, needed to satisfy more protons, cause weaker attraction, and quickly become implausible.
After the proton and neutron magnetic moments were measured and verified, it was apparent that their magnetic forces might be 20 or 30 newtons, attractive if properly oriented. A pair of protons would do about 10⁻¹³ joules of work to each other as they approach – that is, they would need to release energy of 0.5 MeV in order to stick together. On the other hand, once a pair of nucleons magnetically stick, their external fields are greatly reduced, so it is difficult for many nucleons to accumulate much magnetic energy.
Therefore, another force, called the nuclear force (or residual strong force) holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance.
The fact that nuclei do not clump together (fuse) under normal conditions suggests that the nuclear force must be weaker than the electric repulsion at larger distances, but stronger at close range. Therefore, it has short-range characteristics. An analogy to the nuclear force is the force between two small magnets: magnets are very difficult to separate when stuck together, but once pulled a short distance apart, the force between them drops almost to zero.
Unlike gravity or electrical forces, the nuclear force is effective only at very short distances. At greater distances, the electrostatic force dominates: the protons repel each other because they are positively charged, and like charges repel. For that reason, the protons forming the nuclei of ordinary hydrogen—for instance, in a balloon filled with hydrogen—do not combine to form helium (a process that also would require some protons to combine with electrons and become neutrons). They cannot get close enough for the nuclear force, which attracts them to each other, to become important. Only under conditions of extreme pressure and temperature (for example, within the core of a star), can such a process take place.
Physics of nuclei
There are around 94 naturally occurring elements on Earth. The atoms of each element have a nucleus containing a specific number of protons (always the same number for a given element), and some number of neutrons, which is often roughly a similar number. Two atoms of the same element having different numbers of neutrons are known as isotopes of the element. Different isotopes may have different properties – for example one might be stable and another might be unstable, and gradually undergo radioactive decay to become another element.
The hydrogen nucleus contains just one proton. Its isotope deuterium, or heavy hydrogen, contains a proton and a neutron. The most common isotope of helium contains two protons and two neutrons, and those of carbon, nitrogen and oxygen – six, seven and eight of each particle, respectively. However, a helium nucleus weighs less than the sum of the weights of the two heavy hydrogen nuclei which combine to make it. The same is true for carbon, nitrogen and oxygen. For example, the carbon nucleus is slightly lighter than three helium nuclei, which can combine to make a carbon nucleus. This difference is known as the mass defect.
Mass defect
Mass defect (also called "mass deficit") is the difference between the mass of an object and the sum of the masses of its constituent particles. Discovered by Albert Einstein in 1905, it can be explained using his formula E = mc², which describes the equivalence of energy and mass. The decrease in mass is equal to the energy emitted in the reaction of an atom's creation divided by c². By this formula, adding energy also increases mass (both weight and inertia), whereas removing energy decreases mass. For example, a helium atom containing four nucleons has a mass about 0.8% less than the total mass of four hydrogen atoms (each containing one nucleon). The helium nucleus has four nucleons bound together, and the binding energy which holds them together is, in effect, the missing 0.8% of mass.
Energy can be released by assembling nuclei from lighter components; this holds for nuclei lighter than iron/nickel, which therefore release energy when they fuse. For heavier nuclei, more energy is needed to bind them, and that energy may be released by breaking them up into fragments (known as nuclear fission). Nuclear power is generated at present by breaking up uranium nuclei in nuclear power reactors, and capturing the released energy as heat, which is converted to electricity.
As a rule, very light elements can fuse comparatively easily, and very heavy elements can break up via fission very easily; elements in the middle are more stable and it is difficult to make them undergo either fusion or fission in an environment such as a laboratory.
The reason the trend reverses after iron is the growing positive charge of the nuclei, which tends to force nuclei to break up. It is resisted by the strong nuclear interaction, which holds nucleons together. The electric force may be weaker than the strong nuclear force, but the strong force has a much more limited range: in an iron nucleus, each proton repels the other 25 protons, while the nuclear force only binds close neighbors. So for larger nuclei, the electrostatic forces tend to dominate and the nucleus will tend over time to break up.
As nuclei grow bigger still, this disruptive effect becomes steadily more significant. By the time polonium is reached (84 protons), nuclei can no longer accommodate their large positive charge, but emit their excess protons quite rapidly in the process of alpha radioactivity—the emission of helium nuclei, each containing two protons and two neutrons. (Helium nuclei are an especially stable combination.) Because of this process, nuclei with more than 94 protons are not found naturally on Earth (see periodic table). The isotopes beyond uranium (atomic number 92) with the longest half-lives are plutonium-244 (80 million years) and curium-247 (16 million years).
Nuclear reactions in the Sun
The nuclear fusion process works as follows: five billion years ago, the new Sun formed when gravity pulled together a vast cloud of hydrogen and dust, from which the Earth and other planets also arose. The gravitational pull released energy and heated the early Sun, much in the way Helmholtz proposed.
Thermal energy appears as the motion of atoms and molecules: the higher the temperature of a collection of particles, the greater is their velocity and the more violent are their collisions. When the temperature at the center of the newly formed Sun became great enough for collisions between hydrogen nuclei to overcome their electric repulsion, and bring them into the short range of the attractive nuclear force, nuclei began to stick together. When this began to happen, protons combined into deuterium and then helium, with some protons changing in the process to neutrons (plus positrons, positive electrons, which combine with electrons and annihilate into gamma-ray photons). This released nuclear energy now keeps up the high temperature of the Sun's core, and the heat also keeps the gas pressure high, keeping the Sun at its present size, and stopping gravity from compressing it any more. There is now a stable balance between gravity and pressure.
Different nuclear reactions may predominate at different stages of the Sun's existence, including the proton–proton reaction and the carbon–nitrogen cycle—which involves heavier nuclei, but whose final product is still the combination of protons to form helium.
A branch of physics, the study of controlled nuclear fusion, has tried since the 1950s to derive useful power from nuclear fusion reactions that combine small nuclei into bigger ones, typically to heat boilers, whose steam could turn turbines and produce electricity. No earthly laboratory can match one feature of the solar powerhouse: the great mass of the Sun, whose weight keeps the hot plasma compressed and confines the nuclear furnace to the Sun's core. Instead, physicists use strong magnetic fields to confine the plasma, and for fuel they use heavy forms of hydrogen, which burn more easily. Magnetic traps can be rather unstable, and any plasma hot enough and dense enough to undergo nuclear fusion tends to slip out of them after a short time. Even with ingenious tricks, the confinement in most cases lasts only a small fraction of a second.
Combining nuclei
Small nuclei that are larger than hydrogen can combine into bigger ones and release energy, but in combining such nuclei, the amount of energy released is much smaller compared to hydrogen fusion. The reason is that while the overall process releases energy from letting the nuclear attraction do its work, energy must first be injected to force together positively charged protons, which also repel each other with their electric charge.
For elements that weigh more than iron (a nucleus with 26 protons), the fusion process no longer releases energy. In even heavier nuclei energy is consumed, not released, by combining similarly sized nuclei. With such large nuclei, overcoming the electric repulsion (which affects all protons in the nucleus) requires more energy than is released by the nuclear attraction (which is effective mainly between close neighbors). Conversely, energy could actually be released by breaking apart nuclei heavier than iron.
With the nuclei of elements heavier than lead, the electric repulsion is so strong that some of them spontaneously eject positive fragments, usually nuclei of helium that form stable alpha particles. This spontaneous break-up is one of the forms of radioactivity exhibited by some nuclei.
Nuclei heavier than lead (except for bismuth, thorium, and uranium) spontaneously break up too quickly to appear in nature as primordial elements, though they can be produced artificially or as intermediates in the decay chains of heavier elements. Generally, the heavier the nuclei are, the faster they spontaneously decay.
Iron nuclei are the most stable nuclei (in particular iron-56), and the best sources of energy are therefore nuclei whose weights are as far removed from iron as possible. One can combine the lightest ones—nuclei of hydrogen (protons)—to form nuclei of helium, and that is how the Sun generates its energy. Alternatively, one can break up the heaviest ones—nuclei of uranium or plutonium—into smaller fragments, and that is what nuclear reactors do.
Nuclear binding energy
An example that illustrates nuclear binding energy is the nucleus of 12C (carbon-12), which contains 6 protons and 6 neutrons. The protons are all positively charged and repel each other, but the nuclear force overcomes the repulsion and causes them to stick together. The nuclear force is a close-range force (it is strongly attractive at a distance of 1.0 fm and becomes extremely small beyond a distance of 2.5 fm), and virtually no effect of this force is observed outside the nucleus. The nuclear force also pulls neutrons together, or neutrons and protons.
The energy of the nucleus is negative with regard to the energy of the particles pulled apart to infinite distance (just like the gravitational energy of planets of the Solar System), because energy must be utilized to split a nucleus into its individual protons and neutrons. Mass spectrometers have measured the masses of nuclei, which are always less than the sum of the masses of protons and neutrons that form them, and the difference—by the formula E = Δm c²—gives the binding energy of the nucleus.
Nuclear fusion
The binding energy of helium is the energy source of the Sun and of most stars. The Sun is composed of 74 percent hydrogen (measured by mass), an element having a nucleus consisting of a single proton. Energy is released in the Sun when 4 protons combine into a helium nucleus, a process in which two of them are also converted to neutrons.
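A rough check of the energy bookkeeping, using atomic masses (rounded) so that the annihilation of the two emitted positrons is included automatically:

```python
# Energy released per helium-4 nucleus formed from four protons.
M_H1     = 1.007825   # u, hydrogen-1 atomic mass
M_HE4    = 4.002602   # u, helium-4 atomic mass
U_TO_MEV = 931.494

released = (4 * M_H1 - M_HE4) * U_TO_MEV
print(f"energy released: {released:.1f} MeV")   # ~26.7 MeV per helium nucleus
```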
The conversion of protons to neutrons is the result of another nuclear force, known as the weak (nuclear) force. The weak force, like the strong force, has a short range, but is much weaker than the strong force. The weak force tries to make the number of neutrons and protons into the most energetically stable configuration. For nuclei containing less than 40 particles, these numbers are usually about equal. Protons and neutrons are closely related and are collectively known as nucleons. As the number of particles increases toward a maximum of about 209, the number of neutrons to maintain stability begins to outstrip the number of protons, until the ratio of neutrons to protons is about three to two.
The protons of hydrogen combine to helium only if they have enough velocity to overcome each other's mutual repulsion sufficiently to get within range of the strong nuclear attraction. This means that fusion only occurs within a very hot gas. Hydrogen hot enough for combining to helium requires an enormous pressure to keep it confined, but suitable conditions exist in the central regions of the Sun, where such pressure is provided by the enormous weight of the layers above the core, pressed inwards by the Sun's strong gravity. The process of combining protons to form helium is an example of nuclear fusion.
Producing helium from normal hydrogen would be practically impossible on Earth because of the difficulty in creating deuterium. Research is being undertaken on developing a process using deuterium and tritium. The Earth's oceans contain a large amount of deuterium that could be used, tritium can be made in the reactor itself from lithium, and the helium product does not harm the environment, so some consider nuclear fusion a good alternative to supply our energy needs. Experiments to carry out this form of fusion have so far only partially succeeded. Sufficiently hot deuterium and tritium must be confined. One technique is to use very strong magnetic fields, because charged particles (like those trapped in the Earth's radiation belt) are guided by magnetic field lines.
The binding energy maximum and ways to approach it by decay
In the main isotopes of light elements, such as carbon, nitrogen and oxygen, the most stable combination of neutrons and of protons occurs when the numbers are equal (this continues to element 20, calcium). However, in heavier nuclei, the disruptive energy of the protons increases, since they are confined to a tiny volume and repel each other. The energy of the strong force holding the nucleus together also increases, but at a slower rate, because inside the nucleus only nucleons close to each other are tightly bound, not ones more widely separated.
The net binding energy of a nucleus is that of the nuclear attraction, minus the disruptive energy of the electric force. As nuclei get heavier than helium, their net binding energy per nucleon (deduced from the difference in mass between the nucleus and the sum of masses of component nucleons) grows more and more slowly, reaching its peak at iron. As nucleons are added, the total nuclear binding energy always increases—but the total disruptive energy of electric forces (positive protons repelling other protons) also increases, and past iron, the second increase outweighs the first. Iron-56 (56Fe) is the most efficiently bound nucleus, meaning that it has the least average mass per nucleon. However, nickel-62 is the most tightly bound nucleus in terms of binding energy per nucleon. (Nickel-62's higher binding energy does not translate to a larger mean mass loss than that of 56Fe, because 62Ni has a slightly higher ratio of neutrons to protons than iron-56, and the presence of the heavier neutrons increases nickel-62's average mass per nucleon.)
To reduce the disruptive energy, the weak interaction allows the number of neutrons to exceed that of protons—for instance, the main isotope of iron has 26 protons and 30 neutrons. Isotopes also exist where the number of neutrons differs from the most stable number for that number of nucleons. If changing one proton into a neutron or one neutron into a proton increases the stability (lowering the mass), then this will happen through beta decay, meaning the nuclide will be radioactive.
The two methods for this conversion are mediated by the weak force, and involve types of beta decay. In the simplest beta decay, neutrons are converted to protons by emitting a negative electron and an antineutrino. This is always possible outside a nucleus because neutrons are more massive than protons by an equivalent of about 2.5 electrons. In the opposite process, which only happens within a nucleus, and not to free particles, a proton may become a neutron by ejecting a positron and an electron neutrino. This is permitted if enough energy is available between parent and daughter nuclides to do this (the required energy difference is equal to 1.022 MeV, the mass of two electrons). If the mass difference between parent and daughter is less than this, a proton-rich nucleus may still convert protons to neutrons by the process of electron capture, in which a proton simply captures one of the atom's K-shell electrons, emits a neutrino, and becomes a neutron.
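These mass relations are easy to verify numerically. The following minimal Python sketch, using rounded CODATA rest masses (assumed here, not given in the text), checks that the neutron–proton mass difference is about 2.5 electron masses and that two electron masses correspond to 1.022 MeV:

# Quick numerical check of the mass differences quoted above,
# using rounded CODATA values in MeV/c^2 (assumed reference values).
M_NEUTRON = 939.565    # neutron rest mass, MeV/c^2
M_PROTON = 938.272     # proton rest mass, MeV/c^2
M_ELECTRON = 0.511     # electron rest mass, MeV/c^2

# Neutron-proton mass difference, expressed in electron masses:
ratio = (M_NEUTRON - M_PROTON) / M_ELECTRON
print(f"n - p mass difference = {M_NEUTRON - M_PROTON:.3f} MeV "
      f"= {ratio:.2f} electron masses")   # ~2.53, i.e. "about 2.5 electrons"

# Minimum parent-daughter energy difference for positron emission:
print(f"positron-emission threshold = {2 * M_ELECTRON:.3f} MeV")  # 1.022 MeV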
Among the heaviest nuclei, starting with tellurium nuclei (element 52) containing 104 or more nucleons, electric forces may be so destabilizing that entire chunks of the nucleus may be ejected, usually as alpha particles, which consist of two protons and two neutrons (alpha particles are fast helium nuclei). (Beryllium-8 also decays, very quickly, into two alpha particles.) This type of decay becomes more and more probable as the atomic weight rises past 104.
The curve of binding energy is a graph that plots the binding energy per nucleon against atomic mass. This curve has its main peak at iron and nickel and then slowly decreases again, and also a narrow isolated peak at helium, which is more stable than other low-mass nuclides. The heaviest nuclei found in more than trace quantities in nature, those of uranium-238 (238U), are unstable, but having a half-life of 4.5 billion years, close to the age of the Earth, they are still relatively abundant; they (and other nuclei heavier than helium) formed in stellar evolution events like supernova explosions that preceded the formation of the Solar System. The most common isotope of thorium, 232Th, also undergoes alpha particle emission, and its half-life (the time over which half of a number of atoms decays) is even longer, about three times as long. In each of these, radioactive decay produces daughter isotopes that are also unstable, starting a chain of decays that ends in some stable isotope of lead.
Calculation of nuclear binding energy
Calculation can be employed to determine the nuclear binding energy of nuclei. The calculation involves determining the nuclear mass defect, converting it into energy, and expressing the result as energy per mole of atoms, or as energy per nucleon.
Conversion of nuclear mass defect into energy
Nuclear mass defect is defined as the difference between the nuclear mass and the sum of the masses of the constituent nucleons. It is given by
Δm = Z mp + N mn − M
where:
Z is the proton number (atomic number).
A is the nucleon number (mass number).
mp is the mass of proton.
mn is the mass of neutron.
M is the nuclear mass.
N is the neutron number.
The nuclear mass defect is usually converted into nuclear binding energy, which is the minimum energy required to disassemble the nucleus into its constituent nucleons. This conversion is done with the mass–energy equivalence: E = Δm c². The result is commonly expressed as energy per mole of atoms or as energy per nucleon.
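A minimal Python sketch of this calculation, here for helium-4 (Z = 2, N = 2), follows. The masses in daltons are rounded reference values assumed for illustration, and 1 Da = 931.494 MeV/c²:

# Mass defect converted to binding energy, as described above.
M_PROTON = 1.007276   # Da (assumed rounded value)
M_NEUTRON = 1.008665  # Da (assumed rounded value)
DA_TO_MEV = 931.494   # energy equivalent of 1 Da, MeV

def binding_energy_mev(Z, N, nuclear_mass_da):
    """Mass defect (Da) converted to nuclear binding energy (MeV)."""
    mass_defect = Z * M_PROTON + N * M_NEUTRON - nuclear_mass_da
    return mass_defect * DA_TO_MEV

# Nuclear (not atomic) mass of helium-4, approximately 4.001506 Da:
eb = binding_energy_mev(2, 2, 4.001506)
print(f"4He binding energy ≈ {eb:.1f} MeV "
      f"({eb / 4:.2f} MeV per nucleon)")  # ≈ 28.3 MeV, ≈ 7.07 MeV/nucleon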
Fission and fusion
Nuclear energy is released by the splitting (fission) or merging (fusion) of atomic nuclei. The conversion of nuclear mass to energy (removing the released energy removes a corresponding amount of mass) is consistent with the mass–energy equivalence formula:
ΔE = Δm c²,
where
ΔE = energy release,
Δm = mass defect,
and c = the speed of light in vacuum.
Nuclear energy was first discovered by French physicist Henri Becquerel in 1896, when he found that photographic plates stored in the dark near uranium were blackened like X-ray plates (X-rays had recently been discovered in 1895).
Nickel-62 has the highest binding energy per nucleon of any isotope. If an atom of lower average binding energy per nucleon is changed into two atoms of higher average binding energy per nucleon, energy is emitted. (The average here is the weighted average.) Also, if two atoms of lower average binding energy fuse into an atom of higher average binding energy, energy is emitted. The binding energy curve shows that fusion, or combining, of hydrogen nuclei to form heavier atoms releases energy, as does fission of uranium, the breaking up of a larger nucleus into smaller parts.
Nuclear energy is released by three exoenergetic (or exothermic) processes:
Radioactive decay, where a neutron or proton in the radioactive nucleus decays spontaneously by emitting either particles, electromagnetic radiation (gamma rays), or both. Note that for radioactive decay, it is not strictly necessary for the binding energy to increase. What is strictly necessary is that the mass decrease. If a neutron turns into a proton and the energy of the decay is less than 0.782343 MeV, the difference between the masses of the neutron and proton multiplied by the speed of light squared, (such as rubidium-87 decaying to strontium-87), the average binding energy per nucleon will actually decrease.
Fusion, two atomic nuclei fuse together to form a heavier nucleus
Fission, the breaking of a heavy nucleus into two (or more rarely three) lighter nuclei, and some neutrons
The energy-producing nuclear interactions of light elements require some clarification. Frequently, all light-element energy-producing nuclear interactions are classified as fusion; however, by the definition given above, fusion requires that the products include a nucleus that is heavier than the reactants. Light elements can undergo energy-producing nuclear interactions by fusion or fission. All energy-producing nuclear interactions between two hydrogen isotopes, and between hydrogen and helium-3, are fusion, as the products of these interactions include a heavier nucleus. However, the energy-producing nuclear interaction of a neutron with lithium-6 produces hydrogen-3 and helium-4, each a lighter nucleus. By the definition above, this nuclear interaction is fission, not fusion. When fission is caused by a neutron, as in this case, it is called induced fission.
Binding energy for atoms
The binding energy of an atom (including its electrons) is not exactly the same as the binding energy of the atom's nucleus. The measured mass deficits of isotopes are always listed as mass deficits of the neutral atoms of that isotope. As a consequence, the listed mass deficits are not a measure of the stability or binding energy of isolated nuclei, but of the whole atoms. There is a very practical reason for this, namely that it is very hard to totally ionize heavy elements, i.e. strip them of all of their electrons.
This practice is useful for other reasons, too: stripping all the electrons from a heavy unstable nucleus (thus producing a bare nucleus) changes the lifetime of the nucleus, or the nucleus of a stable neutral atom can likewise become unstable after stripping, indicating that the nucleus cannot be treated independently. Examples of this have been shown in bound-state β decay experiments performed at the GSI heavy ion accelerator.
This is also evident from phenomena like electron capture. Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not orbit in a strict sense, but has a non-vanishing probability of being located inside the nucleus).
A nuclear decay happens to the nucleus, meaning that properties ascribed to the nucleus change in the event. In the field of physics the concept of "mass deficit" as a measure for "binding energy" means "mass deficit of the neutral atom" (not just the nucleus) and is a measure for stability of the whole atom.
Nuclear binding energy curve
In the periodic table of elements, the series of light elements from hydrogen up to sodium is observed to exhibit generally increasing binding energy per nucleon as the atomic mass increases. This increase is generated by increasing forces per nucleon in the nucleus, as each additional nucleon is attracted by other nearby nucleons, and thus is more tightly bound to the whole. Helium-4 and oxygen-16 are particularly stable exceptions to the trend. This is because they are doubly magic, meaning their protons and neutrons both fill their respective nuclear shells.
The region of increasing binding energy is followed by a region of relative stability (saturation) in the sequence from about mass 30 through about mass 90. In this region, the nucleus has become large enough that nuclear forces no longer completely extend efficiently across its width. Attractive nuclear forces in this region, as atomic mass increases, are nearly balanced by repellent electromagnetic forces between protons, as the atomic number increases.
Finally, in the heavier elements, there is a gradual decrease in binding energy per nucleon as atomic number increases. In this region of nuclear size, electromagnetic repulsive forces are beginning to overcome the strong nuclear force attraction.
At the peak of binding energy, nickel-62 is the most tightly bound nucleus (per nucleon), followed by iron-58 and iron-56. This is the approximate basic reason why iron and nickel are very common metals in planetary cores, since they are produced profusely as end products in supernovae and in the final stages of silicon burning in stars. However, it is not binding energy per nucleon (as defined above) that controls exactly which nuclei are made, because within stars, neutrons and protons can inter-convert to release even more energy per generic nucleon. In fact, it has been argued that photodisintegration of 62Ni to form 56Fe may be energetically possible in an extremely hot star core, due to this beta-decay conversion of neutrons to protons. This favors the creation of 56Fe, the nuclide with the lowest mass per nucleon. However, at high temperatures not all matter will be in the lowest energy state. This energetic maximum should also hold for ambient conditions, say standard temperature and pressure, for neutral condensed matter consisting of 56Fe atoms—however, in these conditions nuclei of atoms are inhibited from fusing into the most stable and lowest-energy state of matter.
Elements with high binding energy per nucleon, like iron and nickel, cannot undergo fission, but they can theoretically undergo fusion with hydrogen, deuterium, helium, and carbon, for instance:
62Ni + 12C → 74Se   Q = 5.467 MeV
It is generally believed that iron-56 is more common than nickel isotopes in the universe for mechanistic reasons, because its unstable progenitor nickel-56 is copiously made by the staged build-up of 14 helium nuclei inside supernovae, and it has no time to decay to iron before being released into the interstellar medium in a matter of a few minutes as the supernova explodes. Nickel-56 then decays to cobalt-56 within a few weeks, and this radioisotope finally decays to iron-56 with a half-life of about 77.3 days. The radioactive-decay-powered light curve of such a process has been observed in type II supernovae, such as SN 1987A. In a star, there are no good ways to create nickel-62 by alpha-addition processes, or else there would presumably be more of this highly stable nuclide in the universe.
Binding energy and nuclide masses
The fact that the maximum binding energy is found in medium-sized nuclei is a consequence of the trade-off in the effects of two opposing forces that have different range characteristics. The attractive nuclear force (strong nuclear force), which binds protons and neutrons equally to each other, has a limited range due to a rapid exponential decrease in this force with distance. However, the repelling electromagnetic force, which acts between protons to force nuclei apart, falls off with distance much more slowly (as the inverse square of distance). For nuclei larger than about four nucleons in diameter, the additional repelling force of additional protons more than offsets any binding energy that results between further added nucleons as a result of additional strong force interactions. Such nuclei become increasingly less tightly bound as their size increases, though most of them are still stable. Finally, nuclei containing more than 209 nucleons (larger than about 6 nucleons in diameter) are all too large to be stable, and are subject to spontaneous decay to smaller nuclei.
Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). The nuclear fission of a few light elements (such as lithium) occurs because helium-4 is a product and a more tightly bound element than slightly heavier elements. Both processes produce energy because the sum of the masses of the products is less than the sum of the masses of the reacting nuclei.
As seen above in the example of deuterium, nuclear binding energies are large enough that they may be easily measured as fractional mass deficits, according to the equivalence of mass and energy. The atomic binding energy is simply the amount of energy (and mass) released, when a collection of free nucleons are joined to form a nucleus.
Nuclear binding energy can be computed from the difference in mass of a nucleus, and the sum of the masses of the number of free neutrons and protons that make up the nucleus. Once this mass difference, called the mass defect or mass deficiency, is known, Einstein's mass–energy equivalence formula can be used to compute the binding energy of any nucleus. Early nuclear physicists used to refer to computing this value as a "packing fraction" calculation.
For example, the dalton (1 Da) is defined as 1/12 of the mass of a 12C atom—but the atomic mass of a 1H atom (which is a proton plus electron) is 1.007825 Da, so each nucleon in 12C has lost, on average, about 0.8% of its mass in the form of binding energy.
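This 0.8% figure can be checked directly; the neutron mass below is an assumed rounded reference value, while 12C is exactly 12 Da by definition:

# Verifying the ~0.8% figure: build carbon-12 from six hydrogen-1 atoms
# and six free neutrons, using atomic masses in daltons.
M_H1 = 1.007825       # Da, hydrogen-1 atom (proton + electron)
M_NEUTRON = 1.008665  # Da (assumed rounded value)
M_C12 = 12.0          # Da, exact by definition of the dalton

mass_defect = 6 * M_H1 + 6 * M_NEUTRON - M_C12
fraction = mass_defect / M_C12
print(f"mass defect = {mass_defect:.6f} Da "
      f"({fraction:.2%} of the atom's mass)")  # ≈ 0.82%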
Semiempirical formula for nuclear binding energy
For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula for the binding energy per nucleon (EB/A) is:
EB/A = a − b/A^(1/3) − c Z²/A^(4/3) − d (N − Z)²/A² ± e/A^(7/4)
where the coefficients are given approximately by: a = 14.0 MeV; b = 13.0 MeV; c = 0.585 MeV; d = 19.3 MeV; e = 33 MeV.
The first term, a, is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The term −b/A^(1/3) is a surface-tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest for light nuclei. The term −c Z²/A^(4/3) is the Coulomb electrostatic repulsion; this becomes more important as Z increases. The symmetry correction term −d (N − Z)²/A² takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n–p interaction in a nucleus is stronger than either the n–n or p–p interaction. The pairing term ±e/A^(7/4) is purely empirical; it is + for even–even nuclei and − for odd–odd nuclei. When A is odd, the pairing term is identically zero.
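A direct transcription of this formula into Python, treating the coefficient values above as approximate fitted constants rather than authoritative figures, reproduces the familiar value of roughly 8.7 MeV per nucleon near the iron peak:

def semf_binding_per_nucleon(Z, N):
    """Semi-empirical binding energy per nucleon in MeV.

    Coefficients are approximate fitted values, assumed for illustration.
    """
    A = Z + N
    a, b, c, d, e = 14.0, 13.0, 0.585, 19.3, 33.0  # MeV
    # Pairing term: + for even-even, - for odd-odd, zero for odd A.
    if A % 2 == 1:
        pairing = 0.0
    elif Z % 2 == 0:
        pairing = +e / A**(7/4)
    else:
        pairing = -e / A**(7/4)
    return (a
            - b / A**(1/3)            # surface term
            - c * Z**2 / A**(4/3)     # Coulomb repulsion
            - d * (N - Z)**2 / A**2   # symmetry correction
            + pairing)

print(f"56Fe: {semf_binding_per_nucleon(26, 30):.2f} MeV/nucleon")  # ≈ 8.7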
Example values deduced from experimentally measured atom nuclide masses
The following table lists some binding energies and mass defect values. Notice also that we use 1 Da = 931.494 MeV/c². To calculate the binding energy we use the formula Z (mp + me) + N mn − mnuclide, where Z denotes the number of protons in the nuclide and N the number of neutrons. We take mp = 938.272 MeV/c², me = 0.511 MeV/c², and mn = 939.565 MeV/c². The letter A denotes the sum of Z and N (the number of nucleons in the nuclide). If we assume the reference nucleon has the mass of a neutron (so that all "total" binding energies calculated are maximal), we could define the total binding energy as the difference between the mass of the nucleus and the mass of a collection of A free neutrons. In other words, it would be (Z + N) mn − mnuclide. The "total binding energy per nucleon" would be this value divided by A.
56Fe has the lowest nucleon-specific mass of the four nuclides listed in this table, but this does not imply it is the strongest bound atom per hadron, unless the choice of beginning hadrons is completely free. Iron releases the largest energy if any 56 nucleons are allowed to build a nuclide—changing one to another if necessary. The highest binding energy per hadron, with the hadrons starting as the same number of protons Z and total nucleons A as in the bound nucleus, is 62Ni. Thus, the true absolute value of the total binding energy of a nucleus depends on what we are allowed to construct the nucleus out of. If all nuclei of mass number A were to be allowed to be constructed of A neutrons, then 56Fe would release the most energy per nucleon, since it has a larger fraction of protons than 62Ni. However, if nuclei are required to be constructed of only the same number of protons and neutrons that they contain, then nickel-62 is the most tightly bound nucleus, per nucleon.
In the table above it can be seen that the decay of a neutron, as well as the transformation of tritium into helium-3, releases energy; hence, each manifests a more strongly bound new state when measured against the mass of an equal number of neutrons (and also a lighter state per number of total hadrons). Such reactions are not driven by changes in binding energies as calculated from previously fixed N and Z numbers of neutrons and protons, but rather by decreases in the total mass of the nuclide per nucleon with the reaction. (Note that the binding energy given above for hydrogen-1 is the atomic binding energy, not the nuclear binding energy, which would be zero.)
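The two bookkeeping conventions described above can be made concrete with a short calculation. The atomic masses below are rounded reference values assumed for illustration; building from Z hydrogen atoms absorbs the electron masses consistently:

# Comparing the two conventions for "binding energy per nucleon".
M_H1 = 1.007825       # Da, hydrogen-1 atom (assumed rounded value)
M_NEUTRON = 1.008665  # Da (assumed rounded value)
DA_TO_MEV = 931.494

nuclides = {"56Fe": (26, 30, 55.934936), "62Ni": (28, 34, 61.928345)}

for name, (Z, N, mass) in nuclides.items():
    A = Z + N
    # Convention 1: build from Z hydrogen atoms and N free neutrons.
    eb = (Z * M_H1 + N * M_NEUTRON - mass) * DA_TO_MEV / A
    # Convention 2: build from A free neutrons (the "maximal" reference).
    eb_n = (A * M_NEUTRON - mass) * DA_TO_MEV / A
    print(f"{name}: {eb:.4f} MeV/nucleon (from p+n), "
          f"{eb_n:.4f} MeV/nucleon (from neutrons only)")

# 62Ni comes out ahead in the first convention (~8.795 vs ~8.790 MeV),
# while 56Fe comes out ahead in the neutron-only convention, exactly
# as the paragraph above describes.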
| Physical sciences | Nuclear physics | Physics |
9415677 | https://en.wikipedia.org/wiki/Muteness | Muteness | In human development, muteness or mutism is defined as an absence of speech, with or without an ability to hear the speech of others. Mutism is typically understood as a person's inability to speak, and commonly observed by their family members, caregivers, teachers, doctors or speech and language pathologists. It may not be a permanent condition, as muteness can be caused or manifest due to several different phenomena, such as physiological injury, illness, medical side effects, psychological trauma, developmental disorders, or neurological disorders. A specific physical disability or communication disorder can be more easily diagnosed. Loss of previously normal speech (aphasia) can be due to accidents, disease, or surgical complication; it is rarely for psychological reasons.
Treatment or management also varies by cause, and this can often be determined after a speech assessment. Treatment can sometimes restore speech. If not, a range of assistive and augmentative communication devices are available.
Biological causes
Biological causes of mutism may stem from several different sources. One cause of muteness may be problems with the physiology involved in speech, for example, the mouth or tongue. Mutism may be due to apraxia, that is, problems with coordination of muscles involved in speech. Another cause may be a medical condition impacting the physical structures involved in speech, for example, loss of voice due to the injury, paralysis, or illness of the larynx. Anarthria is a severe form of dysarthria, in which the coordination of movements of the mouth and tongue or the conscious coordination of the lungs are damaged.
Neurological damage due to stroke may cause loss or impairment of speech, termed aphasia. Neurological damage or problems with development of the area of the brain involved in speech production, Broca's area, may cause muteness. Trauma or injury to Broca's area, located in the left inferior frontal cortex of the brain, can cause muteness. Muteness may follow brain surgery. For example, there is a spectrum of possible neurobehavioural deficits in the posterior fossa syndrome in children following cerebellar tumor surgery.
Psychological causes
When children do not speak, psychological problems or emotional stress, such as anxiety, may be involved. Children may not speak due to selective mutism. Selective mutism is a condition in which the child speaks only in certain situations or with certain people, such as close family members. Assessment is needed to rule out possible illness or other conditions and to determine treatment. Prevalence is low, but not as rare as once thought. Selective mutism should not be confused with a child who does not speak and cannot speak due to physical disabilities. It is common for symptoms to occur before the age of five. Not all children express the same symptoms.
Selective mutism may occur in conjunction with autism spectrum disorder or other diagnoses. Differential diagnosis between selective mutism and language delay associated with autism or other disorders is needed to determine appropriate treatment.
Adults who previously had speech and subsequently ceased talking may not speak for psychological or emotional reasons, though this is rare as a cause for adults. Absence or paucity of speech in adults may also be associated with specific psychiatric disorders.
Developmental and neurological causes
Absence of speech in children may involve communication disorders or language delays. Communication disorders or developmental language delays may occur for several different reasons.
Language delays may be associated with other developmental delays. For example, children with Down syndrome often have impaired language and speech.
Children with autism, categorized as a neurodevelopmental disorder in the DSM-5, often demonstrate language delays.
Treatment
Evaluation of children with language delays is necessary to determine whether the language delay was caused by another condition. Examples of such conditions are autism spectrum disorder, hearing loss and apraxia. The manner of treatment depends on the diagnosed condition. Language delays may impact expressive language, receptive language, or both. Communication disorders may impact articulation, fluency (stuttering) and other specified and unspecified communication disorders. For example, speech and language services may focus on the production of speech sounds for children with phonological challenges.
Intervention services and treatment programs have been specifically developed for autistic children with language delays. For example, pivotal response treatment is a well-established and researched intervention that includes family participation. Mark Sundberg's verbal behavior framework is another well-established assessment and treatment modality that is incorporated into many applied behavior analysis (ABA) early intervention treatment programs for young children with autism and communication challenges.
Treatment for absence of speech due to apraxia, involves assessment, and, based on the assessment, occupational therapy, physical therapy, and/or speech therapy. Treatment for selective mutism involves assessment, counseling, and positive supports. Treatment for absence of speech in adults who previously had speech involves assessment to determine cause, including medical and surgery related causes, followed by appropriate treatment or management. Treatment may involve counseling, or rehabilitation services, depending upon cause of loss of speech.
Management
Management involves the use of appropriate assistive devices, called alternative and augmentative communications. Suitability and appropriateness of modality will depend on users' physical abilities and cognitive functioning.
Augmentative and alternative communication technology ranges from elaborated software for tablets to enable complex communication with an auditory component to less technologically involved strategies. For example, a common method involves the use of pictures that can be attached to velcro strips to create an accessible communication modality that does not require the cognitive or fine motor skills needed to manipulate a tablet.
Speech-generating devices can help people with speech deficiencies associated with medical conditions that affect speech, communication disorders that impair speech, or surgeries that have impacted speech. Speech-generating devices continue to improve in ease of use.
| Biology and health sciences | Disabilities | Health |
9422452 | https://en.wikipedia.org/wiki/Galactic%20tide | Galactic tide | A galactic tide is a tidal force experienced by objects subject to the gravitational field of a galaxy such as the Milky Way. Particular areas of interest concerning galactic tides include galactic collisions, the disruption of dwarf or satellite galaxies, and the Milky Way's tidal effect on the Oort cloud of the Solar System.
Effects on external galaxies
Galaxy collisions
Tidal forces are dependent on the gradient of a gravitational field, rather than its strength, and so tidal effects are usually limited to the immediate surroundings of a galaxy. Two large galaxies undergoing collisions or passing nearby each other will be subjected to very large tidal forces, often producing the most visually striking demonstrations of galactic tides in action.
Two interacting galaxies will rarely (if ever) collide head-on, and the tidal forces will distort each galaxy along an axis pointing roughly towards and away from its perturber. As the two galaxies briefly orbit each other, these distorted regions, which are pulled away from the main body of each galaxy, will be sheared by the galaxy's differential rotation and flung off into intergalactic space, forming tidal tails. Such tails are typically strongly curved. If a tail appears to be straight, it is probably being viewed edge-on. The stars and gas that comprise the tails will have been pulled from the easily distorted galactic discs (or other extremities) of one or both bodies, rather than the gravitationally bound galactic centers. Two very prominent examples of collisions producing tidal tails are the Mice Galaxies and the Antennae Galaxies.
Just as the Moon raises two water tides on opposite sides of the Earth, so a galactic tide produces two arms in its galactic companion. While a large tail is formed if the perturbed galaxy is equal to or less massive than its partner, if it is significantly more massive than the perturbing galaxy, then the trailing arm will be relatively minor, and the leading arm, sometimes called a bridge, will be more prominent. Tidal bridges are typically harder to distinguish than tidal tails: in the first instance, the bridge may be absorbed by the passing galaxy or the resulting merged galaxy, making it visible for a shorter duration than a typical large tail. Secondly, if one of the two galaxies is in the foreground, then the second galaxy — and the bridge between them — may be partially obscured. Together, these effects can make it hard to see where one galaxy ends and the next begins. Tidal loops, where a tail joins with its parent galaxy at both ends, are rarer still.
Satellite interactions
Because tidal effects are strongest in the immediate vicinity of a galaxy, satellite galaxies are particularly likely to be affected. Such an external force upon a satellite can produce ordered motions within it, leading to large-scale observable effects: the interior structure and motions of a dwarf satellite galaxy may be severely affected by a galactic tide, inducing rotation (as with the tides of the Earth's oceans) or an anomalous mass-to-luminosity ratio. Satellite galaxies can also be subjected to the same tidal stripping that occurs in galactic collisions, where stars and gas are torn from the extremities of a galaxy, possibly to be absorbed by its companion. The dwarf galaxy M32, a satellite galaxy of Andromeda, may have lost its spiral arms to tidal stripping, while a high star formation rate in the remaining core may be the result of tidally-induced motions of the remaining molecular clouds (because tidal forces can knead and compress the interstellar gas clouds inside galaxies, they induce large amounts of star formation in small satellites).
The stripping mechanism is the same as between two comparable galaxies, although its comparatively weak gravitational field ensures that only the satellite, not the host galaxy, is affected. If the satellite is very small compared to the host, the tidal debris tails produced are likely to be symmetric, and follow a very similar orbit, effectively tracing the satellite's path. However, if the satellite is reasonably large—typically over one ten thousandth the mass of its host—then the satellite's own gravity may affect the tails, breaking the symmetry and accelerating the tails in different directions. The resulting structure is dependent on both the mass and orbit of the satellite, and the mass and structure of the conjectured galactic halo around the host, and may provide a means of probing the dark matter potential of a galaxy such as the Milky Way.
Over many orbits of its parent galaxy, or if the orbit passes too close to it, a dwarf satellite may eventually be completely disrupted, to form a tidal stream of stars and gas wrapping around the larger body. It has been suggested that the extended discs of gas and stars around some galaxies, such as Andromeda, may be the result of the complete tidal disruption (and subsequent merger with the parent galaxy) of a dwarf satellite galaxy.
Effects on bodies within a galaxy
Tidal effects are also present within a galaxy, where their gradients are likely to be steepest. This can have consequences for the formation of stars and planetary systems. Typically, a star's gravity will dominate within its own system, with only the passage of other stars substantially affecting dynamics. However, at the outer reaches of the system, the star's gravity is weak and galactic tides may be significant. In the Solar System, the theoretical Oort cloud, source of most long-period comets, lies in this transitional region.
The Oort cloud is a vast shell surrounding the Solar System, possibly over a light-year in radius. Across such a vast distance, the gradient of the Milky Way's gravitational field plays a far more noticeable role. Because of this gradient, galactic tides may then deform an otherwise spherical Oort cloud, stretching the cloud in the direction of the galactic centre and compressing it along the other two axes, just as the Earth distends in response to the gravity of the Moon.
The Sun's gravity is sufficiently weak at such a distance that these small galactic perturbations are enough to dislodge some planetesimals from such distant orbits, sending them towards the Sun and planets by significantly reducing their perihelia. Such bodies, composed of a rock and ice mixture, become comets when subjected to the increased solar radiation present in the inner Solar System.
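To get a feel for these magnitudes, the tidal acceleration can be estimated with a crude point-mass model of the Galaxy, in which the radial stretching across a separation r is roughly 2GMr/R³. Every number below (enclosed galactic mass, the Sun's galactocentric distance, the comet's distance) is an assumed round figure for illustration only, and a realistic treatment would use the Galaxy's actual disc potential:

# Order-of-magnitude estimate of the galactic tidal acceleration on an
# Oort-cloud comet, using a crude point-mass model of the Galaxy.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_GALAXY = 2e41    # kg, enclosed galactic mass (assumed, ~1e11 solar masses)
R_SUN = 2.5e20     # m, Sun's distance from the galactic centre (~8 kpc)
M_SUN = 1.989e30   # kg
r_comet = 9.5e15   # m, comet's distance from the Sun (~1 light-year)

# Tidal stretching across the Sun-comet separation (~ 2GMr/R^3):
a_tide = 2 * G * M_GALAXY * r_comet / R_SUN**3
# The Sun's own pull on the comet at that distance, for comparison:
a_sun = G * M_SUN / r_comet**2
print(f"tidal ~ {a_tide:.1e} m/s^2, solar ~ {a_sun:.1e} m/s^2, "
      f"ratio ~ {a_tide / a_sun:.2f}")
# The tidal perturbation is already ~1% of the Sun's pull at one
# light-year, which is why it can dislodge such loosely bound bodies.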
It has been suggested that the galactic tide may also contribute to the formation of an Oort cloud, by increasing the perihelia of planetesimals with large aphelia. This shows that the effects of the galactic tide are quite complex, and depend heavily on the behaviour of individual objects within a planetary system. However, cumulatively, the effect can be quite significant; up to 90% of all comets originating from an Oort cloud may be the result of the galactic tide.
| Physical sciences | Basics_2 | Astronomy |
16094518 | https://en.wikipedia.org/wiki/Gauss%27s%20law%20for%20magnetism | Gauss's law for magnetism | In physics, Gauss's law for magnetism is one of the four Maxwell's equations that underlie classical electrodynamics. It states that the magnetic field has divergence equal to zero, in other words, that it is a solenoidal vector field. It is equivalent to the statement that magnetic monopoles do not exist. Rather than "magnetic charges", the basic entity for magnetism is the magnetic dipole. (If monopoles were ever found, the law would have to be modified, as elaborated below.)
Gauss's law for magnetism can be written in two forms, a differential form and an integral form. These forms are equivalent due to the divergence theorem.
The name "Gauss's law for magnetism" is not universally used. The law is also called "Absence of free magnetic poles". It is also referred to as the "transversality requirement" because for plane waves it requires that the polarization be transverse to the direction of propagation.
Differential form
The differential form for Gauss's law for magnetism is:
∇ · B = 0
where ∇ · denotes divergence, and B is the magnetic field.
Integral form
The integral form of Gauss's law for magnetism states:
ΦB = ∮S B · dA = 0
where S is any closed surface, ΦB is the magnetic flux through S, and dA is a vector whose magnitude is the area of an infinitesimal piece of the surface S, and whose direction is the outward-pointing surface normal (see surface integral for more details).
Gauss's law for magnetism thus states that the net magnetic flux through a closed surface equals zero.
The integral and differential forms of Gauss's law for magnetism are mathematically equivalent, due to the divergence theorem. That said, one or the other might be more convenient to use in a particular computation.
The law in this form states that for each volume element in space, there are exactly the same number of "magnetic field lines" entering and exiting the volume. No total "magnetic charge" can build up in any point in space. For example, the south pole of the magnet is exactly as strong as the north pole, and free-floating south poles without accompanying north poles (magnetic monopoles) are not allowed. In contrast, this is not true for other fields such as electric fields or gravitational fields, where total electric charge or mass can build up in a volume of space.
Vector potential
Due to the Helmholtz decomposition theorem, Gauss's law for magnetism is equivalent to the following statement: there exists a vector field A such that
B = ∇ × A.
The vector field A is called the magnetic vector potential.
Note that there is more than one possible A which satisfies this equation for a given B field. In fact, there are infinitely many: any field of the form ∇φ can be added onto A to get an alternative choice for A, by the identity (see Vector calculus identities):
∇ × (A + ∇φ) = ∇ × A + ∇ × (∇φ) = ∇ × A
since the curl of a gradient is the zero vector field:
∇ × (∇φ) = 0
This arbitrariness in is called gauge freedom.
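This solenoidal property is easy to check numerically: take any smooth vector potential A, form B = ∇ × A with finite differences, and verify that the discrete divergence of B vanishes. The sketch below uses NumPy on a periodic grid with an arbitrary, made-up A; with centred differences the cancellation is exact, so the result is at round-off level:

import numpy as np

# Verify numerically that B = curl(A) is divergence-free, for an
# arbitrary smooth vector potential A on a periodic grid.
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# An arbitrary (made-up) smooth vector potential:
Ax, Ay, Az = np.sin(Y) * np.cos(Z), np.sin(Z) * np.cos(X), np.sin(X) * np.cos(Y)

d = L / n
def ddx(f, axis):
    # Centred difference on a periodic grid.
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * d)

# B = curl A, component by component:
Bx = ddx(Az, 1) - ddx(Ay, 2)
By = ddx(Ax, 2) - ddx(Az, 0)
Bz = ddx(Ay, 0) - ddx(Ax, 1)

div_B = ddx(Bx, 0) + ddx(By, 1) + ddx(Bz, 2)
print(f"max |div B| = {np.abs(div_B).max():.2e}")  # ~0 (round-off level)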
Field lines
The magnetic field can be depicted via field lines (also called flux lines) – that is, a set of curves whose direction corresponds to the direction of , and whose areal density is proportional to the magnitude of . Gauss's law for magnetism is equivalent to the statement that the field lines have neither a beginning nor an end: Each one either forms a closed loop, winds around forever without ever quite joining back up to itself exactly, or extends to infinity.
Incorporating magnetic monopoles
If magnetic monopoles were to be discovered, then Gauss's law for magnetism would state that the divergence of B is proportional to the magnetic charge density ρm, analogous to Gauss's law for the electric field. For zero net magnetic charge density (ρm = 0), the original form of Gauss's magnetism law is the result.
The modified formula for use with the SI is not standard and depends on the choice of defining equation for the magnetic charge and current; in one variation, magnetic charge has units of webers, in another it has units of ampere-meters. In the weber convention the law reads ∇ · B = ρm, while in the ampere-meter convention it reads
∇ · B = μ0 ρm
where μ0 is the vacuum permeability.
So far, despite extensive searches, no confirmed examples of magnetic monopoles have been found, although certain papers report observations of monopole-like behavior.
History
The idea that magnetic monopoles do not exist originated in 1269 with Petrus Peregrinus de Maricourt. His work heavily influenced William Gilbert, whose 1600 work De Magnete spread the idea further. In the early 1800s Michael Faraday reintroduced this law, and it subsequently made its way into James Clerk Maxwell's electromagnetic field equations.
Numerical computation
In numerical computation, the numerical solution may not satisfy Gauss's law for magnetism due to the discretization errors of the numerical methods. However, in many cases, e.g., for magnetohydrodynamics, it is important to preserve Gauss's law for magnetism precisely (up to the machine precision). Violation of Gauss's law for magnetism on the discrete level will introduce a strong non-physical force. In view of energy conservation, violation of this condition leads to a non-conservative energy integral, and the error is proportional to the divergence of the magnetic field.
There are various ways to preserve Gauss's law for magnetism in numerical methods, including the divergence-cleaning techniques, the constrained transport method, potential-based formulations and de Rham complex based finite element methods where stable and structure-preserving algorithms are constructed on unstructured meshes with finite element differential forms.
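As an illustration of the divergence-cleaning idea mentioned above, the following sketch (a minimal projection method on a periodic grid, not any particular production scheme) removes the divergence of a field by solving a Poisson equation in Fourier space and subtracting the gradient of the result:

import numpy as np

def clean_divergence(Bx, By, Bz, L=2 * np.pi):
    """Project a periodic 3-D field onto its divergence-free part.

    Solves laplacian(phi) = div(B) spectrally, then returns B - grad(phi).
    Assumes a cubic n x n x n periodic box of side L.
    """
    n = Bx.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0  # avoid division by zero for the mean mode

    bx, by, bz = np.fft.fftn(Bx), np.fft.fftn(By), np.fft.fftn(Bz)
    div_hat = 1j * (KX * bx + KY * by + KZ * bz)
    phi_hat = div_hat / (-k2)          # Poisson solve: -k^2 phi = div B
    phi_hat[0, 0, 0] = 0.0

    # Subtract grad(phi) to remove the irrotational (monopole-like) part:
    bx -= 1j * KX * phi_hat
    by -= 1j * KY * phi_hat
    bz -= 1j * KZ * phi_hat
    return (np.fft.ifftn(bx).real, np.fft.ifftn(by).real, np.fft.ifftn(bz).real)

Applied after each update step of a scheme that drifts, this projection restores ∇ · B = 0 on the discrete grid up to machine precision.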
| Physical sciences | Electrodynamics | Physics |
16105186 | https://en.wikipedia.org/wiki/Electric%20car | Electric car | An electric car or electric vehicle (EV) is a passenger automobile that is propelled by an electric traction motor, using electrical energy as the primary source of propulsion. The term normally refers to a plug-in electric vehicle, typically a battery electric vehicle (BEV), which only uses energy stored in on-board battery packs, but broadly may also include plug-in hybrid electric vehicle (PHEV), range-extended electric vehicle (REEV) and fuel cell electric vehicle (FCEV), which can convert electric power from other fuels via a generator or a fuel cell.
Compared to conventional internal combustion engine (ICE) vehicles, electric cars are quieter, more responsive, have superior energy conversion efficiency and no exhaust emissions, as well as a lower overall carbon footprint from manufacturing to end of life (even when a power plant supplying the electricity might add to its emissions). Due to the superior efficiency of electric motors, electric cars also generate less waste heat, thus reducing the need for engine cooling systems that are often large, complicated and maintenance-prone in ICE vehicles.
The electric vehicle battery typically needs to be plugged into a mains electricity power supply for recharging in order to maximize the cruising range. Recharging an electric car can be done at different kinds of charging stations; these charging stations can be installed in private homes, parking garages and public areas. There is also research and development in, as well as deployment of, other technologies such as battery swapping and inductive charging. As the recharging infrastructure (especially fast chargers) is still in its infancy, range anxiety and charging time are frequent psychological obstacles to consumers' purchase of electric cars.
Worldwide, 14 million plug-in electric cars were sold in 2023, 18% of new car sales, up from 14% in 2022. Many countries have established government incentives for plug-in electric vehicles, tax credits, subsidies, and other non-monetary incentives while several countries have legislated to phase-out sales of fossil fuel cars, to reduce air pollution and limit climate change. EVs are expected to account for over one-fifth of global car sales in 2024.
China currently has the largest stock of electric vehicles in the world, with cumulative sales of 5.5 million units through December 2020, although these figures also include heavy-duty commercial vehicles such as buses, garbage trucks and sanitation vehicles, and only account for vehicles manufactured in China. In the United States and the European Union, as of 2020, the total cost of ownership of recent electric vehicles is cheaper than that of equivalent ICE cars, due to lower fueling and maintenance costs.
In 2023, the Tesla Model Y became the world's best selling car. The Tesla Model 3 became the world's all-time best-selling electric car in early 2020, and in June 2021 became the first electric car to pass 1 million global sales. Together with other emerging automotive technologies such as autonomous driving, connected vehicles and shared mobility, electric cars form a future mobility vision called Autonomous, Connected, Electric and Shared (ACES) Mobility.
Terminology
The term "electric car" typically refers specifically to battery electric vehicles (BEVs) or all-electric cars, a type of electric vehicle (EV) that has an onboard rechargeable battery pack that can be plugged in and charged from the electric grid, and the electricity stored on the vehicle is the only energy source that provide propulsion for the wheels. The term generally refers to highway-capable automobiles, but there are also low-speed electric vehicles with limitations in terms of weight, power, and maximum speed that are allowed to travel on certain public roads. The latter are classified as Neighborhood Electric Vehicles (NEVs) in the United States, and as electric motorised quadricycles in Europe.
History
Early developments
Robert Anderson is often credited with inventing the first electric car some time between 1832 and 1839.
The following experimental electric cars appeared during the 1880s:
In 1881, Gustave Trouvé presented an electric car driven by an improved Siemens motor at the Exposition internationale d'Électricité de Paris.
In 1884, Thomas Parker built an electric car in Wolverhampton, England using his own specially-designed high-capacity rechargeable batteries, although the only documentation is a photograph from 1895.
In 1888, the German Andreas Flocken designed the Flocken Elektrowagen, regarded by some as the first "real" electric car.
In 1890, William Morrison introduced the first electric car to the United States.
Electricity was among the preferred methods for automobile propulsion in the late 19th and early 20th centuries, providing a level of comfort and an ease of operation that could not be achieved by the gasoline-driven cars of the time. The electric vehicle fleet peaked at approximately 30,000 vehicles at the turn of the 20th century.
In 1897, electric cars first found commercial use as taxis in Britain and in the United States. In London, Walter Bersey's electric cabs were the first self-propelled vehicles for hire at a time when cabs were horse-drawn. In New York City, a fleet of twelve hansom cabs and one brougham, based on the design of the Electrobat II, formed part of a project funded in part by the Electric Storage Battery Company of Philadelphia. During the 20th century, the main manufacturers of electric vehicles in the United States included Anthony Electric, Baker, Columbia, Anderson, Edison, Riker, Milburn, Bailey Electric, and Detroit Electric. Their electric vehicles were quieter than gasoline-powered ones, and did not require gear changes.
Six electric cars held the land speed record in the 19th century. The last of them was the rocket-shaped La Jamais Contente, driven by Camille Jenatzy, which broke the 100 km/h barrier by reaching a top speed of 105.88 km/h in 1899.
Electric cars remained popular until advances in internal-combustion engine (ICE) cars and mass production of cheaper gasoline- and diesel-powered vehicles, especially the Ford Model T, led to a decline. ICE cars' much quicker refueling times and cheaper production costs made them more popular. However, a decisive moment came with the introduction in 1912 of the electric starter motor that replaced other, often laborious, methods of starting the ICE, such as hand-cranking.
Modern electric cars
In the early 1990s the California Air Resources Board (CARB) began a push for more fuel-efficient, lower-emissions vehicles, with the ultimate goal of a move to zero-emissions vehicles such as electric vehicles. In response, automakers developed electric models. These early cars were eventually withdrawn from the U.S. market, because of a massive campaign by the US automakers to discredit the idea of electric cars.
California electric-automaker Tesla Motors began development in 2004 of what would become the Tesla Roadster, first delivered to customers in 2008. The Roadster was the first highway-legal all-electric car to use lithium-ion battery cells, and the first production all-electric car to travel more than 320 km (200 miles) per charge.
Better Place, a venture-backed company based in Palo Alto, California, but steered from Israel, developed and sold battery charging and battery swapping services for electric cars. The company was publicly launched on 29 October 2007 and announced deployment of electric vehicle networks in Israel, Denmark and Hawaii in 2008 and 2009. The company planned to deploy the infrastructure on a country-by-country basis. In January 2008, Better Place announced a memorandum of understanding with Renault-Nissan to build the world's first Electric Recharge Grid Operator (ERGO) model for Israel. Under the agreement, Better Place would build the electric recharge grid and Renault-Nissan would provide the electric vehicles. Better Place filed for bankruptcy in Israel in May 2013. The company's financial difficulties were caused by mismanagement, wasteful efforts to establish toeholds and run pilots in too many countries, the high investment required to develop the charging and swapping infrastructure, and a market penetration far lower than originally predicted.
The Mitsubishi i-MiEV, launched in 2009 in Japan, was the first highway-legal series-production electric car, and also the first all-electric car to sell more than 10,000 units. Several months later, the Nissan Leaf, launched in 2010, surpassed the i-MiEV as the best-selling all-electric car at that time.
Starting in 2008, a renaissance in electric vehicle manufacturing occurred due to advances in batteries, and the desire to reduce greenhouse-gas emissions and to improve urban air quality. During the 2010s, the electric vehicle industry in China expanded rapidly with government support. Several automakers marked up the prices of their electric vehicles in anticipation of the subsidy adjustments, including Tesla, Volkswagen and Guangzhou-based GAC Group, which counts Fiat, Honda, Isuzu, Mitsubishi, and Toyota as foreign partners.
In July 2019 US-based Motor Trend magazine awarded the fully-electric Tesla Model S the title "ultimate car of the year". In March 2020 the Tesla Model 3 passed the Nissan Leaf to become the world's all-time best-selling electric car, with more than 500,000 units delivered; it reached the milestone of 1 million global sales in June 2021.
In the third quarter of 2021, the Alliance for Automotive Innovation reported that sales of electric vehicles had reached six percent of all US light-duty automotive sales, the highest volume of EV sales ever recorded at 187,000 vehicles. This was an 11% sales increase, as opposed to a 1.3% increase in gasoline- and diesel-powered units. The report indicated that California was the US leader in EV sales with nearly 40% of US purchases, followed by Florida (6%), Texas (5%) and New York (4.4%).
Electric companies from the Middle East have been designing electric cars. Oman's Mays Motors has developed the Mays i E1, which is expected to begin production in 2023. Built from carbon fibre, it has a range of about and can accelerate from in about 4 seconds. In Turkey, the EV company Togg is starting production of its electric vehicles. Batteries will be created in a joint venture with the Chinese company Farasis Energy.
Economics
Manufacturing cost
The most expensive part of an electric car is its battery. The price decreased from per kWh in 2010, to in 2017, to in 2019. When designing an electric vehicle, manufacturers may find that, for low production volumes, converting existing platforms may be cheaper, as development cost is lower; for higher production volumes, however, a dedicated platform may be preferred to optimize design and cost.
Total cost of ownership
In the EU and US, but not yet China, the total cost of ownership of recent electric cars is cheaper than that of equivalent gasoline cars, due to lower fueling and maintenance costs. A 2024 Consumer Reports analysis of 29 car brands found Tesla was the least expensive to maintain over a 10-year period; Tesla was the only all-electric brand included.
The greater the distance driven per year, the more likely the total cost of ownership for an electric car will be less than for an equivalent ICE car. The break-even distance varies by country depending on the taxes, subsidies, and different costs of energy. In some countries the comparison may vary by city, as a type of car may have different charges to enter different cities; for example, in England, London charges ICE cars more than Birmingham does.
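The break-even logic described here is simple enough to sketch in a few lines. All of the prices and per-kilometre rates below are made-up placeholder values, since the real figures vary by country, city and vehicle:

def breakeven_km(ev_price, ice_price, ev_cost_per_km, ice_cost_per_km):
    """Distance at which an EV's higher purchase price is repaid by its
    lower running cost. All inputs are illustrative placeholders."""
    extra_purchase = ev_price - ice_price
    saving_per_km = ice_cost_per_km - ev_cost_per_km
    return extra_purchase / saving_per_km

# Hypothetical numbers, not real market data:
km = breakeven_km(ev_price=40_000, ice_price=32_000,
                  ev_cost_per_km=0.05, ice_cost_per_km=0.13)
print(f"break-even after ~{km:,.0f} km")  # 8,000 / 0.08 = 100,000 km

Taxes, subsidies, and per-city charges such as the London example above would simply shift the purchase-price gap or the per-kilometre saving in this calculation.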
Purchase cost
Several national and local governments have established EV incentives to reduce the purchase price of electric cars and other plug-ins.
, the electric vehicle battery is more than a quarter of the total cost of the car. Purchase prices are expected to drop below those of new ICE cars when battery costs fall below US$100 per kWh, which is forecast to be in the mid-2020s.
Leasing or subscriptions are popular in some countries, depending somewhat on national taxes and subsidies, and end of lease cars are expanding the second hand market.
In a June 2022 report by AlixPartners, the cost of raw materials for an average EV rose from $3,381 in March 2020 to $8,255 in May 2022. The cost increase is attributed mainly to lithium, nickel, and cobalt.
Running costs
Electricity almost always costs less than gasoline per kilometer travelled, but the price of electricity often varies depending on where and what time of day the car is charged. Cost savings are also affected by the price of gasoline which can vary by location.
Environmental aspects
Electric cars have several benefits when replacing ICE cars, including a significant reduction of local air pollution, as they do not emit exhaust pollutants such as volatile organic compounds, hydrocarbons, carbon monoxide, ozone, lead, and various oxides of nitrogen. Similar to ICE vehicles, electric cars emit particulates from tyre and brake wear which may damage health, although regenerative braking in electric cars means less brake dust. More research is needed on non-exhaust particulates. The sourcing of fossil fuels (oil well to gasoline tank) causes further damage as well as use of resources during the extraction and refinement processes.
Depending on the production process and the source of the electricity to charge the vehicle, emissions may be partly shifted from cities to the plants that generate electricity and produce the car as well as to the transportation of material. The amount of carbon dioxide emitted depends on the emissions of the electricity source and the efficiency of the vehicle. For electricity from the grid, the life-cycle emissions vary depending on the proportion of coal-fired power, but are always less than ICE cars.
The cost of installing charging infrastructure has been estimated to be repaid by health cost savings in less than three years. According to a 2020 study, balancing lithium supply and demand for the rest of the century will require good recycling systems, vehicle-to-grid integration, and lower lithium intensity of transportation.
Some activists and journalists have raised concerns over the perceived lack of impact of electric cars in solving the climate change crisis compared to other, less popularized methods. These concerns have largely centered around the existence of less carbon-intensive and more efficient forms of transportation such as active mobility, mass transit and e-scooters and the continuation of a system designed for cars first.
Public opinion
A 2022 survey found that 33% of car buyers in Europe would opt for a petrol or diesel car when purchasing a new vehicle, while the other 67% of respondents would opt for a hybrid or electric version. More specifically, it found that electric cars are preferred by only 28% of Europeans, making them the least preferred type of vehicle, while 39% of Europeans tend to prefer hybrid vehicles.
Chinese car buyers, on the other hand, are the most likely to buy an electric car, at 44%. Among Americans, 38% would opt for a hybrid car, 33% would prefer petrol or diesel, and only 29% would go for an electric car.
Specifically for the EU, 47% of car buyers over 65 years old are likely to purchase a hybrid vehicle, while 31% of younger respondents do not consider hybrid vehicles a good option; 35% of these would rather opt for a petrol or diesel vehicle, and 24% for an electric car instead of a hybrid.
In the EU, only 13% of the total population do not plan on owning a vehicle at all.
Performance
Acceleration and drivetrain design
Electric motors can provide high power-to-weight ratios. Batteries can be designed to supply the electrical current needed to support these motors. Electric motors have a flat torque curve down to zero speed. For simplicity and reliability, most electric cars use fixed-ratio gearboxes and have no clutch.
Many electric cars have faster acceleration than average ICE cars, largely due to reduced drivetrain frictional losses and the more quickly-available torque of an electric motor. However, NEVs may have a low acceleration due to their relatively weak motors.
Electric vehicles can also use a motor in each wheel hub or next to the wheels; this is rare but claimed to be safer. Electric vehicles that lack an axle, differential, or transmission can have less drivetrain inertia. Some direct-current motor-equipped drag-racer EVs have simple two-speed manual transmissions to improve top speed. The concept electric supercar Rimac Concept One claims it can go from 0 to 60 mph (97 km/h) in 2.5 seconds. Tesla claims the upcoming Tesla Roadster will go from 0 to 60 mph in 1.9 seconds.
Energy efficiency
Internal combustion engines have thermodynamic limits on efficiency, expressed as a fraction of energy used to propel the vehicle compared to energy produced by burning fuel. Gasoline engines effectively use only 15% of the fuel energy content to move the vehicle or to power accessories; diesel engines can reach on-board efficiency of 20%; electric vehicles convert over 77% of the electrical energy from the grid to power at the wheels.
Electric motors are more efficient than internal combustion engines in converting stored energy into driving a vehicle. However, they are not equally efficient at all speeds. To allow for this, some cars with dual electric motors have one electric motor with a gear optimised for city speeds and a second electric motor with a gear optimised for highway speeds. The electronics select the motor that has the best efficiency for the current speed and acceleration. Regenerative braking, found in most electric vehicles, can recover as much as one fifth of the energy normally lost during braking.
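A back-of-the-envelope calculation shows what recovering "one fifth of braking energy" means in practice; the vehicle mass, speed and recovery fraction below are assumed round numbers, not measured figures:

# Energy recoverable by regenerative braking in one stop, assuming a
# 1,800 kg car braking from 50 km/h and recovering ~20% of its
# kinetic energy (all three figures are illustrative assumptions).
mass = 1800.0                 # kg
v = 50 / 3.6                  # 50 km/h in m/s
kinetic = 0.5 * mass * v**2   # J
recovered = 0.2 * kinetic     # J
print(f"kinetic = {kinetic / 1000:.0f} kJ, "
      f"recovered ≈ {recovered / 3.6e6 * 1000:.1f} Wh")  # ~174 kJ, ~9.6 Wh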
Cabin heating and cooling
Combustion-powered cars harness waste heat from the engine to provide cabin heating, but this option is not available in an electric vehicle. While heating can be provided with an electric resistance heater, higher efficiency and integral cooling can be obtained with a reversible heat pump, such as on the Nissan Leaf. PTC junction heating is also attractive for its simplicity—this kind of system is used, for example, in the 2008 Tesla Roadster.
To avoid using part of the battery's energy for heating and thus reducing the range, some models allow the cabin to be heated while the car is plugged in. For example, the Nissan Leaf, the Mitsubishi i-MiEV, Renault Zoe and Tesla cars can be preheated while the vehicle is plugged in.
Some electric cars (for example, the Citroën Berlingo Electrique) use an auxiliary heating system (for example, gasoline-fuelled units manufactured by Webasto or Eberspächer), but sacrifice "green" and "zero emissions" credentials. Cabin cooling can be augmented with solar-powered external batteries and USB fans or coolers, or by automatically allowing outside air to flow through the car when parked; two models of the 2010 Toyota Prius include this feature as an option.
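The advantage of a reversible heat pump over a resistance heater comes down to its coefficient of performance (COP): the heat delivered per unit of electricity consumed. A brief sketch, assuming an illustrative 3 kW cabin heat demand and a COP of 3, which is plausible in mild weather but not a figure from any manufacturer:

cabin_heat_needed_kw = 3.0  # thermal power to keep the cabin warm (assumed)

resistive_draw_kw = cabin_heat_needed_kw / 1.0  # resistance heater: COP of ~1
heat_pump_cop = 3.0                             # heat moved per unit of electricity
heat_pump_draw_kw = cabin_heat_needed_kw / heat_pump_cop

print(f"Resistance heater draws {resistive_draw_kw:.1f} kW from the battery")
print(f"Heat pump draws {heat_pump_draw_kw:.1f} kW for the same cabin heat")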
Safety
The safety issues of BEVs are largely dealt with by the international standard ISO 6469. This document is divided into three parts dealing with specific issues:
On-board electrical energy storage, i.e. the battery
Functional safety means and protection against failures
Protection of persons against electrical hazards
Research published in the British Medical Journal in 2024 indicates that between 2013 and 2017 in the United Kingdom, electric cars killed pedestrians at twice the rate of petrol or diesel vehicles because "they are less audible to pedestrians in urban areas". Some jurisdictions have passed laws requiring electric vehicles to be manufactured with sound generators.
Weight
The weight of the batteries usually makes an EV heavier than a comparable gasoline vehicle. In a collision, the occupants of a heavy vehicle will, on average, suffer fewer and less serious injuries than the occupants of a lighter vehicle; the additional weight therefore brings safety benefits to the occupants while increasing harm to others. On average, an accident will cause about 50% more injuries to the occupants of a lighter vehicle than to those in a heavier one. Heavier cars are also more dangerous to people outside the car if they hit a pedestrian or another vehicle.
Stability
The battery in skateboard configuration lowers the center of gravity, increasing driving stability and lowering the risk of an accident through loss of control. Additionally, a lower center of gravity provides greater resistance to rollover crashes. A separate motor near or in each wheel is claimed to be safer due to better handling.
Risk of fire
Like their ICE counterparts, electric vehicle batteries can catch fire after a crash or mechanical failure. Plug-in electric vehicle fire incidents have occurred, albeit fewer per distance traveled than ICE vehicles. Some cars' high-voltage systems are designed to shut down automatically in the event of an airbag deployment, and in case of failure firefighters may be trained for manual high-voltage system shutdown. Much more water may be required than for ICE car fires and a thermal imaging camera is recommended to warn of possible re-ignition of battery fires.
Controls
Most electric cars have driving controls similar to those of a car with a conventional automatic transmission. Even though the motor may be permanently connected to the wheels through a fixed-ratio gear, and no parking pawl may be present, the modes "P" and "N" are often still provided on the selector. In this case, the motor is disabled in "N" and an electrically actuated hand brake provides the "P" mode.
In some cars, the motor will spin slowly to provide a small amount of creep in "D", similar to a traditional automatic transmission car.
When an internal combustion vehicle's accelerator is released, it may slow by engine braking, depending on the type of transmission and mode. EVs are usually equipped with regenerative braking that slows the vehicle and recharges the battery somewhat. Regenerative braking systems also decrease the use of the conventional brakes (similar to engine braking in an ICE vehicle), reducing brake wear and maintenance costs.
Batteries
Lithium-ion-based batteries are often used for their high power and energy density. Batteries with different chemical compositions are becoming more widely used, such as lithium iron phosphate, which does not depend on nickel and cobalt and so can be used to make cheaper batteries, and thus cheaper cars.
Range
The range of an electric car depends on the number and type of batteries used, and (as with all vehicles), the aerodynamics, weight and type of vehicle, performance requirements, and the weather. Cars marketed for mainly city use are often manufactured with a short range battery to keep them small and light.
Most electric cars are fitted with a display of the expected range. This may take into account how the vehicle is being used and what the battery is powering. However, since factors can vary over the route, the estimate can vary from the actual range. The display allows the driver to make informed choices about driving speed and whether to stop at a charging point en route. Some roadside assistance organizations offer charge trucks to recharge electric cars in case of emergency.
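A toy version of such a range estimator divides usable battery energy by recent average consumption. The pack size, state of charge and consumption below are assumptions; real estimators also account for route, weather and accessory loads.

def estimated_range_km(battery_kwh, state_of_charge, consumption_kwh_per_100km):
    """Remaining range from usable energy and recent average consumption."""
    usable_kwh = battery_kwh * state_of_charge
    return usable_kwh / consumption_kwh_per_100km * 100.0

# A hypothetical 60 kWh pack at 80% charge, averaging 18 kWh per 100 km:
print(f"{estimated_range_km(60.0, 0.80, 18.0):.0f} km remaining")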
Charging
Connectors
Most electric cars use a wired connection to supply electricity for recharging. Electric vehicle charging plugs are not universal throughout the world. However vehicles using one type of plug are generally able to charge at other types of charging stations through the use of plug adapters.
The Type 2 connector is the most common type of plug, but different versions are used in China and Europe.
The Type 1 (also called SAE J1772) connector is common in North America but rare elsewhere, as it does not support three-phase charging.
Wireless charging, either for stationary cars or as an electric road, is less common, but is used in some cities for taxis.
Home charging
Electric cars are usually charged overnight from a home charging station (also known as a charging point, wallbox charger, or simply a charger) in a garage or on the outside of a house. Typical home chargers deliver 7 kW, but not all include smart charging. Compared to fossil fuel vehicles, the need for public charging infrastructure is diminished because of the opportunity for home charging: vehicles can be plugged in overnight and begin each day with a full charge. Charging from a standard outlet is also possible, but very slow.
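The underlying arithmetic is energy to replenish divided by charger power. A short sketch, assuming a hypothetical 60 kWh pack and 90% charger efficiency; neither figure describes a specific car.

battery_kwh = 60.0        # assumed pack size
charger_kw = 7.0          # typical home wallbox, as above
charger_efficiency = 0.9  # assumed AC-to-battery losses

hours = battery_kwh / (charger_kw * charger_efficiency)
print(f"Empty to full at 7 kW: about {hours:.1f} hours (an overnight charge)")

# A standard 2.3 kW domestic outlet (an assumed figure) is far slower:
print(f"At 2.3 kW: about {battery_kwh / (2.3 * charger_efficiency):.0f} hours")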
Public charging
Public charging stations are almost always faster than home chargers, with many supplying direct current to avoid the bottleneck of going through the car's AC to DC converter, the fastest being 350 kW.
Combined Charging System (CCS) is the most widespread charging standard, whereas the GB/T 27930 standard is used in China, and CHAdeMO in Japan. The United States has no de facto standard, with a mix of CCS, Tesla Superchargers, and CHAdeMO charging stations.
Charging an electric vehicle using public charging stations takes longer than refueling a fossil fuel vehicle. The speed at which a vehicle can recharge depends on the charging station's charging speed and the vehicle's own capacity to receive a charge; some cars use 400-volt electrical systems and some 800-volt systems. Connecting a vehicle that can accommodate very fast charging to a charging station with a very high rate of charge can refill the vehicle's battery to 80% in 15 minutes. Vehicles and charging stations with slower charging speeds may take as long as two hours to refill a battery to 80%. As with a mobile phone, the final 20% takes longer because the systems slow down to fill the battery safely and avoid damaging it.
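The slow-down in the final 20% can be captured with a crude model that holds full power until 80% state of charge and then ramps power down. The sketch below uses an assumed 80 kWh pack, a 150 kW charger and an invented linear taper, purely to show the shape of the behaviour.

def minutes_to_charge(battery_kwh, power_kw, start_soc, end_soc):
    minutes, soc, step = 0.0, start_soc, 0.01  # integrate in 1% SoC steps
    while soc < end_soc:
        # Full power below 80% SoC, then ramp down toward 20% power at 100%
        p = power_kw if soc < 0.80 else power_kw * (1.0 - 0.8 * (soc - 0.80) / 0.20)
        minutes += battery_kwh * step / p * 60.0
        soc += step
    return minutes

pack = 80.0  # kWh, assumed
print(f"10-80% at 150 kW: {minutes_to_charge(pack, 150.0, 0.10, 0.80):.0f} min")
print(f"80-100% at 150 kW: {minutes_to_charge(pack, 150.0, 0.80, 1.00):.0f} min")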
Some companies are building battery swapping stations, to substantially reduce the effective time to recharge. Some electric cars (for example, the BMW i3) have an optional gasoline range extender. The system is intended as an emergency backup to extend range to the next recharging location, and not for long-distance travel.
Electric roads
An electric road system (ERS) is a road which supplies electric power to vehicles travelling on it. Common implementations are overhead power lines above the road, ground-level power supply through conductive rails, and dynamic wireless power transfer (DWPT) through resonant inductive coils or inductive rails embedded in the road. Overhead power lines are limited to commercial vehicles, while ground-level rails and inductive power transfer can be used by any vehicle, which allows for public charging through power metering and billing systems. Of the three methods, ground-level conductive rails are estimated to be the most cost-effective.
National electric road projects
Government studies and trials have been conducted in several countries seeking a national electric road network.
South Korea was the first country to implement an induction-based public electric road, opening a commercial bus line in 2013 after testing an experimental shuttle service in 2009; the line was later shut down due to aging infrastructure, amid controversy over the continued public funding of the technology.
United Kingdom municipal projects in 2015 and 2021 found wireless electric roads financially unfeasible.
Sweden has been assessing various electric road technologies since 2013 under the Swedish Transport Administration's electric road program. After receiving electric road construction offers in excess of the project's budget in 2023, Sweden pursued cost-reduction measures for either wireless or rail electric roads. The project's final report, published in 2024, recommended against funding a national electric road network in Sweden as it would not be cost-effective unless the technology was also adopted by trading partners such as France and Germany.
Germany found in 2023 that the wireless electric road system (wERS) by Electreon collects 64.3% of the transmitted energy, poses many difficulties during installation, and blocks access to other infrastructure in the road. Germany trialed overhead lines in three projects and reported they are too expensive, difficult to maintain, and pose a safety risk.
France found similar drawbacks for overhead lines as Germany did. France began several electric road pilot projects in 2023 for inductive and rail systems. Ground-level power supply systems are considered the most likely candidates.
Vehicle-to-grid: uploading and grid buffering
During peak load periods, when the cost of generation can be very high, electric vehicles with vehicle-to-grid capabilities could contribute energy to the grid. These vehicles can then be recharged during off-peak hours at cheaper rates while helping to absorb excess night time generation. The batteries in the vehicles serve as a distributed storage system to buffer power.
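In its simplest form, a vehicle-to-grid controller is a price-threshold rule: discharge to the grid when electricity is expensive, recharge when it is cheap. A minimal sketch with invented thresholds and prices, not a real utility interface:

def v2g_action(price_per_kwh, state_of_charge):
    if price_per_kwh > 0.30 and state_of_charge > 0.50:
        return "discharge"  # sell stored energy during peak pricing
    if price_per_kwh < 0.10 and state_of_charge < 0.90:
        return "charge"     # absorb cheap off-peak (e.g. night-time) generation
    return "idle"

for price, soc in [(0.35, 0.70), (0.08, 0.60), (0.20, 0.80)]:
    print(f"price {price:.2f}/kWh, SoC {soc:.0%}: {v2g_action(price, soc)}")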
Lifespan
As with all lithium-ion batteries, electric vehicle batteries may degrade over long periods of time, especially if they are frequently charged to 100%; however, this may take at least several years before being noticeable. A typical warranty covers 8 years or a specified mileage, but for non-professional drivers mileage may not be relevant, and the batteries usually last much longer, perhaps 15 to 20 years in the car and then more years in another use.
Currently available electric cars
Sales of electric cars
Tesla became the world's leading electric vehicle manufacturer in December 2019. Its Model S was the world's top selling plug-in electric car in 2015 and 2016, its Model 3 has been the world's best selling plug-in electric car for four consecutive years, from 2018 to 2021, and the Model Y was the top selling plug-in car in 2022. The Tesla Model 3 surpassed the Leaf in early 2020 to become the world's cumulative best selling electric car. Tesla produced its 1 millionth electric car in March 2020, becoming the first auto manufacturer to do so, and in June 2021, the Model 3 became the first electric car to pass 1 million sales. Tesla has been listed as the world's top selling plug-in electric car manufacturer, both as a brand and by automotive group for four years running, from 2018 to 2021. At the end of 2021, Tesla's global cumulative sales since 2012 totaled 2.3 million units, with 936,222 of those delivered in 2021.
BYD Auto is another leading electric vehicle manufacturer, with the majority of its sales coming from China. From 2018 to 2023, BYD produced nearly 3.18 million purely plug-in electric cars, of which 1,574,822 were produced in 2023 alone. In the fourth quarter of 2023, BYD surpassed Tesla as the top-selling electric vehicle manufacturer, selling 526,409 battery electric cars while Tesla delivered 484,507 vehicles.
The Renault–Nissan–Mitsubishi Alliance is listed as one of the major all-electric vehicle manufacturers, with global all-electric vehicle sales totaling over 1 million light-duty electric vehicles since 2009, including those manufactured by Mitsubishi Motors. Nissan leads global sales within the Alliance, with 1 million cars and vans sold by July 2023, followed by Groupe Renault with more than 397,000 electric vehicles sold worldwide through December 2020, including its Twizy heavy quadricycle. Global sales totaled over 650,000 units since inception.
Other leading electric vehicle manufacturers are GAC Aion (part of GAC Group, with 962,385 cumulative sales), SAIC Motor (with 1,838,000 units), Geely, and Volkswagen.
Electric cars by country
In 2021, the total number of electric cars on the world's roads reached about 16.5 million, and sales of electric cars in the first quarter of 2022 alone reached 2 million. China has the largest all-electric car fleet in use, with 2.58 million at the end of 2019, more than half (53.9%) of the world's electric car stock at the time.
All-electric cars have outsold plug-in hybrids since 2012.
Government policies and incentives
Several national, provincial, and local governments around the world have introduced policies to support the mass-market adoption of plug-in electric vehicles. A variety of policies have been established to provide: financial support to consumers and manufacturers; non-monetary incentives; subsidies for the deployment of charging infrastructure; electric vehicle charging stations in buildings; and long-term regulations with specific targets.
Financial incentives for consumers aim to make the purchase price of electric cars competitive with conventional cars, offsetting their higher upfront cost. There are one-time purchase incentives, often scaled to battery size, such as grants and tax credits; exemptions from import duties; exemptions from road tolls and congestion charges; and exemptions from registration and annual fees.
Among the non-monetary incentives are several perks, such as allowing plug-in vehicles access to bus lanes and high-occupancy vehicle lanes, free parking, and free charging. Some countries or cities that restrict private car ownership (for example, through a purchase quota system for new vehicles), or that have implemented permanent driving restrictions (for example, no-drive days), exempt electric vehicles from these schemes to promote their adoption. Several countries, including England and India, are introducing regulations that require electric vehicle charging stations in certain buildings.
Some governments have also established long-term regulatory signals with specific targets, such as zero-emissions vehicle (ZEV) mandates, national or regional emission regulations, stringent fuel economy standards, and the phase-out of internal combustion engine vehicle sales. For example, Norway set a national goal that by 2025 all new car sales should be ZEVs (battery electric or hydrogen). While these incentives aim to facilitate a quicker transition from internal combustion cars, they have been criticized by some economists for creating excess deadweight loss in the electric car market, which may partially counteract environmental gains.
EV plans from major manufacturers
Electric vehicles (EVs) have gained significant traction as an integral component of the global automotive landscape in recent years. Major automakers from around the world have adopted EVs as a critical component of their strategic plans, indicating a paradigm shift toward sustainable transportation.
Forecasts
Total global EV sales in 2030 were predicted to reach 31.1 million by Deloitte. The International Energy Agency predicted that the total global stock of EVs would reach almost 145 million by 2030 under current policies, or 230 million if Sustainable Development policies were adopted.
As of 2024, there are approximately 600 million people in sub-Saharan Africa without access to electricity, representing 83% of the world's unelectrified population. The World Bank Group and the African Development Bank plan to provide access to electricity to 300 million people in that region by 2030. As of 2024, there are just over 20,000 electric vehicles and fewer than 1,000 charging stations in Africa; however, EV manufacturers have already built or are planning to build production plants in 21 African countries.
| Technology | Motorized road transport | null |
16105212 | https://en.wikipedia.org/wiki/Battery%20electric%20vehicle | Battery electric vehicle | A battery electric vehicle (BEV), pure electric vehicle, only-electric vehicle, fully electric vehicle or all-electric vehicle is a type of electric vehicle (EV) that uses electrical energy exclusively from an on-board battery pack to power one or more electric traction motors, on which the vehicle solely relies for propulsion. This definition excludes hybrid electric vehicles (HEVs, including mild, full and plug-in hybrids), which use internal combustion engines (ICEs) as an adjunct to electric motors for propulsion; and fuel cell electric vehicles (FCEVs) and range-extended electric vehicles (REEVs), which consume fuel through a fuel cell or an ICE-driven generator to produce the electricity needed for the electric motors. BEVs have no fuel tanks and replenish their energy storage by plugging into a charging station or the electrical grid, or by getting a new battery at a battery swap station, and use motor controllers to modulate output power and torque, thus eliminating the need for clutches, transmissions and sophisticated engine cooling as seen in conventional ICE vehicles. BEVs include – but are not limited to – all battery-driven electric cars, buses, trucks, forklifts, motorcycles and scooters, bicycles, skateboards, railcars, boats and personal watercraft, although in common usage the term usually refers specifically to passenger cars.
In 2016, there were 210 million electric bikes worldwide in daily use. Cumulative global sales of highway-capable light-duty all-electric cars passed the one million unit milestone in September 2016. The world's top-selling all-electric car in history is the Tesla Model 3, with an estimated 645,000 sales, followed by the Nissan Leaf with over 500,000 sales.
History
During the 1880s, Gustave Trouvé, Thomas Parker and Andreas Flocken built experimental electric cars, but the first practical battery electric vehicles appeared during the 1890s. Fleets of battery electric milk floats expanded beginning in 1931, and by 1967 gave Britain the largest electric vehicle fleet in the world.
Terminology
Hybrid electric vehicles use both electric motors and internal combustion engines, and are not considered pure or all-electric vehicles.
Hybrid electric vehicles whose batteries can be charged externally are called plug-in hybrid electric vehicles (PHEV) and run as BEVs during their charge-depleting mode. PHEVs with a series powertrain are also called range-extended electric vehicles (REEVs), such as the Chevrolet Volt and Fisker Karma.
Plug-in electric vehicles (PEVs) are a subcategory of electric vehicles that includes battery electric vehicles (BEVs) and plug-in hybrid vehicles (PHEVs).
The electric vehicle conversions of hybrid electric vehicles and conventional internal combustion engine vehicles (aka all-combustion vehicles) belong to one of the two categories.
In China, plug-in electric vehicles, together with hybrid electric vehicles, are called new energy vehicles (NEVs). However, in the United States, neighborhood electric vehicles (NEVs) are battery electric vehicles that are legally limited to roads with posted speed limits no higher than 35 mph (56 km/h), are usually built to have a top speed of 25 mph (40 km/h), and have a maximum loaded weight of 3,000 lb (1,400 kg).
Vehicles by type
The concept of battery electric vehicles is to use charged batteries on board vehicles for propulsion. Battery electric cars are becoming more attractive with higher oil prices and the advancement of new battery technology (lithium-ion) that has higher power and energy density than older battery types such as lead-acid batteries, giving greater possible acceleration and more range with fewer batteries. Lithium-ion batteries now have an energy density of 0.9–2.63 MJ/L, whereas lead-acid batteries had an energy density of 0.36 MJ/L, which is 2.5 to 7.3 times lower. There is still a long way to go compared with petroleum-based fuels and biofuels: gasoline has an energy density of 34.2 MJ/L (about 13 to 38 times higher than lithium-ion) and ethanol 24 MJ/L (about 9 to 27 times higher). This is partially offset by the higher conversion efficiency of electric motors: BEVs travel roughly 3 times further than similar-size internal combustion vehicles per MJ of stored energy.
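The ratios quoted above, and the offsetting effect of the roughly threefold drivetrain-efficiency advantage, can be reproduced directly; this Python sketch simply recomputes them from the figures given in the text.

lead_acid = 0.36           # MJ/L
lithium_ion = (0.9, 2.63)  # MJ/L, low and high ends of the quoted range
gasoline = 34.2            # MJ/L

print(f"Li-ion vs lead-acid: {lithium_ion[0]/lead_acid:.1f}x to "
      f"{lithium_ion[1]/lead_acid:.1f}x")
print(f"Gasoline vs Li-ion: {gasoline/lithium_ion[1]:.1f}x to "
      f"{gasoline/lithium_ion[0]:.1f}x")

# Effective gap once the ~3x drivetrain-efficiency advantage is applied:
print(f"Effective gap: {gasoline/(lithium_ion[1]*3):.1f}x to "
      f"{gasoline/(lithium_ion[0]*3):.1f}x")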
BEVs include automobiles, light trucks, and neighborhood electric vehicles.
Rail
Battery electric railcars:
Battery electric trains in the form of BEMUs (battery electric multiple units) are operated commercially in Japan. They are charged via pantographs, either when driving on electrified railway lines or during stops at specially equipped train stations. They use battery power for propulsion when driving on railway lines that are not electrified, and have successfully replaced diesel multiple units on some such lines.
Other countries have also tested or ordered such vehicles.
Locomotives:
Electric rail trolley:
Bus
Chattanooga, Tennessee, operates nine zero-fare electric buses, which have been in operation since 1992 and have carried 11.3 million passengers and covered a distance of . They were made locally by Advanced Vehicle Systems. Two of these buses were used for the 1996 Summer Olympics in Atlanta.
Beginning in the summer of 2000, Hong Kong Airport began operating a 16-passenger Mitsubishi Rosa electric shuttle bus, and in the fall of 2000, New York City began testing a 66-passenger battery-powered school bus, an all-electric version of the Blue Bird TC/2000. A similar bus was operated in Napa Valley, California, for 14 months ending in April 2004.
The 2008 Beijing Olympics used a fleet of 50 electric buses, which have a range of with the air conditioning on. They use lithium-ion batteries, and consume about . The buses were designed by the Beijing Institute of Technology and built by the Jinghua Coach. The batteries are replaced with fully charged ones at the recharging station to allow 24-hour operation of the buses.
In France, the electric bus phenomenon is in development, but some buses are already operating in numerous cities. PVI, a medium-sized company located in the Paris region, is one of the leaders of the market with its brand Gepebus (offering Oreos 2X and Oreos 4X).
In the United States, the first battery-electric, fast-charge bus has been in operation in Pomona, California, since September 2010 at Foothill Transit. The Proterra EcoRide BE35 uses lithium-titanate batteries and is able to fast-charge in less than 10 minutes.
In 2012, heavy-duty trucks and buses contributed 7% of global warming emissions in California.
In 2014, the first production model all-electric school bus was delivered to the Kings Canyon Unified School District in California's San Joaquin Valley. The bus was one of four the district ordered. This battery-electric school bus, which has four sodium nickel batteries, is the first modern electric school bus approved for student transportation by any state.
In 2016, including the light heavy-duty vehicles, there were roughly 1.5 million heavy-duty vehicles in California.
The same technology is used to power the Mountain View Community Shuttles. This technology was supported by the California Energy Commission, and the shuttle program is being supported by Google.
Thunder Sky
Thunder Sky (based in Hong Kong) builds lithium-ion batteries used in submarines and has three models of electric buses, the 10/21 passenger EV-6700 with a range of under 20 mins quick-charge, the EV-2009 city buses, and the 43 passenger EV-2008 highway bus, which has a range of under quick-charge (20 mins to 80 percent), and under full charge (25 mins). The buses will also be built in the United States and Finland.
Free Tindo
Tindo is an all-electric bus from Adelaide, Australia. The Tindo (an Aboriginal word for sun) is made by Designline International in New Zealand and gets its electricity from a solar PV system on Adelaide's central bus station. Rides are zero-fare as part of Adelaide's public transport system.
First Fast-Charge, Battery-Electric Transit Bus
Proterra's EcoRide BE35 transit bus, called the Ecoliner by Foothill Transit in West Covina, California, is a heavy-duty, fast charge, battery-electric bus. Proterra's ProDrive drive-system uses a UQM motor and regenerative braking that captures 90 percent of the available energy and returns it to the TerraVolt energy storage system, which in turn increases the total distance the bus can drive by 31–35 percent. It can travel on a single charge, is up to 600 percent more fuel-efficient than a typical diesel or CNG bus, and produces 44 percent less carbon than CNG. Proterra buses have had several problems, most notably in Philadelphia where the entire fleet was removed from service.
Trucks
For most of the 20th century, the majority of the world's battery electric road vehicles were British milk floats. The 21st century saw the massive development of BYD electric trucks.
Vans
In March 2012, Smith Electric Vehicles announced the release of the Newton Step-Van, an all-electric, zero-emission vehicle built on the versatile Newton platform that features a walk-in body produced by Indiana-based Utilimaster.
BYD supplies DHL with an electric distribution fleet of commercial BYD T3 vans.
Cars
Although electric cars often give good acceleration and have generally acceptable top speed, the lower energy content of production batteries available in 2015, compared with the chemical energy of carbon-based fuels, means that electric cars need batteries that make up a fairly large fraction of the vehicle mass but still often give a relatively low range between charges. Recharging can also take significant lengths of time. For journeys within a single battery charge, rather than long journeys, electric cars are practical forms of transportation and can be recharged overnight.
Electric cars can significantly reduce city pollution by having zero emissions. Vehicle greenhouse gas savings depend on how the electricity is generated.
Electric cars are having a major impact on the auto industry, given their advantages in reducing city pollution, decreasing dependence on oil and combustion, and insulating drivers from oil scarcity and expected rises in gasoline prices. World governments are pledging billions to fund the development of electric vehicles and their components.
Formula E is a fully electric international single-seater championship. The series was conceived in 2012, and the inaugural championship started in Beijing on 13 September 2014. The series is sanctioned by the FIA. Alejandro Agag is the current CEO of Formula E.
The Formula E championship is currently contested by ten teams with two drivers each (after the withdrawal of Team Trulli, there are temporarily only nine teams competing). Racing generally takes place on temporary city-center street circuits which are approximately long. Currently, only the Mexico City ePrix takes place on a road course, a modified version of the Autódromo Hermanos Rodríguez.
Special-purpose vehicles
Special-purpose vehicles come in a wide range of types, from relatively common ones such as golf carts, electric golf trolleys, milk floats, all-terrain vehicles, and neighborhood electric vehicles to a wide range of other devices. Certain manufacturers specialize in electric-powered "in plant" work machines.
Motorcycles, scooters and rickshaws
Three-wheeled vehicles include electric rickshaws, a powered variant of the cycle rickshaw. The large-scale adoption of electric two-wheelers can reduce traffic noise and road congestion but may necessitate adaptations of the existing urban infrastructure and safety regulations.
Ather Energy of India launched its BLDC-motor-powered Ather 450 electric scooter, with lithium-ion batteries, in 2018. Also from India, AVERA, a new and renewable energy company, planned to launch two models of electric scooters at the end of 2018, using lithium iron phosphate battery technology.
Bicycles
India is the world's biggest market for bicycles at 22 million units per year. By 2024, electric two-wheelers will be a $2 billion market with over 3 million units being sold in India.
The Indian government is launching schemes and incentives to promote the adoption of electric vehicles in the country, and is aiming to be a manufacturing hub for electric vehicles within the next five years.
China has experienced an explosive growth of sales of non-assisted e-bikes including the scooter type, with annual sales jumping from 56,000 units in 1998 to over 21 million in 2008, and reaching an estimated 120 million e-bikes on the road in early 2010. China is the world's leading manufacturer of e-bikes, with 22.2 million units produced in 2009.
Personal transporters
An increasing variety of personal transporters are being manufactured, including the one-wheeled self-balancing unicycles, self-balancing scooters, electric kick scooters, and electric skateboards.
Boats
Several battery electric ships operate throughout the world, some for business. Electric ferries are being operated and constructed.
Technology
Motor controllers
The motor controller receives a signal from potentiometers linked to the accelerator pedal, and it uses this signal to determine how much electric power is needed. This DC power is supplied by the battery pack, and the controller regulates the power to the motor, supplying either variable pulse width DC or variable frequency variable amplitude AC, depending on the motor type. The controller also handles regenerative braking, whereby electrical power is gathered as the vehicle slows down and this power recharges the battery. In addition to power and motor management, the controller performs various safety checks such as anomaly detection, functional safety tests and failure diagnostics.
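A highly simplified controller loop in the spirit of this description might look like the following sketch. The 0-1023 ADC scale, the thresholds and the fixed regeneration level are invented for illustration and do not reflect any real controller firmware.

def pedal_to_duty(adc_reading, adc_max=1023):
    """Map the accelerator potentiometer reading to a 0..1 PWM duty cycle."""
    return max(0.0, min(1.0, adc_reading / adc_max))

def controller_step(adc_reading, vehicle_moving):
    duty = pedal_to_duty(adc_reading)
    if duty < 0.05 and vehicle_moving:
        # Pedal released: command regenerative braking instead of drive torque
        return {"mode": "regen", "duty": 0.2}
    return {"mode": "drive", "duty": duty}

print(controller_step(700, vehicle_moving=True))  # part throttle -> drive
print(controller_step(10, vehicle_moving=True))   # pedal released -> regen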
Battery pack
Most electric vehicles today use an electric battery, consisting of electrochemical cells with external connections in order to provide power to the vehicle.
Battery technology for EVs has developed from early lead-acid batteries used in the late 19th century to the 2010s, to lithium-ion batteries which are found in most EVs today. The overall battery is referred to as a battery pack, which is a group of multiple battery modules and cells. For example, the Tesla Model S battery pack has up to 7,104 cells, split into 16 modules with 6 groups of 74 cells in each. Each cell has a nominal voltage of 3–4 volts, depending on its chemical composition.
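The Model S figures can be checked directly: 16 modules of 6 series groups gives 96 cell groups in series, and 74 parallel cells per group recovers the 7,104-cell total. A short sketch, assuming a 3.6 V nominal cell voltage, which falls within the 3-4 V range given above:

modules = 16
series_groups_per_module = 6
cells_per_group = 74  # connected in parallel within each group
cell_nominal_v = 3.6  # assumed, within the quoted 3-4 V range

total_cells = modules * series_groups_per_module * cells_per_group
groups_in_series = modules * series_groups_per_module
pack_voltage = groups_in_series * cell_nominal_v

print(f"Total cells: {total_cells}")                  # 7104
print(f"Nominal pack voltage: {pack_voltage:.0f} V")  # about 346 V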
Motors
Electric cars have traditionally used series-wound DC motors, a form of brushed DC electric motor; separately excited and permanent magnet motors are just two of the other types of DC motor available. More recent electric vehicles have made use of a variety of AC motor types, as these are simpler to build and have no brushes that can wear out. These are usually induction motors or brushless AC electric motors which use permanent magnets. There are several variations of the permanent magnet motor which offer simpler drive schemes and/or lower cost, including the brushless DC electric motor.
Once electric power is supplied to the motor (from the controller), the magnetic field interaction inside the motor will turn the drive shaft and ultimately the vehicle's wheels.
Economy
EV battery storage is a key element of the global energy transition, which depends on greatly expanded electricity storage. Because energy availability is one of the most important factors for the vitality of an economy, the mobile storage infrastructure of EV batteries can be seen as one of the most meaningful infrastructure projects facilitating the energy transition to a fully sustainable economy based on renewables. A meta-study graphically showing the importance of electricity storage depicts the technology in this context.
Environmental impact
Power generation
Electric vehicles produce no greenhouse gas (GHG) emissions in operation, but the electricity used to power them may do so in its generation. The two factors driving the emissions of battery electric vehicles are the carbon intensity of the electricity used to recharge the vehicle (commonly expressed in grams of CO2 per kWh) and the consumption of the specific vehicle (in kilometers per kWh).
The carbon intensity of electricity varies depending on the source of the electricity where it is consumed: a country with a high share of renewable energy in its electricity mix will have a low carbon intensity. In the European Union in 2013, carbon intensity had strong geographic variability, but in most of the member states electric vehicles were "greener" than conventional ones. On average, electric cars saved 50–60% of emissions compared to diesel and gasoline fuelled engines.
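These two factors combine into grams of CO2 per kilometre by simple division: emissions per kilometre equal grid carbon intensity (g CO2/kWh) divided by vehicle efficiency (km/kWh). The grid figures below are rough, assumed values used only to show the spread.

def g_co2_per_km(grid_g_per_kwh, km_per_kwh):
    return grid_g_per_kwh / km_per_kwh

efficiency = 5.5  # km per kWh, an assumed mid-size electric car

for grid, ci in [("renewables-heavy", 50.0), ("mixed", 300.0),
                 ("coal-heavy", 800.0)]:
    print(f"{grid:>16} grid: {g_co2_per_km(ci, efficiency):.0f} g CO2/km")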
Moreover, the de-carbonisation process is constantly reducing the GHG emissions due to the use of electric vehicles. In the European Union, on average, between 2009 and 2013 there was a reduction in the electricity carbon intensity of 17%. In a life-cycle assessment perspective, considering the GHG necessary to build the battery and its end-of-life, the GHG savings are 10–13% lower.
The open source VencoPy model framework can be used to study the interactions between vehicles, owners, and the electricity system at large.
Vehicle construction
GHGs are also emitted when the electric vehicle is being manufactured. The lithium-ion batteries used in the vehicle take more materials and energy to produce because of the extraction process of the lithium and cobalt essential to the battery. This means the bigger the electric vehicle, the more carbon dioxide emitted. The same size-to-emission relationship applies to manufacturing of all products.
Terrestrial Mining
The mines that produce the lithium and cobalt used in batteries also create problems for the environment: fish are dying downstream from mining operations due to chemical leaks, and the chemicals also leak into the water sources used by people who live near the mines, creating health problems for the animals and people living nearby.
Deep sea mining
Along with terrestrial mining, deep sea mining is a means by which vital minerals such as nickel, copper, cobalt, manganese, zinc, gold and rare-earth metals can be procured. Large robotic cutting machines strip away areas of the ocean floor in search of minerals embedded within it, which appear as mineral formations such as polymetallic nodules roughly the size of a potato. Currently, sea mining projects are underway in areas such as the Clarion–Clipperton Zone (CCZ) in the Pacific Ocean. While there is an abundance of minerals to be found in the ocean, there are many concerns about the environmental impact of deep sea mining. Marine habitats and ecosystems are not only widely understudied, they are also extremely sensitive, and even slight disturbances can be incredibly destructive. Deep sea mining affects water quality through sediment plumes and the release of carbon dioxide trapped within ocean floors, directly endangering marine life in the area. Noise pollution at many mining sites is also harmful to marine life such as dolphins and whales.
Barriers to adoption
Current research suggests that BEVs (battery electric vehicles) are the most efficient vehicles for reducing GHGs (greenhouse gases). However, adoption of BEVs has varied globally, with China and Europe leading the world in BEV diffusion (see also: Electric car use by country). In nations where diffusion has proven more difficult, buyers generally express one or more of four main reasons for their hesitance in purchasing battery electric vehicles: the cost of this type of vehicle, the availability of charging stations, their range versus that of an internal combustion engine, and the cost of repairs and replacement parts. Other factors affecting the adoption of BEV technology are more nuanced or political.
United States
In the United States, political ideology affects the adoption of BEVs: those who identify as Republicans are less likely to purchase BEVs than those who identify as Democrats. This phenomenon likely has its roots in the parties' positions on environmentalism and climate change. Historically, Republicans have expressed negative attitudes towards environmental and climate change policies, while Democrats tend to favor such policies. A current example of this polarity can be found in both parties' 2024 platforms. The preamble of the Democratic platform states, “We're fighting climate change, reducing pollution, and fueling a clean energy boom.” The preamble of the Republican platform states, “we must unleash American Energy…We will DRILL, BABY, DRILL and we will become Energy Independent, and even Dominant again." Moreover, a 2021 article titled “7 Ways Oil and Gas Drilling is Bad for the Environment”, published by the American non-profit The Wilderness Society, states in its introduction that “[o]il and gas drilling has a serious impact on our wildlands and communities. Drilling projects operate around the clock generating pollution, fueling climate change, disrupting wildlife and damaging public lands that were set aside to benefit all people.” As a result of this political dichotomy, members of the two main American parties tend toward either greater support of or greater opposition to such policies and, subsequently, to BEVs. Another important development in recent years is the rise of right-wing populism within the Republican Party under the leadership of Donald Trump. Trump and those within his party known as “MAGA Republicans” have espoused greater skepticism of the effects of climate change and of policies that aim to regulate industries such as fossil fuels. However, some Republicans and other conservatives are working to change these attitudes within party lines, which may allow for bipartisan cooperation in adopting cleaner energy technologies.
Japan
In Japan, where EV technology started developing after World War II, there is domestic resistance to the diffusion of this technology, resulting both from the general public's wariness and from the unique composition of the country's automotive industry. Concerns among Japanese citizens are similar to those of the global public (i.e. infrastructure, price, grid capacity, performance, etc.). For automotive manufacturers, however, the diffusion of BEV technology has disruptive effects on the current infrastructure of automotive production. Under the keiretsu system, the major car companies in Japan (i.e. Toyota, Honda and, to a more limited extent, Nissan) subcontract the production of specific parts to smaller, independent companies in an effort to make the overall production process more efficient. This system creates a top-down (“vertical”), hierarchical division of labor that includes hundreds of smaller Japanese manufacturing companies. The more “horizontal”, global cooperation-based model that EV production currently employs could be detrimental to those smaller Japanese companies employed by the major auto manufacturers. Akio Toyoda, chairman of Toyota, recently stated that roughly 5.5 million jobs are in jeopardy amid the country's transition to EV and BEV technology. Groups such as the Japan Automobile Manufacturers Association (JAMA), which serves the interests of Japan's auto industry, have also argued that a transition to BEVs puts large numbers of automotive industry jobs at risk.
| Technology | Basics_7 | null |
14579421 | https://en.wikipedia.org/wiki/Introduction%20to%20viruses | Introduction to viruses | A virus is a tiny infectious agent that reproduces inside the cells of living hosts. When infected, the host cell is forced to rapidly produce thousands of identical copies of the original virus. Unlike most living things, viruses do not have cells that divide; new viruses assemble in the infected host cell. But unlike simpler infectious agents like prions, they contain genes, which allow them to mutate and evolve. Over 4,800 species of viruses have been described in detail out of the millions in the environment. Their origin is unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria.
Viruses are made of either two or three parts. All include genes. These genes contain the encoded biological information of the virus and are built from either DNA or RNA. All viruses are also covered with a protein coat to protect the genes. Some viruses may also have an envelope of fat-like substance that covers the protein coat, and makes them vulnerable to soap. A virus with this "viral envelope" uses it—along with specific receptors—to enter a new host cell. Viruses vary in shape from the simple helical and icosahedral to more complex structures. Viruses range in size from 20 to 300 nanometres; it would take 33,000 to 500,000 of them, side by side, to stretch to one centimetre.
Viruses spread in many ways. Although many are very specific about which host species or tissue they attack, each species of virus relies on a particular method to copy itself. Plant viruses are often spread from plant to plant by insects and other organisms, known as vectors. Some viruses of humans and other animals are spread by exposure to infected bodily fluids. Viruses such as influenza are spread through the air by droplets of moisture when people cough or sneeze. Viruses such as norovirus are transmitted by the faecal–oral route, which involves the contamination of hands, food and water. Rotavirus is often spread by direct contact with infected children. The human immunodeficiency virus, HIV, is transmitted by bodily fluids transferred during sex. Others, such as the dengue virus, are spread by blood-sucking insects.
Viruses, especially those made of RNA, can mutate rapidly to give rise to new types. Hosts may have little protection against such new forms. Influenza virus, for example, changes often, so a new vaccine is needed each year. Major changes can cause pandemics, as in the 2009 swine influenza that spread to most countries. Often, these mutations take place when the virus has first infected other animal hosts. Some examples of such "zoonotic" diseases include coronavirus in bats, and influenza in pigs and birds, before those viruses were transferred to humans.
Viral infections can cause disease in humans, animals and plants. In healthy humans and animals, infections are usually eliminated by the immune system, which can provide lifetime immunity to the host for that virus. Antibiotics, which work against bacteria, have no impact, but antiviral drugs can treat life-threatening infections. Those vaccines that produce lifelong immunity can prevent some infections.
Discovery
In 1884, French microbiologist Charles Chamberland invented the Chamberland filter (or Chamberland–Pasteur filter), which contains pores smaller than bacteria. He could then pass a solution containing bacteria through the filter and completely remove them. In the early 1890s, Russian biologist Dmitri Ivanovsky used this method to study what became known as the tobacco mosaic virus. His experiments showed that extracts from the crushed leaves of infected tobacco plants remained infectious after filtration.
At the same time, several other scientists showed that, although these agents (later called viruses) were different from bacteria and about one hundred times smaller, they could still cause disease. In 1899, Dutch microbiologist Martinus Beijerinck observed that the agent only multiplied when in dividing cells. He called it a "contagious living fluid"—or a "soluble living germ"—because he could not find any germ-like particles. In the early 20th century, English bacteriologist Frederick Twort discovered viruses that infect bacteria, and French-Canadian microbiologist Félix d'Herelle described viruses that, when added to bacteria growing on agar, would lead to the formation of whole areas of dead bacteria. Counting these dead areas allowed him to calculate the number of viruses in the suspension.
The invention of the electron microscope in 1931 brought the first images of viruses. In 1935, American biochemist and virologist Wendell Meredith Stanley examined the tobacco mosaic virus (TMV) and found it to be mainly made from protein. A short time later, this virus was shown to be made from protein and RNA. Rosalind Franklin developed X-ray crystallographic pictures and determined the full structure of TMV in 1955. Franklin confirmed that viral proteins formed a spiral hollow tube, wrapped by RNA, and also showed that viral RNA was a single strand, not a double helix like DNA.
A problem for early scientists was that they did not know how to grow viruses without using live animals. The breakthrough came in 1931, when American pathologists Ernest William Goodpasture and Alice Miles Woodruff grew influenza, and several other viruses, in fertilised chickens' eggs. Some viruses could not be grown in chickens' eggs. This problem was solved in 1949, when John Franklin Enders, Thomas Huckle Weller, and Frederick Chapman Robbins grew polio virus in cultures of living animal cells. Over 4,800 species of viruses have been described in detail.
Origins
Viruses co-exist with life wherever it occurs. They have probably existed since living cells first evolved. Their origin remains unclear because they do not fossilize, so molecular techniques have been the best way to hypothesise about how they arose. These techniques rely on the availability of ancient viral DNA or RNA, but most viruses that have been preserved and stored in laboratories are less than 90 years old. Molecular methods have only been successful in tracing the ancestry of viruses that evolved in the 20th century. New groups of viruses might have repeatedly emerged at all stages of the evolution of life. There are three major theories about the origins of viruses:
Regressive theory Viruses may have once been small cells that parasitised larger cells. Eventually, the genes they no longer needed for a parasitic way of life were lost. The bacteria Rickettsia and Chlamydia are living cells that, like viruses, can reproduce only inside host cells. This lends credence to this theory, as their dependence on being parasites may have led to the loss of the genes that once allowed them to live on their own.
Cellular origin theory Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria.
Coevolution theory Viruses may have evolved from complex molecules of protein and DNA at the same time as cells first appeared on earth, and would have depended on cellular life for many millions of years.
There are problems with all of these theories. The regressive hypothesis does not explain why even the smallest of cellular parasites do not resemble viruses in any way. The escape or the cellular origin hypothesis does not explain the presence of unique structures in viruses that do not appear in cells. The coevolution, or "virus-first" hypothesis, conflicts with the definition of viruses, because viruses depend on host cells. Also, viruses are recognised as ancient, and to have origins that pre-date the divergence of life into the three domains. This discovery has led modern virologists to reconsider and re-evaluate these three classical hypotheses.
Structure
A virus particle, also called a virion, consists of genes made from DNA or RNA which are surrounded by a protective coat of protein called a capsid. The capsid is made of many smaller, identical protein molecules called capsomers. The arrangement of the capsomers can either be icosahedral (20-sided), helical, or more complex. There is an inner shell around the DNA or RNA called the nucleocapsid, made out of proteins. Some viruses are surrounded by a bubble of lipid (fat) called an envelope, which makes them vulnerable to soap and alcohol.
Size
Viruses are among the smallest infectious agents, and are too small to be seen by light microscopy; most of them can only be seen by electron microscopy. Their sizes range from 20 to 300 nanometres; it would take 33,000 to 500,000 of them, side by side, to stretch to one centimetre (0.4 in). In comparison, bacteria are typically around 1,000 nanometres (1 micrometer) in diameter, and the host cells of higher organisms are typically a few tens of micrometers. Some viruses, such as megaviruses and pandoraviruses, are relatively large. At around 1,000 nanometres, these viruses, which infect amoebae, were discovered in 2003 and 2013 respectively. They are around ten times wider (and thus a thousand times larger in volume) than influenza viruses, and the discovery of these "giant" viruses astonished scientists.
Genes
The genes of viruses are made from DNA (deoxyribonucleic acid) or, in many viruses, RNA (ribonucleic acid). The biological information contained in an organism is encoded in its DNA or RNA. Most organisms use DNA, but many viruses have RNA as their genetic material. The DNA or RNA of viruses consists of either a single strand or a double helix.
Viruses can reproduce rapidly because they have relatively few genes. For example, influenza virus has only eight genes and rotavirus has eleven. In comparison, humans have 20,000–25,000. Some viral genes contain the code to make the structural proteins that form the virus particle. Other genes make non-structural proteins found only in the cells the virus infects.
All cells, and many viruses, produce proteins that are enzymes that drive chemical reactions. Some of these enzymes, called DNA polymerase and RNA polymerase, make new copies of DNA and RNA. A virus's polymerase enzymes are often much more efficient at making DNA and RNA than the equivalent enzymes of the host cells, but viral RNA polymerase enzymes are error-prone, causing RNA viruses to mutate and form new strains.
In some species of RNA virus, the genes are not on a continuous molecule of RNA, but are separated. The influenza virus, for example, has eight separate genes made of RNA. When two different strains of influenza virus infect the same cell, these genes can mix and produce new strains of the virus in a process called reassortment.
Protein synthesis
Proteins are essential to life. Cells produce new protein molecules from amino acid building blocks based on information coded in DNA. Each type of protein is a specialist that usually only performs one function, so if a cell needs to do something new, it must make a new protein. Viruses force the cell to make new proteins that the cell does not need, but are needed for the virus to reproduce. Protein synthesis consists of two major steps: transcription and translation.
Transcription is the process where information in DNA, called the genetic code, is used to produce RNA copies called messenger RNA (mRNA). These migrate through the cell and carry the code to ribosomes where it is used to make proteins. This is called translation because the protein's amino acid structure is determined by the mRNA's code. Information is hence translated from the language of nucleic acids to the language of amino acids.
Some nucleic acids of RNA viruses function directly as mRNA without further modification. For this reason, these viruses are called positive-sense RNA viruses. In other RNA viruses, the RNA is a complementary copy of mRNA and these viruses rely on the cell's or their own enzyme to make mRNA. These are called negative-sense RNA viruses. In viruses made from DNA, the method of mRNA production is similar to that of the cell. The species of viruses called retroviruses behave completely differently: they have RNA, but inside the host cell a DNA copy of their RNA is made with the help of the enzyme reverse transcriptase. This DNA is then incorporated into the host's own DNA, and copied into mRNA by the cell's normal pathways.
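A toy illustration of the transcription and translation steps described above, with a codon table truncated to a handful of entries (real translation uses all 64 codons):

def transcribe(dna):
    """Transcription: build mRNA as the complement of a DNA template strand."""
    complement = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in dna)

CODON_TABLE = {  # mRNA codon -> amino acid (a small excerpt only)
    "AUG": "Met", "UUC": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def translate(mrna):
    """Translation: read the mRNA three bases at a time into amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i+3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

mrna = transcribe("TACAAGCCGATT")   # a made-up template DNA strand
print(mrna, "->", translate(mrna))  # AUGUUCGGCUAA -> ['Met', 'Phe', 'Gly']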
Life-cycle
When a virus infects a cell, the virus forces it to make thousands more viruses. It does this by making the cell copy the virus's DNA or RNA, making viral proteins, which all assemble to form new virus particles.
There are six basic, overlapping stages in the life cycle of viruses in living cells:
Attachment is the binding of the virus to specific molecules on the surface of the cell. This specificity restricts the virus to a very limited type of cell. For example, the human immunodeficiency virus (HIV) infects only human T cells, because its surface protein, gp120, can only react with CD4 and other molecules on the T cell's surface. Plant viruses can only attach to plant cells and cannot infect animals. This mechanism has evolved to favour those viruses that only infect cells in which they are capable of reproducing.
Penetration follows attachment; viruses penetrate the host cell by endocytosis or by fusion with the cell.
Uncoating happens inside the cell when the viral capsid is removed and destroyed by viral enzymes or host enzymes, thereby exposing the viral nucleic acid.
Replication of virus particles is the stage where a cell uses viral messenger RNA in its protein synthesis systems to produce viral proteins. The RNA or DNA synthesis abilities of the cell produce the virus's DNA or RNA.
Assembly takes place in the cell when the newly created viral proteins and nucleic acid combine to form hundreds of new virus particles.
Release occurs when the new viruses escape or are released from the cell. Most viruses achieve this by making the cells burst, a process called lysis. Other viruses such as HIV are released more gently by a process called budding.
Effects on the host cell
Viruses have an extensive range of structural and biochemical effects on the host cell. These are called cytopathic effects. Most virus infections eventually result in the death of the host cell. The causes of death include cell lysis (bursting), alterations to the cell's surface membrane and apoptosis (cell "suicide"). Often cell death is caused by cessation of its normal activity due to proteins produced by the virus, not all of which are components of the virus particle.
Some viruses cause no apparent changes to the infected cell. Cells in which the virus is latent (inactive) show few signs of infection and often function normally. This causes persistent infections and the virus is often dormant for many months or years. This is often the case with herpes viruses.
Some viruses, such as Epstein–Barr virus, often cause cells to proliferate without causing malignancy; but some other viruses, such as papillomavirus, are an established cause of cancer. When a cell's DNA is damaged by a virus such that the cell cannot repair itself, this often triggers apoptosis. One of the results of apoptosis is destruction of the damaged DNA by the cell itself. Some viruses have mechanisms to limit apoptosis so that the host cell does not die before progeny viruses have been produced; HIV, for example, does this.
Viruses and diseases
There are many ways in which viruses spread from host to host but each species of virus uses only one or two. Many viruses that infect plants are carried by organisms; such organisms are called vectors. Some viruses that infect animals, including humans, are also spread by vectors, usually blood-sucking insects, but direct transmission is more common. Some virus infections, such as norovirus and rotavirus, are spread by contaminated food and water, by hands and communal objects, and by intimate contact with another infected person, while others like SARS-CoV-2 and influenza viruses are airborne. Viruses such as HIV, hepatitis B and hepatitis C are often transmitted by unprotected sex or contaminated hypodermic needles. To prevent infections and epidemics, it is important to know how each different kind of virus is spread.
In humans
Common human diseases caused by viruses include the common cold, influenza, chickenpox and cold sores. Serious diseases such as Ebola and AIDS are also caused by viruses. Many viruses cause little or no disease and are said to be "benign". The more harmful viruses are described as virulent.
Viruses cause different diseases depending on the types of cell that they infect.
Some viruses can cause lifelong or chronic infections where the viruses continue to reproduce in the body despite the host's defence mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected with a virus are known as carriers. They serve as important reservoirs of the virus.
Endemic
If the proportion of carriers in a given population reaches a given threshold, a disease is said to be endemic. Before the advent of vaccination, infections with viruses were common and outbreaks occurred regularly. In countries with a temperate climate, viral diseases are usually seasonal. Poliomyelitis, caused by poliovirus, often occurred in the summer months. By contrast, colds, influenza and rotavirus infections are usually a problem during the winter months. Other viruses, such as measles virus, caused outbreaks regularly, about every third year. In developing countries, viruses that cause respiratory and enteric infections are common throughout the year. Viruses carried by insects are a common cause of diseases in these settings. Zika and dengue viruses, for example, are transmitted by female Aedes mosquitoes, which bite humans particularly during the mosquitoes' breeding season.
Pandemic and emergent
Although viral pandemics are rare events, HIV—which evolved from viruses found in monkeys and chimpanzees—has been pandemic since at least the 1980s. During the 20th century there were four pandemics caused by influenza virus, and those that occurred in 1918, 1957 and 1968 were severe. Before its eradication, smallpox was a cause of pandemics for more than 3,000 years. Throughout history, human migration has aided the spread of pandemic infections, first by sea and in modern times also by air.
With the exception of smallpox, most pandemics are caused by newly evolved viruses. These "emergent" viruses are usually mutants of less harmful viruses that have circulated previously either in humans or in other animals.
Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are caused by new types of coronaviruses. Other coronaviruses are known to cause mild infections in humans, so the virulence and rapid spread of SARS infections—which by July 2003 had caused around 8,000 cases and 800 deaths—were unexpected, and most countries were not prepared.
A related coronavirus emerged in Wuhan, China, in November 2019 and spread rapidly around the world. Thought to have originated in bats and subsequently named severe acute respiratory syndrome coronavirus 2, the virus causes a disease called COVID-19, which varies in severity from mild to deadly and led to a pandemic in 2020. Restrictions unprecedented in peacetime were placed on international travel, and curfews were imposed in several major cities worldwide.
In plants
There are many types of plant virus, but often they only cause a decrease in yield, and it is not economically viable to try to control them. Plant viruses are frequently spread from plant to plant by organisms called "vectors". These are normally insects, but some fungi, nematode worms and single-celled organisms have also been shown to be vectors. When control of plant virus infections is considered economical (perennial fruits, for example) efforts are concentrated on killing the vectors and removing alternate hosts such as weeds. Plant viruses are harmless to humans and other animals because they can only reproduce in living plant cells.
Bacteriophages
Bacteriophages are viruses that infect bacteria and archaea. They are important in marine ecology: as the infected bacteria burst, carbon compounds are released back into the environment, which stimulates fresh organic growth. Bacteriophages are useful in scientific research because they are harmless to humans and can be studied easily. These viruses can be a problem in industries that produce food and drugs by fermentation and depend on healthy bacteria. Some bacterial infections are becoming difficult to control with antibiotics, so there is a growing interest in the use of bacteriophages to treat infections in humans.
Host resistance
Innate immunity of animals
Animals, including humans, have many natural defences against viruses. Some are non-specific and protect against many viruses regardless of the type. This innate immunity is not improved by repeated exposure to viruses and does not retain a "memory" of the infection. The skin of animals, particularly its surface, which is made from dead cells, prevents many types of viruses from infecting the host. The acidity of the contents of the stomach destroys many viruses that have been swallowed. When a virus overcomes these barriers and enters the host, other innate defences prevent the spread of infection in the body. Signalling proteins called interferons are produced by the body when viruses are present, and these stop the viruses from reproducing by killing infected cells and their close neighbours. Inside cells, there are enzymes that destroy the RNA of viruses. This is called RNA interference. Some blood cells engulf and destroy other virus-infected cells.
Adaptive immunity of animals
Specific immunity to viruses develops over time and white blood cells called lymphocytes play a central role. Lymphocytes retain a "memory" of virus infections and produce many special molecules called antibodies. These antibodies attach to viruses and stop the virus from infecting cells. Antibodies are highly selective and attack only one type of virus. The body makes many different antibodies, especially during the initial infection. After the infection subsides, some antibodies remain and continue to be produced, usually giving the host lifelong immunity to the virus.
Plant resistance
Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. When they are infected, plants often produce natural disinfectants that destroy viruses, such as salicylic acid, nitric oxide and reactive oxygen molecules.
Resistance to bacteriophages
The major way bacteria defend themselves from bacteriophages is by producing enzymes which destroy foreign DNA. These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells.
Prevention and treatment of viral disease
Vaccines
Vaccines simulate a natural infection and its associated immune response, but do not cause the disease. Their use has resulted in the eradication of smallpox and a dramatic decline in illness and death caused by infections such as polio, measles, mumps and rubella. Vaccines are available to prevent over fourteen viral infections of humans and more are used to prevent viral infections of animals. Vaccines may consist of either live or killed viruses. Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity. In these people, the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce "designer" vaccines that only have the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. These vaccines are safer because they can never cause the disease.
Antiviral drugs
Since the mid-1980s, the development of antiviral drugs has increased rapidly, mainly driven by the AIDS pandemic. Antiviral drugs are often nucleoside analogues, which masquerade as DNA building blocks (nucleosides). When the replication of virus DNA begins, some of the fake building blocks are incorporated. This prevents DNA replication because the drugs lack the essential features that allow the formation of a DNA chain. When DNA production stops, the virus can no longer reproduce. Examples of nucleoside analogues are aciclovir for herpes virus infections and lamivudine for HIV and hepatitis B virus infections. Aciclovir is one of the oldest and most frequently prescribed antiviral drugs.
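The chain-termination idea lends itself to a simple illustration. The sketch below is a toy probabilistic model, not real pharmacology: it assumes only that an incorporated analogue lacks the attachment point for the next building block, so copying halts at that position; the function name and the 5% incorporation rate are invented for the example.

```python
import random

def replicate(template, analogue_fraction):
    """Toy model of DNA chain elongation with a chain-terminating
    nucleoside analogue: each position copies the template base, but
    with probability `analogue_fraction` a fake building block is
    incorporated and elongation stops there."""
    new_strand = []
    for base in template:
        if random.random() < analogue_fraction:
            new_strand.append("X")  # analogue: the chain terminates here
            break
        new_strand.append(base)
    return "".join(new_strand)

random.seed(1)
template = "ATGCGTACGTTAGC" * 10  # 140 bases
copy = replicate(template, analogue_fraction=0.05)
print(f"template length {len(template)}, copy length {len(copy)}")
# With a 5% chance of termination per base, full-length copies become
# vanishingly rare: 0.95**140 is roughly 0.0008.
```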
Other antiviral drugs target different stages of the viral life cycle. HIV is dependent on an enzyme called the HIV-1 protease for the virus to become infectious. There is a class of drugs called protease inhibitors, which bind to this enzyme and stop it from functioning.
Hepatitis C is caused by an RNA virus. In 80% of those infected, the disease becomes chronic, and they remain infectious for the rest of their lives unless they are treated. There are now effective treatments that use direct-acting antivirals. Treatments for chronic carriers of the hepatitis B virus have been developed by a similar strategy, using lamivudine and other antiviral drugs. In both diseases, the drugs stop the virus from reproducing; in older regimens that included interferon, the interferon also helped to eliminate the remaining infected cells.
HIV infections are usually treated with a combination of antiviral drugs, each targeting a different stage in the virus's life cycle. There are drugs that prevent the virus from attaching to cells, others that are nucleoside analogues and some poison the virus's enzymes that it needs to reproduce. The success of these drugs is proof of the importance of knowing how viruses reproduce.
Role in ecology
Viruses are the most abundant biological entity in aquatic environments; one teaspoon of seawater contains about ten million viruses, and they are essential to the regulation of saltwater and freshwater ecosystems. Most are bacteriophages, which are harmless to plants and animals. They infect and destroy the bacteria in aquatic microbial communities and this is the most important mechanism of recycling carbon in the marine environment. The organic molecules released from the bacterial cells by the viruses stimulate fresh bacterial and algal growth.
Microorganisms constitute more than 90% of the biomass in the sea. It is estimated that viruses kill approximately 20% of this biomass each day and that there are fifteen times as many viruses in the oceans as there are bacteria and archaea. They are mainly responsible for the rapid destruction of harmful algal blooms, which often kill other marine life.
The number of viruses in the oceans decreases further offshore and deeper into the water, where there are fewer host organisms.
Their effects are far-reaching; by increasing the amount of respiration in the oceans, viruses are indirectly responsible for reducing the amount of carbon dioxide in the atmosphere by approximately 3 gigatonnes of carbon per year.
Marine mammals are also susceptible to viral infections. In 1988 and 2002, thousands of harbour seals were killed in Europe by phocine distemper virus. Many other viruses, including caliciviruses, herpesviruses, adenoviruses and parvoviruses, circulate in marine mammal populations.
Viruses can also serve as an alternative food source for microorganisms which engage in virovory, supplying nucleic acids, nitrogen, and phosphorus through their consumption.
| Biology and health sciences | Concepts | Health |
4162694 | https://en.wikipedia.org/wiki/Litmus | Litmus | Litmus is a water-soluble mixture of different dyes extracted from lichens. It is often absorbed onto filter paper to produce one of the oldest forms of pH indicator, used to test materials for acidity. In an acidic medium, blue litmus paper turns red, while in a basic or alkaline medium, red litmus paper turns blue. In short, it is a dye and indicator which is used to place substances on a pH scale.
History
The word "litmus" comes from an Old Norse word for “moss used for dyeing”. About 1300, the Spanish physician Arnaldus de Villa Nova began using litmus to study acids and bases.
From the 16th century onwards, the blue dye was extracted from some lichens, especially in the Netherlands.
Natural sources
Litmus can be found in different species of lichens. The dyes are extracted from such species as Roccella tinctoria (South American), Roccella fuciformis (Angola and Madagascar), Roccella pygmaea (Algeria), Roccella phycopsis, Lecanora tartarea (Norway, Sweden), Variolaria dealbata, Ochrolechia parella, Parmotrema tinctorum, and Parmelia. Currently, the main sources are Roccella montagnei (Mozambique) and Dendrographa leucophoea (California).
Uses
The main use of litmus is to test whether a solution is acidic or basic, as blue litmus paper turns red under acidic conditions, and red litmus paper turns blue under basic or alkaline conditions, with the color change occurring over the pH range 4.5–8.3 at . Neutral litmus paper is purple. Wet litmus paper can also be used to test for water-soluble gases that affect acidity or basicity; the gas dissolves in the water and the resulting solution colors the litmus paper. For instance, ammonia gas, which is alkaline, turns red litmus paper blue. While all litmus paper acts as pH paper, the opposite is not true.
Litmus can also be prepared as an aqueous solution that functions similarly. Under acidic conditions, the solution is red, and under alkaline conditions, the solution is blue.
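Since the colour rules above amount to a threshold test, they can be summarised in a few lines of code. This is a minimal sketch assuming a sharp cut-off at the quoted 4.5–8.3 transition range; the function name and the sample pH values are illustrative.

```python
def litmus_colour(ph):
    """Classify the colour of litmus (paper or solution) at a given pH,
    using the transition range quoted in the text: red below pH 4.5,
    blue above pH 8.3, and intermediate purple in between."""
    if ph < 4.5:
        return "red"      # acidic
    if ph > 8.3:
        return "blue"     # basic/alkaline
    return "purple"       # within the transition range / neutral

for sample, ph in [("lemon juice", 2.4), ("pure water", 7.0), ("ammonia solution", 11.5)]:
    print(f"{sample} (pH {ph}): litmus turns {litmus_colour(ph)}")
```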
Chemical reactions other than acid–base can also cause a color change to litmus paper. For instance, chlorine gas turns blue litmus paper white; the litmus dye is bleached because hypochlorite ions are present. This reaction is irreversible, so the litmus is not acting as an indicator in this situation.
Chemistry
The litmus mixture has the CAS number 1393-92-6 and contains 10 to around 15 different dyes. All of the chemical components of litmus are likely to be the same as those of the related mixture known as orcein, but in different proportions. In contrast with orcein, the principal constituent of litmus has an average molecular mass of 3300. The acid-base indicator properties of litmus are due to a 7-hydroxyphenoxazone chromophore. Some fractions of litmus were given specific names, including erythrolitmin (or erythrolein), azolitmin, spaniolitmin, leucoorcein, and leucazolitmin. Azolitmin shows nearly the same effect as litmus.
A recipe for making litmus from lichens is outlined on a UC Santa Barbara website.
Mechanism
Red litmus contains a weak diprotic acid. When it is exposed to a basic compound, the hydrogen ions react with the added base. The conjugate base formed from the litmus acid has a blue color, so the wet red litmus paper turns blue in an alkaline solution.
| Physical sciences | Chemical methods | Chemistry |
4166493 | https://en.wikipedia.org/wiki/Hourglass | Hourglass | An hourglass (or sandglass, sand timer, or sand clock) is a device used to measure the passage of time. It comprises two glass bulbs connected vertically by a narrow neck that allows a regulated flow of a substance (historically sand) from the upper bulb to the lower one due to gravity. Typically, the upper and lower bulbs are symmetric so that the hourglass will measure the same duration regardless of orientation. The specific duration of time a given hourglass measures is determined by factors including the quantity and coarseness of the particulate matter, the bulb size, and the neck width.
Depictions of an hourglass as a symbol of the passage of time are found in art, especially on tombstones or other monuments, from antiquity to the present day. The form of a winged hourglass has been used as a literal depiction of the Latin phrase tempus fugit ("time flies").
History
Antiquity
The origin of the hourglass is unclear. Its predecessor the clepsydra, or water clock, is known to have existed in Babylon and Egypt as early as the 16th century BCE.
Middle Ages
There are no records of the hourglass existing in Europe prior to the Late Middle Ages; the first documented example dates from the 14th century, a depiction in the 1338 fresco Allegory of Good Government by Ambrogio Lorenzetti.
Use of the marine sandglass has been recorded since the 14th century. The written records about it were mostly from logbooks of European ships, and in the same period it appears in other records and lists of ships' stores. The earliest recorded reference that can be said with certainty to refer to a marine sandglass dates from 1345, in a receipt of Thomas de Stetesham, clerk of the King's ship La George, in the reign of Edward III of England.
Marine sandglasses were popular aboard ships, as they were the most dependable measurement of time while at sea. Unlike the clepsydra, hourglasses using granular materials were not affected by the motion of a ship and less affected by temperature changes (which could cause condensation inside a clepsydra). While hourglasses were insufficiently accurate to be compared against solar noon for the determination of a ship's longitude (as an error of just four minutes would correspond to one degree of longitude), they were sufficiently accurate to be used in conjunction with a chip log to enable the measurement of a ship's speed in knots.
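The four-minutes-per-degree figure follows directly from the Earth's rotation, as the short calculation below shows; the 60-nautical-mile conversion assumes a position at the equator.

```python
# Earth turns 360 degrees in 24 hours, so each degree of longitude
# corresponds to 24*60/360 = 4 minutes of time.
minutes_per_degree = 24 * 60 / 360
print(minutes_per_degree)  # 4.0

# A 4-minute timing error therefore shifts the computed longitude by a
# full degree; at the equator one degree spans about 60 nautical miles,
# since one nautical mile is roughly one arcminute of a great circle.
error_minutes = 4
error_degrees = error_minutes / minutes_per_degree
print(error_degrees * 60, "nautical miles at the equator")  # 60.0
```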
The hourglass also found popularity on land as an inexpensive alternative to mechanical clocks. Hourglasses were commonly seen in use in churches, homes, and work places to measure sermons, cooking time, and time spent on breaks from labor. Because they were being used for more everyday tasks, the model of the hourglass began to shrink. The smaller models were more practical and very popular as they made timing more discreet.
After 1500, the hourglass was not as widespread as it had been. This was due to the development of the mechanical clock, which became more accurate, smaller and cheaper, and made keeping time easier. The hourglass, however, did not disappear entirely. Although they became relatively less useful as clock technology advanced, hourglasses remained desirable in their design. The oldest known surviving hourglass resides in the British Museum in London.
Not until the 18th century did John Harrison come up with a marine chronometer that significantly improved on the stability of the hourglass at sea. Taking elements from the design logic behind the hourglass, he made a marine chronometer in 1761 that was able to measure the time on the journey from England to Jamaica to within five seconds.
Design
Little written evidence exists to explain why its external form is the shape that it is. The glass bulbs used, however, have changed in style and design over time. While the main designs have always been ampoule in shape, the bulbs were not always connected. The first hourglasses were two separate bulbs with a cord wrapped at their union that was then coated in wax to hold the piece together and let sand flow in between. It was not until 1760 that both bulbs were blown together to keep moisture out of the bulbs and to regulate the pressure within them, which affected the flow.
Material
While some early hourglasses actually did use silica sand as the granular material to measure time, many did not use sand at all. The material used in most bulbs was "powdered marble, tin/lead oxides, [or] pulverized, burnt eggshell". Over time, different textures of granular matter were tested to see which gave the most constant flow within the bulbs. It was later discovered that, for the perfect flow to be achieved, the granule diameter needed to be at least 1/12 of the width of the bulb's neck but no greater than 1/2 of it.
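Read as a ratio of granule diameter to neck width, the rule of thumb can be checked mechanically. The sketch below assumes that reading; the function name and the example dimensions are invented for illustration.

```python
def flow_ratio_ok(granule_diameter, neck_width):
    """Check the rule of thumb quoted above: for steady flow, the
    granule diameter should lie between 1/12 and 1/2 of the neck width."""
    ratio = granule_diameter / neck_width
    return 1 / 12 <= ratio <= 1 / 2

# A 6 mm neck with 1 mm granules (ratio 1/6) passes the rule,
# while fine 0.3 mm dust (ratio 1/20) falls below the lower bound.
print(flow_ratio_ok(1.0, 6.0))   # True
print(flow_ratio_ok(0.3, 6.0))   # False
```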
Practical uses
Hourglasses were an early dependable and accurate measure of time. The rate of flow of the sand is independent of the depth in the upper reservoir, and the instrument will not freeze in cold weather. From the 15th century onwards, hourglasses were being used in a range of applications at sea, in the church, in industry, and in cookery.
During the voyage of Ferdinand Magellan around the globe, 18 hourglasses from Barcelona were in the ship's inventory, after the trip had been authorized by King Charles I of Spain. It was the job of a ship's page to turn the hourglasses and thus provide the times for the ship's log. Noon was the reference time for navigation, which did not depend on the glass, as the sun would be at its zenith. A number of sandglasses could be fixed in a common frame, each with a different operating time, e.g. as in a four-way Italian sandglass likely from the 17th century, in the collections of the Science Museum, in South Kensington, London, which could measure intervals of quarter, half, three-quarters, and one hour (and which were used in churches, for priests and ministers to measure lengths of sermons).
Modern practical uses
While hourglasses are no longer widely used for keeping time, some institutions do maintain them. Both houses of the Australian Parliament use three hourglasses to time certain procedures, such as divisions.
Sand timers are sometimes included with board games such as Pictionary and Boggle that place time constraints on rounds of play.
Symbolic uses
Unlike most other methods of measuring time, the hourglass concretely represents the present as being between the past and the future, and this has made it an enduring symbol of time as a concept.
The hourglass, sometimes with the addition of metaphorical wings, is often used as a symbol that human existence is fleeting, and that the "sands of time" will run out for every human life. It was used thus on pirate flags, to evoke fear through imagery associated with death. In England, hourglasses were sometimes placed in coffins, and they have graced gravestones for centuries. The hourglass was also used in alchemy as a symbol for hour.
The former Metropolitan Borough of Greenwich in London used an hourglass on its coat of arms, symbolising Greenwich's role as the origin of Greenwich Mean Time (GMT). The district's successor, the Royal Borough of Greenwich, uses two hourglasses on its coat of arms.
Modern symbolic uses
Recognition of the hourglass as a symbol of time has survived its obsolescence as a timekeeper. For example, the American television soap opera Days of Our Lives (1965–present) displays an hourglass in its opening credits, with narration by Macdonald Carey: "Like sands through the hourglass, so are the days of our lives."
Various computer graphical user interfaces may change the pointer to an hourglass while the program is in the middle of a task, and may not accept user input. During that period of time, other programs, such as those open in other windows, may work normally. When such an hourglass does not disappear, it suggests a program is in an infinite loop and needs to be terminated, or is waiting for some external event (such as the user inserting a CD).
Unicode has an HOURGLASS symbol at U+231B (⌛).
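Both conventions are easy to demonstrate. The snippet below prints the U+231B character and sketches the busy-pointer idiom using tkinter's built-in "watch" cursor, which is rendered as an hourglass or spinner depending on the platform; the two-second delay is an arbitrary stand-in for real work.

```python
import tkinter as tk

# The Unicode HOURGLASS symbol, produced from its code point:
print("\u231b")  # ⌛

root = tk.Tk()
root.title("busy demo")
root.config(cursor="watch")                       # program is busy
root.after(2000, lambda: root.config(cursor=""))  # restore the default
root.mainloop()
```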
In the 21st century, the Extinction symbol came into use as a symbol of the Holocene extinction and climate crisis. The symbol features an hourglass to represent time "running out" for extinct and endangered species, and also to represent time "running out" for climate change mitigation.
Hourglass motif
Because of its symmetry, graphic signs resembling an hourglass are seen in the art of cultures which never encountered such objects. Vertical pairs of triangles joined at the apex are common in Native American art; both in North America, where it can represent, for example, the body of the Thunderbird or (in more elongated form) an enemy scalp, and in South America, where it is believed to represent a Chuncho jungle dweller. In Zulu textiles they symbolise a married man, as opposed to a pair of triangles joined at the base, which symbolise a married woman. Neolithic examples can be seen among Spanish cave paintings. Observers have even given the name "hourglass motif" to shapes which have more complex symmetry, such as a repeating circle and cross pattern from the Solomon Islands. Both the members of Project Tic Toc, from television series the Time Tunnel and the Challengers of the Unknown use symbols of the hourglass representing either time travel or time running out.
| Technology | Clocks | null |
17365007 | https://en.wikipedia.org/wiki/Galaxy%20filament | Galaxy filament | In cosmology, galaxy filaments are the largest known structures in the universe, consisting of walls of galactic superclusters. These massive, thread-like formations can commonly reach 50 to 80 megaparsecs—with the largest found to date being the Hercules–Corona Borealis Great Wall at around in length—and form the boundaries between voids. Due to the accelerating expansion of the universe, the individual clusters of gravitationally bound galaxies that make up galaxy filaments are moving away from each other at an accelerated rate; in the far future they will dissolve.
Galaxy filaments form the cosmic web and define the overall structure of the observable universe.
Discovery
Discovery of structures larger than superclusters began in the late 1980s. In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex. The CfA2 Great Wall was discovered in 1989, followed by the Sloan Great Wall in 2003.
In January 2013, researchers led by Roger Clowes of the University of Central Lancashire announced the discovery of a large quasar group, the Huge-LQG, which dwarfs previously discovered galaxy filaments in size. In November 2013, using gamma-ray bursts as reference points, astronomers discovered the Hercules–Corona Borealis Great Wall, an extremely large filament measuring more than 10 billion light-years across.
Filaments
The filament subtype of filaments has roughly similar major and minor axes in cross-section, along the lengthwise axis.
A short filament was proposed by Adi Zitrin and Noah Brosch—detected by identifying an alignment of star-forming galaxies—in the neighborhood of the Milky Way and the Local Group. The proposals of this filament and of a similar but shorter filament were the result of a study by McQuinn et al. (2014) based on distance measurements using the TRGB method.
Galaxy walls
The galaxy wall subtype of filaments has a significantly greater major axis than minor axis in cross-section, along the lengthwise axis.
A "Centaurus Great Wall" (or "Fornax Great Wall" or "Virgo Great Wall") has been proposed, which would include the Fornax Wall as a portion of it (visually created by the Zone of Avoidance) along with the Centaurus Supercluster and the Virgo Supercluster, also known as the Local Supercluster, within which the Milky Way galaxy is located (implying this to be the Local Great Wall).
A wall was proposed to be the physical embodiment of the Great Attractor, with the Norma Cluster as part of it. It is sometimes referred to as the Great Attractor Wall or Norma Wall. This suggestion was superseded by the proposal of a supercluster, Laniakea, that would encompass the Great Attractor, the Virgo Supercluster, and the Hydra–Centaurus Supercluster.
A wall was proposed in 2000 to lie at z=1.47 in the vicinity of radio galaxy B3 0003+387.
A wall was proposed in 2000 to lie at z=0.559 in the northern Hubble Deep Field (HDF North).
Map of nearest galaxy walls
Large Quasar Groups
Large quasar groups (LQGs) are some of the largest structures known. They are theorized to be protohyperclusters/proto-supercluster-complexes/galaxy filament precursors.
Supercluster complex
Pisces–Cetus Supercluster Complex
Maps of large-scale distribution
| Physical sciences | Large-scale structures | Astronomy |
17367147 | https://en.wikipedia.org/wiki/Underground%20lake | Underground lake | An underground lake (also known as a subterranean lake) is a lake underneath the surface of the Earth. Most naturally occurring underground lakes are found in areas of karst topography, where limestone or other soluble rock has been weathered away, leaving a cave where water can flow and accumulate.
Natural underground lakes are an uncommon hydrogeological feature. More often, groundwater gathers in formations such as aquifers or springs.
The largest subterranean lake in the world is in Dragon's Breath Cave in Namibia, with an area of almost ; the second largest is The Lost Sea, located inside Craighead Caverns in Tennessee, United States, with an area of
Characteristics
An underground lake is any body of water that is similar in size to a surface lake and exists mostly or entirely underground, though a precise scientific definition of what may be considered a "lake" is not yet well established. Underground lakes could be classified as either "lakes" or "ponds", depending on characteristics of size such as exposed surface area or depth.
The rarity of naturally occurring underground lakes can be attributed to the way water behaves underground. Below the surface of the Earth, the amount of pressure exerted on groundwater increases, causing it to be absorbed into the soil. The boundary at which there is sufficient subterranean pressure to completely saturate the ground with water is called the water table. The area above the water table is called the "unsaturated zone", while the area below it is called the "saturated zone". In the saturated zone, pressure becomes the primary force driving the flow of water. Lakes form primarily under the force of gravity – water is pulled down to the lowest point in an area, and gathers into a lake. Any water below the water table will be under pressure, and so does not form a lake; instead, it forms an aquifer.
Naturally occurring underground lakes can form in karst areas, where the weathering of soluble rocks leaves behind caverns and other openings in the earth. Surface water can find its way underground through these openings and pool up in larger caverns to form lakes.
Underground lakes can be formed by human processes, such as the flooding of mines. Two examples of these are lakes found in the slate mines at Blaenau Ffestiniog, such as Croesor quarry, and a lake in the Hallein Salt Mine in Austria.
Examples
Craighead Caverns, in Tennessee, United States
Dragon's Breath Cave, in Namibia
Kow Ata, in Turkmenistan
Moqua Well, in Nauru
Saint-Léonard underground lake, in Switzerland
Cross Cave, in Slovenia
Gallery
| Physical sciences | Hydrology | Earth science |
17369680 | https://en.wikipedia.org/wiki/Woolly%20mammoth | Woolly mammoth | The woolly mammoth (Mammuthus primigenius) is an extinct species of mammoth that lived from the Middle Pleistocene until its extinction in the Holocene epoch. It was one of the last in a line of mammoth species, beginning with the African Mammuthus subplanifrons in the early Pliocene. The woolly mammoth began to diverge from the steppe mammoth about 800,000 years ago in Siberia. Its closest extant relative is the Asian elephant. The Columbian mammoth (Mammuthus columbi) lived alongside the woolly mammoth in North America, and DNA studies show that the two hybridised with each other. Mammoth remains had long been known in Asia before they became known to Europeans. The origin of these remains was long a matter of debate and often explained as being remains of legendary creatures. The mammoth was identified as an extinct species of elephant by Georges Cuvier in 1796.
The appearance and behaviour of this species are among the best studied of any prehistoric animal because of the discovery of frozen carcasses in Siberia and North America, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. The woolly mammoth was roughly the same size as modern African elephants. Males reached shoulder heights between and weighed between . Females reached in shoulder heights and weighed between . A newborn calf weighed about . The woolly mammoth was well adapted to the cold environments present during glacial periods, including the last ice age. It was covered in fur, with an outer covering of long guard hairs and a shorter undercoat. The colour of the coat varied from dark to light. The ears and tail were short to minimise frostbite and heat loss. It had long, curved tusks and four molars, which were replaced six times during the lifetime of an individual. Its behaviour was similar to that of modern elephants, and it used its tusks and trunk for manipulating objects, fighting, and foraging. The diet of the woolly mammoth was mainly grasses and sedges. Individuals could probably reach the age of 60. Its habitat was the mammoth steppe, which stretched across northern Eurasia and North America.
The woolly mammoth coexisted with early humans, who used its bones and tusks for making art, tools, and dwellings, and hunted the species for food. The population of woolly mammoths declined at the end of the Late Pleistocene, with the last populations on mainland Siberia persisting until around 10,000 years ago, although isolated populations survived on St. Paul Island until 5,600 years ago and on Wrangel Island until 4,000 years ago. After its extinction, humans continued using its ivory as a raw material, a tradition that continues today. The completion of the mammoth genome project in 2015 sparked discussion about potentially reviving the woolly mammoth through various proposed methods, though none is currently feasible.
Taxonomy
Remains of woolly mammoths were long known by native Siberians and Native Americans, who had various ways of interpreting them. Remains later reached other parts of Asia and Europe, where they were also interpreted in various ways prior to modern science. The first woolly mammoth remains studied by scientists were examined by the British physician Hans Sloane in 1728 and consisted of fossilised teeth and tusks from Siberia. Sloane was the first to recognise that the remains belonged to elephants. Sloane turned to a biblical explanation for the presence of elephants in the Arctic, asserting that they had been buried during the Great Flood, and that Siberia had previously been tropical before a drastic climate change. Others interpreted Sloane's conclusion slightly differently, arguing the flood had carried elephants from the tropics to the Arctic. Sloane's paper was based on travellers' descriptions and a few scattered bones collected in Siberia and Britain. He discussed the question of whether or not the remains were from elephants, but drew no conclusions. In 1738, the German zoologist Johann Philipp Breyne argued that mammoth fossils represented some kind of elephant. He could not explain why a tropical animal would be found in such a cold area as Siberia, and suggested that they might have been transported there by the Great Flood.
In 1796, French biologist Georges Cuvier was the first to identify the woolly mammoth remains not as modern elephants transported to the Arctic, but as an entirely new species. He argued this species had gone extinct and no longer existed, a concept that was not widely accepted at the time. Following Cuvier's identification, the German naturalist Johann Friedrich Blumenbach gave the woolly mammoth its scientific name, Elephas primigenius, in 1799, placing it in the same genus as the Asian elephant (Elephas maximus). This name is Latin for "the first-born elephant". Cuvier coined the name Elephas mammonteus a few months later, but the former name was subsequently used. In 1828, the British naturalist Joshua Brookes used the name Mammuthus borealis for woolly mammoth fossils in his collection that he put up for sale, thereby coining a new genus name.
Where and how the word "mammoth" originated is unclear. According to the Oxford English Dictionary, it comes from an old Vogul word mēmoŋt, "earth-horn". It may be a version of mehemot, the Arabic version of the biblical word "behemoth". Another possible origin is Estonian, where maa means "earth", and mutt means "mole". The word was first used in Europe during the early 17th century, when referring to maimanto tusks discovered in Siberia. The American president Thomas Jefferson, who had a keen interest in palaeontology, was partially responsible for transforming the word "mammoth" from a noun describing the prehistoric elephant to an adjective describing anything of surprisingly large size. The first recorded use of the word as an adjective was in a description of a wheel of cheese (the "Cheshire Mammoth Cheese") given to Jefferson in 1802.
By the early 20th century, the taxonomy of extinct elephants was complex. In 1942, American palaeontologist Henry Fairfield Osborn's posthumous monograph on the Proboscidea was published, wherein he used various taxon names that had previously been proposed for mammoth species, including replacing Mammuthus with Mammonteus, as he believed the former name to be invalidly published. Mammoth taxonomy was simplified by various researchers from the 1970s onwards, all species were retained in the genus Mammuthus, and many proposed differences between species were instead interpreted as intraspecific variation.
Osborn chose two molars (found in Siberia and Osterode) from Blumenbach's collection at Göttingen University as the lectotype specimens for the woolly mammoth, since holotype designation was not practised in Blumenbach's time, and one of the specimens Blumenbach had described had been of the unrelated straight-tusked elephant (Palaeoloxodon antiquus). Soviet palaeontologist Vera Gromova further proposed the former should be considered the lectotype with the latter as paralectotype. Both molars were thought lost by the 1980s, and the more complete "Taimyr mammoth" found in Siberia in 1948 was therefore proposed as the neotype specimen in 1990. Resolutions to historical issues about the validity of the genus name Mammuthus and the type species designation of E. primigenius were also proposed. The paralectotype molar (specimen GZG.V.010.018) has since been located in the Göttingen University collection, identified by comparing it with Osborn's illustration of a cast.
Evolution
The earliest known members of the Proboscidea, the clade that contains modern elephants, existed about 55 million years ago around the Tethys Sea. The closest known relatives of the Proboscidea are the sirenians (dugongs and manatees) and the hyraxes (an order of small, herbivorous mammals). The family Elephantidae existed 6 million years ago in Africa and includes the modern elephants and the mammoths. Among many now-extinct clades, the mastodon (Mammut) is only a distant relative of the mammoths and part of the separate family Mammutidae, which diverged 25 million years before the mammoths evolved. The Asian elephant is the closest extant relative of the mammoths. The placement of the woolly mammoth among Late Pleistocene and modern proboscideans has been established in cladograms based on genetic data.
Within six weeks from 2005 to 2006, three teams of researchers independently assembled mitochondrial genome profiles of the woolly mammoth from ancient DNA, which allowed them to confirm the close evolutionary relationship between mammoths and Asian elephants (Elephas maximus). A 2015 DNA review confirmed Asian elephants as the closest living relative of the woolly mammoth. African elephants (Loxodonta africana) branched away from this clade around 6 million years ago, close to the time of the similar split between chimpanzees and humans. A 2010 study confirmed these relationships and suggested the mammoth and Asian elephant lineages diverged 5.8–7.8 million years ago, while African elephants diverged from an earlier common ancestor 6.6–8.8 million years ago.
In 2008, much of the woolly mammoth's chromosomal DNA was mapped. The analysis showed that the woolly mammoth and the African elephant are 98.55% to 99.40% identical. The team mapped the woolly mammoth's nuclear genome sequence by extracting DNA from the hair follicles of both a 20,000-year-old mammoth retrieved from permafrost and another that died 60,000 years ago. In 2012, proteins were confidently identified for the first time, collected from a 43,000-year-old woolly mammoth.
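The quoted identity percentages are, at heart, a count of matching positions between aligned sequences. The toy function below computes that statistic on short invented strings; real analyses operate on aligned whole genomes with gaps and quality filtering, which this sketch ignores.

```python
def percent_identity(seq_a, seq_b):
    """Percentage of matching positions between two aligned sequences of
    equal length (a toy version of the statistic behind the quoted
    98.55-99.40% mammoth/elephant figure)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("ATGCGTACGT", "ATGCGTACCT"))  # 90.0
```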
Since many remains of each species of mammoth are known from several localities, reconstructing the evolutionary history of the genus through morphological studies is possible. Mammoth species can be identified from the number of enamel ridges (or lamellar plates) on their molars; primitive species had few ridges, and the number increased gradually as new species evolved to feed on more abrasive food items. The crowns of the teeth became deeper in height and the skulls became taller to accommodate this. At the same time, the skulls became shorter from front to back to minimise the weight of the head. The short and tall skulls of woolly and Columbian mammoths (Mammuthus columbi) were the culmination of this process.
The first known members of the genus Mammuthus are the African species Mammuthus subplanifrons from the Pliocene, and M. africanavus from the Pleistocene. The former is thought to be the ancestor of later forms. Mammoths entered Europe around 3 million years ago. The earliest European mammoth has been named M. rumanus; it spread across Europe and China. Only its molars are known, which show that it had 8–10 enamel ridges. A population evolved 12–14 ridges, splitting off from and replacing the earlier type, becoming the southern mammoth (M. meridionalis) about 2–1.7 million years ago. In turn, this species was replaced by the steppe mammoth (M. trogontherii) with 18–20 ridges, which evolved in eastern Asia around 1 million years ago. Mammoths derived from M. trogontherii evolved molars with 26 ridges 400,000 years ago in Siberia and became the woolly mammoth. The earliest identified forms of woolly mammoth date to the Middle Pleistocene. Woolly mammoths entered North America about 100,000 years ago by crossing the Bering Strait.
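The ridge counts quoted above can be collected into a simple lookup, sketched below. The ranges are simplified from the text (the lower bound for M. primigenius is an assumption, since the text gives only the maximum of 26), and real specimens show intermediate and overlapping counts, so ridge number alone does not identify a fossil.

```python
# Illustrative lookup of the molar enamel-ridge (lamellar plate) counts
# quoted in the text; simplified, since intermediate forms occur.
RIDGE_RANGES = {
    "Mammuthus rumanus": (8, 10),
    "Mammuthus meridionalis": (12, 14),
    "Mammuthus trogontherii": (18, 20),
    "Mammuthus primigenius": (21, 26),  # assumption: text gives only "up to 26"
}

def candidate_species(ridge_count):
    """List the species whose quoted ridge range covers the count."""
    return [name for name, (lo, hi) in RIDGE_RANGES.items()
            if lo <= ridge_count <= hi]

print(candidate_species(19))  # ['Mammuthus trogontherii']
```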
Subspecies and hybridisation
Individuals and populations showing transitional morphologies between each of the mammoth species are known, and primitive and derived species coexisted until the former disappeared. The different species and their intermediate forms have been termed "chronospecies". Many taxa intermediate between M. primigenius and other mammoths have been proposed, but their validity is uncertain; depending on author, they are either considered primitive forms of an advanced species or advanced forms of a primitive species. Distinguishing and determining these intermediate forms has been called one of the most long-lasting and complicated problems in Quaternary palaeontology. Regional and intermediate species and subspecies such as M. intermedius, M. chosaricus, M. p. primigenius, M. p. jatzkovi, M. p. sibiricus, M. p. fraasi, M. p. leith-adamsi, M. p. hydruntinus, M. p. astensis, M. p. americanus, M. p. compressus, and M. p. alaskensis have been proposed.
A 2011 genetic study showed that two examined specimens of the Columbian mammoth were grouped within a subclade of woolly mammoths. This suggests that the two populations interbred and produced fertile offspring. A North American type formerly referred to as M. jeffersonii may be a hybrid between the two species. A 2015 study suggested that the animals in the range where M. columbi and M. primigenius overlapped formed a metapopulation of hybrids with varying morphology. It suggested that Eurasian M. primigenius had a similar relationship with M. trogontherii in areas where their range overlapped.
In 2021, DNA older than a million years was sequenced for the first time, from two mammoth teeth of Early Pleistocene age found in eastern Siberia. One tooth from Adycha (1–1.3 million years old) belonged to a lineage that was ancestral to later woolly mammoths, whereas the other from Krestovka (1.1–1.65 million years old) belonged to a previously unknown lineage. The study found that half of the ancestry of Columbian mammoths came from relatives of the Krestovka lineage (which probably represented the first mammoths that colonised the Americas) and the other half from the lineage of woolly mammoths, with the hybridisation happening more than 420,000 years ago, during the Middle Pleistocene. Later woolly and Columbian mammoths also interbred occasionally, and mammoth species may have hybridised routinely when brought together by glacial expansion. These findings were the first evidence of hybrid speciation from ancient DNA. The study also found that genetic adaptations to cold environments, such as hair growth and fat deposits, were already present in the steppe mammoth lineage and were not unique to woolly mammoths.
Description
The appearance of the woolly mammoth is probably the best known of any prehistoric animal due to the many frozen specimens with preserved soft tissue and depictions by contemporary humans in their art. The average shoulder height for males of the species has been estimated at with a weight of , with females being smaller like living elephants, with a shoulder height of and a weight of . This size is comparable to the largest living elephant species, the African bush elephant (Loxodonta africana), but is considerably smaller than the earlier Mammuthus meridionalis and Mammuthus trogontherii and the contemporary Mammuthus columbi. The woolly mammoth exhibited size variation throughout its range, with individuals from Western Europe being considerably larger (with adult males estimated to be on average tall and in weight) than those found in Siberia (with adult males of this population being estimated on average tall and in weight). One of the largest recorded woolly mammoths is the Siegsdorf specimen from Germany, with an estimated shoulder height of and an estimated body mass of . A newborn calf would have weighed about .
Few frozen specimens have preserved genitals, so the sex is usually determined through examination of the skeleton. The best indication of sex is the size of the pelvic girdle, since the opening that functions as the birth canal is always wider in females than in males. Though the mammoths on Wrangel Island were smaller than those of the mainland, their size varied, and they were not small enough to be considered "island dwarfs". The last woolly mammoth populations are claimed to have decreased in size and increased their sexual dimorphism, but this was dismissed in a 2012 study.
Woolly mammoths had several adaptations to the cold, most noticeably the layer of fur covering all parts of their bodies. Other adaptations to cold weather include ears that are far smaller than those of modern elephants; they were about long and across, and the ear of the 6- to 12-month-old frozen calf "Dima" was under long. The small ears reduced heat loss and frostbite, and the tail was short for the same reason, only long in the "Berezovka mammoth". The tail contained 21 vertebrae, whereas the tails of modern elephants contain 28–33. Their skin was no thicker than that of present-day elephants, between . They had a layer of fat up to thick under the skin, which helped to keep them warm. Woolly mammoths had broad flaps of skin under their tails which covered the anus; this is also seen in modern elephants.
Other characteristic features depicted in cave paintings include a large, high, single-domed head and a sloping back with a high shoulder hump; this shape resulted from the spinous processes of the back vertebrae decreasing in length from front to rear. These features were not present in juveniles, which had convex backs like Asian elephants. Another feature shown in cave paintings was confirmed by the discovery of a frozen specimen in 1924, an adult nicknamed the "Middle Kolyma mammoth", which was preserved with a complete trunk tip. Unlike the trunk lobes of modern elephants, the upper "finger" at the tip of the trunk had a long pointed lobe and was long, while the lower "thumb" was and was broader. The trunk of "Dima" was long, whereas the trunk of the adult "Liakhov mammoth" was long. The well-preserved trunk of a juvenile specimen nicknamed "Yuka" was described in 2015, and it was shown to possess a fleshy expansion a third above the tip. Rather than oval as the rest of the trunk, this part was ellipsoidal in cross section, and double the size in diameter. The feature was shown to be present in two other specimens, of different sexes and ages.
Coat
The coat consisted of an outer layer of long, coarse "guard hair", which was on the upper part of the body, up to in length on the flanks and underside, and in diameter, and a denser inner layer of shorter, slightly curly under-wool, up to long and in diameter. The hairs on the upper leg were up to long, and those of the feet were long, reaching the toes. The hairs on the head were relatively short, but longer on the underside and the sides of the trunk. The tail was extended by coarse hairs up to long, which were thicker than the guard hairs. The woolly mammoth likely moulted seasonally, and the heaviest fur was shed during spring.
Since mammoth carcasses were more likely to be preserved, possibly only the winter coat has been preserved in frozen specimens. Modern elephants have much less hair, though juveniles have a more extensive covering of hair than adults. This is thought to be for thermoregulation, helping them lose heat in their hot environments. Comparison between the over-hairs of woolly mammoths and extant elephants show that they did not differ much in overall morphology. Woolly mammoths had numerous sebaceous glands in their skin, which secreted oils into their hair; this would have improved the wool's insulation, repelled water, and given the fur a glossy sheen.
Preserved woolly mammoth fur is orange-brown, but this is believed to be an artefact from the bleaching of pigment during burial. The amount of pigmentation varied from hair to hair and within each hair. A 2006 study sequenced the Mc1r gene (which influences hair colour in mammals) from woolly mammoth bones. Two alleles were found: a dominant (fully active) and a recessive (partially active) one. In mammals, recessive Mc1r alleles result in light hair. Mammoths born with at least one copy of the dominant allele would have had dark coats, while those with two copies of the recessive allele would have had light coats. A 2011 study showed that light individuals would have been rare. A 2014 study instead indicated that the colouration of an individual varied from nonpigmented on the overhairs, bicoloured, nonpigmented and mixed red-brown guard hairs, and nonpigmented underhairs, which would give a light overall appearance.
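The one-locus dominance logic described above maps directly onto code. This is a deliberately simple sketch of the 2006 finding, with invented allele labels; as the 2011 and 2014 studies note, real coat colouration was more varied than a two-state model suggests.

```python
def coat_shade(allele_1, allele_2):
    """Map an Mc1r genotype to the coat shade described in the text:
    'D' = dominant (fully active) allele, 'r' = recessive (partially
    active) allele. At least one dominant copy gives a dark coat; two
    recessive copies give a light coat."""
    return "dark" if "D" in (allele_1, allele_2) else "light"

for genotype in [("D", "D"), ("D", "r"), ("r", "r")]:
    print(genotype, "->", coat_shade(*genotype))
```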
Dentition
Woolly mammoths had very long tusks (modified incisor teeth), which were more curved than those of modern elephants. The longest known male tusk is long (measured along the outside curve) and weighs , with a historical report of a long tusk found in Siberia, while the heaviest tusk is , suggested to have been when complete; and was a more typical size. Female tusks were smaller and thinner, and weighing . For comparison, the record for longest tusks of the African bush elephant is . The sheaths of the tusks were parallel and spaced closely. About a quarter of the length was inside the sockets. The tusks grew spirally in opposite directions from the base and continued in a curve until the tips pointed towards each other, sometimes crossing. In this way, most of the weight would have been close to the skull, and less torque would occur than with straight tusks.
The tusks were usually asymmetrical and showed considerable variation, with some tusks curving down instead of outwards and some being shorter due to breakage. Calves developed small milk tusks a few centimetres long at six months old, which were replaced by permanent tusks a year later. Tusk growth continued throughout life, but became slower as the animal reached adulthood. The tusks grew by each year. Some cave paintings show woolly mammoths with small or no tusks, but whether this reflected reality or was artistic license is unknown. Female Asian elephants have no tusks, but no fossil evidence indicates that any adult woolly mammoths lacked them.
Woolly mammoths had four functional molar teeth at a time—two in the upper jaw and two in the lower. About of the crown was within the jaw, and was above. The crown was continually pushed forwards and up as it wore down, comparable to a conveyor belt. The teeth had up to 26 separated ridges of enamel, which were themselves covered in "prisms" that were directed towards the chewing surface. These were quite wear-resistant and kept together by cementum and dentine. A mammoth had six sets of molars throughout a lifetime, which were replaced five times, though a few specimens with a seventh set are known. The latter condition could extend the lifespan of the individual, unless the tooth consisted of only a few plates. The first molars were about the size of those of a human, , the third were long, and the sixth were about long and weighed . The molars grew larger and contained more ridges with each replacement. The woolly mammoth is considered to have had the most complex molars of any elephant.
Palaeobiology
Adult woolly mammoths could effectively defend themselves from predators with their tusks, trunks and size; however, juveniles and weakened adults were vulnerable to pack hunters such as wolves, cave hyenas, and large felines. The tusks may have been used in intraspecies fighting, such as fights over territory or mates. Display of the large tusks of males could have been used to attract females and to intimidate rivals. Because of their curvature, the tusks were unsuitable for stabbing, but may have been used for hitting, as indicated by injuries to some fossil shoulder blades.
The very long hairs on the tail probably compensated for the shortness of the tail, enabling its use as a flyswatter, similar to the tail on modern elephants. As in modern elephants, the sensitive and muscular trunk worked as a limb-like organ with many functions. It was used for manipulating objects, and in social interactions. The well-preserved foot of the adult male "Yukagir mammoth" shows that the soles of the feet contained many cracks that would have helped in gripping surfaces during locomotion. Like modern elephants, woolly mammoths walked on their toes and had large, fleshy pads behind the toes.
Like modern elephants, woolly mammoths were likely very social and lived in matriarchal (female-led) family groups. This is supported by fossil assemblages and cave paintings showing groups, implying that most of their other social behaviours were likely similar to those of modern elephants. How many mammoths lived at one location at a time is unknown, as fossil deposits are often accumulations of individuals that died over long periods of time. The numbers likely varied by season and lifecycle events. Modern elephants can form large herds, sometimes consisting of multiple family groups, and these herds can include thousands of animals migrating together. Mammoths may have formed large herds more often, since animals that live in open areas are more likely to do this than those in forested areas. Trackways made by a woolly mammoth herd 11,300–11,000 years ago have been found in the St. Mary Reservoir in Canada, showing that in this case almost equal numbers of adults, subadults, and juveniles were found. The adults had a stride of , and the juveniles ran to keep up.
Analysis of dental enamel from woolly mammoths found in Poland has demonstrated that they were seasonally migratory. Recurring shifts in δ¹⁸O and ⁸⁷Sr/⁸⁶Sr found in layers of the enamel correspond to seasonal variations and indicate that Polish woolly mammoths inhabited southern Poland during winter but grazed the Polish midlands during summer.
Adaptations to cold
The woolly mammoth was probably the most specialised member of the family Elephantidae. In addition to their fur, they had lipopexia (fat storage) in their neck and withers, for times when food availability was insufficient during winter, and their first three molars grew more quickly than in the calves of modern elephants. The expansion identified on the trunk of "Yuka" and other specimens was suggested to function as a "fur mitten"; the trunk tip was not covered in fur, but was used for foraging during winter, and could have been heated by curling it into the expansion. The expansion could be used to melt snow if a shortage of water to drink existed, as melting it directly inside the mouth could disturb the thermal balance of the animal. As in reindeer and musk oxen, the haemoglobin of the woolly mammoth was adapted to the cold, with three mutations to improve oxygen delivery around the body and prevent freezing. This feature may have helped the mammoths to live at high latitudes.
In a 2015 study, high-quality genome sequences from three Asian elephants and two woolly mammoths were compared. About 1.4 million DNA nucleotide differences were found between mammoths and elephants, which affect the sequence of more than 1,600 proteins. Differences were noted in genes for a number of aspects of physiology and biology that would be relevant to Arctic survival, including development of skin and hair, storage and metabolism of adipose tissue, and perceiving temperature. Genes related to both sensing temperature and transmitting that sensation to the brain were altered. One of the heat-sensing genes encodes a protein, TRPV3, found in skin, which affects hair growth. When inserted into human cells, the mammoth's version of the protein was found to be less sensitive to heat than the elephant's. This is consistent with a previous observation that mice lacking active TRPV3 are likely to spend more time in cooler cage locations than wild-type mice, and have wavier hair. Several alterations in circadian clock genes were found, perhaps needed to cope with the extreme polar variation in length of daylight. Similar mutations are known in other Arctic mammals, such as reindeer.
A 2019 study of the woolly mammoth mitogenome suggests that the species had metabolic adaptations related to extreme environments. A genetic study from 2023 found that the woolly mammoth had already acquired a broad range of genes associated with the development of skin and hair, fat storage, metabolism, and the immune system by the time the species appeared, and that these continued to evolve within the last 700,000 years, including a gene that resulted in mammoths of the Late Quaternary having small ears.
Diet
Food at various stages of digestion has been found in the intestines of several woolly mammoths, giving a good picture of their diet. Woolly mammoths sustained themselves on plant food, mainly grasses and sedges, which were supplemented with herbaceous plants, flowering plants, shrubs, mosses, and tree matter. The composition and exact varieties differed from location to location. Woolly mammoths needed a varied diet to support their growth, like modern elephants. An adult of 6 tonnes would need to eat daily, and may have foraged as long as 20 hours every day. The two-fingered tip of the trunk was probably adapted for picking up the short grasses of the last ice age (Quaternary glaciation, 2.58 million years ago to present) by wrapping around them, whereas modern elephants curl their trunks around the longer grass of their tropical environments. The trunk could be used for pulling off large grass tufts, delicately picking buds and flowers, and tearing off leaves and branches where trees and shrubs were present. The "Yukagir mammoth" had ingested plant matter that contained spores of dung fungus. Isotope analysis shows that woolly mammoths fed mainly on C3 plants, unlike horses and rhinos.
Scientists identified milk in the stomach and faecal matter in the intestines of the mammoth calf "Lyuba". The faecal matter may have been eaten by "Lyuba" to promote development of the intestinal microbes necessary for digestion of vegetation, as is the case in modern elephants. An isotope analysis of woolly mammoths from Yukon showed that the young nursed for at least 3 years and were weaned and gradually changed to a diet of plants when they were 2–3 years old. This is later than in modern elephants and may be due to a higher risk of predator attack or difficulty in obtaining food during the long periods of winter darkness at high latitudes.
The molars were adapted to their diet of coarse tundra grasses, with more enamel plates and a higher crown than their earlier, southern relatives. The woolly mammoth chewed its food by using its powerful jaw muscles to move the mandible forwards and close the mouth, then backwards while opening; the sharp enamel ridges thereby cut across each other, grinding the food. The ridges were wear-resistant to enable the animal to chew large quantities of food, which often contained grit. Woolly mammoths may have used their tusks as shovels to clear snow from the ground and reach the vegetation buried below, and to break ice to drink. This is indicated on many preserved tusks by flat, polished sections up to long, as well as scratches, on the part of the surface that would have reached the ground (especially at their outer curvature). The tusks were used for obtaining food in other ways, such as digging up plants and stripping off bark.
Life history
The lifespan of mammals is related to their size. Since modern elephants can reach the age of 60 years, the same is thought to be true for woolly mammoths, which were of a similar size. The age of a mammoth can be roughly determined by counting the growth rings of its tusks when viewed in cross section, but this does not account for its early years, as these are represented by the tips of the tusks, which are usually worn away. In the remaining part of the tusk, each major line represents a year, and weekly and daily ones can be found in between. Dark bands correspond to summers, so determining the season in which a mammoth died is possible. The growth of the tusks slowed when foraging became harder, for example during winter, during disease, or when a male was banished from the herd (male elephants live with their herds until about the age of 10). Mammoth tusks dating to the harshest period of the last glaciation, 25,000–20,000 years ago, show slower growth rates.
Woolly mammoths continued growing past adulthood, like other elephants. Unfused limb bones show that males grew until they reached the age of 40, and females grew until they were 25. The frozen calf "Dima" was tall when it died at the age of 6–12 months. At this age, the second set of molars would be in the process of erupting, and the first set would be worn out at 18 months of age. The third set of molars lasted for 10 years, and this process was repeated until the final, sixth set emerged when the animal was 30 years old. When the last set of molars was worn out, the animal would be unable to chew and feed, and it would die of starvation. A study of North American mammoths found that they often died during winter or spring, the hardest times for northern animals to survive.
Examination of preserved calves shows that they were all born during spring and summer, and since modern elephants have gestation periods of 21–22 months, the mating season probably ran from summer to autumn. δ¹⁵N isotopic analysis of the teeth of "Lyuba" has traced its prenatal development, and indicates that its gestation period was similar to that of a modern elephant and that it was born in spring.
The best-preserved head of a frozen adult specimen, that of a male nicknamed the "Yukagir mammoth", shows that woolly mammoths had temporal glands between the ear and the eye. This feature indicates that, like bull elephants, male woolly mammoths entered "musth", a period of heightened aggressiveness. The glands are used especially by males to produce an oily substance with a strong smell called temporin. Their fur may have helped in spreading the scent further. This was confirmed by a 2023 study that compared the testosterone level in the dentine of an adult African elephant tusk with that of a male woolly mammoth.
Palaeopathology
Evidence of several different bone diseases has been found in woolly mammoths. The most common of these was osteoarthritis, found in 2% of specimens. One specimen from Switzerland had several fused vertebrae as a result of this condition. The "Yukagir mammoth" had suffered from spondylitis in two vertebrae, and osteomyelitis is known from some specimens. Several specimens have healed bone fractures, showing that the animals had survived these injuries. Likewise, spondyloarthropathy has also been identified in woolly mammoth remains. Extra cervical vertebrae have been found in 33% of specimens from the North Sea region, probably due to a drop in population size and subsequent inbreeding. Vertebral lesions in woolly mammoths have been speculated to have resulted from nutritional stress. Parasitic flies and protozoa were identified in the gut of the calf "Dima".
Distortion in the molars is the most common health problem found in woolly mammoth fossils. Sometimes, the replacement was disrupted, and the molars were pushed into abnormal positions, but some animals are known to have survived this. Teeth from Britain showed that 2% of specimens had periodontal disease, with half of these containing caries. The teeth sometimes had cancerous growths.
Distribution and habitat
The habitat of the woolly mammoth is known as "mammoth steppe" or "tundra steppe". This environment stretched across northern Asia, many parts of Europe, and the northern part of North America during the last ice age. It was similar to the grassy steppes of modern Russia, but the flora was more diverse, abundant, and grew faster. Grasses, sedges, shrubs, and herbaceous plants were present, and scattered trees were mainly found in southern regions. This habitat was not dominated by ice and snow, as is popularly believed, since these regions are thought to have been high-pressure areas at the time. The habitat of the woolly mammoth supported other grazing herbivores such as the woolly rhinoceros, wild horses, and bison. The Altai-Sayan assemblages are the modern biomes most similar to the "mammoth steppe". A 2014 study concluded that forbs (a group of herbaceous plants) were more important in the steppe-tundra than previously acknowledged, and that they were a primary food source for the ice-age megafauna.
The southernmost woolly mammoth specimen known is from the Shandong province of China and is 33,000 years old. The southernmost European remains are from the Depression of Granada in Spain and are of roughly the same age. DNA studies have helped determine the phylogeography of the woolly mammoth. A 2008 DNA study showed two distinct groups of woolly mammoths: one that became extinct 45,000 years ago and another one that became extinct 12,000 years ago. The two groups are speculated to be divergent enough to be characterised as subspecies. The group that became extinct earlier stayed in the middle of the high Arctic, while the group with the later extinction had a much wider range. Recent stable isotope studies of Siberian and New World mammoths have shown there were differences in climatic conditions on either side of the Bering land bridge (Beringia), with Siberia being more uniformly cold and dry throughout the Late Pleistocene. During the Younger Dryas age, woolly mammoths briefly expanded into north-east Europe, whereafter the mainland populations became extinct.
A 2008 genetic study showed that some of the woolly mammoths that entered North America through the Bering land bridge from Asia migrated back about 300,000 years ago and had replaced the previous Asian population by about 40,000 years ago, not long before the entire species became extinct. Fossils of woolly mammoths and Columbian mammoths have been found together in a few localities of North America, including the Hot Springs sinkhole of South Dakota where their regions overlapped. It is unknown whether the two species were sympatric and lived there simultaneously, or if the woolly mammoths may have entered these southern areas during times when Columbian mammoth populations were absent there.
Relationship with humans
Modern humans coexisted with woolly mammoths during the Upper Palaeolithic period when humans entered Europe from Africa between 30,000 and 40,000 years ago. Before this, Neanderthals had coexisted with mammoths during the Middle Palaeolithic and already used mammoth bones for tool-making and building materials. Woolly mammoths were very important to ice age humans, and human survival may have depended on the mammoth in some areas. Evidence for such coexistence was not recognised until the 19th century. William Buckland published his discovery of the Red Lady of Paviland skeleton in 1823, which was found in a cave alongside woolly mammoth bones, but he mistakenly denied that these were contemporaries. In 1864, Édouard Lartet found an engraving of a woolly mammoth on a piece of mammoth ivory in the Abri de la Madeleine cave in Dordogne, France. The engraving was the first widely accepted evidence for the coexistence of humans with prehistoric extinct animals and is the first contemporary depiction of such a creature known to modern science.
The woolly mammoth is the third-most depicted animal in ice age art, after horses and bison, and these images were produced between 35,000 and 11,500 years ago. Today, more than 500 depictions of woolly mammoths are known, in media ranging from cave paintings and engravings on the walls of 46 caves in Russia, France, and Spain to engravings and sculptures (termed "portable art") made from ivory, antler, stone and bone. Cave paintings of woolly mammoths exist in several styles and sizes. The French Rouffignac Cave has the most depictions, 159, and some of the drawings are more than in length. Other notable caves with mammoth depictions are the Chauvet Cave, Les Combarelles Cave, and Font-de-Gaume. A depiction in the Cave of El Castillo may instead show Palaeoloxodon, the "straight-tusked elephant".
"Portable art" can be more accurately dated than cave art since it is found in the same deposits as tools and other ice age artefacts. The largest collection of portable mammoth art, consisting of 62 depictions on 47 plaques, was found in the 1960s at an excavated open-air camp near Gönnersdorf in Germany. A correlation between the number of mammoths depicted and the species that were most often hunted does not seem to exist, since reindeer bones are the most frequently found animal remains at the site. Two spear throwers shaped as woolly mammoths have been found in France. Some portable mammoth depictions may not have been produced where they were discovered, but could have moved around by ancient trading.
Exploitation
Woolly mammoth bones were used as construction material for dwellings by both Neanderthals and modern humans during the ice age. More than 70 such dwellings are known, mainly from the East European Plain. The bases of the huts were circular, and ranged from . The arrangement of dwellings varied, and ranged from apart, depending on location. Large bones were used as foundations for the huts, tusks for the entrances, and the roofs were probably skins held in place by bones or tusks. Some huts had floors that extended below ground. Some of the bones used for materials may have come from mammoths killed by humans, but the state of the bones, and the fact that bones used to build a single dwelling varied by several thousands of years in age, suggests that they were collected remains of long-dead animals. Woolly mammoth bones were made into various tools, furniture, and musical instruments. Large bones, such as shoulder blades, were used to cover dead human bodies during burial.
Woolly mammoth ivory was used to create art objects. Several Venus figurines, including the Venus of Brassempouy and the Venus of Lespugue, were made from this material. Weapons made from ivory, such as daggers, spears, and a boomerang, are known. A 2019 study found that woolly mammoth ivory was the most suitable bony material for the production of big game projectile points during the Late Pleistocene. To be able to process the ivory, the large tusks had to be chopped, chiseled, and split into smaller, more manageable pieces. Some ivory artefacts show that tusks had been straightened, though how this was achieved is unknown.
Woolly mammoths were an important food source for both modern humans and Neanderthals. Several woolly mammoth specimens show evidence of being butchered by humans, which is indicated by breaks, cut marks, and associated stone tools. How much prehistoric humans relied on woolly mammoth meat is unknown, since many other large herbivores were available. Many mammoth carcasses may have been scavenged by humans rather than hunted. Some cave paintings show woolly mammoths in structures interpreted as pitfall traps. Few specimens show direct, unambiguous evidence of having been hunted by humans. A Siberian specimen with a spearhead embedded in its shoulder blade shows that a spear had been thrown at it with great force.
At a site in southern Poland that contains bones from over 100 mammoths, stone spear tips have been found embedded in bones, and many stone spear points in the site were damaged from impact against mammoth bones, indicating that mammoths were the major prey for people at the time. A specimen from the Mousterian age of Italy shows evidence of spear hunting by Neanderthals. The juvenile specimen nicknamed "Yuka" is the first frozen mammoth with evidence of human interaction. It shows evidence of having been killed by a large predator, and of having been scavenged by humans shortly after. Some of its bones had been removed, and were found nearby. A site near the Yana River in Siberia has revealed several specimens with evidence of human hunting; the finds were interpreted to show that the animals were not hunted intensively, but perhaps mainly when ivory was needed. Two woolly mammoths from Wisconsin, the "Schaefer" and "Hebior mammoths", show evidence of having been butchered by Paleo-Indians.
Extinction
Most woolly mammoth populations disappeared during the late Pleistocene and mid-Holocene, coinciding with the extinction of most North American Pleistocene megafauna (including the Columbian mammoth) as well as the extinctions or extirpations of steppe-associated fauna of Eurasia that coexisted with the mammoth species (such as the woolly rhinoceros, the cave lion, reindeer, saiga, the Arctic fox, and the steppe lemming). This extinction formed part of the Late Pleistocene extinctions, which began 40,000 years ago and peaked between 14,000 and 11,500 years ago. Scientists are divided over whether hunting or climate change, which led to the shrinkage of its habitat, was the main factor that contributed to the extinction of the woolly mammoth, or whether it was due to a combination of the two. Evidence from tusk-derived δ¹⁸O values suggests that climate change was not the direct cause of the extinction of Eurasian woolly mammoths, as δ¹⁸O did not vary significantly between areas where woolly mammoths died out and areas where they persisted longer into the Holocene.
Whatever the cause, large mammals are generally more vulnerable than smaller ones due to their smaller population size and low reproduction rates. Climatic patterns during the Last Interglacial (130–116 kyr BP) suggest that woolly mammoths and associated steppe faunas were sensitive to contractions of steppe-tundra habitats since they were adapted to cold, dry, and open environments. Genetic results and climatic models both indicate that habitats suitable for the woolly mammoth in Eurasia contracted during the interglacial period, which would have caused population bottleneck effects that restricted its range to a few northern areas. As the climate favoured colder environments, however, woolly mammoth populations rebounded during later glacial periods.
The Last Glacial Period of the late Pleistocene is considered that of the maximum geographic distribution of the woolly mammoth, occupying most of Europe, northern Asia, and northern North America, although several barriers such as ice sheets, high mountain chains, deserts, year-round water surfaces, and other grasslands prevented them from spreading farther. Towards the end of the Last Glacial period, from around 15,000 years ago, the mammoth steppe that the woolly mammoth inhabited was gradually replaced across most of Siberia with wet tundra and boreal and temperate forest, which the woolly mammoth would have found to be unfavourable habitat.
Different woolly mammoth populations did not die out simultaneously across their range, but gradually became extinct over time. The dynamics of different woolly mammoth populations varied as they experienced very different magnitudes of climatic and human impacts over time, suggesting that extinction causes would have varied by population. Most populations disappeared between 14,000 and 10,000 years ago. In Britain, woolly mammoths were still present between 14,500 and 14,000 BP. The youngest fossils of the mainland population are from the Kyttyk Peninsula of Siberia and date to 9,650 years ago.
A small population of woolly mammoths survived on St. Paul Island, Alaska, well into the Holocene, with their extinction on the island being tightly constrained to around 5,600 years ago based on direct dating of bones and environmental proxies. This population is suggested to have gone extinct as a result of sea-level rise and increasing dryness of the island reducing freshwater availability, along with mammoth activity degrading the few freshwater sources on the island. The last population known from fossils remained on Wrangel Island in the Arctic Ocean until 4,000 years ago, well into the start of human civilization and several centuries subsequent to the construction of the Great Pyramid and Sphinx of ancient Egypt. However, some studies have asserted that environmental DNA supports the existence of small mainland populations that died out at around the same time as their island counterparts; two studies in 2021 found that, based on environmental DNA, mammoths survived in the Yukon until about 5,700 years ago, roughly concurrent with the St. Paul population, and on the Taymyr Peninsula of Siberia until 3,900 to 4,100 years ago, roughly concurrent with the Wrangel population. The Taymyr Peninsula, with its drier habitat, may have served as a refugium for the mammoth steppe, supporting mammoths and other widespread Ice Age mammals such as wild horses (Equus sp.). However, ancient environmental DNA in cold environments can be reworked from older sediments into younger sediments that clearly post-date extinction, raising doubt about the validity of these dates.
DNA sequencing of remains of two mammoths, one from Siberia 44,800 years BP and one from Wrangel Island 4,300 years BP, indicates two major population crashes: one around 280,000 years ago, from which the population recovered, and a second about 12,000 years ago, near the ice age's end, from which it did not. The Wrangel Island mammoths were isolated for 5,000 years by rising post-ice-age sea level, and resultant inbreeding in their small population of about 300 to 1,000 individuals led to a 20% to 30% loss of heterozygosity and a 65% loss in mitochondrial DNA diversity. The population seems to have subsequently been stable, without suffering further significant loss of genetic diversity. Genetic evidence thus implies the extinction of this final population was sudden, rather than the culmination of a gradual decline.
Before their extinction, the Wrangel Island mammoths had accumulated numerous genetic defects due to their small population; in particular, a number of genes for olfactory receptors and urinary proteins became nonfunctional, possibly because they had lost their selective value in the island environment. It is not clear whether these genetic changes contributed to their extinction. It has been proposed that these changes are consistent with the concept of genomic meltdown; however, this has been contested by later analysis of the genomes of some of the last mammoths on Wrangel Island, which suggests that highly deleterious mutations had been significantly purged to levels lower than those in mainland populations, though the level of moderately deleterious mutations was elevated. The sudden disappearance of an apparently stable population may be more consistent with a catastrophic event, possibly related to climate (such as icing of the snowpack), disease, or a human hunting expedition.
The disappearance is relatively close in time to the first evidence of humans on the island, though other authors have suggested that woolly mammoths were almost certainly extinct for several centuries prior to the presence of humans on Wrangel Island (which dates to around 3,600 years ago). The woolly mammoths of eastern Beringia (modern Alaska and Yukon) had similarly died out about 13,300 years ago, soon (roughly 1,000 years) after the first appearance of humans in the area, which parallels the fate of all the other late Pleistocene proboscideans (mammoths, gomphotheres, and mastodons), as well as most of the rest of the megafauna, of the Americas. In contrast, the St. Paul Island mammoth population apparently died out significantly before human arrival.
Changes in climate shrank suitable mammoth habitat by roughly 90% between 42,000 and 6,000 years ago. Woolly mammoths survived an even greater loss of habitat at the end of the Penultimate Glacial Period and the onset of the Last Interglacial, approximately 125,000 years ago. Studies of an 11,300–11,000-year-old trackway in south-western Canada showed that M. primigenius was in decline while coexisting with humans, since far fewer tracks of juveniles were identified than would be expected in a normal herd. It has been suggested that human hunting exerted significant pressure on woolly mammoth populations for thousands of years across their range, making their abundance considerably lower than it would otherwise have been even before their range declined, and likely hastening the collapse of their range in response to climate change.
Fossil specimens
Woolly mammoth fossils have been found in many different types of deposits, including former rivers and lakes, and in "Doggerland" in the North Sea, which was dry at times during the ice age. Such fossils are usually fragmentary and contain no soft tissue. Accumulations of modern elephant remains have been termed "elephants' graveyards", as these sites were erroneously thought to be where old elephants went to die. Similar accumulations of woolly mammoth bones have been found; these are thought to be the result of individuals dying near or in the rivers over thousands of years, and their bones eventually being brought together by the streams. Some accumulations are thought to be the remains of herds that died together at the same time, perhaps due to flooding. Natural traps, such as kettle holes, sink holes, and mud, have trapped mammoths in separate events over time.
Apart from frozen remains, the only soft tissue known is from a specimen that was preserved in a petroleum seep in Starunia, Poland. Frozen remains of woolly mammoths have been found in the northern parts of Siberia and Alaska, with far fewer finds in the latter. Such remains are mostly found above the Arctic Circle, in permafrost. Soft tissue apparently was less likely to be preserved between 30,000 and 15,000 years ago, perhaps because the climate was milder during that period. Most specimens have partially degraded before discovery, due to exposure or to being scavenged. This "natural mummification" required the animal to have been buried rapidly in liquid or semisolids such as silt, mud, and icy water, which then froze.
The presence of undigested food in the stomach and seed pods still in the mouth of many of the specimens suggests neither starvation nor exposure is likely. The maturity of this ingested vegetation places the time of death in autumn rather than in spring, when flowers would be expected. The animals may have fallen through ice into small ponds or potholes, entombing them. Many are certainly known to have been killed in rivers, perhaps through being swept away by floods. In one location, by the Byoryolyokh River in Yakutia in Siberia, more than 8,000 bones from at least 140 mammoths have been found in a single spot, apparently having been swept there by the current.
Frozen specimens
Between 1692 and 1806, a handful of reports of frozen mammoth remains with soft tissue reached Europe, though none were collected during that time. While frozen woolly mammoth carcasses had been excavated by Europeans as early as 1728, the first fully documented specimen was discovered near the delta of the Lena River in 1799 by Ossip Shumachov, a Siberian hunter. While in Yakutsk in 1806, Michael Friedrich Adams heard about the frozen mammoth. Adams recovered the entire skeleton, apart from the tusks, which Shumachov had already sold, and one foreleg, most of the skin, and nearly of hair. During his return voyage, he purchased a pair of tusks that he believed were the ones that Shumachov had sold. Adams brought all of this to the Zoological Museum of the Zoological Institute of the Russian Academy of Sciences, and the task of mounting the skeleton was given to Wilhelm Gottlieb Tilesius. This was one of the first attempts at reconstructing the skeleton of an extinct animal. Most of the reconstruction is correct, but Tilesius placed each tusk in the opposite socket, so that they curved outward instead of inward. The error was not corrected until 1899, and the correct placement of mammoth tusks was still a matter of debate into the 20th century.
The 1901 excavation of the "Berezovka mammoth" is the best documented of the early finds. It was discovered at the Siberian Berezovka River (after a dog had noticed its smell), and the Russian authorities financed its excavation. The entire expedition took 10 months, and the specimen had to be cut to pieces before it could be transported to St. Petersburg. Most of the skin on the head as well as the trunk had been scavenged by predators, and most of the internal organs had rotted away. It was identified as a 35- to 40-year-old male, which had died 35,000 years ago. The animal still had grass between its teeth and on the tongue, showing that it had died suddenly. One of its shoulder blades was broken, which may have happened when it fell into a crevasse. It may have died of asphyxiation, as indicated by its erect penis. One third of a replica of the mammoth in the Museum of Zoology of St. Petersburg is covered in skin and hair of the "Berezovka mammoth".
By 1929, the remains of 34 mammoths with frozen soft tissues (skin, flesh, or organs) had been documented. Only four of them were relatively complete. Since then, about that many more have been found. In most cases, the flesh showed signs of decay before its freezing and later desiccation. Since 1860, Russian authorities have offered rewards of up to for finds of frozen woolly mammoth carcasses. Often, such finds were kept secret due to superstition. Several carcasses have been lost because they were not reported, and one was fed to dogs. Despite the rewards, native Yakuts were also reluctant to report mammoth finds to the authorities due to bad treatment of them in the past. In more recent years, scientific expeditions have been devoted to finding carcasses instead of relying solely on chance encounters. The most famous frozen specimen from Alaska is a calf nicknamed "Effie", which was found in 1948. It consists of the head, the trunk, and a foreleg and is about 25,000 years old.
In 1977, the well-preserved carcass of a seven- to eight-month-old woolly mammoth calf named "Dima" was discovered. This carcass was recovered near a tributary of the Kolyma River in northeastern Siberia. This specimen weighed about at death and was high and long. Radiocarbon dating determined that "Dima" died about 40,000 years ago. Its internal organs are similar to those of modern elephants, but its ears are only one-tenth the size of those of an African elephant of similar age. A less complete juvenile, nicknamed "Mascha", was found on the Yamal Peninsula in 1988. It was 3–4 months old, and a laceration on its right foot may have been the cause of death. It is the westernmost frozen mammoth found.
In 1997, a piece of mammoth tusk was discovered protruding from the tundra of the Taymyr Peninsula in Siberia, Russia. In 1999, this 20,380-year-old carcass and 25 tons of surrounding sediment were transported by an Mi-26 heavy lift helicopter to an ice cave in Khatanga. The specimen was nicknamed the "Jarkov mammoth". In October 2000, the careful defrosting operations in this cave began with the use of hair dryers to keep the hair and other soft tissues intact.
In 2002, a well-preserved carcass was discovered near the Maxunuokha River in northern Yakutia, which was recovered during three excavations. This adult male specimen was called the "Yukagir mammoth" and is estimated to have lived around 18,560 years ago, been tall at the shoulder, and weighed between 4 and 5 tonnes. It is one of the best-preserved mammoths ever found due to the almost complete head, covered in skin, but without the trunk. Some postcranial remains were found, some with soft tissue.
In 2007, the carcass of a female calf nicknamed "Lyuba" was discovered near the Yuribey River, where it had been buried for 41,800 years. By cutting a section through a molar and analysing its growth lines, researchers found that the animal had died at the age of one month. The mummified calf weighed , was high and in length. At the time of discovery, its eyes and trunk were intact and some fur remained on its body. Its organs and skin are very well preserved. "Lyuba" is believed to have been suffocated by mud in a river that its herd was crossing. After death, its body may have been colonised by bacteria that produce lactic acid, which "pickled" it, preserving the mammoth in a nearly pristine state.
In 2010, a juvenile nicknamed "Yuka" was found in Siberia, the first known adolescent female, estimated to have been 2.5 years old at death. It had man-made cut marks, and its skull and pelvis had been removed prior to discovery, but were found nearby. After being discovered, the skin of "Yuka" was prepared to produce a taxidermy mount. In 2019, a group of researchers managed to obtain signs of biological activity after transferring nuclei from "Yuka" into mouse oocytes.
In 2013, a well-preserved carcass was found on Maly Lyakhovsky Island, one of the islands in the New Siberian Islands archipelago, a female between 50 and 60 years old at the time of death. The carcass contained well-preserved muscular tissue. When it was extracted from the ice, liquid blood spilled from the abdominal cavity. The finders interpreted this as indicating that woolly mammoth blood possessed antifreezing properties. In 2022, a complete female baby woolly mammoth was found by a miner in the Klondike gold fields of Yukon, Canada. The specimen is estimated to have died 30,000 years ago and was nicknamed "Nun cho ga", meaning "big baby animal" in the local Hän language. It is the best preserved woolly mammoth mummy found in North America, and was the same size as Lyuba. In 2025, a 50,000-year-old calf nicknamed "Yana", after the Yana River basin of Yakutia where it was found, was announced. It was found by local residents and was described as the "best preserved" mammoth carcass, since it was found before it could be scavenged by animals.
Cultural significance
The woolly mammoth has remained culturally significant long after its extinction. Indigenous peoples of Siberia had long found what are now known to be woolly mammoth remains, collecting their tusks for the ivory trade, and had a diverse range of mythological interpretations of mammoth remains. The Mansi and the Khanty peoples conceived of mammoths as giant birds, with the Mansi (along with the Nenets) believing that they were responsible for the creation of mountains and lakes. In the mythology of the Evenk people, mammoths were responsible for the creation of the world, digging up the land from the ocean floor with their tusks. The Selkup believed that mammoths lived underground and guarded the underworld, while the Yakuts regarded mammoths as water spirits.
The indigenous peoples of North America used woolly mammoth ivory and bone for tools and art. As in Siberia, North American natives had "myths of observation" explaining the remains of woolly mammoths and other elephants; the Bering Strait Inupiat believed the bones came from burrowing creatures, while other peoples associated them with primordial giants or "great beasts". Observers have interpreted legends from several Native American peoples as containing folk memory of extinct elephants, though other scholars are skeptical that folk memory could survive such a long time.
Woolly mammoth tusks had been articles of trade in Asia long before Europeans became acquainted with them. Güyük, the 13th-century Khan of the Mongols, is reputed to have sat on a throne made from mammoth ivory. Inspired by the Siberian natives' concept of the mammoth as an underground creature, it was recorded in the 16th-century Chinese pharmaceutical encyclopedia, Ben Cao Gangmu, as yin shu, "the hidden rodent". Remains of various extinct elephants were known by Europeans for centuries but were generally interpreted as the remains of legendary creatures such as behemoths or giants, based on biblical accounts. They were thought to be remains of modern elephants that had been brought to Europe during the Roman Republic, for example the war elephants of Hannibal and Pyrrhus of Epirus, or animals that had wandered north.
Siberian mammoth ivory is reported to have been exported to Russia and Europe in the 10th century. The first Siberian ivory to reach western Europe was brought to London in 1611. When Russia occupied Siberia, the ivory trade grew and it became a widely exported commodity, with huge amounts being excavated. From the 19th century onwards, woolly mammoth ivory became a highly prized commodity, used as raw material for many products. Today, it is still in great demand as a replacement for the now-banned export of elephant ivory, and has been referred to as "white gold". Local dealers estimate that 10 million mammoths are still frozen in Siberia, and conservationists have suggested that this could help save the living species of elephants from extinction. Elephants are hunted by poachers for their ivory, but if this demand could instead be met by ivory from the already extinct mammoths, pressure on living elephants would be reduced. Trade in elephant ivory has been forbidden in most places following the 1989 Lausanne Conference, but dealers have been known to label it as mammoth ivory to get it through customs. Mammoth ivory looks similar to elephant ivory, but the former is browner and the Schreger lines are coarser in texture. In the 21st century, global warming has made access to Siberian tusks easier, since the permafrost thaws more quickly, exposing the mammoths embedded within it.
Stories abound about frozen woolly mammoth meat that was consumed once defrosted, especially that of the "Berezovka mammoth", but most of these are considered dubious. The carcasses were in most cases decayed, and the stench so unbearable that only wild scavengers and the dogs accompanying the finders showed any interest in the flesh. Such meat was apparently once recommended as a remedy for illness in China, and Siberian natives have occasionally cooked the meat of frozen carcasses they discovered. According to one of the more famous stories, members of the Explorers Club dined on the meat of a frozen mammoth from Alaska in 1951. In 2016, a group of researchers genetically examined a sample of the meal, and found it to belong to a green sea turtle (it had also been claimed to belong to Megatherium). The researchers concluded that the dinner had been a publicity stunt. In 2011, the Chinese palaeontologist Lida Xing livestreamed while eating meat from a Siberian mammoth leg (thoroughly cooked and flavoured with salt) and told his audience it tasted bad, like soil. This triggered controversy and drew mixed reactions, but Xing stated he did it to promote science. In 2023, an Australian cultured meat start-up, Vow, revealed a lab-grown "mammoth meatball" produced using a DNA sequence from the woolly mammoth. The meatball sparked conversations about the potential of cultured meat as a sustainable food source, highlighting its environmental benefits compared to traditional agriculture.
Alleged survival
There have been occasional claims that the woolly mammoth is not extinct and that small, isolated herds might survive in the vast and sparsely inhabited tundra of the Northern Hemisphere. In the 19th century, several reports of "large shaggy beasts" were passed on to the Russian authorities by Siberian tribesmen, but no scientific proof ever surfaced. A French chargé d'affaires working in Vladivostok, M. Gallon, said in 1946 that in 1920, he had met a Russian fur-trapper who claimed to have seen living giant, furry "elephants" deep in the taiga. Due to the large area of Siberia, the possibility that woolly mammoths survived into more recent times cannot be completely ruled out, but evidence indicates that they became extinct thousands of years ago. The Siberian natives likely gained their knowledge of woolly mammoths from carcasses they encountered, which may be the source of their legends of the animal.
In the late 19th century, rumours existed about surviving mammoths in Alaska. In 1899, Henry Tukeman detailed his killing of a mammoth in Alaska and his subsequent donation of the specimen to the Smithsonian Institution in Washington, DC. The museum denied the story. The Swedish writer Bengt Sjögren suggested in 1962 that the myth began when the American biologist Charles Haskins Townsend travelled in Alaska, saw Inuit trading mammoth tusks, asked if mammoths were still living in Alaska, and provided them with a drawing of the animal. Bernard Heuvelmans included the possibility of residual populations of Siberian mammoths in his 1955 book, On The Track Of Unknown Animals; while his book was a systematic investigation into possible unknown species, it became the basis of the cryptozoology movement.
Possible revival
The existence of preserved soft tissue remains and DNA of woolly mammoths has led to the idea that the species could be resurrected by scientific means. Several methods have been proposed to achieve this. Cloning would involve removal of the DNA-containing nucleus of the egg cell of a female elephant and replacement with a nucleus from woolly mammoth tissue. The cell would then be stimulated into dividing and inserted back into a female elephant. The resulting calf would have the genes of the woolly mammoth, although its fetal environment would be different. Most intact mammoths have had little usable DNA because of their conditions of preservation. There is not enough to guide the production of an embryo.
A second method involves artificially inseminating an elephant egg cell with sperm cells from a frozen woolly mammoth carcass. The resulting offspring would be an elephant–mammoth hybrid, and the process would have to be repeated so more hybrids could be used in breeding. After several generations of cross-breeding these hybrids, an almost pure woolly mammoth would be produced. The fact that sperm cells of modern mammals are viable for 15 years at most after deepfreezing makes this method unfeasible.
Several projects are working on gradually replacing the genes in elephant cells with mammoth genes. By 2015, using the CRISPR DNA editing technique, a team led by George Church had edited some woolly mammoth genes into the genome of an Asian elephant; initially focusing on cold resistance, the target genes govern external ear size, subcutaneous fat, haemoglobin, and hair attributes. If any method is ever successful, a suggestion has been made to introduce the hybrids to a wildlife reserve in Siberia called Pleistocene Park.
Some researchers question the ethics of such recreation attempts. In addition to the technical problems, not much habitat is left that would be suitable for elephant-mammoth hybrids. Because the species was social and gregarious, creating a few specimens would not be ideal. The time and resources required would be enormous, and the scientific benefits would be unclear, suggesting these resources should instead be used to preserve extant elephant species which are endangered. The ethics of using elephants as surrogate mothers in hybridisation attempts has been questioned, as most embryos would not survive, and knowing the exact needs of a hybrid elephant–mammoth calf would be impossible. Another concern is the introduction of unknown pathogens if de-extinction efforts were to succeed. In 2021, an Austin-based company raised funds to reintroduce the species in the Arctic tundra.
| Biology and health sciences | Proboscidea | Animals |
5560619 | https://en.wikipedia.org/wiki/Giant%20oarfish | Giant oarfish | The giant oarfish (Regalecus glesne) is a species of oarfish of the family Regalecidae. It is an oceanodromous species with a worldwide distribution, excluding polar regions. Other common names include Pacific oarfish, king of herrings, ribbonfish, and streamer fish.
R. glesne is the world's longest ray-finned fish. Its shape is ribbon-like, narrow laterally, with a dorsal fin along its entire length, stubby pectoral fins, and long, oar-shaped pelvic fins, from which its common name is derived. Its coloration is silver and blue with spots of dark pigmentation, and its fins are crimson. Its physical characteristics and undulating mode of swimming have led to speculation that it might be the source of many "sea serpent" sightings.
Taxonomy
R. glesne was first described by Peter Ascanius in 1772. The genus name, Regalecus (from Latin ‘regalis’ meaning royal), signifies "belonging to a king"; the specific epithet glesne is from "Glesnaes", the name of a farm at Glesvær (not far from Norway's second largest city of Bergen), where the type specimen was found.
Its "king of herrings" nickname may derive from its crownlike appendages and from being sighted near shoals of herring, which fishermen thought were being guided by this fish. Its common name, oarfish, is probably an allusion to the shape of its pelvic fins, or else it may refer to the long slender shape of the fish itself.
Distribution
The giant oarfish has a worldwide distribution, having been found as far north as 72°N and as far south as 52°S, but is most commonly found in the tropics to middle latitudes. It has been categorized as oceanodromous, following its primary food source. It can be found in both the Atlantic and Pacific Oceans, though it is more widely distributed in the Atlantic. The fish is thought to be cosmopolitan in distribution, though it is not found in the polar regions. It is thought to inhabit the sunlit epipelagic to dimly lit mesopelagic zones. The deepest verified account of R. glesne is 463–492 m (1,519–1,614 ft) from the Gulf of Mexico, as part of the Gulf SERPENT project.
Description
This species is the world's longest bony fish, reaching a record length of about 7–8 m (23–26 ft), and a maximum record weight of 272 kg (600 lb). Older, much longer estimates are now considered "very likely inaccurate". It is more commonly measured at around 3 m (9.8 ft) in total length.
Few R. glesne larvae have been identified and described in situ. These larvae exhibit an elongated body with rays extending from the occipital crest and a long pelvic fin, identical to that of the adult fish. Unlike the adult form of the species, the skin of the larvae is almost entirely transparent with intermittent spots of dark coloration along the organism's dorsum and head. This dark pigmentation is presumably an adaptation developed for counter-shading when the adult fish is vertical in the water column. Additionally, the larvae possess a caudal fin with four fin rays, which is a trait not present in the adult form of the species. In some larger juvenile specimens, body coloration similar to that of the adult form was observed. A larval specimen of Regalecus glesne captured off the coast of the island of Palagruža measured 103.4 mm in length, with a body height of around 7 mm.
Adults have a pale silver ribbonlike body shape that is laterally compressed and extremely elongated with a dorsal fin along its entire length from between its eyes to the tip of its tail, ranging in color from faint pinkish to a bright red. The body often has dark wavy markings resembling spots or stripes. There is a black coloration of the membrane between the opercle and the other head bones. A series of faint horizontal stripes is evident in some specimens, while absent in others. The skin is scaleless, with extensive tuberculation.
The dorsal fin rays are soft and number between 414 and 449 in total. At the head of the fish, the first 10–12 of these dorsal fin rays are lengthened, forming the distinctive red crest associated with the species. Its pectoral and pelvic fins are nearly adjacent. The pectoral fins are stubby while the pelvic fins are long, single-rayed, and reminiscent of an oar in shape, widening at the tip. There are no anal fins. The caudal fin is usually under 2 m in length, with most well under 1 m, and has four rays. In most specimens, the caudal fins are badly broken or absent entirely. Its head is small with the protrusible jaw typical of Lampriformes. The species has 33 to 47 gill rakers on the first gill arch, no teeth, and the inside of the mouth is black. It has a pair of large eyes just above the mouth.
The organs of the giant oarfish are concentrated toward the head end of the body, possibly enabling it to survive losing large portions of its tail. It has no swim bladder. The liver of R. glesne is orange or red, the likely result of astaxanthin in its diet. The lateral line begins above and behind the eye then, descending to the lower third of the body, extends to the caudal tip. There is a postabdominal gastric caecum, a tube which extends from the end of the stomach to the end of the body. The function of this structure is unknown, as no food items have been observed within it. It is not necessary for vital functions, as Regalecus have lost half or all of the caecum and survived without it.
R. glesne may be confused with Russell's oarfish, R. russellii. The two can be distinguished by the number of rays in the second dorsal fin crest (11 in R. glesne and one in R. russellii). R. glesne also has a smaller snout-vent length, about one-fourth of the standard body length, whereas R. russellii has a larger snout-vent length, about one-third of the standard body length. R. glesne has a longer abdomen than R. russellii. R. russellii has more gill rakers (47–60) and a single dorsal fin crest with a single ray, whereas R. glesne has fewer gill rakers (33–47) and a second dorsal fin crest with 5–11 rays. There is also a difference in the number of pre-anus dorsal fin rays, with R. russellii having fewer than 82 and R. glesne over 90.
Life cycle
The only reliable record of the early stages of Regalecus is a report of eggs from the western Pacific, identified using DNA barcoding techniques, and a juvenile (13.7 mm in standard length) identified from developed morphological features. R. glesne eggs are circular in shape, with numerous short spines (ca. 0.04 mm) uniformly scattered over the chorion.
Behavior
Little is known about oarfish behavior. It has been observed swimming by means of undulating its dorsal fin, and also swimming in a vertical position using undulatory movements of both its body and dorsal fin. In 2010, scientists filmed a giant oarfish in the Gulf of Mexico swimming in the mesopelagic layer, the first footage of a reliably identified R. glesne in its natural setting. The footage was caught during a survey, using an ROV in the vicinity of Thunder Horse PDQ, and shows the fish swimming in a columnar orientation, tail downward.
Feeding
Little is known about the feeding habits of Regalecus. Most accounts report the stomach and gut as empty, or with colored liquid inside. There is one account of R. glesne with a gut content of thousands of krill. In another report, the stomach contents of two adult R. glesne consisted of 43 heads and 7 individuals of Mediterranean krill.
Growth
The number of crests in R. glesne increases as the fish grows. Juveniles begin with a single dorsal fin ray. After a larva grows to about 50 mm, the rays following the first ray grow increasingly ornate and elongate.
Parasites
There are few noted parasites of Regalecidae. An adult female R. glesne was found to be host to at least 63 plerocercoids (the infective larvae of tapeworms) consistent with the characteristics of the larvae of the genus Clistobothrium.
Self-amputation
R. glesne shows evidence of self-amputation of the body posterior to the vent. This amputation may involve just the caudal fin and a small number of vertebrae, or the entire posterior part of the body. As the organs of R. glesne are concentrated in the front portion of the body, these amputations do not damage any vital organs. These amputations are noted to occur several times throughout the lifetime of the fish (serial autotomy), and all fish over 1.5 m long have bodies shortened in this way. It is unclear why these amputations occur; oarfish have no documented natural predators, so they are unlikely to be a predation response. Despite a common misconception that oarfish are preyed on by sharks, no shark attacks on oarfish have been documented. There is one recorded instance of a pod of pilot whales attacking an oarfish, but they did not eat it.
Population size
There have been no documented attempts to quantify the population size of R. glesne. There is at least one population in the Northern Atlantic, and an isolated reproductive population in the Mediterranean. Very early life stages have been found near the Gulf Coast of Florida and off the coast of Canada. Eggs have been found in the waters of New Zealand and near the West Mariana ridge in the western North Pacific. The species is listed as "Least Concern" on the IUCN Red List.
Relationship with humans
R. glesne is not fished commercially, but it is an occasional bycatch in commercial nets. When cooked, the taste of oarfish has been described as “like paper.” When offered to a dog that regularly eats fish, R. glesne was refused. Six people who agreed to try fried oarfish said that the taste was acceptable, but the flesh was extremely flaccid and overall objectionable.
Due to their size, elongated bodies, and undulating swimming pattern, giant oarfish are presumed to be responsible for some sea serpent sightings. Formerly considered rare, the species is now suspected to be relatively common, although sightings of healthy specimens in their natural habitat are unusual.
The giant oarfish, and the related R. russelii, are sometimes known as "earthquake fish" because they are popularly believed to surface before and after an earthquake.
The Egyptian deity Ḥȝyšš, of which 16 depictions are known, is described as a horse-headed snake god and found on coffins and sarcophagi. It has been proposed that this is a depiction of R. glesne, based on the similarity of the elongated fins and coloration.
| Biology and health sciences | Acanthomorpha | Animals |
5564486 | https://en.wikipedia.org/wiki/Screw%20mechanism | Screw mechanism | The screw is a mechanism that converts rotational motion to linear motion, and a torque (rotational force) to a linear force. It is one of the six classical simple machines. The most common form consists of a cylindrical shaft with helical grooves or ridges called threads around the outside. The screw passes through a hole in another object or medium, with threads on the inside of the hole that mesh with the screw's threads. When the shaft of the screw is rotated relative to the stationary threads, the screw moves along its axis relative to the medium surrounding it; for example rotating a wood screw forces it into wood. In screw mechanisms, either the screw shaft can rotate through a threaded hole in a stationary object, or a threaded collar such as a nut can rotate around a stationary screw shaft. Geometrically, a screw can be viewed as a narrow inclined plane wrapped around a cylinder.
Like the other simple machines a screw can amplify force; a small rotational force (torque) on the shaft can exert a large axial force on a load. The smaller the pitch (the distance between the screw's threads), the greater the mechanical advantage (the ratio of output to input force). Screws are widely used in threaded fasteners to hold objects together, and in devices such as screw tops for containers, vises, screw jacks and screw presses.
Other mechanisms that use the same principle, also called screws, do not necessarily have a shaft or threads. For example, a corkscrew is a helix-shaped rod with a sharp point, and an Archimedes' screw is a water pump that uses a rotating helical chamber to move water uphill. The common principle of all screws is that a rotating helix can cause linear motion.
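The "inclined plane wrapped around a cylinder" picture can be made quantitative; the following is a standard geometric identity offered as an illustration, not a statement from the source. Unrolling one full turn of a thread of lead $l$ at radius $r$ from the shaft axis gives a right triangle with base $2\pi r$ (the circumference) and height $l$ (the axial advance), so the helix angle $\lambda$ of the equivalent inclined plane satisfies

$$\tan \lambda = \frac{l}{2\pi r}$$

A fine thread corresponds to a shallow plane: less travel per turn, but greater force amplification.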
History
The screw was one of the last of the simple machines to be invented. It first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC), and then later appeared in Ancient Egypt and Ancient Greece.
Records indicate that the water screw, or screw pump, was first used in Ancient Egypt, some time before the Greek philosopher Archimedes described the Archimedes screw water pump around 234 BC. Archimedes wrote the earliest theoretical study of the screw as a machine, and is considered to have introduced the screw in Ancient Greece. By the first century BC, the screw was used in the form of the screw press and the Archimedes' screw.
Greek philosophers defined the screw as one of the simple machines and could calculate its (ideal) mechanical advantage. For example, Heron of Alexandria (52 AD) listed the screw as one of the five mechanisms that could "set a load in motion", defined it as an inclined plane wrapped around a cylinder, and described its fabrication and uses, including a tap for cutting female screw threads.
Because their complicated helical shape had to be laboriously cut by hand, screws were only used as linkages in a few machines in the ancient world. Screw fasteners only began to be used in the 15th century in clocks, after screw-cutting lathes were developed. The screw was also apparently applied to drilling and moving materials (besides water) around this time, when images of augers and drills began to appear in European paintings. The complete dynamic theory of simple machines, including the screw, was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ("On Mechanics").
Lead and pitch
The fineness or coarseness of a screw's threads are defined by two closely related quantities:
The lead is defined as the axial distance (parallel to the screw's axis) the screw travels in one complete revolution (360°) of the shaft. The lead determines the mechanical advantage of the screw; the smaller the lead, the higher the mechanical advantage.
The pitch is defined as the axial distance between the crests of adjacent threads.
In most screws, called "single start" screws, which have a single helical thread wrapped around them, the lead and pitch are equal. They only differ in "multiple start" screws, which have several intertwined threads. In these screws the lead is equal to the pitch multiplied by the number of starts. Multiple-start screws are used when a large linear motion for a given rotation is desired, for example in screw caps on bottles, and ballpoint pens.
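As a minimal sketch of the lead–pitch relation just described (the helper function and example values are illustrative, not from the source):

```python
def lead(pitch_mm: float, starts: int = 1) -> float:
    """Axial travel per full revolution: lead = pitch * number of starts."""
    return pitch_mm * starts

# A single-start thread with 1.25 mm pitch advances 1.25 mm per turn;
# a two-start thread with the same pitch advances twice as far.
print(lead(1.25))     # 1.25
print(lead(1.25, 2))  # 2.5
```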
Handedness
The helix of a screw's thread can twist in two possible directions, which is known as handedness. Most screw threads are oriented so that when seen from above, the screw shaft moves away from the viewer (the screw is tightened) when turned in a clockwise direction. This is known as a right-handed (RH) thread, because it follows the right hand grip rule: when the fingers of the right hand are curled around the shaft in the direction of rotation, the thumb will point in the direction of motion of the shaft. Threads oriented in the opposite direction are known as left-handed (LH).
By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. One explanation for why right-handed threads became standard is that for a right-handed person, tightening a right-handed screw with a screwdriver is easier than tightening a left-handed screw, because it uses the stronger supinator muscle of the arm rather than the weaker pronator muscle. Since most people are right-handed, right-handed threads became standard on threaded fasteners.
Screw linkages in machines are exceptions; they can be right- or left-handed depending on which is more applicable. Left-handed screw threads are also used in some other applications:
Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than tighten due to fretting-induced precession. Examples include:
The left hand pedal on a bicycle.
The left-hand screw holding a circular saw blade or a bench grinder wheel in place.
In some devices that have threads on either end, like turnbuckles and removable pipe segments. These parts have one right-handed and one left-handed thread, so that turning the piece tightens or loosens both threads at the same time.
In some gas supply connections to prevent dangerous misconnections. For example, in gas welding the flammable gas supply line is attached with left-handed threads, so it will not be accidentally switched with the oxygen supply, which uses right-handed threads.
To make them useless to the public (thus discouraging theft), left-handed light bulbs are used in some railway and subway stations.
Coffin lids are said to have been traditionally held on with left-handed screws.
Screw threads
Different shapes (profiles) of threads are used in screws employed for different purposes. Screw threads are standardized so that parts made by different manufacturers will mate correctly.
Thread angle
The thread angle is the included angle, measured at a section parallel to the axis, between the two bearing faces of the thread. The angle between the axial load force and the normal to the bearing surface is approximately equal to half the thread angle, so the thread angle has a great effect on the friction and efficiency of a screw, as well as the wear rate and the strength. The greater the thread angle, the greater the angle between the load vector and the surface normal, so the larger the normal force between the threads required to support a given load. Therefore, increasing the thread angle increases the friction and wear of a screw.
The outward facing angled thread bearing surface, when acted on by the load force, also applies a radial (outward) force to the nut, causing tensile stress. This radial bursting force increases with increasing thread angle. If the tensile strength of the nut material is insufficient, an excessive load on a nut with a large thread angle can split the nut.
The thread angle also has an effect on the strength of the threads; threads with a large angle have a wide root compared with their size and are stronger.
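A short numeric sketch of the relation stated above, assuming the angle between the axial load and the surface normal equals half the thread angle (the load value is hypothetical; thread angles are those given in this article):

```python
import math

def thread_normal_force(axial_load_n: float, thread_angle_deg: float) -> float:
    """Normal force between thread faces: N = F / cos(thread_angle / 2)."""
    return axial_load_n / math.cos(math.radians(thread_angle_deg / 2))

# The same 1000 N axial load (assumed) on different thread profiles:
for angle in (0, 28, 60):  # square, Acme-type, V-type thread angles
    print(angle, round(thread_normal_force(1000, angle), 1))
# 0 -> 1000.0, 28 -> 1030.6, 60 -> 1154.7: larger angle, more friction and wear
```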
Types of threads
In threaded fasteners, large amounts of friction are acceptable and usually wanted, to prevent the fastener from unscrewing. So threads used in fasteners usually have a large 60° thread angle:
(a) V thread - These are used in self-tapping screws such as wood screws and sheet metal screws which require a sharp edge to cut a hole, and where additional friction is needed to make sure the screw remains motionless, such as in setscrews and adjustment screws, and where the joint must be fluid tight as in threaded pipe joints.
(b) American National - This has been replaced by the almost identical Unified Thread Standard. It has the same 60° thread angle as the V thread but is stronger because of the flat root. Used in bolts, nuts, and a wide variety of fasteners.
(c) Metric thread - These threads are specified by ISO and DIN standards and are in widespread international use.
(d) Whitworth or British Standard - A very similar British standard, since replaced by the Unified Thread Standard.
In machine linkages such as lead screws or jackscrews, in contrast, friction must be minimized. Therefore, threads with smaller angles are used:
(e) Square thread - This is the strongest and lowest friction thread, with a 0° thread angle, and does not apply bursting force to the nut. However it is difficult to fabricate, requiring a single point cutting tool due to the need to undercut the edges. It is used in high-load applications such as jackscrews and lead screws but has been mostly replaced by the Acme thread. A modified square thread with a small 5° thread angle is sometimes used instead, which is cheaper to manufacture.
(f) Acme thread - With its 28° thread angle this has higher friction than the square thread, but is easier to manufacture and can be used with a split nut to adjust for wear. It is widely used in vises, C-clamps, valves, scissor jacks and lead screws in machines like lathes.
(g) Buttress thread - This is used in high-load applications in which the load force is applied in only one direction, such as screw jacks. With a 0° angle of the bearing surface it is as efficient as the square thread but stronger and easier to manufacture.
(h) Knuckle thread - Similar to a square thread in which the corners have been rounded to protect them from damage, also giving it higher friction. In low-strength applications it can be manufactured cheaply from sheet stock by rolling. It is used in light bulbs and sockets.
Uses
Because of its self-locking property (see below) the screw is widely used in threaded fasteners to hold objects or materials together: the wood screw, sheet metal screw, stud, and bolt and nut.
The self-locking property is also key to the screw's use in a wide range of other applications, such as the corkscrew, screw top container lid, threaded pipe joint, vise, C-clamp, and screw jack.
Screws are also used as linkages in machines to transfer power, in the worm gear, lead screw, ball screw, and roller screw. Due to their low efficiency, screw linkages are seldom used to carry high power, but are more often employed in low power, intermittent uses such as positioning actuators.
Rotating helical screw blades or chambers are used to move material in the Archimedes' screw, auger earth drill, and screw conveyor.
The micrometer uses a precision calibrated screw for measuring lengths with great accuracy.
The screw propeller, although it shares the name screw, works on very different physical principles from the above types of screw, and the information in this article is not applicable to it.
Distance moved
The linear distance \(d\) a screw shaft moves when it is rotated through an angle of \(\theta\) degrees is:
\[ d = l \, \frac{\theta}{360^\circ} \]
where \(l\) is the lead of the screw.
The distance ratio of a simple machine is defined as the ratio of the distance the applied force moves to the distance the load moves. For a screw it is the ratio of the circular distance \(d_\mathrm{in}\) a point on the edge of the shaft moves to the linear distance \(d_\mathrm{out}\) the shaft moves. If \(r\) is the radius of the shaft, in one turn a point on the screw's rim moves a distance of \(2\pi r\), while its shaft moves linearly by the lead distance \(l\). So the distance ratio is
\[ \text{distance ratio} = \frac{d_\mathrm{in}}{d_\mathrm{out}} = \frac{2\pi r}{l} \]
Frictionless mechanical advantage
The mechanical advantage \(\mathrm{MA}\) of a screw is defined as the ratio of the axial output force \(F_\mathrm{out}\) applied by the shaft on a load to the rotational force \(F_\mathrm{in}\) applied to the rim of the shaft to turn it. For a screw with no friction (also called an ideal screw), conservation of energy requires that the work done on the screw by the input force turning it equal the work done by the screw on the load force:
\[ W_\mathrm{in} = W_\mathrm{out} \]
Work is equal to the force multiplied by the distance it acts, so the work done in one complete turn of the screw is \(W_\mathrm{in} = 2\pi r F_\mathrm{in}\) and the work done on the load is \(W_\mathrm{out} = l F_\mathrm{out}\). So the ideal mechanical advantage of a screw is equal to the distance ratio:
\[ \mathrm{MA} = \frac{F_\mathrm{out}}{F_\mathrm{in}} = \frac{2\pi r}{l} \]
It can be seen that the mechanical advantage of a screw depends on its lead, \(l\). The smaller the distance between its threads, the larger the mechanical advantage, and the larger the force the screw can exert for a given applied force. However, most actual screws have large amounts of friction and their mechanical advantage is less than given by the above equation.
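For concreteness, a small numeric sketch of the ideal (frictionless) mechanical advantage; the dimensions are assumed, not from the source:

```python
import math

r = 8.0  # mm: radius at which the input force is applied (assumed value)
l = 2.0  # mm: lead, the axial travel per revolution (assumed value)

ideal_ma = 2 * math.pi * r / l  # distance ratio = ideal mechanical advantage
print(round(ideal_ma, 1))       # ~25.1: 1 N at the rim could exert ~25 N axially
```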
Torque form
The rotational force applied to the screw is actually a torque \(T = F_\mathrm{in} r\). Because of this, the input force required to turn a screw depends on how far from the shaft it is applied; the farther from the shaft, the less force is needed to turn it. The force on a screw is not usually applied at the rim as assumed above. It is often applied by some form of lever; for example, a bolt is turned by a wrench whose handle functions as a lever. The mechanical advantage in this case can be calculated by using the length of the lever arm for \(r\) in the above equation. This extraneous factor \(r\) can be removed from the above equation by writing it in terms of torque:
\[ F_\mathrm{out} = \frac{2\pi T}{l} \]
Actual mechanical advantage and efficiency
Because of the large area of sliding contact between the moving and stationary threads, screws typically have large frictional energy losses. Even well-lubricated jackscrews have efficiencies of only 15%–20%; the rest of the work applied in turning them is lost to friction. When friction is included, the mechanical advantage is no longer equal to the distance ratio but also depends on the screw's efficiency. From conservation of energy, the work \(W_\mathrm{in}\) done on the screw by the input force turning it is equal to the sum of the work done moving the load \(W_\mathrm{out}\) and the work dissipated as heat by friction \(W_\mathrm{fric}\) in the screw:
\[ W_\mathrm{in} = W_\mathrm{out} + W_\mathrm{fric} \]
The efficiency \(\eta\) is a dimensionless number between 0 and 1, defined as the ratio of output work to input work:
\[ \eta = \frac{W_\mathrm{out}}{W_\mathrm{in}} \]
Work is defined as the force multiplied by the distance moved, so \(W_\mathrm{in} = 2\pi r F_\mathrm{in}\) and \(W_\mathrm{out} = l F_\mathrm{out}\), and therefore
\[ \mathrm{MA} = \frac{F_\mathrm{out}}{F_\mathrm{in}} = \eta \, \frac{2\pi r}{l} \]
or in terms of torque:
\[ F_\mathrm{out} = \frac{2\pi \eta T}{l} \]
So the mechanical advantage of an actual screw is reduced from what it would be in an ideal, frictionless screw by the efficiency \(\eta\). Because of their low efficiency, in powered machinery screws are not often used as linkages to transfer large amounts of power but are more often used in positioners that operate intermittently.
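Continuing the hypothetical numbers above, a sketch of how efficiency reduces the mechanical advantage:

```python
import math

def actual_ma(r_mm: float, lead_mm: float, efficiency: float) -> float:
    """Actual mechanical advantage: MA = eta * 2*pi*r / lead."""
    return efficiency * 2 * math.pi * r_mm / lead_mm

# A jackscrew at 16% efficiency (within the 15%-20% range quoted above):
print(round(actual_ma(8.0, 2.0, 0.16), 1))  # ~4.0, versus the ideal ~25.1
```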
Self-locking property
Large frictional forces cause most screws in practical use to be "self-locking", also called "non-reciprocal" or "non-overhauling". This means that applying a torque to the shaft will cause it to turn, but no amount of axial load force against the shaft will cause it to turn back the other way, even if the applied torque is zero. This is in contrast to some other simple machines which are "reciprocal" or "non-locking", meaning that if the load force is great enough they will move backwards or "overhaul", so the machine can be used in either direction. For example, in a lever, if the force on the load end is too large it will move backwards, doing work on the applied force. Most screws are designed to be self-locking, and in the absence of torque on the shaft will stay at whatever position they are left. However, some screw mechanisms with a large enough pitch and good lubrication are not self-locking and will overhaul, and a very few, such as a push drill, use the screw in this "backwards" sense, applying axial force to the shaft to turn the screw. Screws can also come loose through incorrect assembly design or through external forces such as shock, vibration, and dynamic loads, which cause slipping at the threaded and clamped surfaces.
This self-locking property is one reason for the very large use of the screw in threaded fasteners such as wood screws, sheet metal screws, studs and bolts. Tightening the fastener by turning it puts compression force on the materials or parts being fastened together, but no amount of force from the parts will cause the screw to turn backwards and untighten. This property is also the basis for the use of screws in screw top container lids, vises, C-clamps, and screw jacks. A heavy object can be raised by turning the jack shaft, but when the shaft is released it will stay at whatever height it is raised to.
A screw will be self-locking if and only if its efficiency is below 50%.
Whether a screw is self-locking ultimately depends on the pitch angle and the coefficient of friction of the threads; very well-lubricated, low-friction threads with a large enough pitch may "overhaul". Consideration should also be given to ensuring that clamped components are clamped tightly enough to prevent movement completely; otherwise, slipping in the threads or at the clamping surface can occur.
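A rough self-locking check, under the simplifying assumption of a square thread, where overhauling occurs once the tangent of the lead angle exceeds the coefficient of friction (consistent with the sub-50%-efficiency criterion above); the friction coefficient is an assumed typical figure:

```python
import math

def is_self_locking(lead_mm: float, r_mm: float, mu: float) -> bool:
    """Self-locking when tan(lead angle) = lead / (2*pi*r) is below mu."""
    return lead_mm / (2 * math.pi * r_mm) < mu

# mu ~ 0.2 assumed for dry steel on steel:
print(is_self_locking(2.0, 8.0, 0.2))   # True: fine lead, screw holds its load
print(is_self_locking(40.0, 8.0, 0.2))  # False: very coarse lead can overhaul
```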
| Technology | Basics_8 | null |
5565588 | https://en.wikipedia.org/wiki/Climate%20system | Climate system | Earth's climate system is a complex system with five interacting components: the atmosphere (air), the hydrosphere (water), the cryosphere (ice and permafrost), the lithosphere (earth's upper rocky layer) and the biosphere (living things). Climate is the statistical characterization of the climate system. It represents the average weather, typically over a period of 30 years, and is determined by a combination of processes, such as ocean currents and wind patterns. Circulation in the atmosphere and oceans transports heat from the tropical regions to regions that receive less energy from the Sun. Solar radiation is the main driving force for this circulation. The water cycle also moves energy throughout the climate system. In addition, certain chemical elements are constantly moving between the components of the climate system. Two examples of these biogeochemical cycles are the carbon and nitrogen cycles.
The climate system can change due to internal variability and external forcings. These external forcings can be natural, such as variations in solar intensity and volcanic eruptions, or caused by humans. Accumulation of greenhouse gases in the atmosphere, mainly being emitted by people burning fossil fuels, is causing climate change. Human activity also releases cooling aerosols, but their net effect is far less than that of greenhouse gases. Changes can be amplified by feedback processes in the different climate system components.
Components
The atmosphere envelops the earth and extends hundreds of kilometres from the surface. It consists mostly of inert nitrogen (78%), oxygen (21%) and argon (0.9%). Some trace gases in the atmosphere, such as water vapour and carbon dioxide, are the gases most important for the workings of the climate system, as they are greenhouse gases which allow visible light from the Sun to penetrate to the surface, but block some of the infrared radiation the Earth's surface emits to balance the Sun's radiation. This causes surface temperatures to rise.
The hydrological cycle is the movement of water through the climate system. Not only does the hydrological cycle determine patterns of precipitation, it also has an influence on the movement of energy throughout the climate system.
The hydrosphere proper contains all the liquid water on Earth, with most of it contained in the world's oceans. The ocean covers 71% of Earth's surface to an average depth of nearly 4 km, and ocean heat content is much larger than the heat held by the atmosphere. It contains seawater with a salt content of about 3.5% on average, but this varies spatially. Brackish water is found in estuaries and some lakes, and most freshwater, 2.5% of all water, is held in ice and snow.
The cryosphere contains all parts of the climate system where water is solid. This includes sea ice, ice sheets, permafrost and snow cover. Because there is more land in the Northern Hemisphere compared to the Southern Hemisphere, a larger part of that hemisphere is covered in snow. Both hemispheres have about the same amount of sea ice. Most frozen water is contained in the ice sheets on Greenland and Antarctica, which average about 2 km in height. These ice sheets slowly flow towards their margins.
The Earth's crust, specifically mountains and valleys, shapes global wind patterns: vast mountain ranges form a barrier to winds and impact where and how much it rains. Land closer to open ocean has a more moderate climate than land farther from the ocean. For the purpose of modelling the climate, the land is often considered static as it changes very slowly compared to the other elements that make up the climate system. The position of the continents determines the geometry of the oceans and therefore influences patterns of ocean circulation. The locations of the seas are important in controlling the transfer of heat and moisture across the globe, and therefore, in determining global climate.
Lastly, the biosphere also interacts with the rest of the climate system. Vegetation is often darker or lighter than the soil beneath, so that more or less of the Sun's heat gets trapped in areas with vegetation. Vegetation is good at trapping water, which is then taken up by its roots. Without vegetation, this water would run off to the closest rivers or other water bodies. Water taken up by plants instead evaporates, contributing to the hydrological cycle. Precipitation and temperature influence the distribution of different vegetation zones. Carbon assimilation from seawater by small phytoplankton is almost as great as that of land plants from the atmosphere. While humans are technically part of the biosphere, they are often treated as a separate component of Earth's climate system, the anthroposphere, because of humans' large impact on the planet.
Flows of energy, water and elements
Energy and general circulation
The climate system receives energy from the Sun, and to a far lesser extent from the Earth's core, as well as tidal energy from the Moon. The Earth gives off energy to outer space in two forms: it directly reflects a part of the radiation of the Sun and it emits infra-red radiation as black-body radiation. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the total of incoming energy is greater than the outgoing energy, Earth's Energy Imbalance is positive and the climate system is warming. If more energy goes out, the energy imbalance is negative and Earth experiences cooling.
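As a toy illustration of the sign convention just described (the round numbers are illustrative values chosen for the sketch, not source data):

```python
# Earth's energy budget sign convention: positive imbalance -> warming.
incoming_solar = 340.0  # W/m^2 arriving at the top of the atmosphere (assumed)
reflected      = 100.0  # W/m^2 reflected straight back to space (assumed)
emitted_ir     = 239.0  # W/m^2 emitted as infrared black-body radiation (assumed)

imbalance = incoming_solar - reflected - emitted_ir
print(f"{imbalance:+.1f} W/m^2:", "warming" if imbalance > 0 else "cooling")
# +1.0 W/m^2: warming
```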
More energy reaches the tropics than the polar regions and the subsequent temperature difference drives the global circulation of the atmosphere and oceans. Air rises when it warms, flows polewards and sinks again when it cools, returning to the equator. Due to the conservation of angular momentum, the Earth's rotation diverts the air to the right in the Northern Hemisphere and to the left in the Southern hemisphere, thus forming distinct atmospheric cells. Monsoons, seasonal changes in wind and precipitation that occur mostly in the tropics, form due to the fact that land masses heat up more easily than the ocean. The temperature difference induces a pressure difference between land and ocean, driving a steady wind.
Ocean water that has more salt has a higher density and differences in density play an important role in ocean circulation. The thermohaline circulation transports heat from the tropics to the polar regions. Ocean circulation is further driven by the interaction with wind. The salt component also influences the freezing point temperature. Vertical movements can bring up colder water to the surface in a process called upwelling, which cools down the air above.
Hydrological cycle
The hydrological cycle or water cycle describes how water constantly moves between the surface of the Earth and the atmosphere. Plants release water vapour through evapotranspiration, and sunlight evaporates water from oceans and other water bodies, leaving behind salt and other minerals. The evaporated freshwater later rains back onto the surface. Precipitation and evaporation are not evenly distributed across the globe: some regions, such as the tropics, have more rainfall than evaporation, and others have more evaporation than rainfall. The evaporation of water requires substantial quantities of energy, whereas a lot of heat is released during condensation. This latent heat is the primary source of energy in the atmosphere.
Biogeochemical cycles
Chemical elements, vital for life, are constantly cycled through the different components of the climate system. The carbon cycle is directly important for climate as it determines the concentrations of two important greenhouse gases in the atmosphere: CO2 and methane. In the fast part of the carbon cycle, plants take up carbon dioxide from the atmosphere using photosynthesis; this is later re-emitted by the breathing of living creatures. As part of the slow carbon cycle, volcanoes release CO2 by degassing, bringing carbon dioxide from the Earth's crust and mantle. As CO2 in the atmosphere makes rain a bit acidic, this rain can slowly dissolve some rocks, a process known as weathering. The minerals that are released in this way, transported to the sea, are used by living creatures whose remains can form sedimentary rocks, bringing the carbon back to the lithosphere.
The nitrogen cycle describes the flow of active nitrogen. As atmospheric nitrogen is inert, micro-organisms first have to convert this to an active nitrogen compound in a process called fixing nitrogen, before it can be used as a building block in the biosphere. Human activities play an important role in both carbon and nitrogen cycles: the burning of fossil fuels has displaced carbon from the lithosphere to the atmosphere, and the use of fertilizers has vastly increased the amount of available fixed nitrogen.
Changes within the climate system
Climate is constantly varying, on timescales that range from seasons to the lifetime of the Earth. Changes caused by the system's own components and dynamics are called internal climate variability. The system can also experience external forcing from phenomena outside of the system (e.g. a change in Earth's orbit). Longer changes, usually defined as changes that persist for at least 30 years, are referred to as climate changes, although this phrase usually refers to the current global climate change. When the climate changes, the effects may build on each other, cascading through the other parts of the system in a series of climate feedbacks (e.g. albedo changes), producing many different effects (e.g. sea level rise).
Internal variability
Components of the climate system vary continuously, even without external pushes (external forcing). One example in the atmosphere is the North Atlantic Oscillation (NAO), which operates as an atmospheric pressure see-saw. The Portuguese Azores typically have high pressure, whereas there is often lower pressure over Iceland. The difference in pressure oscillates and this affects weather patterns across the North Atlantic region up to central Eurasia. For instance, the weather in Greenland and Canada is cold and dry during a positive NAO. Different phases of the North Atlantic oscillation can be sustained for multiple decades.
The ocean and atmosphere can also work together to spontaneously generate internal climate variability that can persist for years to decades at a time. Examples of this type of variability include the El Niño–Southern Oscillation, the Pacific decadal oscillation, and the Atlantic Multidecadal Oscillation. These variations can affect global average surface temperature by redistributing heat between the deep ocean and the atmosphere; but also by altering the cloud, water vapour or sea ice distribution, which can affect the total energy budget of the earth.
The oceanic aspects of these oscillations can generate variability on centennial timescales due to the ocean having hundreds of times more mass than the atmosphere, and therefore larger heat capacity and thermal inertia. For example, alterations to ocean processes such as thermohaline circulation play a key role in redistributing heat in the world's oceans. Understanding internal variability helped scientists to attribute recent climate change to greenhouse gases.
External climate forcing
On long timescales, the climate is determined mainly by how much energy is in the system and where it goes. When the Earth's energy budget changes, the climate follows. A change in the energy budget is called a forcing. When the change is caused by something outside of the five components of the climate system, it is called an external forcing. Volcanoes, for example, result from deep processes within the earth that are not considered part of the climate system. Human actions and off-planet changes, such as solar variation and incoming asteroids, are also external to the climate system's five components.
The primary value to quantify and compare climate forcings is radiative forcing.
Incoming sunlight
The Sun is the predominant source of energy input to the Earth and drives atmospheric circulation. The amount of energy coming from the Sun varies on time scales ranging from the 11-year solar cycle to longer-term variations. While the solar cycle's variation is too small to directly warm and cool Earth's surface, it does directly influence a higher layer of the atmosphere, the stratosphere, which may have an effect on the atmosphere near the surface.
Slight variations in the Earth's motion can cause large changes in the seasonal distribution of sunlight reaching the Earth's surface and how it is distributed across the globe, although not to the global and yearly average sunlight. The three types of kinematic change are variations in Earth's eccentricity, changes in the tilt angle of Earth's axis of rotation, and precession of Earth's axis. Together these produce Milankovitch cycles, which affect climate and are notable for their correlation to glacial and interglacial periods.
Greenhouse gases
Greenhouse gases trap heat in the lower part of the atmosphere by absorbing longwave radiation. In the Earth's past, many processes contributed to variations in greenhouse gas concentrations. Currently, emissions by humans are the cause of increasing concentrations of some greenhouse gases, such as CO2, methane and N2O. The dominant contributor to the greenhouse effect is water vapour (~50%), with clouds (~25%) and CO2 (~20%) also playing an important role. When concentrations of long-lived greenhouse gases such as CO2 are increased, temperature and water vapour increase. Accordingly, water vapour and clouds are not seen as external forcings but as feedbacks.
The weathering of carbonates and silicates removes carbon from the atmosphere.
Aerosols
Liquid and solid particles in the atmosphere, collectively named aerosols, have diverse effects on the climate. Some primarily scatter sunlight, cooling the planet, while others absorb sunlight and warm the atmosphere. Indirect effects include the fact that aerosols can act as cloud condensation nuclei, stimulating cloud formation. Natural sources of aerosols include sea spray, mineral dust, meteorites and volcanoes. Humans also contribute: activities such as the combustion of biomass or fossil fuels release aerosols into the atmosphere. Aerosols counteract some of the warming effects of emitted greenhouse gases until they fall back to the surface within a few years or less.
Although volcanoes are technically part of the lithosphere, which is part of the climate system, volcanism is defined as an external forcing agent. On average, there are only several volcanic eruptions per century that influence Earth's climate for longer than a year by ejecting tons of SO2 into the stratosphere. The sulfur dioxide is chemically converted into aerosols that cause cooling by blocking a fraction of sunlight to the Earth's surface. Small eruptions affect the atmosphere only subtly.
Land use and cover change
Changes in land cover, such as change of water cover (e.g. rising sea level, drying up of lakes and outburst floods) or deforestation, particularly through human use of the land, can affect the climate. The reflectivity of the area can change, causing the region to capture more or less sunlight. In addition, vegetation interacts with the hydrological cycle, so precipitation is also affected. Landscape fires release greenhouse gases into the atmosphere and release black carbon, which darkens snow, making it easier to melt.
Responses and feedbacks
The different elements of the climate system respond to external forcing in different ways. One important difference between the components is the speed at which they react to a forcing. The atmosphere typically responds within a couple of hours to weeks, while the deep ocean and ice sheets take centuries to millennia to reach a new equilibrium.
The initial response of a component to an external forcing can be damped by negative feedbacks and enhanced by positive feedbacks. For example, a significant decrease of solar intensity would quickly lead to a temperature decrease on Earth, which would then allow ice and snow cover to expand. The extra snow and ice has a higher albedo or reflectivity, and therefore reflects more of the Sun's radiation back into space before it can be absorbed by the climate system as a whole; this in turn causes the Earth to cool down further.
| Physical sciences | Climatology: General | Earth science |
1561831 | https://en.wikipedia.org/wiki/Geology%20of%20the%20Himalayas | Geology of the Himalayas | The geology of the Himalayas is a record of the most dramatic and visible creations of the immense mountain range formed by plate tectonic forces and sculpted by weathering and erosion. The Himalayas, which stretch over 2400 km between the Namcha Barwa syntaxis at the eastern end of the mountain range and the Nanga Parbat syntaxis at the western end, are the result of an ongoing orogeny — the collision of the continental crust of two tectonic plates, namely, the Indian Plate thrusting into the Eurasian Plate. The Himalaya-Tibet region supplies fresh water for more than one-fifth of the world population, and accounts for a quarter of the global sedimentary budget. Topographically, the belt has many superlatives: the highest rate of uplift (nearly 10 mm/year at Nanga Parbat), the highest relief (8848 m at Mt. Everest Chomolangma), among the highest erosion rates at 2–12 mm/yr, the source of some of the greatest rivers and the highest concentration of glaciers outside of the polar regions. This last feature earned the Himalaya its name, originating from the Sanskrit for "the abode of the snow".
From south to north the Himalaya (Himalaya orogen) is divided into 4 parallel tectonostratigraphic zones and 5 thrust faults which extend across the length of Himalaya orogen. Each zone, flanked by the thrust faults on its north and south, has stratigraphy (type of rocks and their layering) different from the adjacent zones. From south to north, the zones and the major faults separating them are the Main Frontal Thrust (MFT), Subhimalaya Zone (also called Sivalik), Main Boundary Thrust (MBT), Lesser Himalaya (further subdivided into the "Lesser Himalayan Sedimentary Zone (LHSZ) and the Lesser Himalayan Crystalline Nappes (LHCN)), Main Central thrust (MCT), Higher (or Greater) Himalayan crystallines (HHC), South Tibetan detachment system (STD), Tethys Himalaya (TH), and the Indus‐Tsangpo Suture Zone (ISZ). North of this lies the Transhimalaya in Tibet which is outside the Himalayas. The Himalayas border the Indo-Gangetic Plain to the south, Pamir Mountains to the west in Central Asia, and the Hengduan Mountains to the east on the China–Myanmar border.
From east to west the Himalayas are divided into 3 regions, Eastern Himalaya, Central Himalaya, and Western Himalaya, which collectively house several nations and states.
Making of the Himalayas
During Late Precambrian and the Palaeozoic, the Indian subcontinent, bounded to the north by the Cimmerian Superterranes, was part of Gondwana and was separated from Eurasia by the Paleo-Tethys Ocean (Fig. 1). During that period, the northern part of India was affected by a late phase of the Pan-African orogeny which is marked by an unconformity between Ordovician continental conglomerates and the underlying Cambrian marine sediments. Numerous granitic intrusions dated at around 500 Ma are also attributed to this event.
In the Early Carboniferous, an early stage of rifting developed between the Indian subcontinent and the Cimmerian Superterranes. During the Early Permian, this rift developed into the Neotethys ocean (Fig. 2). From that time on, the Cimmerian Superterranes drifted away from Gondwana towards the north. Nowadays, Iran, Afghanistan and Tibet are partly made up of these terranes.
In the Norian (210 Ma), a major rifting episode split Gondwana in two parts. The Indian continent became part of East Gondwana, together with Australia and Antarctica. However, the separation of East and West Gondwana, together with the formation of oceanic crust, occurred later, in the Callovian (160-155 Ma). The Indian plate then broke off from Australia and Antarctica in the Early Cretaceous (130-125 Ma) with the opening of the "South Indian Ocean" (Fig. 3).
In the Late Cretaceous (84 Ma), the Indian plate began its very rapid northward drift covering a distance of about 6000 km, with the oceanic-oceanic subduction continuing until the final closure of the oceanic basin and the obduction of oceanic ophiolite onto India and the beginning of continent-continent tectonic interaction starting at about 65 Ma in the Central Himalaya. The change of the relative speed between the Indian and Asian plates from very fast (18-19.5 cm/yr) to fast (4.5 cm/yr) at about 55 Ma is circumstantial support for collision then. Since then there has been about 2500 km of crustal shortening and rotating of India by 45° counterclockwise in the Northwestern Himalaya to 10°-15° counterclockwise in North Central Nepal relative to Asia (Fig. 4).
While most of the oceanic crust was "simply" subducted below the Tibetan block during the northward motion of India, at least three major mechanisms have been put forward, either separately or jointly, to explain what happened, since collision, to the 2500 km of "missing continental crust".
The first mechanism also calls upon the subduction of the Indian continental crust below Tibet.
Second is the extrusion or escape tectonics mechanism which sees the Indian plate as an indenter that squeezed the Indochina block out of its way.
The third proposed mechanism is that a large part (~1000 km, with estimates ranging from ~800 to ~1200 km) of the 2500 km of crustal shortening was accommodated by thrusting and folding of the sediments of the passive Indian margin together with the deformation of the Tibetan crust.
Even though it is more than reasonable to argue that this huge amount of crustal shortening most probably results from a combination of these three mechanisms, it is nevertheless the last mechanism which created the high topographic relief of the Himalaya.
The Himalayan tectonics result in long-term deformation, including shortening across the Himalayas of 900 to 1,500 km. This shortening is accompanied by significant ongoing seismic activity. The continued convergence of the Indian plate with the Eurasian plate produces mega-earthquakes. These seismic events can exceed MW 8 and cause intense damage to infrastructure. The mid-crustal ramp in the Himalayas is a key geologic feature in the history of both long-term and short-term seismic processes linked to deformation and shortening. Over the last 15 Ma, the ramp has gradually moved south due to duplexing, accretion, and tectonic undercutting.
The ongoing active collision of the Indian and Eurasian continental plates challenges one hypothesis for plate motion which relies on subduction.
Major tectonic subdivisions of the Himalaya
One of the most striking aspects of the Himalayan orogen is the lateral continuity of its major tectonic elements. The Himalaya is classically divided into four tectonic units that can be followed for more than 2400 km along the belt (Fig. 5 and Fig. 7).
Sub-Himalayan (Churia Hills or Sivaliks) tectonic plate
The Sub-Himalayan tectonic plate is sometimes referred to as the Cis-Himalayan tectonic plate in the older literature. It forms the southern foothills of the Himalayan Range and is essentially composed of Miocene to Pleistocene molassic sediments derived from the erosion of the Himalaya. These molasse deposits, known as the "Murree and Sivaliks Formations", are internally folded and imbricated. The Sub-Himalayan Range is thrust along the Main Frontal Thrust over the Quaternary alluvium deposited by the rivers coming from the Himalaya (Ganges, Indus, Brahmaputra and others), which demonstrates that the Himalaya is still a very active orogen.
Lesser Himalaya (LH) tectonic plate
The Lesser Himalaya (LH) tectonic plate is mainly formed by Upper Proterozoic to lower Cambrian detrital sediments from the passive Indian margin intercalated with some granites and acid volcanics (1840 ±70 Ma). These sediments are thrust over the Sub-himalayan range along the Main Boundary Thrust (MBT). The Lesser Himalaya often appears in tectonic windows (Kishtwar or Larji-Kulu-Rampur windows) within the High Himalaya Crystalline Sequence.
Central Himalayan Domain, (CHD) or High Himalaya tectonic plate
The Central Himalayan Domain forms the backbone of the Himalayan orogen and encompasses the areas with the highest topographic relief (highest peaks). It is commonly separated into four zones.
High Himalayan Crystalline Sequence (HHCS)
Approximately 30 different names exist in the literature to describe this unit; the most frequently found equivalents are "Greater Himalayan Sequence", "Tibetan Slab" and "High Himalayan Crystalline". It is a 30-km-thick, medium- to high-grade metamorphic sequence of metasedimentary rocks which are intruded in many places by granites of Ordovician (c. 500 Ma) and early Miocene (c. 22 Ma) age. Although most of the metasediments forming the HHCS are of late Proterozoic to early Cambrian age, much younger metasediments can also be found in several areas, e.g. Mesozoic in the Tandi syncline of Nepal and Warwan Valley of Kistwar in Kashmir, Permian in the "Tschuldo slice", Ordovician to Carboniferous in the "Sarchu area" on Leh-Manali Highway. It is now generally accepted that the metasediments of the HHCS represent the metamorphic equivalents of the sedimentary series forming the base of the overlying "Tethys Himalaya". The HHCS forms a major nappe which is thrust over the Lesser Himalaya along the "Main Central Thrust" (MCT).
Tethys Himalaya (TH)
The Tethys Himalaya is an approximately 100-km-wide synclinorium formed by strongly folded and imbricated, weakly metamorphosed sedimentary series. Several nappes, termed the "North Himalayan Nappes", have also been described within this unit. An almost complete stratigraphic record ranging from the Upper Proterozoic to the Eocene is preserved within the sediments of the TH. Stratigraphic analysis of these sediments yields important indications on the geological history of the northern continental margin of the Indian sub-continent from its Gondwanian evolution to its continental collision with Eurasia. The transition between the generally low-grade sediments of the "Tethys Himalaya" and the underlying low- to high-grade rocks of the "High Himalayan Crystalline Sequence" is usually progressive. But in many places along the Himalayan belt, this transition zone is marked by a major structure, the "Central Himalayan Detachment System", also known as the "South Tibetan Detachment System" or "North Himalayan Normal Fault", which has indicators of both extension and compression. See ongoing geologic studies section below.
Nyimaling-Tso Morari Metamorphic Dome (NTMD)
"Nyimaling-Tso Morari Metamorphic Dome" in the Ladakh region, the "Tethys Himalaya synclinorium" passes gradually to the north in a large dome of greenschist to eclogitic metamorphic rocks. As with the HHCS, these metamorphic rocks represent the metamorphic equivalent of the sediments forming the base of the Tethys Himalaya. The "Precambrian Phe Formation" is also here intruded by several Ordovician (c. 480 Ma) granites.
Lamayuru and Markha Units (LMU)
The Lamayuru and Markha Units are formed by flyschs and olistholiths deposited in a turbiditic environment, on the northern part of the Indian continental slope and in the adjoining Neotethys basin. The age of these sediments ranges from Late Permian to Eocene.
Exhumation of metamorphic rocks
The metamorphic rocks of the Himalaya can be very useful in deciphering and modelling tectonic relationships. According to Kohn (2014), the exhumation of metamorphic rocks can be explained by the Main Himalayan Thrust. Although the mechanism that emplaces higher-grade metamorphic rocks on top of lower-grade metamorphic rocks is still strongly debated, Kohn attributes it to long periods of transport of higher-grade metamorphic rocks along the Main Himalayan Thrust. Essentially, the longer the higher-grade rocks were spatially interacting with the thrust, the farther they were transported.
The exhumation of eclogite and granulite rocks can be explained by several different models. The first model includes slab tear where the lower plate tore off into the mantle leading to high amounts of rebound. The second model states that the rocks got to a certain point in subduction and then were forced back up through the channel they came down due to a space problem. The third model states that the thick continental crust of India further exacerbated the space problem and caused the corner flow of those rocks back up the channel. The fourth model includes the rocks being transported along the Main Himalayan Thrust.
Indus Suture Zone (ISZ) (or Yarlung-Tsangpo Suture Zone) tectonic plate
ISZ, also called Indus-Yarlung suture zone, Yarlung-Zangpo Suture Zone or Yarlung-Tsangpo Suture Zone, defines the zone of collision between the Indian Plate and the Ladakh Batholith (also Transhimalaya or Karakoram-Lhasa Block) to the north. This suture zone is formed by:
Ophiolite Mélanges, composed of an intercalation of flysch and ophiolites from the Neotethys oceanic crust.
Dras Volcanics, relicts of a Late Cretaceous to Late Jurassic volcanic island arc and consist of basalts, dacites, volcanoclastites, pillow lavas and minor radiolarian cherts
Indus Molasse, a continental clastic rock sequence (with rare interbeds of marine saltwater sediments) comprising alluvial fan, braided stream and fluvio-lacustrine sediments derived mainly from the Ladakh batholith but also from the suture zone itself and the Tethys Himalaya. These molasses are post-collisional and thus Eocene to post-Eocene.
Indus Suture Zone, representing the northern limit of the Himalaya. Further to the North is the so-called Transhimalaya, or more locally Ladakh Batholith, which corresponds essentially to an active margin of Andean type. Widespread volcanism in this volcanic arc was caused by the melting of the mantle at the base of the Tibetan bloc, triggered by the dehydration of the subducting Indian oceanic crust.
Seismic activity
The modern day rate of convergence between the Indian and Eurasian plates is measured to be approximately 17 mm/yr. This convergence is accommodated through seismic activity in active fault zones. As a result, the Himalayan range is one of the most seismically active regions in the world. This region has experienced many high magnitude earthquakes in the last 100 years, including the 1905 Kangra Earthquake, 1975 Kinnaur Earthquake, 1991 Uttarkashi Earthquake, and the 1999 Chamoli Earthquake, all of which were recorded at magnitudes equal or greater than Mw 6.6.
A recent study (Parija et al., 2021) sought to quantify the Coulomb stress transfer in the Western Himalayas. Coulomb stress transfer is used to quantify how earthquakes release stress, identifying areas that are put under increased stress and those that have been unloaded. This study and those like it are important for understanding the current state of fault zones in the region, as well as their potential for rupture in the future.
| Physical sciences | Geologic features | Earth science |
1564379 | https://en.wikipedia.org/wiki/Pyrus%20communis | Pyrus communis | Pyrus communis, the common pear, is a species of pear native to central and eastern Europe, and western Asia.
It is one of the most important fruits of temperate regions, being the species from which most orchard pear cultivars grown in Europe, North America, and Australia have been developed. Two other species of pear, the Nashi pear (Pyrus pyrifolia) and the hybrid Chinese white or ya pear (Pyrus × bretschneideri, ) are more widely grown in East Asia.
Subtaxa
The following subspecies are currently accepted:
Pyrus communis subsp. caucasica – Turkey, Caucasus
Pyrus communis subsp. communis – Entire range except Caucasus
Origin
The cultivated Common pear (P. communis subsp. communis) is thought to be descended from two subspecies of wild pears, categorized as P. communis subsp. pyraster (syn. P. pyraster) and P. communis subsp. caucasica (syn. P. caucasica), which are interfertile with domesticated pears. Archeological evidence shows these pears "were collected from the wild long before their introduction into cultivation", according to Zohary and Hopf. Although they point to finds of pears in sites in Neolithic and Bronze Age European sites, "reliable information on pear cultivation first appears in the works of the Greek and the Roman writers." Theophrastus, Cato the Elder, and Pliny the Elder all present information about the cultivation and grafting of pears.
Cultivation
Common pear trees are not quite as hardy as apples, but nearly so. However, they do require some winter chilling to produce fruit. A number of Lepidoptera caterpillars feed on pear tree leaves.
For best and most consistent quality, common pears are picked when the fruit matures, but before they are ripe. Fruit allowed to ripen on the tree often drops before it can be picked, and in any event will be hard to pick without bruising. Pears store (and ship) well in their mature but unripe state if kept cold, and can be ripened later, a process called bletting. Some varieties, such as Beurre d'Anjou, ripen only with exposure to cold.
Fermented pear juice is called perry. In Britain, the place name "Perry" can indicate the historical presence of pear trees.
Relatively few cultivars of European or Asian pears are widely grown worldwide. Only about 20–25 European and 10–20 Asian cultivars represent virtually all the pears of commerce. Almost all European cultivars were chance seedlings or selections originating in western Europe, mostly France. The Asian cultivars all originated in Japan and China. 'Bartlett' (Williams) is the most common pear cultivar in the world, representing about 75% of US pear production.
Major cultivars
Selected common pear cultivars
Those marked have gained the Royal Horticultural Society's Award of Garden Merit.
'Abate Fetel' (syn. Abbé Fetel; a major cultivar in Italy)
'Ayers' (USA - an interspecific P. communis × P. pyrifolia hybrid from the University of Tennessee)
'Bambinella' (Malta)
'Beth'
Beurré Hardy/Gellerts Butterbirne
'Blake's Pride' (USA)
'Blanquilla' (or 'pera de agua' and 'blanquilla de Aranjuez', Spain)
'Butirra Precoce Morettini'
'Carmen'
'Clara Frijs' (major cultivar in Denmark)
'Concorde' (England - a seedling of 'Conference' × 'Doyenné du Comice')
'Conference' (England, 1894; the most popular commercial variety in the UK)
'Corella' (Australia)
'Coscia' (very early maturing cultivar from Italy)
'Don Guindo' (Spain - strong yellow, flavoured taste)
'Doyenné du Comice' (France)
'Dr. Jules Guyot'
'Forelle' (Germany)
'Glou Morceau' (Belgium, 1750)
'Gorham' (USA)
'Gracioen' (Belgium)
'Harrow Delight' (Canada)
'Harrow Sweet' (Canada)
'Joséphine de Malines' (Belgium - obtained by Esperen, pomologist and mayor of Malines in the 19th century; one of the best late season pears)
'Kieffer' (USA - a hybrid of the Chinese "sand pear", P. pyrifolia and probably 'Bartlett')
'Laxton's Superb' (England; no longer used due to high susceptibility to fireblight)
'Louise Bonne of Jersey'
'Luscious' (USA)
'Merton Pride' (England, 1941)
'Onward' (UK)
'Orient' (USA - an interspecific P. communis × P. pyrifolia hybrid)
'Packham's Triumph' (Australia, 1896)
'Pineapple' (USA - an interspecific P. communis × P. pyrifolia hybrid)
'Red Bartlett' (USA - There are three major red-skinned mutant clones: 'Max Red Bartlett', 'Sensation Red Bartlett', 'Rosired Bartlett')
'Rocha' (Portugal)
'Rosemarie' (South Africa)
'Seckel' (USA; late 17th century Philadelphia area; still produced, naturally resistant to fireblight)
'Starkrimson', also called Red Clapp's, is a red-skinned 1939 Michigan bud mutation of Clapp's Favourite. Its thick, smooth skin is a uniform, bright and intense red, and its creamy flesh is sweet and aromatic.
'Summer Beauty'
'Sudduth'
'Taylor's Gold' (New Zealand - a russeted mutant clone of 'Comice')
Triomphe de Vienne
'Williams Bonne Chrétienne'
| Biology and health sciences | Pomes | Plants |
1564394 | https://en.wikipedia.org/wiki/Electromagnetic%20shielding | Electromagnetic shielding | In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables, to isolate wires from the environment through which the cable runs. Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding.
EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume and the frequency of the fields of interest and the size, shape and orientation of holes in a shield to an incident electromagnetic field.
Materials used
Typical materials used for electromagnetic shielding include a thin layer of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel. Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface.
Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed on to the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding.
Copper is used for radio frequency (RF) shielding because it absorbs radio and other electromagnetic waves. Properly designed and constructed RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities.
EMI (electromagnetic interference) shielding is of great research interest and several new types of nanocomposites made of ferrites, polymers, and 2D materials are being developed to obtain more efficient RF/microwave-absorbing materials (MAMs). EMI shielding is often achieved by electroless plating of copper as most popular plastics are non-conductive or by special conductive paint.
Example of applications
One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor. The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor.
Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields.
The door of a microwave oven has a screen built into the window. From the perspective of microwaves (with wavelengths of 12 cm), this screen completes the Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes.
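A back-of-envelope check of this size argument; the magnetron frequency and mesh hole size below are assumed typical values, not from the source:

```python
c = 3.0e8        # m/s, speed of light
f_oven = 2.45e9  # Hz, typical microwave-oven frequency (assumed)
hole = 1.0e-3    # m, ~1 mm mesh hole (assumed)

microwave_wavelength = c / f_oven
print(microwave_wavelength)  # ~0.12 m: the hole is ~100x smaller -> blocked
print(hole / 550e-9)         # ~1800: the hole dwarfs a 550 nm light wave -> passes
```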
RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports.
NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection primarily because of the prohibitive cost.
RF shielding is also used to protect medical and laboratory equipment to provide protection against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect the equipment at the AM, FM or TV broadcast facilities.
Electromagnetic shielding is also used in defense applications. As technology improves, so does susceptibility to various types of nefarious electromagnetic interference; encasing a cable inside a grounded conductive barrier can mitigate these risks.
How it works
Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor that cancels the applied field inside, at which point the current stops. See Faraday cage for more explanation.
Similarly, varying magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the magnetic field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside.
Several factors serve to limit the shielding capability of real RF shields. One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors are non-magnetic and exhibit only a weak response to low-frequency magnetic fields, so such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield.
In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time; however, any radiation energy that is not reflected is absorbed within a thin surface layer of the conductor (unless the shield is extremely thin), so in this case, too, there is no electromagnetic field inside. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth.
Magnetic shielding
Equipment sometimes requires isolation from external magnetic fields. For static or slowly varying magnetic fields (below about 100 kHz) the Faraday shielding described above is ineffective. In these cases shields made of high-magnetic-permeability metal alloys can be used, such as sheets of permalloy and mu-metal, or ferromagnetic metal coatings with a nanocrystalline grain structure. These materials do not block the magnetic field, as with electric shielding, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at both very low magnetic field strengths and at high field strengths where the material becomes saturated. Therefore, to achieve low residual fields, magnetic shields often consist of several enclosures, one inside the other, each of which successively reduces the field inside it. Entry holes within shielding surfaces may degrade their performance significantly.
Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding, in which a field created by electromagnets cancels the ambient field within a volume. Solenoids and Helmholtz coils are types of coils that can be used for this purpose, as well as more complex wire patterns designed using methods adapted from those used in coil design for magnetic resonance imaging. Active shields may also be designed accounting for the electromagnetic coupling with passive shields, referred to as hybrid shielding, so that there is broadband shielding from the passive shield and additional cancellation of specific components using the active system.
Additionally, superconducting materials can expel magnetic fields via the Meissner effect.
Mathematical model
Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability $\mu_r$, with inner radius $a$ and outer radius $b$. We then put this object in a constant magnetic field:
$$\vec{H}_0 = H_0 \hat{z} = H_0 \cos\theta\, \hat{r} - H_0 \sin\theta\, \hat{\theta}.$$
Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, we can define a magnetic scalar potential that satisfies Laplace's equation:
$$\vec{H} = -\vec{\nabla}\Phi_M,$$
where
$$\nabla^2 \Phi_M = 0.$$
In this particular problem there is azimuthal symmetry, so we can write down that the solution to Laplace's equation in spherical coordinates is:
$$\Phi_M = \sum_{\ell=0}^{\infty} \left( A_\ell r^\ell + \frac{B_\ell}{r^{\ell+1}} \right) P_\ell(\cos\theta).$$
After matching the boundary conditions
$$\left( \vec{H}_2 - \vec{H}_1 \right) \times \hat{n} = 0, \qquad \left( \vec{B}_2 - \vec{B}_1 \right) \cdot \hat{n} = 0$$
at the boundaries (where $\hat{n}$ is a unit vector that is normal to the surface, pointing from side 1 to side 2), we find that the magnetic field inside the cavity in the spherical shell is:
$$\vec{H}_{\text{in}} = \eta\, \vec{H}_0,$$
where $\eta$ is an attenuation coefficient that depends on the thickness of the diamagnetic material and the magnetic permeability of the material:
$$\eta = \frac{9\mu_r}{(2\mu_r + 1)(\mu_r + 2) - 2\dfrac{a^3}{b^3}(\mu_r - 1)^2}.$$
This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds. Notice that this coefficient appropriately goes to 1 (no shielding) in the limit that $\mu_r \to 1$. In the limit that $\mu_r \to \infty$ this coefficient goes to 0 (perfect shielding). When $\mu_r \gg 1$, the attenuation coefficient takes on the simpler form
$$\eta = \frac{9}{2\mu_r\left(1 - \dfrac{a^3}{b^3}\right)},$$
which shows that the magnetic field decreases like $\mu_r^{-1}$.
| Technology | Signal transmission | null |
1564498 | https://en.wikipedia.org/wiki/Christmas%20Island%20red%20crab | Christmas Island red crab | The Christmas Island red crab (Gecarcoidea natalis) is a species of land crab that is endemic to Christmas Island and the Cocos (Keeling) Islands in the Indian Ocean. Although restricted to a relatively small area, an estimated 43.7 million adult red crabs once lived on Christmas Island alone, but the accidental introduction of the yellow crazy ant is believed to have killed about 10–15 million of these in recent years. Christmas Island red crabs make an annual mass migration to the sea to lay their eggs. Although its population is under great assault by the ants, as of 2020 the red crab had not been assessed by the International Union for Conservation of Nature (IUCN) and was not listed on its Red List. The crab's annual mass migration to the sea for spawning is described as an "epic" event: millions emerge at the same time, halting road traffic and covering the ground in a thick carpet of crabs.
Description
Christmas Island red crabs are large crabs with the carapace measuring up to wide. The claws are usually of equal size, unless one becomes injured or detached, in which case the limb will regenerate. The male crabs are generally larger than the females, while adult females have a much broader abdomen (only apparent above 3 years of age) and usually have smaller claws. Bright red is their most common color, but some can be orange or the much rarer purple.
Ecology and behaviour
Behaviour
Like most land crabs, red crabs use gills to breathe and must take great care to conserve body moisture. Although red crabs are diurnal, they usually avoid direct sunlight so as not to dry out, and, despite lower temperatures and higher humidity, they are almost completely inactive at night. Red crabs also dig burrows to shelter themselves from the sun and will usually stay in the same burrow through the year; during the dry season, they will cover the entrance to the burrow to maintain a higher humidity inside, and will stay there for 3 months until the start of the wet season. Apart from the breeding season, red crabs are solitary animals and will defend their burrow from intruders.
Migration and breeding
For most of the year, red crabs can be found within Christmas Island's forests. Each year they migrate to the coast to breed; the beginning of the wet season (usually October/November) allows the crabs to increase their activity and stimulates their annual migration. The timing of their migration is also linked to the phases of the moon. During this migration, red crabs abandon their burrows and travel to the coast to mate and spawn. This normally requires at least a week, with the male crabs usually arriving before the females. Once on the shore, the male crabs excavate burrows, which they must defend from other males. Mating occurs in or near the burrows. Soon after mating, the males return to the forest while the females remain in the burrows for another two weeks. During this period they lay their eggs and incubate them in their abdominal brood pouch to facilitate their development. At the end of the incubation period the females leave their burrows and release their eggs into the ocean, precisely at the turn of the high tide during the last quarter of the moon. The females then return to the forest while the crab larvae spend another 3–4 weeks at sea before returning to land as juvenile crabs.
Life cycle
The eggs released by the females immediately hatch upon contact with sea water and clouds of crab larvae will swirl near the shore until they are swept out to sea, where they remain for 3–4 weeks. During this time, the larvae go through several larval stages, eventually developing into shrimp-like animals called megalopae. The megalopae gather near the shore for 1–2 days before changing into young crabs only across. The young crabs then leave the water to make a 9-day journey to the centre of the island. For the first three years of their lives, the young crabs will remain hidden in rock outcrops, fallen tree branches and debris on the forest floor. Red crabs grow slowly, reaching sexual maturity at around 4–5 years, at which point they begin participating in the annual migration. During their early growth phases, red crabs will moult several times. Mature red crabs will moult once a year, usually in the safety of their burrow. Their lifespan is about 12 years.
Diet
Christmas Island red crabs are opportunistic omnivorous scavengers. They mostly eat fallen leaves, fruits, flowers and seedlings, but will also feed on dead animals (including cannibalising other red crabs) and human rubbish. The non-native giant African land snail is another food choice for the crabs. Red crabs have virtually no competition for food due to their dominance of the forest floor.
Predators
Adult red crabs have no natural predators on Christmas Island. The yellow crazy ant, an invasive species accidentally introduced to Christmas Island and Australia from Africa, is believed to have killed 10–15 million red crabs (one-quarter to one-third of the total population) in recent years. In total (including those killed), the ants are believed to have displaced 15–20 million red crabs on Christmas Island.
During their larval stage, millions of red crab larvae are eaten by fish and large filter-feeders such as manta rays and whale sharks which visit Christmas Island during the red crab breeding season.
Coconut crabs (alternatively known as robber crabs) have also been filmed on Christmas Island preying on red crabs.
Early inhabitants of Christmas Island rarely mentioned these crabs. It is possible that their current abundance stems from the extinction in 1903 of the endemic Maclear's rat (Rattus macleari), which may previously have kept the crab's population in check.
Population
Surveys have found a density of 0.09–0.57 adult red crabs per square metre, equalling an estimated total population of 43.7 million on Christmas Island. Less information is available for the population in the Cocos (Keeling) Islands, but numbers there are relatively low. Based on genetic evidence, it appears that the Cocos (Keeling) red crabs are relatively recent immigrants from Christmas Island, and for conservation purposes the two can be managed as a single population.
Relationship with humans
During their annual breeding migration, red crabs often have to cross several roads to reach their breeding grounds and then return to the forest. As a result, thousands of red crabs are crushed by vehicles, and they sometimes cause accidents because their tough exoskeletons can puncture tyres. To ensure the safety of both the crabs and humans, local park rangers work to ensure that crabs can safely make their journey from the centre of the island to the sea; along heavily travelled roads, they set up aluminium barriers that funnel the crabs towards small underpasses so they can safely traverse the roads. Other infrastructure to assist the crab migration includes a five-metre-high "crab bridge". In recent years, the human inhabitants of Christmas Island have become more tolerant and respectful of the crabs during their annual migration and are now more cautious while driving, which helps to minimise crab casualties. Their small size, high water content and poor meat quality mean they are not considered edible by humans.
| Biology and health sciences | Crabs and hermit crabs | Animals |
1565138 | https://en.wikipedia.org/wiki/Plateosaurus | Plateosaurus | Plateosaurus (probably meaning "broad lizard", often mistranslated as "flat lizard") is a genus of plateosaurid dinosaur that lived during the Late Triassic period, around 214 to 204 million years ago, in what is now Central and Northern Europe. Plateosaurus is a basal (early) sauropodomorph dinosaur, a so-called "prosauropod". The type species is Plateosaurus trossingensis; before 2019, that honor was given to Plateosaurus engelhardti, but it was ruled as undiagnostic (i.e. indistinguishable from other dinosaurs) by the ICZN. Currently, there are three valid species; in addition to P. trossingensis, P. longiceps and P. gracilis are also known. However, others have been assigned in the past, and there is no broad consensus on the species taxonomy of plateosaurid dinosaurs. Similarly, there are a plethora of synonyms (invalid duplicate names) at the genus level.
Discovered in 1834 by Johann Friedrich Engelhardt and described three years later by Hermann von Meyer, Plateosaurus was the fifth named dinosaur genus that is still considered valid. Although it had been described before Richard Owen formally named Dinosauria in 1842, it was not one of the three genera used by Owen to define the group, because at the time, it was poorly known and difficult to identify as a dinosaur. It is now among the dinosaurs best known to science: over 100 skeletons have been found, some of them nearly complete. The abundance of its fossils in Swabia, Germany, has led to the nickname Schwäbischer Lindwurm (Swabian lindworm).
Plateosaurus was a bipedal herbivore with a small skull on a long, flexible neck, sharp but plump plant-crushing teeth, powerful hind limbs, short but muscular arms and grasping hands with large claws on three fingers, possibly used for defence and feeding. Unusually for a dinosaur, Plateosaurus showed strong developmental plasticity: instead of having a fairly uniform adult size, fully grown individuals were between long and weighed between . Commonly, the animals lived for at least 12 to 20 years, but the maximum life span is not known.
Despite the great quantity and excellent quality of the fossil material, Plateosaurus was for a long time one of the most misunderstood dinosaurs. Some researchers proposed theories that were later shown to conflict with geological and palaeontological evidence, but have become the paradigm of public opinion. Since 1980 the taxonomy (relationships), taphonomy (how the animals became embedded and fossilised), biomechanics (how their skeletons worked), and palaeobiology (life circumstances) of Plateosaurus have been re-studied in detail, altering the interpretation of the animal's biology, posture and behaviour.
Discovery and history
In 1834, physician Johann Friedrich Engelhardt discovered some vertebrae and leg bones at Heroldsberg near Nuremberg, Germany. Three years later German palaeontologist Hermann von Meyer designated them as the type specimen of a new genus, Plateosaurus. Since then, remains of well over 100 individuals of Plateosaurus have been discovered at various locations throughout Europe.
Material assigned to Plateosaurus has been found at over 50 localities in Germany (mainly along the Neckar and Pegnitz river valleys), Switzerland (Frick) and France. Three localities are of special importance, because they yielded specimens in large numbers and of unusually good quality: near Halberstadt in Saxony-Anhalt, Germany; Trossingen in Baden-Württemberg, Germany; and Frick. Between the 1910s and 1930s, excavations in a clay pit in Saxony-Anhalt revealed between 39 and 50 skeletons that belonged to Plateosaurus, along with teeth and a small number of bones of the theropod Liliensternus, and two skeletons and some fragments of the turtle Proganochelys. Some of the plateosaur material was assigned to P. longiceps, a species described by palaeontologist Otto Jaekel in 1914. Most of the material found its way to the Museum für Naturkunde in Berlin, where much of it was destroyed during World War II. The Halberstadt quarry today is covered by a housing development.
The second major German locality with Plateosaurus finds, a quarry in Trossingen in the Black Forest, was worked repeatedly in the 20th century. Between 1911 and 1932, excavations during six field seasons led by German palaeontologists Eberhard Fraas (1911–1912), Friedrich von Huene (1921–23), and finally Reinhold Seemann (1932) revealed a total of 35 complete or partially complete skeletons of Plateosaurus, as well as fragmentary remains of approximately 70 more individuals. The large number of specimens from Swabia had already caused German palaeontologist Friedrich August von Quenstedt to nickname the animal Schwäbischer Lindwurm (Swabian lindworm or Swabian dragon). Much of the Trossingen material was destroyed in 1944, when the Naturaliensammlung in Stuttgart (predecessor to the State Museum of Natural History Stuttgart (SMNS)) burnt to the ground after an Allied bombing raid. Luckily, however, a 2011 study by SMNS curator Rainer Schoch found that, at least from the finds of Seemann's 1932 excavation, "the scientifically most valuable material is still available".
The Plateosaurus skeletons in a clay pit of the Tonwerke Keller AG in Frick, Switzerland, were first noticed in 1976. While the bones are often significantly deformed by taphonomic processes, Frick yields skeletons of P. trossingensis comparable in completeness and position to those of Trossingen.
In 1997, workers on an oil platform of the Snorre oil field, located at the northern end of the North Sea within the Lunde Formation, were drilling through sandstone for oil exploration when they stumbled on a fossil they believed to be plant material. The drill core containing the fossil was extracted from below the seafloor. Martin Sander and Nicole Klein, palaeontologists of the University of Bonn, analysed the bone microstructure and concluded that the rock preserved fibrous bone tissue from a fragment of a limb bone belonging to Plateosaurus, making it the first dinosaur found in Norway. Material referred to Plateosaurus has also been found in the Fleming Fjord Formation of East Greenland, but it was given the new genus name Issi in 2021.
The type series of Plateosaurus engelhardti included "roughly 45 bone fragments", of which nearly half are lost. The remaining material is kept in the Institute for Palaeontology of the University of Erlangen-Nuremberg, Germany. From these bones, German palaeontologist Markus Moser in 2003 selected a partial sacrum (series of fused hip vertebrae) as a lectotype. The type locality is not known for certain, but Moser attempted to infer it from previous publications and the colour and preservation of the bones. He concluded that the material probably stems from the "Buchenbühl", roughly south of Heroldsberg.
The type specimen of Plateosaurus gracilis, an incomplete postcranium, is kept at the Staatliches Museum für Naturkunde Stuttgart, Germany, and the type locality is Heslach, a suburb of the same city.
The type specimen of Plateosaurus trossingensis is SMNS 132000, stored in the same museum as P. gracilis. Its type locality is Trossingen, within the Trossingen Formation.
The type specimen of Plateosaurus longiceps is MB R.1937, which is stored in the Museum für Naturkunde in Berlin. Its type locality is Halberstadt, located in Saxony-Anhalt and the Trossingen Formation.
Etymology
The etymology of the name Plateosaurus is not entirely clear, as the original description contains no information and various authors have offered differing interpretations. German geologist Hanns Bruno Geinitz in 1846 gave "(πλᾰτῠ́ς, breit)" [English: broad] as the origin of the name, with von Meyer's Latin spelling Plateosaurus evidently derived from the stem of πλᾰτέος (plateos), the genitive case of the masculine adjective platys in Ancient Greek. In the same year, Agassiz proposed that the name derives from the Ancient Greek πλατη (platê – "paddle", "rudder"; Agassiz translates this as Latin pala = "spade") and σαυρος (sauros – "lizard"). Agassiz consequently renamed the genus Platysaurus, probably from Greek πλατυς (platys – "broad, flat, broad-shouldered"), creating an invalid junior synonym. Later authors often referred to this derivation, and the secondary meaning "flat" of πλατυς, so that Plateosaurus is often translated as "flat lizard". Often, claims were made that πλατυς is supposed to have been intended as a reference to flat bones, for example the laterally flattened teeth of Plateosaurus, but the teeth and other flat bones such as the pubic bones and some skull elements were unknown at the time of description.
Von Meyer's original short description from 1837 did not provide an etymology for Plateosaurus, but noted (as translated into English by British biologist Thomas Henry Huxley in 1870): "The bones belong to a gigantic Saurian, which, in virtue of the mass and hollowness of its limb-bones, is allied to Iguanodon and to Megalosaurus, and will belong to the second division of my Saurian system." Von Meyer later gave the formal name Pachypodes or Pachypoda ("thick feet") to his second division of "Saurians with Limbs Similar to Heavy Land Mammalia", but the group was a synonym of Richard Owen's Dinosauria from 1842.
In 1855, von Meyer published a detailed description of Plateosaurus with illustrations, but again gave no details on the etymology. He repeatedly referred to its gigantic size ("Riesensaurus" = giant lizard) and massive limbs ("schwerfüssig"), comparing Plateosaurus to large modern land mammals, but did not describe any important features that fit the terms "flat" or "shaped like an oar." Researcher Ben Creisler therefore concluded that "broad lizard" is the most suitable translation, and possibly was intended to emphasise the giant size of the animal, in particular its robust limb bones.
Von Meyer had authored a popular audience book in 1852, Ueber Die Reptilien und Säugethiere Der Verschiedenen Zeiten Der Erde [On the Reptiles and Mammals from the Different Time Periods of the Earth], based on two public lectures. In the book on page 44, he briefly described Plateosaurus, using the term "breit" [broad] for different features, including "broad, strong limb bones," noting that it had: "mehreren verwachsenen Wirbeln bestehende Heiligenbein, breite, starke Gliedmaassenknochen von 1 1⁄2 Fuss Länge mit einer geräumigen Markhöhle, zierliche Krystalle von Nadeleisenerz einschliessend, so wie Zehenglieder, welche ebenfalls breit und hohl waren...; es wäre diess der älteste bis jetzt aufgefundene Pachypode." [a sacrum composed of several fused vertebrae, broad, strong limb bones 1 1⁄2 feet long with an ample medullary cavity enclosing finely formed crystals of Goethite iron ore, as well as toe phalanges, which were also broad and hollow...; it would be the oldest pachypode [dinosaur] yet found.]
Valid species
The taxonomic history of Plateosaurus is "long and confusing" and a "chaotic tangle of names". As of 2019, only three species are universally accepted as valid: the type species P. trossingensis, P. longiceps, and P. gracilis, previously assigned to its own genus Sellosaurus. Moser performed the most extensive and detailed investigation of all plateosaurid material from Germany and Switzerland, concluding that all Plateosaurus and most other prosauropod material from the Keuper stems from the same species as the type material of Plateosaurus engelhardti. However, this is problematic due to the undiagnostic state of the lectotype. Moser considered Sellosaurus to be the same genus as Plateosaurus, but did not discuss whether S. gracilis and P. engelhardti belong to the same species. Palaeontologist Adam Yates of the University of the Witwatersrand cast further doubt on the generic separation. He included the type material of Sellosaurus gracilis in Plateosaurus as P. gracilis and reintroduced the old name Efraasia for some material that had been assigned to Sellosaurus. In 1926, von Huene had already concluded the two genera were the same.
Yates has cautioned that P. gracilis may be a metataxon, which means that there is neither evidence that the material assigned to it is monophyletic (belongs to one species), nor that it is paraphyletic (belongs to several species). This is the case because the holotype of P. (Sellosaurus) gracilis has no skull, and the other specimens consist of skulls and material that overlaps too little with the holotype to make it certain that it belongs to the same taxon. It is therefore possible that the known material contains more species belonging to Plateosaurus.
Some scientists regard other species as valid as well, for example P. erlenbergensis and P. engelhardti. These claims are problematic since both P. erlenbergensis and P. engelhardti have undiagnostic type specimens.
All named species of Plateosaurus other than the type species, P. gracilis, and P. longiceps have turned out to be junior synonyms of the type species or invalid names. Von Huene practically erected a new species and sometimes a new genus for each relatively complete find from Trossingen (three species of Pachysaurus and seven of Plateosaurus) and Halberstadt (one species of Gresslyosaurus and eight of Plateosaurus). Later, he merged several of these species, but remained convinced that more than one genus and more than one species of Plateosaurus were present in both localities. Jaekel also believed that the Halberstadt material included several plateosaurid dinosaurs, as well as non-plateosaurid prosauropods. Systematic research by Galton drastically reduced the number of genera and species. Galton synonymised all cranial material, and described differences between the syntypes of P. engelhardti and the Trossingen material, which he referred to P. longiceps. Galton recognised P. trossingensis (P. fraasianus and P. integer are junior objective synonyms) to be identical to P. longiceps. Markus Moser, however, showed that P. longiceps is itself a junior synonym of P. engelhardti. Furthermore, a variety of species in other genera were created for material belonging to P. engelhardti, including Dimodosaurus poligniensis, Gresslyosaurus robustus, Gresslyosaurus torgeri, Pachysaurus ajax, Pachysaurus giganteus, Pachysaurus magnus and Pachysaurus wetzelianus. G. ingens has been considered separate from Plateosaurus, pending a revision of the material.
The skull of AMNH FARB 6810, the best-preserved skull of Plateosaurus that has been taken apart during preparation and is thus available as separate bones, was described anew in 2011. The authors of that publication, palaeontologists Albert Prieto-Márquez and Mark A. Norell, refer the skull to P. erlenbergensis, a species erected in 1905 by Friedrich von Huene and regarded as a synonym of P. engelhardti by Markus Moser. If the P. erlenbergensis holotype is diagnostic (i.e., has enough characters to be distinct from other material), it is the correct name for the material assigned to P. longiceps Jaekel, 1913.
Aside from fossils clearly belonging to Plateosaurus, there is much prosauropod material from the German Knollenmergel in museum collections, most of it labeled as Plateosaurus, that does not belong to the type species and possibly not to Plateosaurus at all. Some of this material is not diagnostic; other material has been recognised to be different, but was never sufficiently described.
Description
Plateosaurus had the typical body shape of a herbivorous bipedal dinosaur: a small skull, a long and flexible neck composed of 10 cervical vertebrae, a stocky body, and a long, mobile tail composed of at least 40 caudal vertebrae. The arms of Plateosaurus were very short, even compared to most other "prosauropods". However, they were strongly built, with hands adapted for powerful grasping. The shoulder girdle was narrow (often misaligned in skeletal mounts and drawings), with the clavicles (collar bones) touching at the body's midline, as in other basal sauropodomorphs. The hind limbs were held under the body, with slightly flexed knees and ankles, and the foot was digitigrade, meaning the animal walked on its toes. The proportionally long lower leg and metatarsus show that Plateosaurus could run quickly on its hind limbs. The tail of Plateosaurus was typically dinosaurian, muscular and with high mobility.
The skull of Plateosaurus is small and narrow, rectangular in side view, and nearly three times as long as it is high. There is an almost rectangular lateral temporal foramen at the back. The large, round orbit (eye socket), the sub-triangular antorbital fenestra and the oval naris (nostril) are of almost equal size. The jaws carried many small, leaf-shaped, socketed teeth: 5 to 6 per premaxilla, 24 to 30 per maxilla, and 21 to 28 per dentary (lower jaw). The thick, leaf-shaped, bluntly serrated tooth crowns were suitable for crushing plant material. The low position of the jaw joint gave the chewing muscles great leverage, so that Plateosaurus could deliver a powerful bite. These features suggest that it fed primarily, or even exclusively, on plants. Its eyes were directed to the sides, rather than the front, providing all-round vision to watch for predators. Some fossil skeletons have preserved sclerotic rings (rings of bone plates that protect the eye).
The ribs were connected to the dorsal (trunk) vertebrae with two joints, acting together as a simple hinge joint, which has allowed researchers to reconstruct the inhaled and exhaled positions of the ribcage. The difference in volume between these two positions defines the air exchange volume (the amount of air moved with each breath), determined to be approximately 20 L for a P. engelhardti individual estimated to have weighed 690 kg, or 29 mL/kg bodyweight. This is a typical value for birds, but not for mammals, and indicates that Plateosaurus probably had an avian-style flow-through lung, although indicators for postcranial pneumaticity (air sacs of the lung invading the bones to reduce weight) can be found on the bones of only a few individuals, and were only recognised in 2010. Combined with evidence from bone histology this indicates that Plateosaurus was endothermic.
The type species of Plateosaurus is P. trossingensis. Adults of this species reached in length, and ranged in mass from . The geologically older species, P. gracilis (formerly named Sellosaurus gracilis), was somewhat smaller, with a total length of .
Classification
Plateosaurus is a member of a group of early herbivores known as "prosauropods". The group is not a monophyletic group (thus given in quotation marks), and most researchers prefer the term basal sauropodomorph. Plateosaurus was the first "prosauropod" to be described, and gives its name to the family Plateosauridae as the type genus. Initially, when the genus was poorly known, it was only included in Sauria, being some kind of reptile, but not in any more narrowly defined taxon. In 1845, von Meyer created the group Pachypodes (a defunct junior synonym of Dinosauria) to include Plateosaurus, Iguanodon, Megalosaurus and Hylaeosaurus. Plateosauridae was proposed by Othniel Charles Marsh in 1895 within Theropoda. Later it was moved to "Prosauropoda" by von Huene, a placement that was accepted by most authors. Before the advent of cladistics in paleontology during the 1980s, with its emphasis on monophyletic groups (clades), Plateosauridae was defined loosely, as large, broad-footed, broad-handed forms with relatively heavy skulls, unlike the smaller "anchisaurids" and sauropod-like "melanorosaurids". Reevaluation of "prosauropods" in light of the new methods of analysis led to the reduction of Plateosauridae. For many years the clade only included Plateosaurus and various junior synonyms, but later two more genera were considered to belong to it: Sellosaurus and possibly Unaysaurus. Of these, Sellosaurus is probably another junior synonym of Plateosaurus.
Basal sauropodomorph phylogeny simplified after Yates, 2007. This is only one of many proposed cladograms for basal sauropodomorphs. Some researchers do not agree that plateosaurs were the direct ancestors of sauropods.
Palaeobiology
Posture and gait
Practically every imaginable posture has been suggested for Plateosaurus in the scientific literature at some point. Von Huene assumed digitigrade bipedality with erect hind limbs for the animals he excavated at Trossingen, with the backbone held at a steep angle (at least during rapid locomotion). In contrast, Jaekel, the main investigator of the Halberstadt material, initially concluded that the animals walked quadrupedally, like lizards, with a sprawling limb position, plantigrade feet, and laterally undulating the body. Only a year later, Jaekel instead favoured a clumsy, kangaroo-like hopping, a change of heart for which he was mocked by German zoologist Gustav Tornier, who interpreted the shape of the articulation surfaces in the hip and shoulder as typically reptilian. Fraas, the first excavator of the Trossingen lagerstätte, also favoured a reptilian posture. Müller-Stoll listed a number of characters required for an erect limb posture that Plateosaurus supposedly lacked, concluding that the lizard-like reconstructions were correct. However, most of these adaptations are actually present in Plateosaurus.
From 1980 on, a better understanding of dinosaur biomechanics, and studies by palaeontologists Andreas Christian and Holger Preuschoft on the resistance to bending of the back of Plateosaurus, led to widespread acceptance of an erect, digitigrade limb posture and a roughly horizontal position of the back. Many researchers were of the opinion that Plateosaurus could use both quadrupedal gaits (for slow speeds) and bipedal gaits (for rapid locomotion), and Wellnhofer insisted that the tail curved strongly downward, making a bipedal posture impossible. However, Moser showed that the tail was in fact straight.
The bipedal-quadrupedal consensus was changed by a detailed study of the forelimbs of Plateosaurus by Bonnan and Senter (2007), which clearly showed that Plateosaurus was incapable of pronating its hands. The pronated position in some museum mounts had been achieved by exchanging the position of radius and ulna in the elbow. The lack of forelimb pronation meant that Plateosaurus was an obligate (i.e. unable to walk in any other way) biped. Further indicators for a purely bipedal mode of locomotion are the great difference in limb length (the hind limb is roughly twice as long as the forelimb), the very limited motion range of the forelimb, and the fact that the centre of mass rests squarely over the hind limbs. A recent study based on the cross-sectional geometry of long limb bones, comparisons with extant taxa and inference models also confirmed a bipedal posture and erect stance for Plateosaurus.
Plateosaurus shows a number of cursorial adaptations, including an erect hind limb posture, a relatively long lower leg, an elongated metatarsus and a digitigrade foot posture. However, in contrast to mammalian cursors, the moment arms of the limb extending muscles are short, especially in the ankle, where a distinct, moment arm-increasing tuber on the calcaneum is missing. This means that in contrast to running mammals, Plateosaurus probably did not use gaits with aerial, unsupported phases. Instead, Plateosaurus must have increased speed by using higher stride frequencies, created by rapid and powerful limb retraction. Reliance on limb retraction instead of extension is typical for non-avian dinosaurs.
Feeding and diet
Important cranial characteristics (such as jaw articulation) of most "prosauropods" are closer to those of herbivorous reptiles than those of carnivorous ones, and the shape of the tooth crown is similar to that of modern herbivorous or omnivorous iguanas. The maximum width of the crown was greater than that of the root for the teeth of most "prosauropods", including Plateosaurus; this results in a cutting edge similar to those of extant herbivorous or omnivorous reptiles. Paul Barrett proposed that prosauropods supplemented their mostly herbivorous diets with small prey or carrion, thus making them omnivores.
So far, no fossil of Plateosaurus has been found with gastroliths (gizzard stones) in the stomach area. The old, widely cited idea that all large dinosaurs, implicitly also Plateosaurus, swallowed gastroliths to digest food because of their relatively limited ability to deal with food orally has been refuted by a study on gastrolith abundance, weight, and surface structure in fossils compared to alligators and ostriches by Oliver Wings. The use of gastroliths for digestion seems to have developed on the line from basal theropods to birds, with a parallel development in Psittacosaurus.
Life history and metabolism
Similar to all non-avian dinosaurs studied to date, Plateosaurus grew in a pattern that is unlike that of both extant mammals and birds. In the closely related sauropods with their typical dinosaurian physiology, growth was initially rapid, continuing somewhat more slowly well beyond sexual maturity, but was determinate, i.e. the animals stopped growing at a maximum size. Mammals grow rapidly, but sexual maturity falls typically at the end of the rapid growth phase. In both groups, the final size is relatively constant, with humans atypically variable. Extant reptiles show a sauropod-like growth pattern, initially rapid, then slowing after sexual maturity, and almost, but not fully, stopping in old age. However, their initial growth rate is much lower than in mammals, birds and dinosaurs. The reptilian growth rate is also very variable, so that individuals of the same age may have very different sizes, and final size also varies significantly. In extant animals, this growth pattern is linked to behavioural thermoregulation and a low metabolic rate (i.e. ectothermy), and is called "developmental plasticity". (Note that this is not the same as neural developmental plasticity.)
Plateosaurus followed a trajectory similar to sauropods, but with a varied growth rate and final size as seen in extant reptiles, probably in response to environmental factors such as food availability. Some individuals were fully grown at only 4.8 metres' (16 ft) total length, while others reached . However, the bone microstructure indicates rapid growth, as in sauropods and extant mammals, which suggests endothermy. Plateosaurus apparently represents an early stage in the development of endothermy, in which endothermy was decoupled from developmental plasticity. This hypothesis is based on a detailed study of Plateosaurus long-bone histology conducted by Martin Sander and Nicole Klein of the University of Bonn. A further indication for endothermy is the avian-style lung of Plateosaurus.
Long-bone histology also allows estimating the age a specific individual reached. Sander and Klein found that some individuals were fully grown at 12 years of age, others were still slowly growing at 20 years, and one individual was still growing rapidly at 18 years. The oldest individual found was 27 years and still growing; most individuals were between 12 and 20 years old. However, some may well have lived much longer, because the fossils from Frick and Trossingen are all animals that died in accidents, and not from old age. Due to the absence of individuals smaller than long, it is not possible to deduce a complete ontogenetic series for Plateosaurus or determine the growth rate of animals less than 10 years of age.
Comparisons between the scleral rings and estimated orbit size of Plateosaurus and modern birds and reptiles suggest that it may have been cathemeral, active throughout the day and night, possibly avoiding the midday heat.
Palaeoecology
Plateosaurus gracilis, the older species, is found in the Löwenstein Formation (lower to middle Norian). P. trossingensis and P. longiceps stem from the Trossingen Formation (upper Norian) and equivalently aged rock units. Plateosaurus thus lived probably between approximately 227 and 208.5 million years ago.
Taphonomy
The taphonomy (burial and fossilisation process) of the three main Plateosaurus sites—Trossingen, Halberstadt and Frick—is unusual in several ways. All three sites are nearly monospecific assemblages, meaning that they contain practically only one species, which requires very special circumstances. However, shed teeth of theropods have been found at all three sites, as well as remains of the early turtle Proganochelys. Additionally, a partial "prosauropod" skeleton was found in Halberstadt that does not belong to Plateosaurus, but is preserved in a similar position. All sites yielded almost complete and partial skeletons of Plateosaurus, as well as isolated bones. The partial skeletons tend to include the hind limbs and hips, while parts of the anterior body and neck are rarely found in isolation. The animals were all adults or subadults (nearly adult individuals); no juveniles or hatchlings are known. Complete skeletons and large skeleton parts that include the hind limbs all rest dorsal (top) side up, as do the turtles. Also, they are mostly well-articulated, and the hind limbs are three-dimensionally preserved in a zigzag posture, with the feet often much deeper in the sediment than the hips.
Earlier interpretations
In the first published discussion of the Trossingen Plateosaurus finds, Fraas suggested that only miring in mud allowed the preservation of the single complete skeleton then known. Similarly, Jaekel interpreted the Halberstadt finds as animals that waded too deep into swamps, became mired and drowned. He interpreted partial remains as having been transported into the deposit by water, and strongly refuted a catastrophic accumulation. In contrast, von Huene interpreted the sediment as aeolian deposits, with the weakest animals, mostly subadults, succumbing to the harsh conditions in the desert and sinking into the mud of ephemeral water holes. He argued that the completeness of many finds indicated that transport did not happen, and saw partial individuals and isolated bones as results of weathering and trampling. Seemann developed a different scenario, in which Plateosaurus herds congregated on large water holes, and some herd members got pushed in. Light animals managed to get free, while heavy individuals got stuck and died.
A different school of thought developed almost half a century later, with palaeontologist David Weishampel suggesting that the skeletons from the lower layers stemmed from a herd that died catastrophically in a mudflow, while those in the upper layers accumulated over time. Weishampel explained the curious monospecific assemblage by theorising that Plateosaurus were common during this period. This theory was erroneously attributed to Seemann in a popular account of the plateosaurs in the collection of the Institute and Museum for Geology and Palaeontology, University of Tübingen, and has since become the standard explanation on most internet sites and in popular books on dinosaurs. Rieber proposed a more elaborate scenario, which included the animals dying of thirst or starvation, and being concentrated by mudflows.
Mud-miring trap
A detailed re-assessment of the taphonomy by palaeontologist Martin Sander of the University of Bonn, Germany, found that the mud-miring hypothesis first suggested by Fraas is true: animals above a certain body weight sank into the mud, which was further liquefied by their attempts to free themselves. Sander's scenario, similar to that proposed for the famous Rancho La Brea Tar Pits, is the only one explaining all taphonomic data. The degree of completeness of the carcasses was not influenced by transport, which is obvious from the lack of indications for transport before burial, but rather by how much the dead animals were scavenged. Juveniles of Plateosaurus and other taxa of herbivores were too light to sink into the mud or managed to extract themselves, and were thus not preserved. Similarly, scavenging theropods were not trapped due to their lower body weights, combined with proportionally larger feet. There is no indication of herding, or of catastrophic burial of such a herd, or catastrophic accumulation of animals that previously died isolated elsewhere.
Pathologies affecting the chevrons of specimen SMNS 13200 have been hypothesized to be the result of capture myopathy, induced by a mud-miring trap.
| Biology and health sciences | Sauropods | Animals |
1565709 | https://en.wikipedia.org/wiki/Cancer%20pagurus | Cancer pagurus | Cancer pagurus, commonly known as the edible crab or brown crab, is a species of crab found in the North Sea, North Atlantic Ocean, and perhaps the Mediterranean Sea. It is a robust crab of a reddish-brown colour, having an oval carapace with a characteristic "pie crust" edge and black tips to the claws. A mature adult may have a carapace width up to and weigh up to . C. pagurus is a nocturnal predator, targeting a range of molluscs and crustaceans. It is the subject of the largest crab fishery in Western Europe, centred on the coasts of Ireland and Britain, with more than 60,000 tonnes caught annually.
Description
The carapace of C. pagurus adults is a reddish-brown colour, while in young specimens it is purple-brown. It occasionally bears white patches, and is shaped along the front edge into nine rounded lobes, resembling a pie crust. Males typically have a carapace long, and females long, although they may reach up to long in exceptional cases. Carapace width is typically , or exceptionally up to . A fold of the carapace extends ventrally to constitute a branchial chamber where the gills lie.
The first pereiopod is modified into a strong cheliped (claw-bearing leg); the claw's fingers, the dactylus and propodus, are black at the tips. The other pereiopods are covered with rows of short stiff setae; the dactylus of each is black towards the tip, and ends in a sharp point.
From the front, the antennae and antennules are visible. Beside these, the orbits are where the eyes are situated. The mouthparts comprise three pairs of maxillipeds, behind which are a pair of maxillae, a pair of maxillules, and finally the mandibles.
Lifecycle
Reproduction occurs in winter; the male stands over the female and forms a cage with his legs protecting her while she moults. Internal fertilisation takes place before the hardening of the new carapace, with the aid of two abdominal appendages (gonopods). After mating, the female retreats to a pit on the sea floor to lay her eggs. Between 250,000 and 3,000,000 fertilised eggs are held under the female's abdomen up to eight months until they hatch.
The first developmental stage after hatching is a planktonic larva (1 mm) called the zoea that develops into a postlarva (megalopa), and finally a juvenile. The first juvenile stage is characterised by a well-developed abdomen, which in time becomes reduced in size and folded under the sternum. Juveniles settle to the sea floor in the intertidal zone, where they stay until they reach a carapace width of , and then migrate to deeper water. Male growth slows from an increase in carapace width of about 10 mm per year before the age of 8 years to about 2 mm per year thereafter. Females grow at about half the rate of males, probably due to the energetic demands of egg laying. Sexual maturity is reached at a carapace width of in females, and in males. Longevity is typically 25–30 years, although exceptional individuals may live up to 100 years.
Distribution and ecology
C. pagurus is abundant throughout the northeast Atlantic as far as Norway in the north and North Africa in the south, on mixed coarse grounds, mud, and sand from the shallow sublittoral to depths around . It is frequently found inhabiting cracks and holes in rocks, but occasionally also in open areas. Smaller specimens may be found under rocks in the littoral zone. Unconfirmed reports suggest that C. pagurus may also occur in the Mediterranean Sea and Black Sea.
Adults of C. pagurus are nocturnal, hiding buried in the substrate during the day, but foraging at night up to from their hideouts. Their diet includes a variety of crustaceans (including the crabs Carcinus maenas and Pilumnus hirtellus, the porcelain crabs Porcellana platycheles and Pisidia longicornis, and the squat lobster Galathea squamifera) and molluscs (including the gastropods Nucella lapillus and Littorina littorea, and the bivalves Ensis, Mytilus edulis, Cerastoderma edule, Ostrea edulis, and Lutraria lutraria). It may stalk or ambush motile prey, and may dig large pits to reach buried molluscs. The main predator of C. pagurus is the octopus, which even attacks them inside the crab pots that fishermen use to trap them.
Diseases
Compared to other commercially important crab species, relatively little is known about diseases of C. pagurus. Its parasites include viruses, such as the white spot syndrome virus, various bacteria that cause dark lesions on the exoskeleton, and Hematodinium-like dinoflagellates that cause "pink crab disease". Other microscopic pathogens include fungi, microsporidians, paramyxeans, and ciliates. C. pagurus is also targeted by metazoan parasites, including trematodes and parasitic barnacles. A number of sessile animals occasionally settle as epibionts on the exoskeleton of C. pagurus, including barnacles, sea anemones, serpulid polychaetes such as Janua pagenstecheri, bryozoans, and saddle oysters.
Fishery
C. pagurus is heavily exploited commercially throughout its range, being the most commercially important crab species in Western Europe. The crabs are caught using crab pots (similar to lobster pots), also known as creels, which are placed offshore and baited. The catch of C. pagurus has increased steadily, rising from 26,000 tonnes in 1978 to 60,000 t in 2007, of which more than 70% was caught around the British Isles. The fishery is widely dispersed around the British and Irish coasts, and C. pagurus is thought to be overfished across much of this area. Most of the edible crabs caught by the British fleet are exported live for sale in France and Spain.
A number of legal restrictions apply to the catching of C. pagurus. Catching "berried" crabs (females carrying eggs) is illegal, but since ovigerous females remain in pits dug in the sediment and do not feed, fishing pressure does not affect the supply of larvae. Minimum landing sizes (MLSs) for C. pagurus are set both by European Union technical regulations and by the UK government. Different minimum sizes are employed in different geographical areas, to reflect differences in the crab's growth rate across its range. In particular, the "Cromer crab" fishery along the coasts of Suffolk, Norfolk and Lincolnshire is subject to an MLS of , rather than the MLS in most of the species' range. An intermediate value of is used in the rest of the North Sea between 56°N and the Essex–Kent border, and in the Irish Sea south of 55°N. Around Devon, Cornwall, and the Isles of Scilly, the MLS for males is different () from females (). The Norwegian catch is 8,500 tonnes annually, compared to 20,000 tonnes in the United Kingdom, 13,000 tonnes in Ireland, 8,500 tonnes in France, and a total of 45,000 tonnes globally. Recent studies have shown that edible crabs are negatively affected by electromagnetic fields emitted from sub-sea power cables around offshore wind farms.
Cookery
Around one-third of the weight of an adult edible crab is meat, of which one-third is white meat from the claws (see declawing of crabs), and two-thirds is white and brown meat from the body. As food, male edible crabs are referred to as cocks and females as hens. Cocks have more sweet white meat; hens have more rich brown meat. Dishes include dressed crab (crab meat arranged in the cleaned shell, sometimes with decoration of other foodstuffs), soups such as bisque or bouillabaisse, pâtés, mousses, and hot soufflés.
Taxonomy and systematics
According to the rules of the International Code of Zoological Nomenclature, Cancer pagurus was first described by Carl Linnaeus in 1758, in the tenth edition of his Systema Naturae, which marks the starting point of zoological nomenclature. It was chosen as the type species of the genus Cancer by Pierre André Latreille in 1810. The specific epithet is a Latin word deriving from the Ancient Greek πάγουρος (pagouros), which, alongside καρκίνος (karkinos), was used to refer to edible marine crabs; neither classical term can be confidently assigned to a particular species.
Although the genus Cancer formerly included most crabs, it has since been restricted to eight species. Within that set of closely related species, the closest relative of C. pagurus is the Jonah crab, C. borealis, from the east coast of North America.
| Biology and health sciences | Crabs and hermit crabs | Animals |
13424509 | https://en.wikipedia.org/wiki/Elacatinus | Elacatinus | Elacatinus is a genus of small marine gobies, often known collectively as the neon gobies. Although only one species, E. oceanops, is technically the "neon goby", because of their similar appearance, other members of the genus are generally labeled neon gobies as well. Except for a single East Pacific species, all reside in warmer parts of the West Atlantic, including the Caribbean and Gulf of Mexico. They are known for engaging in symbiosis with other marine creatures by providing a cleaning service that consists of removing ectoparasites from their bodies; in doing so, Elacatinus species obtain their primary source of food, the ectoparasites.
Species
Currently, 24 recognized species are placed in this genus:
Elacatinus atronasus J. E. Böhlke & C. R. Robins, 1968
Elacatinus cayman Victor, 2014 (Cayman cleaner goby)
Elacatinus centralis Victor, 2014 (Cayman sponge goby)
Elacatinus chancei Beebe & Hollister, 1933 (shortstripe goby)
Elacatinus colini J. E. Randall & Lobel, 2009
Elacatinus evelynae J. E. Böhlke & C. R. Robins, 1968 (sharknose goby, Caribbean cleaner goby)
Elacatinus figaro I. Sazima (fr), R. L. Moura & R. de S. Rosa, 1997 (barber goby)
Elacatinus genie J. E. Böhlke & C. R. Robins, 1968 (cleaner goby)
Elacatinus horsti Metzelaar, 1922 (yellowline goby)
Elacatinus illecebrosus (J. E. Böhlke & C. R. Robins, 1968) (barsnout goby)
Elacatinus jarocho M. S. Taylor & Akins, 2007 (Jarocho goby)
Elacatinus lobeli J. E. Randall & P. L. Colin, 2009
Elacatinus lori P. L. Colin, 2002
Elacatinus louisae J. E. Böhlke & C. R. Robins, 1968 (spotlight goby)
Elacatinus oceanops D. S. Jordan, 1904 (neon goby)
Elacatinus phthirophagus I. Sazima, Carvalho-Filho & C. Sazima, 2008 (Noronha cleaner goby)
Elacatinus pridisi R. Z. P. Guimarães, Gasparini & L. A. Rocha, 2004
Elacatinus prochilos J. E. Böhlke & C. R. Robins, 1968 (broadstripe goby)
Elacatinus puncticulatus Ginsburg, 1938
Elacatinus randalli J. E. Böhlke & C. R. Robins, 1968 (yellownose goby)
Elacatinus redimiculus M. S. Taylor & Akins, 2007 (Cinta goby)
Elacatinus serranilla J. E. Randall & P. L. Colin, 2009
Elacatinus tenox J. E. Böhlke & C. R. Robins, 1968 (slaty goby)
Elacatinus xanthiprora J. E. Böhlke & C. R. Robins, 1968 (yellowprow goby)
Description
Neon gobies are very small, torpedo-shaped fish. Although sizes vary slightly by species, they are generally about long. They have dark bodies with iridescent stripes running from the tip of the nose to the base of the caudal fin. The color of the stripes varies by species. Like all gobies, their dorsal fin is split in two, the anterior dorsal fin being rounded like that of a clownfish and the posterior dorsal fin being relatively flat. The anal fin lines up with the posterior dorsal fin and is of similar shape. The pectoral fins are nearly circular, and, like all other fins, transparent.
Distribution
Except for the East Pacific E. puncticulatus, all gobies of the genus Elacatinus reside in warmer parts of the western Atlantic, ranging from Florida and Bermuda, through the Bahamas, Caribbean and Gulf of Mexico, to the coasts of Central America and northern South America (south to Brazil). Among these species is E. oceanops, which resides in the Caribbean Sea, the Florida Keys, and the Bahama Islands, and has also been found along the northern Yucatán Peninsula.
Diet
Elacatinus species are generally carnivorous, their primary diet consisting of ectoparasites found on the skin, fins, mouths, and gill chambers of their clients. Depending on their ecological circumstances, they may also feed on zooplankton and non-parasitic copepods. Although they are carnivorous, Elacatinus occasionally consume algae and other plant material as a secondary food source.
Physiology
Sex reversal
Some species of gobies exhibit gonochorism and protogynous hermaphroditism, including bidirectional sex reversal. Protogyny refers to a category of hermaphroditism in which female organs develop or mature before male ones appear; this characteristic is observed in most males of the goby family (Teleostei: Gobiidae).
Among those in the genus Elacatinus, protogyny is observed in E. illecebrosus.
Protogynous hermaphroditism in gobies consists of a male reproductive system with paired, secretory, accessory gonadal structures (AGSs) associated with the testis.
While the AGS is almost universally present in male gobies, protogynous females need to develop AGSs for sex reversal to take place. The AGSs develop from precursive tissues (pAGSs), in the form of bilateral, ventrally localized cell masses located close to the junction of the ovarian lobes and the oviduct. At the time of sex change, this tissue undergoes rapid growth and diverts to form the AGSs. When pAGSs develop into AGSs, ovigerous tissue is also completely replaced by seminiferous lobules. However, the ovarian lumen remains even after the sex change, functioning as a common spermatozoa collection region continuous with the common genital sinus, as free spermatozoa travel from the seminiferous lobules into the gonadal lumen.
Gonochorism refers to the condition in which an individual develops as one sex and remains that sex throughout its life. Gonochoric goby species normally do not possess pAGSs, but pAGSs are observed in E. illecebrosus and E. evelynae. Specifically, the ovarian lobes of small juvenile females of these species possess distinctive pAGSs that diminish and then disappear as the fish approach adulthood.
Development
Gobies spawn multiple times in a season, usually from February to April. After spawning, male gobies guard and oxygenate the eggs by frequent movement of their pectoral and caudal fins; males consume any eggs affected by fungus. After hatching, however, the larvae receive no parental protection. Around 30 days after hatching, the larvae begin metamorphosis into juvenile gobies.
Behavior
Mating
Elacatinus species usually maintain social monogamy, a system in which heterosexual pairs remain closely associated during both reproductive and nonreproductive periods. Males and females of Elacatinus forage together, occupying a single cleaning station and servicing client fish in pairs. Such behavior is attributed to low costs and high benefits for both sexes that result from being paired with a single, large partner. Males benefit from forming monogamous pairs with large females, since larger females tend to have higher fecundity, while females are able to gain more resources by cleaning under the protection of a larger male. Females experience a reduced cleaning rate overall when cleaning with a male; however, they spend more time in each cleaning session, and so are able to feed on more ectoparasites than females with a smaller mate. Whether large body size also correlates with better paternal care is not confirmed, as it is difficult to observe the caring behavior of Elacatinus males, which tend eggs laid deep within small coral cavities.
Intrasexual aggression used as a means to guard mates is proposed as a primary mechanism of maintaining monogamy. Both males and females were observed to be very aggressive toward same-sex intruders that come to their territory to accost their partners. However, several biological and ecological factors also enforce monogamy in these cleaner gobies. Elacatinus species reproduce asynchronously, which makes polygyny unfavorable. Furthermore, although it differs among species, cleaner gobies tend to live in environments of low population density where distance between potential mates is rather far.
Although seldom observed, polygyny does occur in Elacatinus. Mated males may approach a new female if she is larger than their mate. Polygyny may also be exhibited by widowed males and females. When Elacatinus spp. are widowed, they often leave their cleaning territory; yet the vacant territory is not claimed by other cleaner gobies, which implies that the widowed gobies choose to move rather than being forced out, possibly to search for a new mate.
Mutualism
Mutualism refers to a relationship in which one or both partners provide a service or resources to the other. Caribbean cleaning gobies engage in mutualism by removing and feeding on ectoparasites on their clients. They present themselves and wait for clients at cleaning stations, as they largely depend on cleaning for their food. Elacatinus spp. often clean in pairs, most often composed of a male and a female. Occupying the same territory, the cleaner pair usually cleans the same client at the same time.
Cleaning gobies generally service a wide range of clients; however, members of the genus Elacatinus are considered the most specialized cleaner gobies in the tropical western Atlantic. The most frequent clients of Elacatinus include damselfishes (Pomacentridae), grunts (Haemulidae), and planktivores. Rather than seeking their clients actively, they remain close to their cleaning station and seldom move more than a meter laterally. They do, however, dance in a zig-zag swimming pattern to attract clients. Hosts come to the cleaning sites and pose to show their intent to receive service; such poses are usually directed at the cleaning station rather than at the individual gobies. However, not all potential clients, even those that pose, are attended by cleaners. The duration of cleaning may range from a few seconds to almost half an hour. In observational studies, decreases in cleaning frequency were correlated with increases in cleaning duration. The rate of feeding and the cleaning duration most likely reflect the number of parasites on the clients' bodies.
Predator–prey relationship
Elacatinus has a unique response to the approach of predators. Fish responses to danger are usually classified as fight-or-flight or freezing, but Elacatinus follows neither. It engages in cleaning interactions with potential predators sooner than with nonpredatory clients, treating them almost as soon as they arrive at its cleaning stations, and it cleans predators for longer durations. As implied by elevated cortisol levels in the cleaners when approached by predators, the fish do experience stress upon encountering predators, but unlike other fish that flee or freeze, Elacatinus spp. demonstrate a proactive response. Elacatinus is thought to choose to be proactive because cleaning predators faster makes them leave sooner, which in turn encourages nonpredatory clients to revisit the cleaning stations. Moreover, such a proactive response may serve as a pre-conflict management strategy that can result in a safe outcome for interactions with certain predators.
Coloration
Common stripe colors in Elacatinus include yellow, green, and blue; however, gobies possessing blue stripes were found to be the most effective both in attracting clients and in deterring predators. Four of the six cleaner species of the genus Elacatinus display such coloration: E. oceanops, E. evelynae, E. genie, and E. prochilos. E. puncticulatus and E. nesiotes engage in cleaning activity but do not possess blue stripes. One of the ways Elacatinus signals its clients is through these unique blue stripes, which distinguish the cleaners from their noncleaning sister species; while their noncleaning relatives possess yellow or green stripes that blend well with their sponge dwellings, cleaning Elacatinus spp. advertise their presence to potential clients by sitting on top of substrate such as coral. The characteristic blue stripe, observed only in the cleaner lineage of gobies, contrasts strongly with the coral microhabitat compared to the other stripe colors found in gobies, allowing the cleaners to be spotted easily. The blue stripes thus serve as signals for cooperation in addition to advertisement. Additionally, Elacatinus spp. possessing blue stripes deterred or survived significantly more attacks than green and yellow gobies.
Cheating
Some Elacatinus cleaners cheat by feeding on the scales and mucus of clients in addition to their ectoparasites, as confirmed by examination of stomach contents. However, cheating may result in punishment. When clients realize that they are being cheated, they interrupt the cleaning interaction and swim away, or do not return to the goby's cleaning station in the future, which may leave the cheater with fewer resources than it could have obtained without cheating. This client behavior is similar to a sanction strategy, in which one partner restrains its biological investment. This strategy has proven effective in keeping interspecies mutualism stable, and cheating behavior is accordingly not readily observed in Elacatinus: they prefer to feed on ectoparasites over client mucus or scales, and most likely cheat only when the supply of ectoparasites on clients is depleted.
In the aquarium
Members of the genus Elacatinus, particularly E. oceanops, are among the most popular marine aquarium inhabitants.
Several species of neon goby are readily available because of successful captive-breeding programs, although scientific names are not always given. Generally, if the specimen has a blue stripe, it can be identified as E. oceanops, and if the stripe is half-blue and half-gold, it is E. evelynae. Various species are offered as "gold neon gobies".
Neon gobies are not difficult to keep and accept a wide variety of water parameters. Specific gravity is not critical, so long as it remains steady. As with all marine aquarium fish, they are sensitive to even trace amounts of ammonia or nitrite in an aquarium. Small amounts of nitrate are acceptable, but significant amounts over the long term can cause problems. Neon gobies tolerate a broad range of temperatures, but they are tropical fish, so a heater may be necessary to maintain tropical temperatures year-round. Other parameters, such as alkalinity, only become a problem if they are extreme.
| Biology and health sciences | Acanthomorpha | Animals |
8752642 | https://en.wikipedia.org/wiki/Nuclear%20structure | Nuclear structure | Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics.
Models
The cluster model
The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals.
The liquid drop model
The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be at the same state. Thus the fluid is actually what is known as a Fermi liquid.
In this model, the binding energy E_B of a nucleus with Z protons and N neutrons is given by
E_B = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z),
where A = Z + N is the total number of nucleons (mass number). The terms proportional to A and A^{2/3} represent the volume and surface energy of the liquid drop, the term proportional to Z(Z-1)/A^{1/3} represents the electrostatic energy, the term proportional to (A-2Z)^2/A represents the Pauli exclusion principle, and the last term \delta(A,Z) is the pairing term, which lowers the energy for even numbers of protons or neutrons.
The coefficients a_V, a_S, a_C, a_A and the strength of the pairing term may be estimated theoretically, or fit to data.
This simple model reproduces the main features of the binding energy of nuclei.
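As a concrete illustration, the formula above is easy to evaluate numerically. The sketch below, in Python, uses one common set of fitted coefficient values; these particular numbers are an assumption (fits differ between parameterizations), so the output should only be read as indicative.

```python
import math

# Typical fitted liquid-drop coefficients in MeV (illustrative values only).
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z: int, N: int) -> float:
    """Semi-empirical (liquid drop) binding energy of a nucleus, in MeV."""
    A = Z + N
    volume    = A_V * A
    surface   = A_S * A ** (2 / 3)
    coulomb   = A_C * Z * (Z - 1) / A ** (1 / 3)
    asymmetry = A_A * (A - 2 * Z) ** 2 / A
    # Pairing term: raises binding for even-even, lowers it for odd-odd nuclei.
    if Z % 2 == 0 and N % 2 == 0:
        pairing = A_P / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / math.sqrt(A)
    else:
        pairing = 0.0
    return volume - surface - coulomb - asymmetry + pairing

# Iron-56 (Z = 26, N = 30): the experimental value is about 492 MeV.
print(f"{binding_energy(26, 30):.1f} MeV")
```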
The assumption of the nucleus as a drop of Fermi liquid is still widely used in the form of the finite-range droplet model (FRDM), because it reproduces nuclear binding energies well across the whole chart of nuclides, with the accuracy needed for predictions of unknown nuclei.
The shell model
The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory.
Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry.
Introduction to the shell concept
Systematic measurements of the binding energy of atomic nuclei show systematic deviations with respect to those estimated from the liquid drop model. In particular, some nuclei having certain values for the number of protons and/or neutrons are bound more tightly together than predicted by the liquid drop model. These nuclei are called singly/doubly magic. This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms.
Indeed, nucleons are quantum objects. Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy discrete energy levels. These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap.
The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be degenerate. This occurs in particular if the average nucleus exhibits a certain symmetry, like a spherical shape.
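The bunching of levels into shells can be illustrated with a deliberately simple choice of average potential. The sketch below assumes a spherical three-dimensional harmonic oscillator — a textbook stand-in for the real mean field, not a realistic potential — whose cumulative shell occupations reproduce the first magic numbers (2, 8, 20); the higher magic numbers require the spin–orbit splitting that this toy model omits.

```python
# Degeneracy of 3D harmonic-oscillator shell N: (N+1)(N+2)/2 spatial states,
# doubled for the two spin orientations of a nucleon.
total = 0
for N in range(5):
    degeneracy = (N + 1) * (N + 2)  # spatial degeneracy times 2 for spin
    total += degeneracy
    print(f"shell N={N}: {degeneracy:3d} states, cumulative {total}")
# Prints cumulative occupations 2, 8, 20, 40, 70.
```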
The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state (Pauli exclusion principle). Werner Heisenberg extended the Pauli exclusion principle to nucleons via the introduction of the isospin concept: nucleons are thought of as two kinds of the same particle, the neutron and the proton, that differ through an intrinsic property associated with their isospin quantum number. This concept enables the explanation of the bound state of deuterium, in which the proton and neutron can couple their spin and isospin in two different manners. So the lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. Nuclei with an odd number of either protons or neutrons are less bound than nuclei with even numbers. A nucleus with full shells is exceptionally stable, as will be explained.
As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. This is also true for neutrons.
Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon across the gap, thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell.
Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability. For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers.
Basic hypotheses
Some basic hypotheses are made in order to give a precise conceptual framework to the shell model:
The atomic nucleus is a quantum n-body system.
The internal motion of nucleons within the nucleus is non-relativistic, and their behavior is governed by the Schrödinger equation.
Nucleons are considered to be pointlike, without any internal structure.
Brief description of the formalism
The general process used in the shell model calculations is the following. First a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective theory: it contains free parameters which have to be fitted with experimental data.
The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation. With this set of one-particle states, Slater determinants are built, that is, wavefunctions for Z proton variables or N neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign).
In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say n. The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose Z (or N) states among the n possible. In combinatorial mathematics, the number of choices of Z objects among n is the binomial coefficient C(n, Z). If n is much larger than Z (or N), this increases roughly like n^Z. Practically, this number becomes so large that every computation is impossible for A = N + Z larger than 8.
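The combinatorial explosion is easy to demonstrate. In the sketch below, the pairs of values for n and Z are arbitrary assumptions chosen only to show how quickly the number of Slater determinants grows.

```python
from math import comb

# Number of ways to place Z identical nucleons into n single-particle states.
for n, Z in [(20, 4), (40, 8), (80, 16)]:
    print(f"C({n}, {Z}) = {comb(n, Z):,}")
# The last value already exceeds 10**16 basis states.
```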
To obviate this difficulty, the space of possible single-particle states is divided into a core and a valence space, by analogy with chemistry (see core electron and valence electron). The core is a set of single-particle states assumed to be inactive, in the sense that they are the well-bound lowest-energy states, and there is no need to reexamine their situation. They do not appear in the Slater determinants, contrary to the states in the valence space, which is the space of all single-particle states not in the core, but possibly to be considered when building the Z-body (or N-body) wavefunction. The set of all possible Slater determinants in the valence space defines a basis for the Z-body (or N-body) states.
The last step consists in computing the matrix of the Hamiltonian within this basis, and diagonalizing it. In spite of the reduction in the dimension of the basis owing to the fixing of the core, the matrices to be diagonalized easily reach dimensions of the order of 10^9, and demand specific diagonalization techniques.
The shell model calculations give in general an excellent fit with experimental data. They depend however strongly on two main factors:
The way to divide the single-particle space into core and valence.
The effective nucleon–nucleon interaction.
Mean field theories
The independent-particle model (IPM)
The interaction between nucleons, which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus.
The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an N-body problem (N particles interacting) by N single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics, where electrons move in a mean field due to the central nucleus and the electron cloud itself.
The independent particle model and mean field theories (we shall see that there exist several variants) have had great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, and thus are a basic part of atomic nucleus theory. One should also notice that they are modular enough, in that it is quite easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleons like rotation or vibration, by adding the corresponding energy terms to the formalism. This implies that in many representations, the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer.
Nuclear potential and effective interaction
A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches:
The phenomenological approach is a parameterization of the nuclear potential by an appropriate mathematical function. Historically, this procedure was applied with the greatest success by Sven Gösta Nilsson, who used as a potential a (deformed) harmonic oscillator potential. The most recent parameterizations are based on more realistic functions, which account more accurately for scattering experiments, for example. In particular the form known as the Woods–Saxon potential can be mentioned.
The self-consistent or Hartree–Fock approach aims to deduce mathematically the nuclear potential from an effective nucleon–nucleon interaction. This technique implies a solution of the Schrödinger equation in an iterative fashion, starting from an ansatz wavefunction and improving it variationally, since the potential there depends upon the wavefunctions to be determined. The latter are written as Slater determinants.
In the case of the Hartree–Fock approaches, the trouble is not to find the mathematical function which describes best the nuclear potential, but that which describes best the nucleon–nucleon interaction. Indeed, in contrast with atomic physics where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically.
There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons; the nucleon–nucleon interaction in vacuum is a mere consequence of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, it is much more complicated at low energies due to color confinement and asymptotic freedom. Thus there is yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum, and that of these nucleons interacting in the nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data.
Most modern interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme. In a seminal paper by Dominique Vautherin and David M. Brink, it was demonstrated that a density-dependent Skyrme force can reproduce the basic properties of atomic nuclei. Another commonly used interaction is the finite-range Gogny force.
The self-consistent approaches of the Hartree–Fock type
In the Hartree–Fock approach of the n-body problem, the starting point is a Hamiltonian containing n kinetic energy terms, and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of n fermions. It is the first hypothesis.
The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals. This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.
There remains now to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is minimum. This is the third hypothesis.
Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of nucleons, and so the total energy of the nucleus and its wavefunction.
This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and everything is then done in order to optimize the choice of these wavefunctions so that the functional has a minimum – hopefully absolute, and not only local. To be more precise, it should be mentioned that the energy is a functional of the density, defined as the sum of the individual squared wavefunctions. The Hartree–Fock method is also used in atomic physics and condensed matter physics, in the form of density functional theory (DFT).
The process of solving the Hartree–Fock equations can only be iterative, since these are in fact a Schrödinger equation in which the potential depends on the density, that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of grossly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator). These allow one to compute the density, and therefrom the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops – convergence is reached – when the difference among wavefunctions, or energy levels, for two successive iterations is less than a fixed value. Then the mean field potential is completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian.
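The iterate-until-converged structure described above can be sketched schematically. The toy loop below is not a real nuclear Hartree–Fock code: the matrix h0, the number of occupied levels, and the way the potential depends on the density are all invented stand-ins, and only the overall self-consistent pattern reflects the method.

```python
import numpy as np

def build_potential(density, h0):
    # Toy density dependence: add the local density on the diagonal.
    return h0 + np.diag(density)

def scf_loop(h0, n_occupied, tol=1e-8, max_iter=200):
    density = np.zeros(h0.shape[0])          # crude initial guess
    for _ in range(max_iter):
        h = build_potential(density, h0)     # current mean-field Hamiltonian
        energies, orbitals = np.linalg.eigh(h)
        # Fill the lowest levels and recompute the density from them.
        new_density = (orbitals[:, :n_occupied] ** 2).sum(axis=1)
        if np.max(np.abs(new_density - density)) < tol:
            break                            # convergence reached
        density = new_density
    return energies[:n_occupied]

h0 = np.diag(np.arange(6.0))                 # toy single-particle levels
print(scf_loop(h0, n_occupied=3))
```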
The relativistic mean field approaches
First developed in the 1970s with the works of John Dirk Walecka on quantum hadrodynamics, the relativistic models of the nucleus were sharpened up towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory. In this context, the nucleon interactions occur via the exchange of virtual particles called mesons. The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle, one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation, whilst the virtual ones (here the mesons) obey the Klein–Gordon equations.
In view of the non-perturbative nature of strong interaction, and also since the exact potential form of this interaction between groups of nucleons is relatively badly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing in the equations all field terms (which are operators in the mathematical sense) by their mean value (which are functions). In this way, one gets a system of coupled integro-differential equations, which can be solved numerically, if not analytically.
The interacting boson model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each of them acting as a boson particle, with integral spin of 0, 2 or 4. This makes calculations feasible for larger nuclei.
There are several branches of this model; in one of them (IBM-1) all types of nucleons are grouped in pairs, while in others (for instance, IBM-2) protons and neutrons are paired separately.
Spontaneous breaking of symmetry in nuclear physics
Symmetry is one of the focal points of all physics. The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries: they are invariant under translation (changing the frame of reference so that directions are not altered), under rotation (turning the frame of reference around some axis), and under parity (changing the sense of the axes), in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking.
Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles. Most additional correlations among nucleons which do not enter the mean field are neglected. They can appear however by a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the iterations of the Hartree–Fock process breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if it is advantageous to keep these broken from the point of view of the total energy.
It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then degenerate.
A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below).
Extensions of the mean field theories
Nuclear pairing phenomenon
The most common extension to mean field theory is nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd number. This implies that each nucleon binds with another one to form a pair; consequently, the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must supply at least the energy needed to break a pair. Conversely, in the case of an odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited.
This phenomenon is closely analogous to type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr, Ben Mottelson, and David Pines (work that contributed to Bohr and Mottelson receiving the Nobel Prize in Physics in 1975). It was close to the BCS theory of Bardeen, Cooper, and Schrieffer, which accounts for metal superconductivity. Theoretically, the pairing phenomenon as described by the BCS theory combines with the mean field theory: nucleons are subject both to the mean field potential and to the pairing interaction.
The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, enabling one to consider the pairing and mean field interactions consistently on equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems.
Symmetry restoration
A peculiarity of mean field methods is the calculation of nuclear properties via explicit symmetry breaking. The calculation of the mean field with self-consistent methods (e.g., Hartree–Fock) breaks rotational symmetry, and the calculation of pairing properties breaks particle-number conservation.
Several techniques for symmetry restoration by projecting on good quantum numbers have been developed.
Particle vibration coupling
Mean field methods (possibly including symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections account for the fact that the particles interact with one another by means of correlations. These correlations can be introduced by taking into account the coupling of independent-particle degrees of freedom to the low-energy collective excitations of systems with even numbers of protons and neutrons.
In this way, excited states can be reproduced by means of the random phase approximation (RPA), possibly also consistently calculating corrections to the ground state (e.g., by means of nuclear field theory).
| Physical sciences | Nuclear physics | Physics |
335054 | https://en.wikipedia.org/wiki/Axion | Axion | An axion () is a hypothetical elementary particle originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). If axions exist and have low mass within a specific range, they are of interest as a possible component of cold dark matter.
History
Strong CP problem
As shown by Gerard 't Hooft, strong interactions of the standard model, QCD, possess a non-trivial vacuum structure that in principle permits violation of the combined symmetries of charge conjugation and parity, collectively known as CP. Together with effects generated by weak interactions, the effective periodic strong CP-violating term, θ, appears as a Standard Model input – its value is not predicted by the theory, but must be measured. However, large CP-violating interactions originating from QCD would induce a large electric dipole moment (EDM) for the neutron. Experimental constraints on the currently unobserved EDM imply that CP violation from QCD must be extremely tiny and thus θ must itself be extremely small. Since θ could have any value between 0 and 2π, this presents a "naturalness" problem for the standard model. Why should this parameter find itself so close to zero? (Or, why should QCD find itself CP-preserving?) This question constitutes what is known as the strong CP problem.
Prediction
In 1977, Roberto Peccei and Helen Quinn postulated a more elegant solution to the strong CP problem, the Peccei–Quinn mechanism. The idea is to effectively promote θ to a field. This is accomplished by adding a new global symmetry (called a Peccei–Quinn (PQ) symmetry) that becomes spontaneously broken. This results in a new particle, as shown independently by Frank Wilczek and Steven Weinberg, that fills the role of θ, naturally relaxing the CP-violation parameter to zero. Wilczek named this new hypothesized particle the "axion" after a brand of laundry detergent because it "cleaned up" a problem, while Weinberg called it "the higglet". Weinberg later agreed to adopt Wilczek's name for the particle. Because it has a non-zero mass, the axion is a pseudo-Nambu–Goldstone boson.
Axion dark matter
QCD effects produce an effective periodic potential in which the axion field moves. Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: A perfect dark matter candidate.
The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c² (about 10^−11 times the electron mass) axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c².
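The misalignment mechanism lends itself to a toy numerical illustration: the homogeneous axion field obeys ä + 3Hȧ + m²a = 0, staying frozen while H ≫ m and oscillating once H drops below m. The units, the constant mass, and the radiation-era Hubble rate H = 1/(2t) in the sketch below are all simplifying assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0  # axion mass in arbitrary units (an assumption of this toy model)

def rhs(t, y):
    a, adot = y
    H = 1.0 / (2.0 * t)          # Hubble rate during radiation domination
    return [adot, -3.0 * H * adot - m ** 2 * a]

# Start frozen at an initial misalignment value a = 1 while H >> m; the
# field begins to oscillate around t ~ 1/m, and the energy stored in these
# oscillations redshifts like cold matter.
sol = solve_ivp(rhs, (0.01, 200.0), [1.0, 0.0], max_step=0.05)
print(f"a(t_final) = {sol.y[0][-1]:+.4f}")
```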
Two distinct scenarios for the early evolution of the axion field are possible, depending on whether (a) the PQ symmetry is spontaneously broken before or during inflation and (b) it is never restored afterwards. Broadly speaking, one of the two scenarios outlined in the two following subsections occurs:
Pre-inflationary scenario
If both (a) and (b) are satisfied, cosmic inflation selects one patch of the Universe within which the spontaneous breaking of the PQ symmetry leads to a homogeneous value of the initial value of the axion field. In this "pre-inflationary" scenario, topological defects are inflated away and do not contribute to the axion energy density. However, other bounds that come from isocurvature modes severely constrain this scenario, which require a relatively low-energy scale of inflation to be viable.
Post-inflationary scenario
If at least one of the conditions (a) or (b) is violated, the axion field takes different values within patches that are initially out of causal contact, but that today populate the volume enclosed by our Hubble horizon. In this scenario, isocurvature fluctuations in the PQ field randomise the axion field, with no preferred value in the power spectrum.
The proper treatment in this scenario is to solve numerically the equation of motion of the PQ field in an expanding Universe, in order to capture all features coming from the misalignment mechanism, including the contribution from topological defects like "axionic" strings and domain walls. An axion mass estimate between 0.05 and 1.50 meV was reported by Borsanyi et al. (2016). The result was calculated by simulating the formation of axions during the post-inflation period on a supercomputer.
Progress in the late 2010s in determining the present abundance of a KSVZ-type axion using numerical simulations led to values between 0.02 and 0.1 meV, although these results have been challenged by the details of the power spectrum of axions emitted from strings.
Phenomenology of the axion field
Searches
The axion models originally proposed by Wilczek and by Weinberg chose axion coupling strengths that were so strong that they would have already been detected in prior experiments. It had been thought that the Peccei–Quinn mechanism for solving the strong CP problem required such large couplings. However, it was soon realized that "invisible axions" with much smaller couplings also work. Two such classes of models are known in the literature as KSVZ (Kim–Shifman–Vainshtein–Zakharov) and DFSZ (Dine–Fischler–Srednicki–Zhitnitsky).
The very weakly coupled axion is also very light, because axion couplings and mass are proportional. Satisfaction with "invisible axions" changed when it was shown that any very light axion would have been overproduced in the early universe and therefore must be excluded.
Maxwell's equations with axion modifications
Pierre Sikivie computed how Maxwell's equations are modified in the presence of an axion in 1983. He showed that these axions could be detected on Earth by converting them to photons, using a strong magnetic field, motivating a number of experiments. For example, the Axion Dark Matter Experiment converts axion dark matter to microwave photons, the CERN Axion Solar Telescope converts axions produced in the Sun's core to X-rays, and other experiments search for axions produced in laser light. As of the early 2020s, there are dozens of proposed or ongoing experiments searching for axion dark matter.
The equations of axion electrodynamics are typically written in "natural units", where the reduced Planck constant ħ, the speed of light c, and the permittivity of free space ε₀ are all set to 1. In this unit system, the electrodynamic equations are:
{| class="wikitable" style="text-align: center;"
|-
! scope="col" style="width: 15em;" | Name
! scope="col" | Equations
|-
! scope="row" | Gauss's law
| \nabla \cdot \mathbf{E} = \rho - \kappa\, \nabla a \cdot \mathbf{B}
|-
! scope="row" | Gauss's law for magnetism
| \nabla \cdot \mathbf{B} = 0
|-
! scope="row" | Faraday's law
| \nabla \times \mathbf{E} = -\dot{\mathbf{B}}
|-
! scope="row" | Ampère–Maxwell law
| \nabla \times \mathbf{B} = \dot{\mathbf{E}} + \mathbf{J} + \kappa \left( \dot{a}\, \mathbf{B} + \nabla a \times \mathbf{E} \right)
|-
! scope="row" | Axion field's equation of motion
| \ddot{a} - \nabla^2 a + m_a^2\, a = \kappa\, \mathbf{E} \cdot \mathbf{B}
|}
Above, a dot above a variable denotes its time derivative, the dot spaced between variables is the vector dot product, and the factor κ is the axion-to-photon coupling constant rendered in "natural units".
Alternative forms of these equations have been proposed, which imply completely different physical signatures. For example, Visinelli wrote a set of equations that imposed duality symmetry, assuming the existence of magnetic monopoles. However, these alternative formulations are less theoretically motivated, and in many cases cannot even be derived from an action.
Analogous effect for topological insulators
A term analogous to the one that would be added to Maxwell's equations to account for axions also appears in recent (2008) theoretical models for topological insulators giving an effective axion description of the electrodynamics of these materials.
This term leads to several interesting predicted properties including a quantized magnetoelectric effect. Evidence for this effect has been given in THz spectroscopy experiments performed at the Johns Hopkins University on quantum regime thin film topological insulators developed at Rutgers University.
In 2019, a team at the Max Planck Institute for Chemical Physics of Solids published their detection of an axion insulator phase of a Weyl semimetal material. In the axion insulator phase, the material has an axion-like quasiparticle – an excitation of electrons that behave together as an axion – and its discovery demonstrates the consistency of axion electrodynamics as a description of the interaction of axion-like particles with electromagnetic fields. In this way, the discovery of axion-like quasiparticles in axion insulators provides motivation to use axion electrodynamics to search for the axion itself.
Experiments
Despite not yet having been found, the axion has been well studied for over 40 years, giving time for physicists to develop insight into axion effects that might be detected. Several experimental searches for axions are presently underway; most exploit axions' expected slight interaction with photons in strong magnetic fields. Axions are also one of the few remaining plausible candidates for dark matter particles, and might be discovered in some dark matter experiments.
Direct conversion in a magnetic field
Several experiments search for astrophysical axions by the Primakoff effect, which converts axions to photons and vice versa in electromagnetic fields.
The Axion Dark Matter Experiment (ADMX) at the University of Washington uses a strong magnetic field to detect the possible weak conversion of axions to microwaves. ADMX searches the galactic dark matter halo for axions resonant with a cold microwave cavity. ADMX has excluded optimistic axion models in the 1.9–3.53 μeV range. From 2013 to 2018 a series of upgrades were done and it is taking new data, including at 4.9–6.2 μeV. In December 2021 it excluded the 3.3–4.2 μeV range for the KSVZ model.
Other experiments of this type include DMRadio, HAYSTAC, CULTASK, and ORGAN. HAYSTAC completed the first scanning run of a haloscope above 20 μeV in the late 2010s.
Polarized light in a magnetic field
The Italian PVLAS experiment searches for polarization changes of light propagating in a magnetic field. The concept was first put forward in 1986 by Luciano Maiani, Roberto Petronzio and Emilio Zavattini. A rotation claim in 2006 was excluded by an upgraded setup. An optimized search began in 2014.
Light shining through walls
Another technique is so-called "light shining through walls", where light passes through an intense magnetic field to convert photons into axions, which then pass through metal and are reconstituted as photons by another magnetic field on the other side of the barrier. Experiments by BFRS and a team led by Rizzo ruled out an axion cause. GammeV saw no events, as reported in a 2008 Physical Review Letters article. ALPS I conducted similar runs, setting new constraints in 2010; ALPS II began collecting data in May 2023. OSQAR found no signal, limiting the coupling, and will continue.
Astrophysical axion searches
Axion-like bosons could have a signature in astrophysical settings. In particular, several works have proposed axion-like particles as a solution to the apparent transparency of the Universe to TeV photons. It has also been demonstrated that, in the large magnetic fields threading the atmospheres of compact astrophysical objects (e.g., magnetars), photons will convert much more efficiently. This would in turn give rise to distinct absorption-like features in the spectra detectable by early 21st century telescopes. A promising means proposed in 2009 is looking for quasi-particle refraction in systems with strong magnetic gradients; in particular, the refraction will lead to beam splitting in the radio light curves of highly magnetized pulsars and allow much greater sensitivities than currently achievable. The International Axion Observatory (IAXO) is a proposed fourth-generation helioscope.
Axions can resonantly convert into photons in the magnetospheres of neutron stars. The emerging photons lie in the GHz frequency range and can be potentially picked up in radio detectors, leading to a sensitive probe of the axion parameter space. This strategy has been used to constrain the axion–photon coupling in the 5–11 μeV mass range, by re-analyzing existing data from the Green Bank Telescope and the Effelsberg 100 m Radio Telescope. A novel, alternative strategy consists in detecting the transient signal from the encounter between a neutron star and an axion minicluster in the Milky Way.
Axions can be produced in the Sun's core when X-rays scatter in strong electric fields. The CAST solar telescope is underway, and has set limits on the coupling to photons and electrons. Axions may also be produced within neutron stars by nucleon–nucleon bremsstrahlung. The subsequent decay of axions to gamma rays allows constraints on the axion mass to be placed from observations of neutron stars in gamma-rays using the Fermi Gamma-ray Space Telescope. From an analysis of four neutron stars, Berenji et al. (2016) obtained a 95% confidence interval upper limit on the axion mass of 0.079 eV. In 2021 it was also suggested that a reported excess of hard X-ray emission from a system of neutron stars known as the magnificent seven could be explained as axion emission.
In 2016, a theoretical team from the Massachusetts Institute of Technology devised a possible way of detecting axions using a strong magnetic field that need be no stronger than that produced in an MRI scanning machine. It would show variation, a slight wavering, that is linked to the mass of the axion. Results from the ensuing experiment, published in 2021, reported no evidence of axions in the mass range from 4.1×10^−10 to 8.27×10^−9 eV.
In 2022 the polarized light measurements of Messier 87* by the Event Horizon Telescope were used to constrain the mass of the axion, assuming that hypothetical clouds of axions could form around a black hole, and a range of possible mass values was rejected.
Searches for resonance effects
Resonance effects may be evident in Josephson junctions from a supposed high flux of axions from the galactic halo with a mass of 110 μeV and a density lower than the implied local dark matter density, indicating that said axions would not have enough mass to be the sole component of dark matter. The ORGAN experiment plans to conduct a direct test of this result via the haloscope method.
Dark matter recoil searches
Dark matter cryogenic detectors have searched for electron recoils that would indicate axions. CDMS published in 2009 and EDELWEISS set coupling and mass limits in 2013. UORE and XMASS also set limits on solar axions in 2013. XENON100 used a 225-day run to set the best coupling limits to date and exclude some parameters.
Nuclear spin precession
While Schiff's theorem states that a static nuclear electric dipole moment (EDM) does not produce atomic and molecular EDMs, the axion induces a nuclear EDM that oscillates at the Larmor frequency. If this nuclear EDM oscillation frequency is in resonance with an external electric field, a precession in the nuclear spin rotation occurs. This precession can be measured using precession magnetometry and, if detected, would be evidence for axions.
An experiment using this technique is the Cosmic Axion Spin Precession Experiment (CASPEr).
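For a sense of scale, the precession frequency being sought is set by the Larmor relation f = γB/(2π). The sketch below evaluates it for a bare proton in a 1 T field; treating the sample nucleus as a bare proton, and the field value itself, are simplifying assumptions.

```python
import math

GAMMA_PROTON = 2.675e8   # proton gyromagnetic ratio, rad s^-1 T^-1
B = 1.0                  # applied magnetic field in tesla (assumed)

f = GAMMA_PROTON * B / (2 * math.pi)
print(f"Larmor frequency: {f / 1e6:.1f} MHz")   # about 42.6 MHz at 1 T
```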
Searches at particle colliders
Axions may also be produced at colliders, in particular in electron–positron collisions, as well as in ultra-peripheral heavy ion collisions at the Large Hadron Collider at CERN, by reinterpreting the light-by-light scattering process. Those searches are sensitive to rather large axion masses, between 100 MeV/c2 and hundreds of GeV/c2. Assuming a coupling of axions to the Higgs boson, searches for anomalous Higgs boson decays into two axions can theoretically provide even stronger limits.
Disputed detections
It was reported in 2014 that evidence for axions may have been detected as a seasonal variation in observed X-ray emission that would be expected from conversion in the Earth's magnetic field of axions streaming from the Sun. Studying 15 years of data by the European Space Agency's XMM-Newton observatory, a research group at Leicester University noticed a seasonal variation for which no conventional explanation could be found. One potential explanation for the variation, described as "plausible" by the senior author of the paper, is the known seasonal variation in visibility to XMM-Newton of the sunward magnetosphere in which X-rays may be produced by axions from the Sun's core.
This interpretation of the seasonal variation is disputed by two Italian researchers, who identify flaws in the arguments of the Leicester group that are said to rule out an interpretation in terms of axions. Most importantly, the scattering in angle assumed by the Leicester group to be caused by magnetic field gradients during the photon production, necessary to allow the X-rays to enter the detector that cannot point directly at the sun, would dissipate the flux so much that the probability of detection would be negligible.
In 2013, Christian Beck suggested that axions might be detectable in Josephson junctions; and in 2014, he argued that a signature, consistent with a mass ≈110 μeV, had in fact been observed in several preexisting experiments.
In 2020, the XENON1T experiment at the Gran Sasso National Laboratory in Italy reported a result suggesting the discovery of solar axions. The results were not significant at the 5-sigma level required for confirmation, and other explanations of the data were possible though less likely. New observations made in July 2022 after the observatory upgrade to XENONnT discarded the excess, thus ending the possibility of new particle discovery.
Properties
Predictions
One theory of axions relevant to cosmology had predicted that they would have no electric charge, a very small mass, and very low interaction cross-sections for the strong and weak forces. Because of their properties, axions would interact only minimally with ordinary matter. Axions would also change to and from photons in magnetic fields.
Cosmological implications
The properties of the axion, such as the axion mass, decay constant, and abundance, all have implications for cosmology.
Inflation theory suggests that if they exist, axions would be created abundantly during the Big Bang. Because of a unique coupling to the instanton field of the primordial universe (the "misalignment mechanism"), an effective dynamical friction is created during the acquisition of mass, following cosmic inflation. This robs all such primordial axions of their kinetic energy.
The ultralight axion (ULA), with a mass of around 10^−22 eV, is a kind of scalar field dark matter that seems to solve the small-scale problems of CDM. A single ULA with a GUT-scale decay constant provides the correct relic density without fine-tuning.
Axions would also have stopped interaction with normal matter at a different moment after the Big Bang than other more massive dark particles. The lingering effects of this difference could perhaps be calculated and observed astronomically.
If axions have low mass, thus preventing other decay modes (since there are no lighter particles to decay into), the low coupling constant predicts that the axion would not be scattered out of its state despite its small mass, so that the universe would be filled with a very cold Bose–Einstein condensate of primordial axions. Hence, axions could plausibly explain the dark matter problem of physical cosmology. Observational studies are underway, but they are not yet sufficiently sensitive to probe the mass regions in which axions would solve the dark matter problem, although the fuzzy dark matter region is starting to be probed via superradiance. High-mass axions of the kind searched for by Jain and Singh (2007) would not persist in the modern universe. Moreover, if axions exist, scatterings with other particles in the thermal bath of the early universe unavoidably produce a population of hot axions.
Low mass axions could have additional structure at the galactic scale. If they continuously fall into galaxies from the intergalactic medium, they would be denser in "caustic" rings, just as the stream of water in a continuously flowing fountain is thicker at its peak. The gravitational effects of these rings on galactic structure and rotation might then be observable. Other cold dark matter theoretical candidates, such as WIMPs and MACHOs, could also form such rings, but because such candidates are fermionic and thus experience friction or scattering among themselves, the rings would be less sharply defined.
João G. Rosa and Thomas W. Kephart suggested that axion clouds formed around unstable primordial black holes might initiate a chain of reactions that radiate electromagnetic waves, allowing their detection. When adjusting the mass of the axions to explain dark matter, the pair discovered that the value would also explain the luminosity and wavelength of fast radio bursts, being a possible origin for both phenomena. In 2022 a similar hypothesis was used to constrain the mass of the axion from data of M87*.
In 2020, it was proposed that the axion field might actually have influenced the evolution of the early Universe by creating more imbalance between the amounts of matter and antimatter – which possibly resolves the baryon asymmetry problem.
Supersymmetry
In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion or dilaton. They are all bundled in a chiral superfield.
The axino has been predicted to be the lightest supersymmetric particle in such a model. In part due to this property, it is also considered a candidate for dark matter.
| Physical sciences | Particle physics: General | Physics |
335098 | https://en.wikipedia.org/wiki/Jackfruit | Jackfruit | The jackfruit (Artocarpus heterophyllus) is a species of tree in the fig, mulberry, and breadfruit family (Moraceae). The jackfruit is the largest tree fruit, reaching as much as 55 kg (120 lb) in weight, 90 cm (35 in) in length, and 50 cm (20 in) in diameter. A mature jackfruit tree produces some 200 fruits per year, with older trees bearing up to 500 fruits in a year. The jackfruit is a multiple fruit composed of hundreds to thousands of individual flowers, and the fleshy petals of the unripe fruit are eaten.
The jackfruit tree is well-suited to tropical lowlands and is widely cultivated throughout tropical regions of the world, including India, Bangladesh, Sri Lanka, and the rainforests of the Philippines, Indonesia, Malaysia, and Australia.
The ripe fruit is sweet (depending on variety) and is commonly used in desserts. Canned green jackfruit has a mild taste and meat-like texture that lends itself to being called "vegetable meat". Jackfruit is commonly used in South and Southeast Asian cuisines. Both ripe and unripe fruits are consumed. It is available internationally, canned or frozen, and in chilled meals, as are various products derived from the fruit, such as noodles and chips.
Names
The word jackfruit comes from Portuguese jaca, which in turn is derived from the Malayalam term chakka (ചക്ക); the Portuguese encountered the fruit when they arrived in India at Kozhikode (Calicut) on the Malabar Coast (Kerala) in 1498. The Malayalam name was later recorded by Hendrik van Rheede (1678–1703) in the Hortus Malabaricus, vol. iii, in Latin. Henry Yule translated Jordanus Catalani's Mirabilia descripta: the wonders of the East. The Malayalam term is in turn derived from a Proto-Dravidian root meaning "fruit, vegetable".
The common English name "jackfruit" was used by the physician and naturalist Garcia de Orta in his 1563 book Colóquios dos simples e drogas da Índia. Centuries later, botanist Ralph Randles Stewart suggested it was named after William Jack (1795–1822), a Scottish botanist who worked for the East India Company in Bengal, Sumatra, and Malaya.
Nangka is another name used in Philippine English, borrowed from Tagalog; it is related to cognate terms in Cebuano and Malay, all from the same Austronesian language family.
Description
Shape, trunk and leaves
Artocarpus heterophyllus grows as an evergreen tree that has a relatively short trunk and dense treetop. It easily reaches heights of and trunk diameters of . It sometimes forms buttress roots. The bark of the jackfruit tree is reddish-brown and smooth. In the event of injury to the bark, a milky sap is released.
The leaves are alternate and spirally arranged. They are gummy and thick and are divided into a petiole and a leaf blade. The petiole is long. The leathery leaf blade is long and wide, and is oblong to ovate in shape.
In young trees, the leaf edges are irregularly lobed or split. On older trees, the leaves are rounded and dark green, with a smooth leaf margin. The leaf blade has a prominent main nerve and, starting on each side, six to eight lateral nerves. The stipules are egg-shaped at a length of .
Flowers
The inflorescences are formed on the trunk, branches or twigs (cauliflory). Jackfruit trees are monoecious, having both female and male flowers on a tree. The inflorescences are pedunculated, cylindrical to ellipsoidal or pear-shaped, to about long and wide. Inflorescences are initially completely enveloped in egg-shaped cover sheets which rapidly slough off.
The flowers are small, sitting on a fleshy rachis. The male flowers are greenish; some flowers are sterile. The male flowers are hairy, and the perianth ends in two membranes. The individual, prominent stamens are straight, with yellow, roundish anthers. Pollen grains are tiny, around 60 microns in diameter. After pollen release, the stamens become ash-gray and fall off after a few days; later, all the male inflorescences also fall off. The greenish female flowers, with a hairy and tubular perianth, have a fleshy, flower-like base. The female flowers contain an ovary with a broad, capitate, or rarely bilobed stigma. The blooming time ranges from December until February or March.
Fruit
The ellipsoidal to roundish fruit is a multiple fruit formed from the fusion of the ovaries of multiple flowers. The fruits grow on a long and thick stem on the trunk. They vary in size and ripen from an initially yellowish-green to yellow, and then at maturity to yellowish-brown. They possess a hard, gummy shell with small pimples surrounded by hard, hexagonal tubercles. The large, variously shaped fruits are the largest of all tree-borne fruits.
The fruits consist of a fibrous, whitish core (rachis) about thick. Radiating from this are many individual fruits, long. They are elliptical to egg-shaped, light brownish achenes with a length of about and a diameter of .
There may be about 100–500 seeds per fruit. The seed coat consists of a thin, waxy, parchment-like and easily removable testa (husk) and a brownish, membranous tegmen. The cotyledons are usually unequal in size, and the endosperm is minimally present. An average fruit consists of 27% edible seed coat, 15% edible seeds, 20% white pulp (undeveloped perianth, or "rags"), and 10% core, with the rind making up the remainder.
The fruit matures during the rainy season from July to August. The bean-shaped achenes of the jackfruit are coated with a firm yellowish aril (seed coat, flesh), which has an intensely sweet taste when the fruit is mature. The pulp is enveloped by many narrow strands of fiber (undeveloped perianth), which run between the hard shell and the core of the fruit and are firmly attached to it. When cut, the inner part (core) secretes a sticky, milky liquid that is hard to remove from the skin, even with soap and water. An oil or other solvent is used to clean the hands after "unwinding" the pulp; for example, street vendors in Tanzania, who sell the fruit in small segments, provide small bowls of kerosene for their customers to cleanse their sticky fingers. When fully ripe, jackfruit has a strong, pleasant aroma, and the pulp of the opened fruit smells of pineapple and banana.
Jackfruit has a distinctive sweet and fruity aroma. In a study of flavour volatiles in five jackfruit cultivars, the main volatile compounds detected were ethyl isovalerate, propyl isovalerate, butyl isovalerate, isobutyl isovalerate, 3-methylbutyl acetate, 1-butanol, and 2-methylbutan-1-ol. A fully ripe and unopened jackfruit is known to "emit a strong aroma" – perhaps unpleasant – with the inside of the fruit described as smelling of pineapple and banana.
Ecology
The species has expanded excessively because its fruits, which naturally fall to the ground and open, are eagerly eaten by small mammals, such as the common marmoset and coati. The seeds are then dispersed by these animals, spreading jackfruit trees that compete for space with native tree species. The supply of jackfruit has allowed the marmoset and coati populations to expand. Since both prey opportunistically on bird eggs and nestlings, the increases in marmoset and coati populations are detrimental to local birds.
As an invasive species
In Brazil, the jackfruit can become an invasive species as in Brazil's Tijuca Forest National Park in Rio de Janeiro or at the Horto Florestal in neighbouring Niterói. The Tijuca is mostly an artificial secondary forest, whose planting began during the mid-nineteenth century; jackfruit trees have been a part of the park's flora since it was founded.
Cultivation
History
The jackfruit was domesticated independently in the Indian subcontinent and Southeast Asia, as indicated by the Southeast Asian names which are not derived from the Sanskrit roots. It was probably first domesticated by Austronesians in Java or the Malay Peninsula. The fruit was later introduced to Guam via Filipino settlers when both were part of the Spanish Empire.
Care
The plant requires minimal pruning; cutting off dead branches from the interior of the tree is only sometimes needed. Twigs bearing fruit must, however, be twisted or cut down to the trunk to induce growth for the next season. Branches should be pruned every three to four years to maintain productivity.
Some trees carry too many mediocre fruits and these are usually removed to allow the others to develop better to maturity.
Stingless bees such as Tetragonula iridipennis are jackfruit pollinators and so play an important role in jackfruit cultivation. Pollination appears to result from a three-way mutualism involving the flower, a fungus, and a species of gall midge, Clinodiplosis ultracrepidata. The fungus forms a film over the syncarps that serves as a food source for both the fly larvae and the adults.
Production and marketing
In 2017, India was the largest producer of jackfruit, followed by Bangladesh, Thailand, and Indonesia.
The marketing of jackfruit involves three groups: producers, traders, and middlemen, including wholesalers and retailers. The marketing channels are rather complex. Large farms sell immature fruit to wholesalers, which helps cash flow and reduces risk, whereas medium-sized farms sell the fruit directly to local markets or retailers.
Commercial availability
Outside countries of origin, fresh jackfruit can be found at food markets throughout Southeast Asia. It is also extensively cultivated in the Brazilian coastal region, where it is sold in local markets. It is available canned in sugary syrup, or frozen, already prepared and cut. Jackfruit industries are established in Sri Lanka and Vietnam, where the fruit is processed into products such as flour, noodles, papad, and ice cream. It is also canned and sold as a vegetable for export.
Jackfruit is also widely available year-round, both canned and dried. Dried jackfruit chips are produced by various manufacturers. As reported in 2019, jackfruit became more widely available in US grocery stores, cleaned and ready to cook, as well as in premade dishes or prepared ingredients. It is on restaurant menus in preparations such as taco fillings and vegan versions of pulled pork dishes.
Uses
Nutrition
The edible raw pulp is 74% water, 23% carbohydrates, 2% protein, and 1% fat. The carbohydrate component is primarily sugars and is a source of dietary fiber (table). In a reference amount of 100 grams (3.5 oz), raw jackfruit provides 95 calories and is a moderate source (10–19% of the Daily Value) of vitamin B6, vitamin C, and potassium, with no significant content of other micronutrients (table).
The jackfruit has been proposed as a partial solution to food security in developing countries.
Culinary uses
Ripe jackfruit is naturally sweet, with subtle pineapple- or banana-like flavor. It can be used to make a variety of dishes, including custards, cakes, or mixed with shaved ice as es teler in Indonesia or halo-halo in the Philippines. For the traditional breakfast dish in southern India, idlis, the fruit is used with rice as an ingredient and jackfruit leaves are used as a wrapping for steaming. Jackfruit dosas can be prepared by grinding jackfruit flesh along with the batter. Ripe jackfruit arils are sometimes seeded, fried, or freeze-dried and sold as jackfruit chips.
The seeds from ripe fruits are edible once cooked, and have a milky, sweet taste often compared to Brazil nuts. They may be boiled, baked, or roasted. When roasted, the flavor of the seeds is comparable to chestnuts. Seeds are used as snacks (either by boiling or fire-roasting) or to make desserts. In Java, the seeds are commonly cooked and seasoned with salt as a snack. They are commonly used in curry in India in the form of a traditional lentil and vegetable mix curry. Young leaves are tender enough to be used as a vegetable.
The flavor of the ripe fruit is comparable to a combination of apple, pineapple, mango, and banana. Varieties are distinguished according to characteristics of the fruit flesh. In Indochina, the two varieties are the "hard" version (crunchier, drier, and less sweet, but fleshier), and the "soft" version (softer, moister, and much sweeter, with a darker gold-color flesh than the hard variety). Unripe jackfruit has a mild flavor and meat-like texture and is used in curry dishes with spices in many cuisines. The skin of unripe jackfruit must be peeled first, then the remaining jackfruit flesh is chopped into edible portions and cooked before serving. The final chunks resemble prepared artichoke hearts in their mild taste, color, and flowery qualities.
The cuisines of many Asian countries use cooked young jackfruit. In many cultures, jackfruit is boiled and used in curries as a staple food. The boiled young jackfruit is used in salads or as a vegetable in spicy curries and side dishes, and as fillings for cutlets and chops. It may be used by vegetarians as a substitute for meat such as pulled pork, though the protein content of the fruit is not significant. It may be cooked with coconut milk and eaten alone or with meat, shrimp or smoked pork. In southern India, unripe jackfruit slices are deep-fried to make chips. The jackfruit seeds are also boiled and used in sambar (stew).
After roasting, the seeds may be used as a commercial alternative to chocolate aroma.
South Asia
In Bangladesh, the fruit is consumed on its own. The unripe fruit is used in curry, and the seed is often dried and preserved to be later used in curry. In India, two varieties of jackfruit predominate: muttomvarikka and sindoor. Muttomvarikka has a slightly hard inner flesh when ripe, while the inner flesh of the ripe sindoor fruit is soft. In Sri Lanka these two varieties are called waraka and wela respectively.
A sweet preparation called chakkavaratti (jackfruit jam) is made by seasoning pieces of muttomvarikka fruit flesh in jaggery, which can be preserved and used for many months. The fruits are either eaten alone or as a side to rice. The juice is extracted and either drunk straight or as a side. The juice is sometimes condensed and eaten as candies. The seeds are either boiled or roasted and eaten with salt and hot chilies. They are also used to make spicy side dishes with rice. Jackfruit may be ground and made into a paste, then spread over a mat and allowed to dry in the sun to create a natural chewy candy.
Southeast Asia
In Indonesia and Malaysia, jackfruit is called nangka. The ripe fruit is usually sold separately and consumed on its own, or sliced and mixed with shaved ice as a sweet concoction dessert such as es campur and es teler. The ripe fruit might be dried and fried as kripik nangka, or jackfruit cracker. The seeds are boiled and consumed with salt, as they contain edible starchy content; this is called beton. Young (unripe) jackfruit is made into curry called gulai nangka or stewed called gudeg.
In the Philippines, unripe jackfruit or langka is usually cooked in coconut milk and eaten with rice; this is called ginataang langka. The ripe fruit is often an ingredient in local desserts such as halo-halo and the Filipino turon. The ripe fruit, besides also being eaten raw as it is, is also preserved by storing in syrup or by drying. The seeds are also boiled before being eaten.
Thailand is a major producer of jackfruit, which are often cut, prepared, and canned in a sugary syrup (or frozen in bags or boxes without syrup) and exported overseas, frequently to North America and Europe.
In Vietnam, jackfruit is used to make jackfruit chè, a sweet dessert soup, similar to the Chinese derivative bubur cha cha. The Vietnamese also use jackfruit purée as part of pastry fillings or as a topping on xôi ngọt (a sweet version of sticky rice portions).
Jackfruits are found primarily in the eastern part of Taiwan. The fresh fruit can be eaten directly or preserved as dried fruit, candied fruit, or jam. It is also stir-fried or stewed with other vegetables and meat.
Americas
In Brazil, three varieties are recognized: jaca-dura, or the "hard" variety, which has a firm flesh, and the largest fruits that can weigh between 15 and 40 kg each; jaca-mole, or the "soft" variety, which bears smaller fruits with a softer and sweeter flesh; and jaca-manteiga, or the "butter" variety, which bears sweet fruits whose flesh has a consistency intermediate between the "hard" and "soft" varieties.
Africa
Originally planted for its shade in gardens, the jackfruit later became an ingredient in local recipes that use different segments of the fruit. The seeds are boiled in water or roasted to remove toxic substances, and are then used in a variety of desserts. The flesh of the unripe jackfruit is used to make a savory, salty dish with smoked pork. The jackfruit arils are used to make jams or fruits in syrup, and can also be eaten raw.
Materials
Wood and manufacturing
The golden yellow timber with good grain is used for building furniture and house construction in India. It is termite-resistant and is superior to teak for building furniture. The wood of the jackfruit tree is important in Sri Lanka and is exported to Europe. Jackfruit wood is widely used in the manufacture of furniture, doors and windows, in roof construction, and fish sauce barrels.
The wood of the tree is used for the production of musical instruments. In Indonesia, hardwood from the trunk is carved out to form the barrels of drums used in the gamelan, and in the Philippines, its soft wood is made into the body of the kutiyapi, a type of boat lute. It is also used to make the body of the Indian string instrument veena and the drums mridangam, thimila, and kanjira.
In culture
The jackfruit has played a significant role in Indian agriculture for centuries. Archaeological findings indicate that jackfruit was cultivated in India 3,000 to 6,000 years ago. It has also been widely cultivated in Southeast Asia.
The ornate wooden plank called avani palaka, made of the wood of the jackfruit tree, is used as the priest's seat during Hindu ceremonies in Kerala. In Vietnam, jackfruit wood is prized for the making of Buddhist statues in temples. The heartwood is used by Buddhist forest monastics in Southeast Asia as a dye, giving the robes of the monks in those traditions their distinctive light-brown color.
Jackfruit is the national fruit of Bangladesh, and the state fruit of the Indian states of Kerala (which hosts jackfruit festivals) and Tamil Nadu.
| Biology and health sciences | Rosales | null |
335380 | https://en.wikipedia.org/wiki/Cortisol | Cortisol | Cortisol is a steroid hormone in the glucocorticoid class of hormones and a stress hormone. When used as medication, it is known as hydrocortisone.
It is produced in many animals, mainly by the zona fasciculata of the adrenal cortex in the adrenal gland, and in lower quantities by other tissues. Cortisol is released in a diurnal cycle, and its release increases in response to stress and a low blood-glucose concentration. It functions to increase blood sugar through gluconeogenesis, suppress the immune system, and aid in the metabolism of fats, proteins, and carbohydrates. It also decreases bone formation. These functions are carried out by cortisol binding to glucocorticoid or mineralocorticoid receptors inside a cell, which then bind to DNA to affect gene expression.
Health effects
Metabolic response
Metabolism of glucose
Cortisol plays a crucial role in regulating glucose metabolism and promotes gluconeogenesis (glucose synthesis) and glycogenesis (glycogen synthesis) in the liver and glycogenolysis (breakdown of glycogen) in skeletal muscle. It also increases blood glucose levels by reducing glucose uptake in muscle and adipose tissue, decreasing protein synthesis, and increasing the breakdown of fats into fatty acids (lipolysis). All of these metabolic steps have the net effect of increasing blood glucose levels, which fuel the brain and other tissues during the fight-or-flight response. Cortisol is also responsible for releasing amino acids from muscle, providing a substrate for gluconeogenesis. Its impact is complex and diverse.
In general, cortisol stimulates gluconeogenesis (the synthesis of 'new' glucose from non-carbohydrate sources, which occurs mainly in the liver, but also in the kidneys and small intestine under certain circumstances). The net effect is an increase in the concentration of glucose in the blood, further complemented by a decrease in the sensitivity of peripheral tissue to insulin, thus preventing this tissue from taking the glucose from the blood. Cortisol has a permissive effect on the actions of hormones that increase glucose production, such as glucagon and adrenaline.
Cortisol also plays an important, but indirect, role in liver and muscle glycogenolysis (the breaking down of glycogen to glucose-1-phosphate and glucose) which occurs as a result of the action of glucagon and adrenaline. Additionally, cortisol facilitates the activation of glycogen phosphorylase, which is necessary for adrenaline to have an effect on glycogenolysis.
It is paradoxical that cortisol promotes not only gluconeogenesis (biosynthesis of glucose molecules) in the liver, but also glycogenesis (polymerization of glucose molecules into glycogen): cortisol is thus better thought of as stimulating glucose/glycogen turnover in the liver. This is in contrast to cortisol's effect in the skeletal muscle where glycogenolysis (breakdown of glycogen into glucose molecules) is promoted indirectly through catecholamines. In this way, cortisol and catecholamines work synergistically to promote the breakdown of muscle glycogen into glucose for use in the muscle tissue.
Metabolism of proteins and lipids
Elevated levels of cortisol, if prolonged, can lead to proteolysis (breakdown of proteins) and muscle wasting. The reason for proteolysis is to provide the relevant tissue with a feedstock for gluconeogenesis; see glucogenic amino acids. The effects of cortisol on lipid metabolism are more complicated since lipogenesis is observed in patients with chronic, raised circulating glucocorticoid (i.e. cortisol) levels, although an acute increase in circulating cortisol promotes lipolysis. The usual explanation to account for this apparent discrepancy is that the raised blood glucose concentration (through the action of cortisol) will stimulate insulin release. Insulin stimulates lipogenesis, so this is an indirect consequence of the raised cortisol concentration in the blood but it will only occur over a longer time scale.
Immune response
Cortisol prevents the release of substances in the body that cause inflammation. It is used to treat conditions resulting from overactivity of the B-cell-mediated antibody response. Examples include inflammatory and rheumatoid diseases, as well as allergies. Low-dose topical hydrocortisone, available as a nonprescription medicine in some countries, is used to treat skin problems such as rashes and eczema.
Cortisol inhibits production of interleukin 12 (IL-12), interferon gamma (IFN-gamma), IFN-alpha, and tumor necrosis factor alpha (TNF-alpha) by antigen-presenting cells (APCs) and T helper cells (Th1 cells), but upregulates interleukin 4, interleukin 10, and interleukin 13 by Th2 cells. This results in a shift toward a Th2 immune response rather than general immunosuppression. The activation of the stress system (and resulting increase in cortisol and Th2 shift) seen during an infection is believed to be a protective mechanism which prevents an over-activation of the inflammatory response.
Cortisol can weaken the activity of the immune system. It prevents proliferation of T-cells by rendering the interleukin-2-producing T-cells unresponsive to interleukin-1 and unable to produce the T-cell growth factor IL-2. Cortisol also downregulates expression of the IL-2 receptor (IL-2R) on the surface of helper T-cells, which is necessary to induce a Th1 'cellular' immune response; this favors a shift toward Th2 dominance and the release of the cytokines listed above, which in turn favors the 'humoral' B-cell-mediated antibody immune response.
Cortisol also has a negative-feedback effect on IL-1.
The way this negative feedback works is that an immune stressor causes peripheral immune cells to release IL-1 and other cytokines such as IL-6 and TNF-alpha. These cytokines stimulate the hypothalamus, causing it to release corticotropin-releasing hormone (CRH). CRH in turn stimulates the anterior pituitary to produce adrenocorticotropic hormone (ACTH), which (among other things) increases production of cortisol in the adrenal gland. Cortisol then closes the loop, as it inhibits TNF-alpha production in immune cells and makes them less responsive to IL-1.
Through this system, as long as an immune stressor is small, the response will be regulated to the correct level. Like a thermostat controlling a heater, the hypothalamus uses cortisol to turn off the heat once the production of cortisol matches the stress induced on the immune system. But in a severe infection or in a situation where the immune system is overly sensitized to an antigen (such as in allergic reactions) or there is a massive flood of antigens (as can happen with endotoxic bacteria) the correct set point might never be reached. Also because of downregulation of Th1 immunity by cortisol and other signaling molecules, certain types of infection, (notably Mycobacterium tuberculosis) can trick the body into getting locked in the wrong mode of attack, using an antibody-mediated humoral response when a cellular response is needed.
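The loop just described behaves like a negative-feedback controller, and a toy simulation can make that concrete. The Python sketch below is purely illustrative: the variable names, linear dynamics, and parameter values are our assumptions, not physiological measurements.

```python
# Toy model of the HPA negative-feedback loop described above:
# an immune stressor raises IL-1, IL-1 drives CRH -> ACTH -> cortisol,
# and cortisol in turn suppresses further IL-1 release.
# All parameters are illustrative, not physiological values.

def simulate_hpa(stressor, steps=50, dt=0.1,
                 k_il1=1.0,       # IL-1 produced per unit of stressor
                 k_cortisol=0.8,  # cortisol produced per unit IL-1
                 k_feedback=0.9,  # suppression of IL-1 per unit cortisol
                 decay=0.3):      # first-order clearance of both signals
    il1, cortisol = 0.0, 0.0
    for _ in range(steps):
        # IL-1 rises with the stressor but is suppressed by cortisol
        d_il1 = k_il1 * stressor - k_feedback * cortisol - decay * il1
        # cortisol tracks IL-1 (standing in for IL-1 -> CRH -> ACTH -> cortisol)
        d_cort = k_cortisol * il1 - decay * cortisol
        il1 = max(0.0, il1 + dt * d_il1)
        cortisol = max(0.0, cortisol + dt * d_cort)
    return il1, cortisol

# A small stressor settles at a low set point; a larger one at a higher level,
# mirroring the "higher cortisol setpoint" idea discussed below.
print(simulate_hpa(stressor=1.0))
print(simulate_hpa(stressor=5.0))
```

The point of the sketch is only that a proportional feedback loop finds a set point matched to the size of the stressor; when the stressor overwhelms the feedback, as in the severe infections described above, no stable set point is reached.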
Lymphocytes include the B-cell lymphocytes that are the antibody-producing cells of the body, and are thus the main agents of humoral immunity. A larger number of lymphocytes in the lymph nodes, bone marrow, and skin means the body is increasing its humoral immune response. B-cell lymphocytes release antibodies into the bloodstream. These antibodies counter infection through three main pathways: neutralization, opsonization, and complement activation. Antibodies neutralize pathogens by binding to surface adhesion proteins, keeping pathogens from binding to host cells. In opsonization, antibodies bind to the pathogen and create a target for phagocytic immune cells to find and latch onto, allowing them to destroy the pathogen more easily. Finally, antibodies can also activate complement molecules, which can combine in various ways to promote opsonization or even act directly to lyse bacteria. There are many different kinds of antibody, and their production is highly complex, involving several types of lymphocyte; in general, lymphocytes and other antibody-regulating and antibody-producing cells migrate to the lymph nodes to aid the release of these antibodies into the bloodstream.
Rapid administration of corticosterone (the endogenous type I and type II receptor agonist) or RU28362 (a specific type II receptor agonist) to adrenalectomized animals induced changes in leukocyte distribution.
Natural killer cells, by contrast, can take down larger threats such as bacteria, parasites, and tumor cells. A separate study found that cortisol effectively disarms natural killer cells, downregulating the expression of their natural cytotoxicity receptors. Prolactin has the opposite effect, increasing the expression of cytotoxicity receptors on natural killer cells.
Cortisol stimulates many copper enzymes (often to 50% of their total potential), including lysyl oxidase, an enzyme that cross-links collagen and elastin. Especially valuable for immune response is cortisol's stimulation of the superoxide dismutase, since this copper enzyme is almost certainly used by the body to permit superoxides to poison bacteria.
Some viruses, such as influenza, SARS-CoV-1, and SARS-CoV-2, are known to suppress the secretion of stress hormones to evade the host's immune response. These viruses suppress cortisol by producing a protein that mimics human ACTH but is incomplete and has no hormonal activity; ACTH normally stimulates the adrenal gland to produce cortisol and other steroid hormones. The host makes antibodies against this viral protein, and those antibodies also neutralize human ACTH, which suppresses adrenal gland function. Such adrenal suppression is a way for a virus to evade immune detection and elimination, and it can have severe consequences for the host, as cortisol is essential for regulating physiological processes such as metabolism, blood pressure, inflammation, and immune response. A lack of cortisol can result in a condition called adrenal insufficiency, which can cause symptoms such as fatigue, weight loss, low blood pressure, nausea, vomiting, and abdominal pain. Adrenal insufficiency can also impair the host's ability to cope with stress and infections, as cortisol helps to mobilize energy sources, increase heart rate, and downregulate non-essential metabolic processes during stress. By suppressing cortisol production, therefore, some viruses can escape the immune system while weakening the host's overall health and resilience.
Other effects
Metabolism
Glucose
Cortisol counteracts insulin: it contributes to hyperglycemia by stimulating gluconeogenesis, and it inhibits the peripheral use of glucose (insulin resistance) by decreasing the translocation of glucose transporters (especially GLUT4) to the cell membrane. Cortisol also increases glycogen synthesis (glycogenesis) in the liver, storing glucose in an easily accessible form.
Bone and collagen
Cortisol reduces bone formation, favoring long-term development of osteoporosis (progressive bone disease). The mechanism behind this is two-fold: cortisol stimulates the production of RANKL by osteoblasts which stimulates, through binding to RANK receptors, the activity of osteoclasts – cells responsible for calcium resorption from bone – and also inhibits the production of osteoprotegerin (OPG) which acts as a decoy receptor and captures some RANKL before it can activate the osteoclasts through RANK. In other words, when RANKL binds to OPG, no response occurs as opposed to the binding to RANK which leads to the activation of osteoclasts.
It transports potassium out of cells in exchange for an equal number of sodium ions (see the discussion of potassium below). This can trigger the hyperkalemia of metabolic shock from surgery. Cortisol also reduces calcium absorption in the intestine, and it down-regulates the synthesis of collagen.
Amino acid
Cortisol raises the free amino acids in the serum by inhibiting collagen formation, decreasing amino acid uptake by muscle, and inhibiting protein synthesis. Cortisol (as opticortinol) may inversely inhibit IgA precursor cells in the intestines of calves. Cortisol also inhibits IgA in serum, as it does IgM; however, it is not shown to inhibit IgE.
Electrolyte balance
Cortisol increases glomerular filtration rate, and renal plasma flow from the kidneys thus increasing phosphate excretion, as well as increasing sodium and water retention and potassium excretion by acting on mineralocorticoid receptors. It also increases sodium and water absorption and potassium excretion in the intestines.
Sodium
Cortisol promotes sodium absorption through the small intestine of mammals. Sodium depletion, however, does not affect cortisol levels, so cortisol cannot be used to regulate serum sodium. Cortisol's original purpose may have been sodium transport. This hypothesis is supported by the fact that freshwater fish use cortisol to stimulate inward sodium transport, while saltwater fish have a cortisol-based system for expelling excess sodium.
Potassium
A sodium load augments the intense potassium excretion by cortisol. Corticosterone is comparable to cortisol in this case. For potassium to move out of the cell, cortisol moves an equal number of sodium ions into the cell. This should make pH regulation much easier (unlike the normal potassium-deficiency situation, in which two sodium ions move in for each three potassium ions that move out—closer to the deoxycorticosterone effect).
Stomach and kidneys
Cortisol stimulates gastric-acid secretion. Cortisol's only direct effect on the hydrogen-ion excretion of the kidneys is to stimulate the excretion of ammonium ions by deactivating the renal glutaminase enzyme.
Memory
Cortisol works with adrenaline (epinephrine) to create memories of short-term emotional events; this is the proposed mechanism for the storage of flashbulb memories, and it may have originated as a means to remember what to avoid in the future. However, long-term exposure to cortisol damages cells in the hippocampus; this damage results in impaired learning.
Diurnal cycles
Diurnal cycles of cortisol levels are found in humans.
Stress
Sustained stress can lead to high levels of circulating cortisol (regarded as one of the more important of the several "stress hormones").
Effects during pregnancy
During human pregnancy, increased fetal production of cortisol between weeks 30 and 32 initiates production of fetal lung pulmonary surfactant to promote maturation of the lungs. In fetal lambs, glucocorticoids (principally cortisol) increase after about day 130, with lung surfactant increasing greatly, in response, by about day 135, and although lamb fetal cortisol is mostly of maternal origin during the first 122 days, 88% or more is of fetal origin by day 136 of gestation. Although the timing of fetal cortisol concentration elevation in sheep may vary somewhat, it averages about 11.8 days before the onset of labor. In several livestock species (e.g. cattle, sheep, goats, and pigs), the surge of fetal cortisol late in gestation triggers the onset of parturition by removing the progesterone block of cervical dilation and myometrial contraction. The mechanisms yielding this effect on progesterone differ among species. In the sheep, where progesterone sufficient for maintaining pregnancy is produced by the placenta after about day 70 of gestation, the prepartum fetal cortisol surge induces placental enzymatic conversion of progesterone to estrogen. (The elevated level of estrogen stimulates prostaglandin secretion and oxytocin receptor development.)
Exposure of fetuses to cortisol during gestation can have a variety of developmental outcomes, including alterations in prenatal and postnatal growth patterns. In marmosets, a species of New World primates, pregnant females have varying levels of cortisol during gestation, both within and between females. Infants born to mothers with high gestational cortisol during the first trimester of pregnancy had lower rates of growth in body mass indices than infants born to mothers with low gestational cortisol (about 20% lower). However, postnatal growth rates in these high-cortisol infants were more rapid than low-cortisol infants later in postnatal periods, and complete catch-up in growth had occurred by 540 days of age. These results suggest that gestational exposure to cortisol in fetuses has important potential fetal programming effects on both pre and postnatal growth in primates.
Cortisol face
Increased cortisol levels may lead to facial swelling and bloating, creating a round and puffy appearance, referred to as "cortisol face."
Synthesis and release
Cortisol is produced in the human body by the adrenal gland's zona fasciculata, the second of three layers comprising the adrenal cortex. The cortex forms the outer "bark" of each adrenal gland, situated atop the kidneys. The release of cortisol is controlled by the hypothalamus of the brain. Secretion of corticotropin-releasing hormone by the hypothalamus triggers cells in the neighboring anterior pituitary to secrete adrenocorticotropic hormone (ACTH) into the vascular system, through which blood carries it to the adrenal cortex. ACTH stimulates the synthesis of cortisol and other glucocorticoids, the mineralocorticoid aldosterone, and dehydroepiandrosterone.
Testing of individuals
Normal values indicated in the following tables pertain to humans (normal levels vary among species). Measured cortisol levels, and therefore reference ranges, depend on the sample type, analytical method used, and factors such as age and sex. Test results should, therefore, always be interpreted using the reference range from the laboratory that produced the result. An individual's cortisol levels can be detected in blood, serum, urine, saliva, and sweat.
Using the molecular weight of 362.460 g/mole, the conversion factor from μg/dL to nmol/L is approximately 27.6; thus, 10 μg/dL is about 276 nmol/L.
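That conversion is simple unit arithmetic, and the short Python sketch below (the function name is ours, chosen for illustration) reproduces it:

```python
# Convert serum cortisol from ug/dL to nmol/L using the molecular
# weight given above (362.460 g/mol).
CORTISOL_MW_G_PER_MOL = 362.460

def ug_per_dl_to_nmol_per_l(value_ug_dl: float) -> float:
    # ug/dL -> ug/L (x10) -> g/L (x1e-6) -> mol/L (/MW) -> nmol/L (x1e9)
    return value_ug_dl * 10 * 1e-6 / CORTISOL_MW_G_PER_MOL * 1e9

print(round(ug_per_dl_to_nmol_per_l(10.0), 1))  # 275.9, i.e. about 276 nmol/L
```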
Cortisol follows a circadian rhythm, so accurate measurement of cortisol levels is best achieved by testing saliva four times per day. An individual may have normal total cortisol but a lower-than-normal level during a certain period of the day and a higher-than-normal level during a different period. Therefore, some scholars question the clinical utility of cortisol measurement.
Cortisol is lipophilic and is transported bound to transcortin (also known as corticosteroid-binding globulin, CBG) and albumin, while only a small part of the total serum cortisol is unbound and biologically active. Cortisol binds to transcortin through hydrophobic interactions, in a 1:1 ratio. Serum cortisol assays measure total cortisol, and their results may be misleading for patients with altered serum protein concentrations. The salivary cortisol test avoids this problem because only free cortisol can pass through the blood-saliva barrier; transcortin particles are too large to pass through this barrier, which consists of the epithelial cell layers of the oral mucosa and salivary glands.
Cortisol may be incorporated into hair from blood, sweat, and sebum. A 3 centimeter segment of scalp hair can represent 3 months of hair growth, although growth rates can vary in different regions of the scalp. Cortisol in hair is a reliable indicator of chronic cortisol exposure.
Automated immunoassays lack specificity and show significant cross-reactivity due to interactions with structural analogs of cortisol, and show differences between assays. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) can improve specificity and sensitivity.
Disorders of cortisol production
Some medical disorders are related to abnormal cortisol production, such as:
Primary hypercortisolism (Cushing's syndrome): excessive levels of cortisol
Secondary hypercortisolism (pituitary tumor resulting in Cushing's disease, pseudo-Cushing's syndrome)
Primary hypocortisolism (Addison's disease, Nelson's syndrome): insufficient levels of cortisol
Secondary hypocortisolism (pituitary tumor, Sheehan's syndrome)
Regulation
The primary control of cortisol is the pituitary gland peptide, ACTH, which probably controls cortisol by controlling the movement of calcium into the cortisol-secreting target cells. ACTH is in turn controlled by the hypothalamic peptide corticotropin-releasing hormone (CRH), which is under nervous control. CRH acts synergistically with arginine vasopressin, angiotensin II, and epinephrine. (In swine, which do not produce arginine vasopressin, lysine vasopressin acts synergistically with CRH.)
When activated macrophages start to secrete IL-1, which synergistically with CRH increases ACTH, T-cells also secrete glucosteroid response modifying factor (GRMF), as well as IL-1; both increase the amount of cortisol required to inhibit almost all the immune cells. Immune cells then assume their own regulation, but at a higher cortisol setpoint. The increase in cortisol in diarrheic calves is minimal over healthy calves, however, and falls over time. The cells do not lose all their fight-or-flight override because of interleukin-1's synergism with CRH. Cortisol even has a negative feedback effect on interleukin-1—especially useful to treat diseases that force the hypothalamus to secrete too much CRH, such as those caused by endotoxic bacteria. The suppressor immune cells are not affected by GRMF, so the immune cells' effective setpoint may be even higher than the setpoint for physiological processes. GRMF affects primarily the liver (rather than the kidneys) for some physiological processes.
High-potassium media (which stimulate aldosterone secretion in vitro) also stimulate cortisol secretion from the fasciculata zone of canine adrenals, unlike corticosterone secretion, upon which potassium has no effect.
Potassium loading also increases ACTH and cortisol in humans. This is probably the reason why potassium deficiency causes cortisol to decline and causes a decrease in the conversion of 11-deoxycortisol to cortisol. This may also have a role in rheumatoid-arthritis pain; cell potassium is always low in RA.
The presence of ascorbic acid, particularly in high doses, has also been shown to mediate the response to psychological stress and to speed the decline of circulating cortisol levels after stress. This is evidenced by decreases in systolic and diastolic blood pressure and in salivary cortisol levels after treatment with ascorbic acid.
Factors increasing cortisol levels
Viral infections increase cortisol levels through activation of the HPA axis by cytokines.
Intense (high VO2 max) or prolonged aerobic exercise transiently increases cortisol levels to increase gluconeogenesis and maintain blood glucose; however, cortisol declines to normal levels after eating (i.e., restoring a neutral energy balance).
Severe trauma or stressful events can elevate cortisol levels in the blood for prolonged periods.
Low-carbohydrate diets cause a short-term increase in resting cortisol (≈3 weeks), and increase the cortisol response to aerobic exercise in the short- and long-term.
Increase in the concentration of ghrelin, the hunger stimulating hormone, increases levels of cortisol.
Biochemistry
Biosynthesis
Cortisol is synthesized from cholesterol. Synthesis takes place in the zona fasciculata of an adrenal cortex.
The name "cortisol" is derived from the word 'cortex'. Cortex means "the outer layer"—a reference to the adrenal cortex, the part of the adrenal gland where cortisol is produced.
While the adrenal cortex in humans also produces aldosterone in the zona glomerulosa and some sex hormones in the zona reticularis, cortisol is its main secretion in humans and several other species. In cattle, corticosterone levels may approach or exceed cortisol levels. In humans, the medulla of the adrenal gland lies under its cortex, mainly secreting the catecholamines adrenaline (epinephrine) and noradrenaline (norepinephrine) under sympathetic stimulation.
Synthesis of cortisol in the adrenal gland is stimulated by the anterior lobe of the pituitary gland with ACTH; ACTH production is in turn stimulated by CRH, which is released by the hypothalamus. ACTH increases the concentration of cholesterol in the inner mitochondrial membrane via regulation of the steroidogenic acute regulatory protein. It also stimulates the main rate-limiting step in cortisol synthesis, in which cholesterol is converted to pregnenolone, catalyzed by cytochrome P450SCC (the side-chain cleavage enzyme).
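As a textbook-style summary of the route from cholesterol to cortisol (the intermediate steps shown are standard steroidogenesis and are not named in this article):

```latex
\[
\text{cholesterol} \xrightarrow{\text{P450scc}} \text{pregnenolone}
\longrightarrow \text{progesterone}
\xrightarrow{17\alpha\text{-hydroxylase}} 17\alpha\text{-hydroxyprogesterone}
\xrightarrow{\text{21-hydroxylase}} \text{11-deoxycortisol}
\xrightarrow{11\beta\text{-hydroxylase}} \text{cortisol}
\]
```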
Metabolism
11beta-hydroxysteroid dehydrogenases
Cortisol is metabolized reversibly to cortisone by the 11-beta hydroxysteroid dehydrogenase system (11-beta HSD), which consists of two enzymes: 11-beta HSD1 and 11-beta HSD2. The metabolism of cortisol to cortisone involves oxidation of the hydroxyl group at the 11-beta position.
11-beta HSD1 uses the cofactor NADPH to convert biologically inert cortisone to biologically active cortisol
11-beta HSD2 uses the cofactor NAD+ to convert cortisol to cortisone
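Written as redox reactions (a conventional biochemical summary; the stoichiometry shown is the standard one, not quoted from this article):

```latex
\begin{align*}
\text{11}\beta\text{-HSD1:}\quad \text{cortisone} + \mathrm{NADPH} + \mathrm{H}^{+} &\longrightarrow \text{cortisol} + \mathrm{NADP}^{+}\\
\text{11}\beta\text{-HSD2:}\quad \text{cortisol} + \mathrm{NAD}^{+} &\longrightarrow \text{cortisone} + \mathrm{NADH} + \mathrm{H}^{+}
\end{align*}
```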
Overall, the net effect is that 11-beta HSD1 serves to increase the local concentrations of biologically active cortisol in a given tissue; 11-beta HSD2 serves to decrease local concentrations of biologically active cortisol. If hexose-6-phosphate dehydrogenase (H6PDH) is present, the equilibrium can favor the activity of 11-beta HSD1. H6PDH regenerates NADPH, which increases the activity of 11-beta HSD1, and decreases the activity of 11-beta HSD2.
An alteration in 11-beta HSD1 has been suggested to play a role in the pathogenesis of obesity, hypertension, and insulin resistance known as metabolic syndrome.
An alteration in 11-beta HSD2 has been implicated in essential hypertension and is known to lead to the syndrome of apparent mineralocorticoid excess (SAME).
A-ring reductases (5alpha- and 5beta-reductases)
Cortisol is also metabolized irreversibly into 5-alpha tetrahydrocortisol (5-alpha THF) and 5-beta tetrahydrocortisol (5-beta THF), reactions for which 5-alpha reductase and 5-beta-reductase are the rate-limiting factors, respectively. 5-Beta reductase is also the rate-limiting factor in the conversion of cortisone to tetrahydrocortisone.
Cytochrome P450, family 3, subfamily A monooxygenases
Cortisol is also metabolized irreversibly into 6β-hydroxycortisol by cytochrome p450-3A monooxygenases, mainly, CYP3A4. Drugs that induce CYP3A4 may accelerate cortisol clearance.
Chemistry
Cortisol is a naturally occurring pregnane corticosteroid and is also known as 11β,17α,21-trihydroxypregn-4-ene-3,20-dione.
Animals
In animals, cortisol is often used as an indicator of stress and can be measured in blood, saliva, urine, hair, and faeces.
| Biology and health sciences | Animal hormones | Biology |
335507 | https://en.wikipedia.org/wiki/Parsnip | Parsnip | The parsnip (Pastinaca sativa) is a root vegetable closely related to carrot and parsley, all belonging to the flowering plant family Apiaceae. It is a biennial plant usually grown as an annual. Its long taproot has cream-colored skin and flesh, and, left in the ground to mature, becomes sweeter in flavor after winter frosts. In its first growing season, the plant has a rosette of pinnate, mid-green leaves. If unharvested, it produces a flowering stem topped by an umbel of small yellow flowers in its second growing season, later producing pale brown, flat, winged seeds. By this time, the stem has become woody, and the taproot inedible. Precautions should be taken when handling the stems and foliage, as parsnip sap can cause a skin rash or even blindness if exposed to sunlight after handling.
The parsnip is native to Eurasia; it has been used as a vegetable since antiquity and was cultivated by the Romans, although some confusion exists between parsnips and carrots in the literature of the time. It was used as a sweetener before the arrival of cane sugar in Europe.
Parsnips are usually cooked but can also be eaten raw. The flesh has a sweet flavor, even more so than carrots. It is high in vitamins, antioxidants, and minerals (especially potassium); and also contains both soluble and insoluble dietary fiber. Parsnips are best cultivated in deep, stone-free soil. The plant is attacked by the carrot fly and other insect pests, as well as viruses and fungal diseases, of which canker is the most serious.
Description
The parsnip is a biennial plant with a rosette of roughly hairy leaves that have a pungent odor when crushed. Parsnips are grown for their fleshy, edible, cream-colored taproots. The roots are generally smooth, although lateral roots sometimes form. Most are narrowly conical, but some cultivars have a more bulbous shape, which generally tends to be favored by food processors as it is more resistant to breakage. The plant's apical meristem produces a rosette of pinnate leaves, each with several pairs of leaflets with toothed margins. The lower leaves have short stems, the upper ones are stemless, and the terminal leaves have three lobes. The leaves are once- or twice-pinnate with broad, ovate, sometimes lobed leaflets with toothed margins; they grow up to long.
The petioles are grooved and have sheathed bases. The floral stem develops in the second year and can grow to more than tall. It is hairy, grooved, hollow (except at the nodes), and sparsely branched. It has a few stalkless, single-lobed leaves measuring long that are arranged in opposite pairs.
The yellow flowers are in a loose, compound umbel measuring in diameter. Six to 25 straight pedicels are present, each measuring that support the umbellets (secondary umbels). The umbels and umbellets usually have no upper or lower bracts. The flowers have tiny sepals or lack them entirely, and measure about . They consist of five yellow petals that are curled inward, five stamens, and one pistil. The fruits, or schizocarps, are oval and flat, with narrow wings and short, spreading styles. They are colored straw to light brown, and measure long.
Despite the slight morphological differences between the two, wild parsnip is the same taxon as the cultivated version, and the two readily cross-pollinate. The parsnip has a chromosome number of 2n=22.
Taxonomy
Pastinaca sativa was first officially described by Carolus Linnaeus in his 1753 work Species Plantarum. It has acquired several synonyms in its taxonomic history:
Pastinaca fleischmannii Hladnik, ex D.Dietr.
Pastinaca opaca Bernh. ex Hornem.
Pastinaca pratensis (Pers.) H.Mart.
Pastinaca sylvestris Mill.
Pastinaca teretiuscula Boiss.
Pastinaca umbrosa Steven, ex DC.
Pastinaca urens Req. ex Godr.
Several species from other genera (Anethum, Elaphoboscum, Peucedanum, Selinum) are likewise synonymous with the name Pastinaca sativa.
Like most plants of agricultural importance, several subspecies and varieties of P. sativa have been described, but these are mostly no longer recognized as independent taxa, being regarded instead as morphological variations of the same taxon:
Pastinaca sativa subsp. divaricata (Desf.) Rouy & Camus
Pastinaca sativa subsp. pratensis (Pers.) Čelak.
Pastinaca sativa subsp. sylvestris (Mill.) Rouy & Camus
Pastinaca sativa subsp. umbrosa (Steven, ex DC.) Bondar. ex O.N.Korovina
Pastinaca sativa subsp. urens (Req. ex Godr.) Čelak.
Pastinaca sativa var. brevis Alef.
Pastinaca sativa var. edulis DC.
Pastinaca sativa var. hortensis Ehrh. ex Hoffm.
Pastinaca sativa var. longa Alef.
Pastinaca sativa var. pratensis Pers.
Pastinaca sativa var. siamensis Roem. & Schult. ex Alef.
In Eurasia, some authorities distinguish between cultivated and wild versions of parsnips by using subspecies P. s. sylvestris for the latter, or even elevating it to species status as Pastinaca sylvestris. In Europe, various subspecies have been named based on characteristics such as the hairiness of the leaves, the extent to which the stems are angled or rounded, and the size and shape of the terminal umbel.
Etymology
The etymology of the generic name Pastinaca is not known with certainty, but it is probably derived from either the Latin word pastino, meaning 'to prepare the ground for planting of the vine', or pastus, meaning 'food'. The specific epithet sativa means 'sown'.
While folk etymology sometimes assumes the name is a mix of parsley and turnip, it actually comes from Middle English pasnepe, an alteration (influenced by nep, 'turnip') of Old French pasnaie (now panais), from Latin pastinum, a kind of fork. The word's ending was changed to -nip by analogy with turnip because it was mistakenly assumed to be a kind of turnip.
Distribution and habitat
Like carrots, parsnips are native to Eurasia.
Invasivity
The parsnip's popularity as a cultivated plant has led to its spread beyond its native range, and wild populations have become established in other parts of the world. A scattered population can be found throughout North America.
The plant can form dense stands that outcompete native species and is especially common in abandoned yards, farmland, and along roadsides and other disturbed environments. Its increasing abundance is a concern, particularly given the plant's toxicity and its spread into populated areas such as parks. Control is often carried out via chemical means, with glyphosate-containing herbicides considered effective.
Cultivation
History
Zohary and Hopf note that the archaeological evidence for the ancient cultivation of the parsnip is "still rather limited" and that Greek and Roman literary sources are a major source about its early use. They warn that "there are some difficulties in distinguishing between parsnip and carrot (which, in Roman times, were white or purple) in classical writings since both vegetables seem to have been called pastinaca in Latin, yet each vegetable appears to be well under cultivation in Roman times".
This plant was introduced to North America simultaneously by the French colonists in Canada and the British in the Thirteen Colonies for use as a root vegetable, but in the mid-19th century, it was replaced as the main source of starch by the potato and consequently was less widely cultivated.
In 1859, a new cultivar called 'Student' was developed by James Buckman at the Royal Agricultural College in England. He back-crossed cultivated plants to wild stock, aiming to demonstrate how native plants could be improved by selective breeding. The experiment was so successful that 'Student' became the major variety in cultivation in the late 19th century.
Propagation
The wild parsnip from which the modern cultivated varieties were derived is a plant of dry, rough grassland and waste places, particularly on chalk and limestone soils. Parsnips are biennials, but are normally grown as annuals. Sandy and loamy soils are preferable to silt, clay, and stony ground; the latter produces short, forked roots. Parsnip seed significantly deteriorates in viability if stored for long. Seeds are usually planted in early spring, as soon as the ground can be worked to a fine tilth, in the position where the plants are to grow. The growing plants are thinned and kept weed-free. Harvesting begins in late fall after the first frost and continues through winter. The rows can be covered with straw to enable the crop to be lifted during frosty weather. Low soil temperatures cause some of the starches stored in the roots to be converted into sugars, giving them a sweeter taste.
Problems
Parsnip leaves are sometimes tunnelled by the larvae of the celery fly (Euleia heraclei). Irregular, pale brown passages can be seen between the upper and lower surfaces of the leaves. The effects are most serious on young plants, as whole leaves may shrivel and die. Treatment is by removing affected leaflets, whole leaves, or by chemical means.
The crop can be attacked by larvae of the carrot fly (Chamaepsila rosae). This pest feeds on the outer layers of the root, burrowing its way inside later in the season. Seedlings may be killed while larger roots are spoiled. The damage done provides a point of entry for fungal rots and canker. The smell of bruised tissue attracts the fly.
Parsnip is used as a food plant by the larvae of some lepidopteran species, including the parsnip swallowtail (Papilio polyxenes), the common swift moth (Korscheltellus lupulina), the garden dart moth (Euxoa nigricans), and the ghost moth (Hepialus humuli). The larvae of the parsnip moth (Depressaria radiella), native to Europe and accidentally introduced to North America in the mid-1800s, construct their webs on the umbels, feeding on flowers and partially developed seeds.
Parsnip canker is a serious disease of this crop. Black or orange-brown patches occur around the root's crown and shoulders, accompanied by cracking and hardening of the flesh. It is more likely to occur when the seed is sown into cold, wet soil, the pH of the soil is too low, or the roots have already been damaged by carrot fly larvae. Several fungi are associated with canker, including Phoma complanata, Ilyonectria radicicola, Itersonilia pastinaceae, and I. perplexans. In Europe, Mycocentrospora acerina has been found to cause a black rot that kills the plant early. Watery soft rot, caused by Sclerotinia minor and S. sclerotiorum, causes the taproot to become soft and watery. A white or buff-coloured mould grows on the surface. The pathogen is most common in temperate and subtropical regions with a cool, wet season.
Violet root rot caused by the fungus Helicobasidium purpureum sometimes affects the roots, covering them with a purplish mat to which soil particles adhere. The leaves become distorted and discoloured, and the mycelium can spread through the soil between plants. Some weeds can harbour this fungus, and it is more prevalent in wet, acid conditions. Erysiphe heraclei causes a powdery mildew that can result in significant crop loss; infestation causes yellowing of the leaves and loss of foliage. Moderate temperatures and high humidity favor the development of the disease.
Several viruses are known to infect the plant, including seed-borne strawberry latent ringspot virus, parsnip yellow fleck virus, parsnip leaf curl virus, parsnip mosaic potyvirus, and potyvirus celery mosaic virus. The latter causes clearing or yellowing of the areas of the leaf immediately beside the veins, the appearance of ochre mosaic spots, and the crinkling of the leaves in infected plants.
Toxicity
The shoots and leaves of parsnip must be handled with care, as its sap contains furanocoumarins, phototoxic chemicals that cause blisters on the skin when it is exposed to sunlight, a condition known as phytophotodermatitis. It shares this property with many of its relatives in the carrot family. Symptoms include redness, burning, and blisters; afflicted areas can remain sensitive and discolored for up to two years. There have been reports of gardeners experiencing toxic symptoms after coming into contact with the foliage, but these have been few relative to the number of people who grow the crop. The problem is most likely to occur on a sunny day when gathering foliage or pulling up old plants that have gone to seed. The symptoms have mostly been mild to moderate. Risk can be reduced by wearing long pants and sleeves to avoid exposure, and by avoiding sunlight after any suspected exposure.
If the eyes are exposed to the sap, it can cause blindness.
The toxic properties of parsnip extracts are resistant to heating and periods of storage lasting several months. Toxic symptoms can also affect livestock and poultry in parts of their bodies where their skin is exposed. Polyynes can be found in Apiaceae vegetables such as parsnip, and they show cytotoxic activities.
Uses
Parsnips resemble carrots and can be used in similar ways, but they have a sweeter taste, especially when cooked. They can be baked, boiled, pureed, roasted, fried, grilled, or steamed. When used in stews, soups, and casseroles, they give a rich flavour. In some cases, parsnips are boiled and the solid portions are removed from the soup or stew, leaving behind a more subtle flavour than the whole root while contributing starch to thicken the dish. Roast parsnip is considered an essential part of Christmas dinner in some parts of the English-speaking world and frequently features in the traditional Sunday roast. Parsnips can also be fried or thinly sliced and made into crisps. They can be made into a wine with a taste similar to Madeira.
The author Dorothy Hartley described parsnips as having "the type of sweetness that mingles with honey and spice..." The food writer Alan Davidson remarks, "parsnip has a taste which, although not strong, is peculiar and not to everyone's liking."
In Roman times, parsnips were believed to be an aphrodisiac. However, parsnips do not typically feature in modern Italian cooking. Instead, they are fed to pigs, particularly those bred to make Parma ham.
Nutrition
A typical 100 g serving of parsnip provides of food energy. Most parsnip cultivars consist of about 80% water, 5% sugar, 1% protein, 0.3% fat, and 5% dietary fiber. The parsnip is rich in vitamins and minerals and is particularly rich in potassium with 375 mg per 100 g. Several of the B-group vitamins are present, but levels of vitamin C are reduced in cooking. Since most of the vitamins and minerals are found close to the skin, many will be lost unless the root is finely peeled or cooked whole. During frosty weather, part of the starch is converted to sugar, and the root tastes sweeter.
The consumption of parsnips has potential health benefits. They contain antioxidants such as falcarinol, falcarindiol, panaxydiol, and methyl-falcarindiol, which may potentially have anticancer, anti-inflammatory and antifungal properties. The dietary fiber in parsnips is partly of the soluble and partly the insoluble type and comprises cellulose, hemicellulose, and lignin. The high fiber content of parsnips may help prevent constipation and reduce blood cholesterol levels.
In culture
The parsnip was much esteemed in Rome, and Emperor Tiberius accepted part of the tribute payable to Rome by Germania in the form of parsnips. In Europe, the vegetable was used as a source of sugar before cane and beet sugars were available. As pastinache comuni, the "common" pastinaca figures in the long list of comestibles enjoyed by the Milanese given by Bonvesin da la Riva in his "Marvels of Milan" (1288).
| Biology and health sciences | Apiales | null |
335568 | https://en.wikipedia.org/wiki/European%20fallow%20deer | European fallow deer | The European fallow deer (Dama dama), also known as the common fallow deer or simply fallow deer, is a species of deer native to Eurasia. It is one of two living species of fallow deer alongside the Persian fallow deer (Dama mesopotamica). It is historically native to Turkey and possibly the Italian Peninsula, Balkan Peninsula, and the island of Rhodes near Anatolia. During the Pleistocene it inhabited much of Europe, and has been reintroduced to its prehistoric distribution by humans. It has also been introduced to other regions in the world.
Taxonomy
Some taxonomists include the rarer Persian fallow deer as a subspecies (D. d. mesopotamica), with both species being grouped together as the fallow deer, while others treat it as a different species (D. mesopotamica). The white-tailed deer (Odocoileus virginianus) was once classified as Dama virginiana and the mule deer or black-tailed deer (Odocoileus hemionus) as Dama hemionus; they were given a separate genus in the 19th century.
Description
The male fallow deer is known as a buck, the female is a doe, and the young a fawn. Adult bucks are long, in shoulder height, and typically in weight; does are long, in shoulder height, and in weight. The largest bucks may measure long and weigh . Fawns are born in spring around and weigh around . Their lifespan is around 12–16 years.
Much variation occurs in the coat colour of the species, with four main variants: common, menil, melanistic, and leucistic – a genuine colour variety, not albinistic. The leucistic coat is the lightest, almost white; common and menil are darker, and melanistic is very dark, sometimes even black (and is easily confused with the sika deer).
Common: Chestnut coat with white mottles, it is most pronounced in summer with a much darker, unspotted coat in the winter. The light-coloured area around the tail is edged with black. The tail is light with a black stripe.
Menil: Spots are more distinct than common in summer and no black is seen around the rump patch or on the tail. In winter, spots are still clear on a darker brown coat.
Melanistic (black): All year the coat is black, shading to greyish-brown. No light-coloured tail patch or spots are seen.
Leucistic (white, but not albino): Fawns are cream-coloured; adults become pure white, especially in winter. Dark eyes and nose are seen. The coat has no spots.
Most herds consist of the common coat variation, yet animals of the menil coat variation are not rare. The melanistic coat variation is generally rarer, and the white coat variation is very much rarer still, although wild New Zealand herds often have a high melanistic percentage.
Only bucks have antlers, which are broad and shovel-shaped (palmate) from three years. In the first two years the antler is a single spike. Their preferred habitat is mixed woodland and open grassland. During the rut, bucks spread out and females move between them; at that time of year fallow deer are relatively ungrouped compared with the rest of the year, when they try to stay together in groups of up to 150.
Agile and fast in case of danger, fallow deer can run at a maximum speed of over short distances. Being naturally less muscular than other cervids such as the roe deer, they are not as fast. Fallow deer can also jump up to high and up to in length.
The diet of the European fallow deer has been described as highly flexible and able to adapt to local conditions. A 1977 study of European fallow deer in the New Forest of Britain found that they were selective mixed feeders, feeding primarily on grass (and to a lesser extent on herbs and browse) during spring and summer (March–September), and primarily on acorns and other mast during autumn (from September) until late December; winter foods included grass as well as shrubs such as brambles, bilberry, heather, and holly, along with ivy and coniferous material.
Distribution
During the Last Interglacial (also known as the Eemian) around 130,000–115,000 years ago and earlier, European fallow deer were widely distributed over Europe, occurring as far north as the British Isles. During the Last Glacial Period (115,000–11,700 years ago) the range of the species collapsed due to unfavourable climate conditions, surviving in refugia in Anatolia and probably the Balkans and possibly elsewhere, though the fossil record of their distribution during this time is sparse.
Turkey
Turkey is the only country known to have definitively natural populations of European fallow deer since the Last Glacial Maximum, but populations there (alongside those of the Persian fallow deer, which also formerly occurred in Turkey) have since become endangered and almost fully extirpated. European fallow deer in Anatolia underwent a major population decline due to the spread of agriculture (leading to the deforestation of lowland forests) and hunting, and populations in the Marmara and Aegean regions went extinct by the turn of the 20th century. Other wild populations of Turkish fallow deer survived for longer on islands at Ayvalık Islands Nature Park, Gökova, and Adaköy near Marmaris, but also appear to have died out in recent years. Currently, the only extant wild population of the species known to be undoubtedly natural lives in Düzlerçami Game Reserve in the Mount Güllük-Termessos National Park in southern Turkey, although the area has since been largely fenced, making the population only semi-wild. This population is very few in number and is genetically distinct from other European fallow deer.
A 2024 study suggests that the Turkish populations of fallow deer are ancestral to most fallow deer found throughout Europe as well as introduced populations worldwide. The translocations of fallow deer out of Turkey were facilitated by Roman-era trade networks. The only other refugium found was one in the Balkans, whose surviving descendants are significantly fewer in number.
Native but originally extinct
Southern Balkans
On mainland Greece and some Greek islands, such as Corfu, Kythira, and Thasos, which were connected to the mainland by lower sea levels or proximity to land, fallow deer were present during the last ice age. A belief arose that the species had nearly died out in Greece and returned during the Neolithic. Contrary to that, remains indicate that reduced numbers survived in several parts of the country, such as Thessaly, the Peloponnese, and Central Greece, increasing and becoming common during the mid-Neolithic, but mostly east of the Pindus mountain range and especially in Macedonia and Thrace. During the Neolithic period and the Bronze Age, the species survived on the islands of Corfu and Thasos, appeared on Euboea, and began to be introduced by humans to other islands, including Crete, some of the Cyclades, Rhodes, Chios, Lesbos, Samos, and the Sporades. Early-historic-period remains have been found in eastern Greece and on the islands of Thasos, Chios, Rhodes, and Crete. A few surviving individuals were observed on Samos in 1700, while the species became extinct on Lesbos late in the Ottoman period. On the Greek mainland, wild fallow deer survived until the 16th century in northeastern Chalkidiki, until the 19th century in the forests of Mount Olympus, the Vermio Mountains, Arakynthos, Evrytania, and Boeotia, and until the 1910s in Thesprotia. The last individuals were hunted in Acarnania during the 1930s.
In Bulgaria, the autochthonous population of fallow deer is believed to have declined and disappeared after the 9th or 10th century, and the species was reintroduced there much more recently. The species remained in European Turkey into the 19th century. A male fallow deer was captured in Thrace in 1977 and translocated to Düzlerçamı, suggesting that a small population existed there at that time. In Albania (possibly in Butrint), the fallow deer seems to have been plentiful during the first half of the 19th century.
A 2024 genetic study suggests that the Balkans served as one of two refugia for fallow deer during the glacial periods, alongside Anatolia. Members of this population were also translocated around Europe during the Iron Age and Roman Empire, but have largely been replaced by Turkish-origin fallow deer (including in their native Balkans) aside from parts of southern Europe. The Balkan fallow deer are thought to represent the ancestors of modern Iberian, Italian, and Rhodes fallow deer, with the Rhodes population dating back to Neolithic translocations.
Possible native populations
Aside from Turkey, other areas of Europe that could have potentially served as refuges for the species during the last ice age include parts of the eastern Mediterranean, including most of the Italian Peninsula, parts of the Balkans, and the Greek island of Rhodes, all of which still host populations of this species. However, palaeontological and archaeozoological evidence of the species' diffusion into these areas during the ice age is very fragmentary, thus whether the present populations in these areas are truly native descendants of relict populations or were introduced by humans is unknown. Presently, the IUCN Red List's range map lists European fallow deer as being native to Italy, Turkey, Rhodes, and most of the Balkans, as having a population of uncertain origin in central Bosnia and Herzegovina, and being introduced to the rest of Europe. In the text, though, all the eastern Mediterranean populations aside from Turkey are listed as having an uncertain origin. A 2024 study suggests that Italian and Iberian populations descend from a now-extinct Balkan population that was translocated early on.
Rhodes, Greece
The Rhodian population of European fallow deer is smaller on average than those of central and northern Europe, though they are similarly coloured. European fallow deer are said to have been introduced to Rhodes in Neolithic times; although fossils of the species on Rhodes do indeed go back to Neolithic times, no major evidence has been found of domestication, so they could be considered native. In 2005, the Rhodian fallow deer was found to be genetically distinct from all other populations and to be of urgent conservation concern. At the entrance to the harbour of Rhodes city, statues of a fallow deer buck and doe now grace the location where the Colossus of Rhodes once stood. A 2024 study suggests them to be a basal lineage of the Balkan fallow deer, originating from a very early translocation.
Introduced
Outside of Europe, this species has been introduced to Algeria, Antigua and Barbuda, Argentina, Australia, Canada, Cape Verde, Chile, the Comoros, Cyprus, the Falkland Islands, Fernando Pó, Israel, Lebanon, Madagascar, Mauritius, Mayotte, Morocco, New Zealand, Peru, Réunion, São Tomé, the Seychelles, South Africa, Tunisia, and the United States.
Australia
European fallow deer were introduced to Tasmania in 1830 and to mainland Australia in the 1880s. The deer can now be found in all Australian jurisdictions except Western Australia and the Northern Territory. The European fallow deer is the most widespread and numerous of the introduced deer species in Australia. Proper control of deer populations in New South Wales (NSW) was for some years precluded by the classification of these deer as "game animals" in addition to their status as a feral pest species. This led to an explosion in numbers, a vast increase in range in that state, impacts on agricultural production, increased environmental damage, and a dramatic increase in vehicle accidents involving deer. This policy has since been reversed on privately held land only, where the deer is once again classified solely as a feral pest species; they remain game animals on public land. The NSW government now asks the public to assist by not transporting or releasing feral deer onto any land, implying that intentional release has been a factor in the vast increase in range in NSW in recent years.
Argentina
The European fallow deer was introduced to Victoria Island in Neuquén Province by billionaire Aaron Anchorena, who intended to increase hunting opportunities. He freed wildlife of European and Asian origin, making them common inhabitants of the island.
Canada
The European fallow deer is listed as an invasive species in the province of British Columbia. In 2021, the Canadian federal government, local First Nations, and local residents put forward a plan to eradicate the fallow deer population on Sidney Island, a small island located off the southwest coast of British Columbia.
Great Britain and Ireland
The European fallow deer was spread across Central Europe by the Romans. Recent finds at Fishbourne Roman Palace show that European fallow deer were introduced into southern England in the first century AD. Fallow deer were established in Britain by the fourth century AD. Genetic studies have shown that this population became extinct and the fallow deer was re-introduced from Anatolia prior to the Norman conquest, not introduced from Scilly by the Normans as had previously been believed. Deer from England are a likely source of their re-introduction elsewhere in northern Europe.
European fallow deer are now widespread on the UK mainland and are present in most of England and Wales south of a line drawn from the Wash to the Mersey. Populations in the New Forest and the Forest of Dean are long-standing, and many of the other populations originated from park escapees. They are not quite so widespread in the northern parts of England, but are present in most lowland areas and also in parts of Scotland, principally in Strathtay and around Loch Lomond. According to the British Deer Society distribution survey 2007, they have increased in range since the previous survey in 2000, although the increase in range is not as spectacular as for some of the other deer species.
A significant number of the European fallow deer in the Forest of Dean and in Epping Forest are of the black variety. One particularly interesting population, known as "long-haired fallow deer", inhabit Mortimer Forest on the England/Wales border; a significant part of the population has long body hair with distinct ear tufts.
A historical herd is at Phoenix Park in Ireland, where a herd of 400–450 European fallow deer descends from the original herd introduced in the 1660s. In a 2023 study, this herd was shown to comprise the first wild deer outside of North America to have contracted SARS-CoV-2, raising concerns about a potential natural reservoir arising within European deer herds.
New Zealand
From 1860, European fallow deer were introduced into New Zealand. Significant herds exist in a number of low-altitude forests.
South Africa
European fallow deer are popular in the rural areas of KwaZulu-Natal for hunting purposes, in parts of the Gauteng Province to beautify ranches, and in the Eastern Cape, where they were introduced on game farms for the hunting industry because of their exotic qualities. European fallow deer adapted extremely well to the South African environment with access to savanna grasslands, particularly in the cooler climate ranges such as the highveld. They also occur in the Western Cape.
Sweden
One noted historical herd of European fallow deer is located in the Ottenby nature reserve in Öland, where Charles X Gustav of Sweden erected a dry stone wall some 4 km long to enclose a royal fallow deer herd in the mid-17th century; the herd still exists as of 2006.
United States
In recent times, European fallow deer have been introduced in parts of the United States. A small feral population exists on one barrier island in Georgia. Fallow deer have also been introduced in Texas, along with many other exotic deer species, where they are often hunted on large game ranches.
In Pennsylvania, European fallow deer are considered livestock, since no feral animals are breeding in the wild. Occasional reports of wild European fallow deer in Pennsylvania and Indiana are generally attributed to escapes from preserves or farms.
A herd of white European fallow deer is located near Argonne National Laboratories in northeastern Illinois.
A small herd of 15 mostly white European fallow deer resides at the Belle Isle Nature Zoo on Belle Isle in Detroit, Michigan. Until the turn of the 21st century, this herd had the run of the island; the herd was thereafter confined, with extirpation being the initial goal.
A small herd, believed to be the oldest in the United States, exists in the Land Between the Lakes National Recreation Area (LBL) in far western Kentucky and Tennessee. The European fallow deer herd in the LBL "was brought to LBL by the Hillman Land Company in 1918. LBL's herd is believed to be the oldest population of fallow deer in the country, and at one time was the largest. Today, the herd numbers fewer than 150 and hunting of fallow deer is not permitted. Although LBL's wildlife management activities focus on native species, the fallow herd is maintained for wildlife viewing and because of its historical significance."
European fallow deer are present in the Point Reyes National Seashore, California, and Mendocino County near Ridgewood Ranch, west of Redwood Valley, California; some of them are leucistic.
Mating system
European fallow deer are highly dimorphic, polygynous breeders; the breeding season, or rut, lasts about 135 days. In the Northern Hemisphere the rut tends to occur in the second half of October, and in the Southern Hemisphere in April, though some matings occur before and after. Mating within the rut most often occurs in leks, where males congregate in small groups on mating territories that females visit only to copulate. Variation within European fallow deer mating systems occurs; besides the traditional lekking behaviour, mating behaviours can include harems, dominance groups, stands, temporary stands, and multiple stands. Different populations, environmental variation, size, and even age can determine the type of mating system, but lekking is the most commonly found and studied in nature. The variation can be explained by three considerations: (1) the optimal strategy under specific environmental or social conditions; (2) the strategy of an individual male may depend on the strategies of other males within the same population; and (3) individual males may be less capable of gaining access to females, since they can be outcompeted by more capable males.
Female European fallow deer are polyestrous; they are receptive to males during multiple periods of estrus throughout the mating season while not gestating. Male rut behaviour includes licking and sniffing around the anus and vulva to determine whether a female is fertile. Males produce high-pitched whines repeatedly to initiate mating; following this display, a female may allow the male to mount; copulation can last as long as 5 minutes.
Ecology and mating system characteristics
Many deer species—including European fallow deer—have a social organization that can be tremendously plastic depending on their environment, meaning that group size and habitat type are closely linked to herd size. Most of the detailed research on the ecological characteristics and behaviour of European fallow deer occurs in large blocks of woodland, which means some bias may be present. European fallow deer can be found in a variety of habitats, which can range from cool and wet to hot and dry. European fallow deer seem to have a preference for older forests with dispersed areas of grass, trees, and a variety of other vegetation. The largest herd occurs right before the rutting season, while the smallest groups are females with fawns. Throughout a large portion of the year, the sexes remain separated and only congregate during the mating months, but other patterns may be described, such as bachelor groups and even mixed groups.
Male European fallow deer produce low-frequency vocalizations called groans; the sound of these groans results from the consistent and complex shape of the vocal tract involving the oral and nasal cavities.
Ruts are characterized by males gaining the best territory possible to increase their odds of mating, and often by the presence of females on stands. During this time, males stop feeding in order to defend their rut territories from subordinate males. Males defending a territory often lose an average of 17% of their body weight, and the liver exhibits steatosis, which is reversible. Across breeding seasons, a male may hold the same rut territory; in some cases a territory can be held by more than one individual, possible reasons being high population density and limited rutting space, or more suitable habitats, which can be shared.
Parental care
After a female is impregnated, gestation lasts up to 245 days. Usually one fawn is born; twins are rare. Females can conceive when they are 16 months old, whereas males can successfully breed at 16 months but most do not breed until they are 48 months old. Females become very secretive just before they give birth and seek out secluded spots such as a bush or cave, though some give birth near the herd. As soon as the fawn is born, the mother licks it clean; this helps initiate the maternal bond between the two. Only females provide parental investment; males do not participate in rearing the fawn.
After the birth, the female does not return to the herd for at least 10 days, and for most of each day the mother is separated from the fawn, returning only to feed it. The nursing period lasts about 4 months, with the fawn suckling roughly every 4 hours each day. Rumination is a critical part of the fawn's development and begins about 2 to 3 weeks into its life. Females initiate the weaning period, which lasts about 20 days; 3 to 4 weeks later, the fawn starts to follow its mother, and they finally rejoin the herd together. The mother frequently licks the fawn's anal area to stimulate suckling, urination, and defecation, a critical part of the fawn's development. Weaning is completed at around 7 months, and at around 12 months the fawn is independent. After the 135 days of reproduction, the rut comes to an end, marked by changes in group size and behaviour.
Contests and weaponry
Since European fallow deer are a polygynous species that congregates once every year, males must fight to obtain access to estrous females. The relationship between antler size and body condition can be treated as an indicator of condition within a given year. These secondary sexual characteristics can have dual functions: attractiveness to females, who can ultimately choose, and the fighting ability of the male. It was found that males with larger antlers had higher mating success, whereas males with asymmetrical antlers did not. When males develop their antlers, trade-offs are made between reproduction and survival. Genetic variation in antler growth exists within fallow deer populations; males that exhibited faster-growing antlers early in life are able to grow longer antlers without any significant cost, showing that there is phenotypic variation among fallow deer populations.
Aggressive behaviour is often observed when individuals are seeking out mates, scarce resources, or territories. Species that compete using weapons usually engage by mutual agreement, but noticeable asymmetries, such as a broken or lost weapon, may alter an individual's willingness to engage in a fight. The likelihood and severity of antler damage were examined in fallow deer to test whether antler damage was associated with contest tactics and duration, and whether it was associated with the tendency of individuals to engage in fighting. Individuals with undamaged antlers were more likely to attack, using high-risk tactics that included jumping, clashing, or backward-pushing behaviour, exhibited by both contestants; dominant males were more likely to have damaged antlers. Dominance ranks exist within fallow deer populations and can be linked to aggression level and body size; how ranks are obtained when competing for mates, however, has not been studied extensively.
Endurance rivalry
Male fallow deer are highly competitive during the rutting season; successful mating depends mainly on body size and dominance rank. Many factors can determine the seasonal reproductive success of an individual male, including body size, which can affect both reproduction and survival. The amount of time spent in a lek can be an important factor in determining male reproductive success, and energy can play an important role over the duration of competitive leks. Among ungulates, European fallow deer exhibit one of the most striking examples of sexual dimorphism, as males are much larger than females. For sexual selection to lead to the evolution of sexual dimorphism in which males are bigger than females, advantages must be present: (1) advantages during combat, (2) endurance rivalry advantage, (3) female preference for larger males, and (4) advantages during sperm competition. Sexual selection has favoured bigger males over evolutionary time, conferring advantages in competition for mates through a variety of mechanisms: intrasexual competition, access to females, and resource accessibility, which affects attractiveness to females.
Body size is important during male-male agonistic interactions and endurance rivalry, while females tend to have a preference for larger males. Dominance rank is a good indicator of body size and body mass, but age was not an important factor. In a study done by McElligott et al. (2001), it was found that mating success was related to body size, pre-rut and rut rank. Similarly, in another study, researchers found that age, weight, and display effort were all significant factors in determining mating success; in both studies, mating success was measured by the frequency of copulations, which means that a variety of factors in different fallow deer populations can affect the overall energy allocation which will ultimately affect mating success. Maternal investment early in life can be critical to the development of body size, since it can be quite variable at that stage depending on resources and habitat type. Mature male body size can be a better indicator of overall male quality rather than body mass, since body mass depends on a variety of resources each year and is not a static trait; body mass can be a complex trait to measure.
Threats
Since the 20th century, a serious decline in the populations of European fallow deer has been seen in Turkey, the only region where it is definitely thought to be native, and it has disappeared from almost all regions where it was formerly found, aside from Düzlerçami Game Reserve in the Mount Güllük-Termessos National Park, where a semi-wild, genetically distinct population exists. The Turkish government undertook a breeding program at Düzlerçami starting in 1966, and the population grew from 7 to 500 animals, but it then underwent a massive collapse lasting until 2000, for reasons not fully understood but thought to be linked to urbanization, recreational activities, and poaching; it numbered fewer than 30 individuals (with only 10 roaming outside the fenced areas) by 2007 and fewer than 130 individuals by 2010. This population remains at risk from inbreeding and poaching. Reintroduction to other areas of Turkey has not been successful but should still be considered to increase the species' population.
The population on Rhodes, which is of uncertain origin but is known to be very genetically distinct from others, is also of major conservation concern. It numbers around 400–500 animals and is at risk from poaching and wildfires. The population is also at risk of outbreeding depression: in some parts of Rhodes, mainland European fallow deer are kept in fenced areas, and these deer could escape and breed with the Rhodian fallow deer. Rhodian fallow deer also damage summer crops, and because no compensation system exists, persecution of the population could follow. A reduction of water resources on the island due to climate change could also affect the animals. Despite this, there are signs of population recovery on Rhodes as of 2008 due to conservation measures.
Despite these threats, the European fallow deer is common across the other areas where it could potentially be native, as well as the areas throughout Europe that it was introduced to early on, thus it is considered to be of least concern by the IUCN Red List.
| Biology and health sciences | Deer | Animals |
336123 | https://en.wikipedia.org/wiki/Dumbbell%20Nebula | Dumbbell Nebula | The Dumbbell Nebula (also known as the Apple Core Nebula, Messier 27, and NGC 6853) is a planetary nebula (nebulosity surrounding a white dwarf) in the constellation Vulpecula, at a distance of about 1360 light-years. It was the first such nebula to be discovered, by Charles Messier in 1764. At its brightness of visual magnitude 7.5 and diameter of about 8 arcminutes, it is easily visible in binoculars and is a popular observing target in amateur telescopes.
The Dumbbell Nebula appears shaped like a prolate spheroid and is viewed from our perspective along the plane of its equator. In 1992, Moreno-Corral et al. computed that its angular rate of expansion, viewed from our distance, was no more than (″) per century. From this, an upper limit to the age of 14,600 years may be determined. In 1970, Bohuski, Smith, and Weedman found an expansion velocity of . Given its semi-minor axis radius of , this implies that the kinematic age of the nebula is 9,800 years.
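The kinematic age quoted above is just the nebula's linear radius divided by its expansion velocity. A minimal Python sketch of that division, using an expansion velocity of 31 km/s and a semi-minor-axis radius of 1.0 light-year as assumed stand-ins for the figures missing from the text above:

# Kinematic age t = R / v: a toy check with assumed values, not the published analysis.
KM_PER_LIGHT_YEAR = 9.4607e12
SECONDS_PER_YEAR = 3.156e7

radius_km = 1.0 * KM_PER_LIGHT_YEAR    # assumed semi-minor-axis radius (about 1 light-year)
velocity_km_per_s = 31.0               # assumed expansion velocity

age_years = (radius_km / velocity_km_per_s) / SECONDS_PER_YEAR
print(round(age_years))                # roughly 9700 years, consistent with the quoted 9,800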
Like many nearby planetary nebulae, the Dumbbell contains knots. Its central region is marked by a pattern of dark and bright cusped knots and their associated dark tails (see picture). The knots vary in appearance from symmetric objects with tails to rather irregular tail-less objects. Similarly to the Helix Nebula and the Eskimo Nebula, the heads of the knots have bright cusps which are local photoionization fronts.
The central star, a white dwarf progenitor, is estimated to have a radius of about 0.13 light-seconds, which gives it a size larger than most other known white dwarfs. Its mass was estimated in 1999 by Napiwotzki to be .
Gallery
The Dumbbell nebula can be easily seen in binoculars in a dark sky, just above the small constellation of Sagitta.
| Physical sciences | Notable nebulae | Astronomy |
336175 | https://en.wikipedia.org/wiki/Jute | Jute | Jute is a long, rough, shiny bast fibre that can be spun into coarse, strong threads. It is produced from flowering plants in the genus Corchorus, of the mallow family Malvaceae. The primary source of the fiber is Corchorus olitorius, but such fiber is considered inferior to that derived from Corchorus capsularis.
Jute fibers, composed primarily of cellulose and lignin, are collected from bast (the phloem of the plant, sometimes called the "skin") of plants like kenaf, industrial hemp, flax (linen), and ramie. The industrial term for jute fiber is raw jute. The fibers are off-white to brown and range from long. In Bangladesh, jute is called the "golden fiber" for its color and monetary value.
The bulk of the jute trade is centered in South Asia, with India and Bangladesh as the primary producers. The majority of jute is used for durable and sustainable packaging, such as burlap sacks. Its production and usage declined as disposable plastic packaging became common, but this trend has begun to reverse as merchants and even nations phase out or ban single-use plastics.
Cultivation
The jute plant needs plain alluvial soil and standing water. During the monsoon season, the monsoon climate offers a warm and wet environment which is suitable for growing jute. Temperatures from and relative humidity of 70%–80% are favorable for successful cultivation. Jute requires of rainfall weekly, and more during the sowing time. Soft water is necessary for jute production.
White jute (Corchorus capsularis)
Historical documents (including Ain-e-Akbari by Abu'l-Fazl ibn Mubarak in 1590) state that the poor villagers of India used to wear clothing made of jute. The weavers used simple hand-spinning wheels and hand looms, which they also used to spin cotton yarns. History also suggests that Indians, especially Bengalis, used ropes and twines made of white jute from ancient times for household and other uses. Jute is highly functional for carrying grains or other agricultural products.
Tossa jute (Corchorus olitorius)
Tossa jute (Corchorus olitorius) is a variety thought to be native to South Asia. It is grown for both fiber and culinary purposes. People use the leaves as an ingredient in a mucilaginous potherb called "molokhiya" (of uncertain etymology), which is mainly used in some Arab countries such as Egypt, Jordan, and Syria as a soup-based dish, sometimes with meat over rice or lentils. The King James translation of the Book of Job (chapter 30, verse 4), in the Hebrew Bible, mistranslates the word maluaḥ, which means Atriplex, as "mallow"; this has led some to identify this jute species as what the translators meant, and led it to be called 'Jew's mallow' in English. It is high in protein, vitamin C, beta-carotene, calcium, and iron.
Bangladesh and other countries in Southeast Asia, and the South Pacific mainly use jute for its fiber. Tossa jute fiber is softer, silkier, and stronger than white jute. This variety shows good sustainability in the Ganges Delta climate. Along with white jute, tossa jute has also been cultivated in the soil of Bengal, where it has been known as paat since the start of the 19th century. Coremantel, Bangladesh, is the largest global producer of the tossa jute variety. In India, West Bengal is the largest producer of jute.
History
Jute has been used for making textiles in the Indus valley civilization since the 3rd millennium BC.
For centuries, jute has been a part of the culture of Bangladesh and some parts of West Bengal and Assam. The British started trading in jute during the seventeenth century. During the reign of the British Empire, jute was also used in the military. British jute barons grew rich by processing jute and selling manufactured products made from it. Dundee Jute Barons and the British East India Company set up many jute mills in Bengal, and by 1895 jute industries in Bengal overtook the Scottish jute trade. Many Scots emigrated to Bengal to set up jute factories. More than a billion jute sandbags were exported from Bengal to the trenches of World War I, and to the American South for bagging cotton. It was used in multiple industries, including the fishing, construction, art, and arms industries.
Due to its coarse and tough texture, jute could initially only be processed by hand, until someone in Dundee discovered that treating it with whale oil made it machine-processable. The industry boomed throughout the eighteenth and nineteenth centuries ("jute weaver" was a recognized trade occupation in the 1901 UK census), but this trade had largely ceased by about 1970, supplanted by synthetic fibres. In the 21st century, jute has become a large export again, mainly in Bangladesh.
Production
The jute fiber comes from the stem and ribbon (outer skin) of the jute plant. The fibers are first extracted by retting, a process in which jute stems are bundled together and immersed in slow running water. There are two types of retting: stem and ribbon. After the retting process, stripping begins. In the stripping process, workers scrape off non-fibrous matter, then dig in and grab the fibers from within the jute stem.
Jute is a rain-fed crop with little need for fertilizer or pesticides, in contrast to cotton's heavy requirements. Production in India is concentrated mostly in West Bengal. India is the world's largest producer of jute, but imported approximately 162,000 tonnes of raw fiber and 175,000 tonnes of jute products in 2011. India, Pakistan, and China import significant quantities of jute fiber and products from Bangladesh, as do the United Kingdom, Japan, United States, France, Spain, Ivory Coast, Germany and Brazil. Jute and jute products formerly held the top position among Bangladesh's most exported goods, although now they stand second after ready-made apparel. Annually, Bangladesh produces 7 to 8 million bales of raw jute, out of which 0.6 to 0.8 million bales are exported to international markets. China, India, and Pakistan are the primary importers of Bangladeshi raw jute.
Genome
In 2002, Bangladesh commissioned a consortium of researchers from University of Dhaka, Bangladesh Jute Research Institute (BJRI) and private software firm DataSoft Systems Bangladesh Ltd., in collaboration with the Centre for Chemical Biology, University of Science Malaysia and University of Hawaii, to research different fibers and hybrid fibers of jute. The draft genome of jute (Corchorus olitorius) was completed.
Uses
Jute is a relatively cheap and versatile fiber with a wide variety of uses in cordage and cloth. It is commonly used to make burlap sacks.
The jute plant also has some culinary uses, which are generally focused on the leaves.
Due to its durability and biodegradability, jute matting is used as a temporary solution to prevent flood erosion.
Researchers have also investigated the possibility of using jute and glucose to build aeroplane panels.
Fibers
Individual jute fibers can range from very fine to very coarse, and the varied fibers are suited for a variety of uses.
The coarser fibers, which are called jute butts, are used alone or combined with other fibers to make many products:
Hessian cloth
Sacking
Agricultural wrapping cloth, most notably wrapping for bales of raw cotton
Sandbags
Cloth backing for flooring, such as linoleum or carpet
Cordage, such as twine or rope
Pulp (for paper production)
Finer jute fibers can be processed for use in:
Shoes, such as espadrilles
Sweaters and cardigans
Imitation silk
Curtains
Chair coverings
Carpets
Rugs
Jute was historically used in traditional textile machinery because jute fibers contain cellulose (vegetable fiber) and lignin (wood fiber). Later, several industries, such as the automotive, pulp and paper, furniture, and bedding industries, started to use jute and its allied fibers with their non-woven and composite technology to manufacture nonwoven fabric, technical textiles, and composites.
Jute is used in the manufacture of fabrics, such as Hessian cloth, sacking, scrim, carpet backing cloth (CBC), and canvas. Hessian is lighter than sacking, and it is used for bags, wrappers, wall-coverings, upholstery, and home furnishings. Sacking, which is a fabric made of heavy jute fibers, has its use in the name. CBC made of jute comes in two types: primary and secondary. Primary CBC provides a tufting surface, while secondary CBC is bonded onto the primary backing for an overlay. Jute packaging is sometimes used as an environmentally friendly substitute for plastic.
Other jute consumer products include floor coverings, high performance technical textiles, geotextiles, and composites. Jute has been used as a home textile due to its anti-static and color- and light-fast properties, as well as its strength, durability, UV protection, sound and heat insulation, and low thermal conductivity.
Culinary uses
Corchorus olitorius leaves are used to make mulukhiya, which is sometimes considered the Egyptian national dish, and is also consumed in Cyprus and other Middle Eastern countries. These leaves are an ingredient in stews, typically cooked with lamb or chicken.
In India (West Bengal) and Bangladesh, in the Bengali cuisine, the fresh leaves are stir fried and eaten as path saak bhaja (পাঠ শাক ভাজা) along with a mustard sauce called kasundi (কাসুন্দি). The leaves are also eaten by making pakoras (পাঠ পাতার বড়া) with rice flour or Gram flour batter.
In Nigeria, leaves of Corchorus olitorius are prepared in a sticky soup called ewedu together with ingredients such as sweet potato, dried small fish, or shrimp. The leaves are rubbed until foamy or sticky before they are added to the soup. Among the Yoruba people of Nigeria, the leaves are called ewedu, and in Hausa-speaking northern Nigeria, the leaves are called turgunuwa or lallo. The cook shreds the jute leaves and adds them to the soup, which generally also contains meat or fish, onions, pepper, and other spices. The Lugbara of northwestern Uganda also eat jute leaves in a soup called pala bi. Jute is also a totem for Ayivu, one of the Lugbara clans.
In the Philippines, especially in Ilocano-dominated areas, this vegetable, which is locally known as saluyot, can be mixed with bitter gourd, bamboo shoots, loofah, or a combination of these ingredients, which have a slimy and slippery texture.
Vietnamese cuisine also uses edible jute, known as rau đay. It is usually used in canh (soup), cooked with crab and loofah.
In Haiti, a dish called "Lalo" is made with jute leaves and other ingredients. One version of Lalo includes lalo with crab and meat (such as pork or beef) served on a bed of rice.
Environmental impact
Fabrics made of jute fibers are carbon neutral and biodegradable, which make jute a candidate material for high performance technical textiles.
As global concern over forest destruction increases, jute may begin to replace wood as a primary pulp ingredient.
Cultural significance
| Technology | Fabrics and fibers | null |
336271 | https://en.wikipedia.org/wiki/Approximation | Approximation | An approximation is anything that is intentionally similar but not exactly equal to something else.
Etymology and usage
The word approximation is derived from Latin approximatus, from proximus meaning very near and the prefix ad- (ad- before p becomes ap- by assimilation) meaning to. Words like approximate, approximately and approximation are used especially in technical or scientific contexts. In everyday English, words such as roughly or around are used with a similar meaning. It is often found abbreviated as approx.
The term can be applied to various properties (e.g., value, quantity, image, description) that are nearly, but not exactly correct; similar, but not exactly the same (e.g., the approximate time was 10 o'clock).
Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws.
In science, approximation can refer to using a simpler process or model when the correct model is difficult to use. An approximate model is used to make calculations easier. Approximations might also be used if incomplete information prevents use of exact representations.
The type of approximation used depends on the available information, the degree of accuracy required, the sensitivity of the problem to this data, and the savings (usually in time and effort) that can be achieved by approximation.
Mathematics
Approximation theory is a branch of mathematics, and a quantitative part of functional analysis. Diophantine approximation deals with approximations of real numbers by rational numbers.
Approximation usually occurs when an exact form or an exact numerical number is unknown or difficult to obtain. However some known form may exist and may be able to represent the real form so that no significant deviation can be found. For example, 1.5 × 10⁶ means that the true value of something being measured is 1,500,000 to the nearest hundred thousand (so the actual value is somewhere between 1,450,000 and 1,550,000); this is in contrast to the notation 1.500 × 10⁶, which means that the true value is 1,500,000 to the nearest thousand (implying that the true value is somewhere between 1,499,500 and 1,500,500).
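The two intervals above follow mechanically from the rounding convention: a quoted value is trusted to within half a unit in its last significant place. A small Python illustration of that reading (implied_interval is a name of our own, introduced just for this sketch):

# The true value lies within half a unit of the last quoted digit.
def implied_interval(value, last_place):
    half = last_place / 2
    return (value - half, value + half)

print(implied_interval(1.5e6, 1e5))   # (1450000.0, 1550000.0), matching 1.5 × 10⁶
print(implied_interval(1.5e6, 1e3))   # (1499500.0, 1500500.0), matching 1.500 × 10⁶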
Numerical approximations sometimes result from using a small number of significant digits. Calculations are likely to involve rounding errors and other approximation errors. Log tables, slide rules and calculators produce approximate answers to all but the simplest calculations. The results of computer calculations are normally an approximation expressed in a limited number of significant digits, although they can be programmed to produce more precise results. Approximation can occur when a decimal number cannot be expressed in a finite number of binary digits.
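The last point, that some decimal numbers cannot be expressed in a finite number of binary digits, can be observed directly in any language with binary floating point; a short Python demonstration:

from decimal import Decimal
from fractions import Fraction

# The decimal 0.1 has no finite binary expansion, so the stored 64-bit float only approximates 1/10.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))     # 3602879701896397/36028797018963968, the value actually stored
print(0.1 + 0.2 == 0.3)  # False, because the rounding errors do not cancel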
Related to approximation of functions is the asymptotic value of a function, i.e. the value as one or more of a function's parameters becomes arbitrarily large. For example, the sum k/2 + k/4 + k/8 + ... + k/2^n is asymptotically equal to k. No consistent notation is used throughout mathematics and some texts use ≈ to mean approximately equal and ~ to mean asymptotically equal whereas other texts use the symbols the other way around.
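The asymptotic claim is easy to probe numerically: each extra term halves the remaining gap to k. A brief Python check with k = 1:

# Partial sums of k/2 + k/4 + ... + k/2^n approach k as n grows.
k = 1.0
for n in (5, 10, 20):
    partial = sum(k / 2**i for i in range(1, n + 1))
    print(n, partial)   # 0.96875, then 0.9990234375, then 0.9999990463256836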
Typography
The approximately equals sign, ≈, was introduced by British mathematician Alfred Greenhill in 1892, in his book Applications of Elliptic Functions.
LaTeX symbols
Symbols used in LaTeX markup.
≈ (\approx), usually to indicate approximation between numbers, like π ≈ 3.14.
≉ (\not\approx), usually to indicate that numbers are not approximately equal (1 ≉ 2).
≃ (\simeq), usually to indicate asymptotic equivalence between functions, like n(n + 1)/2 ≃ n²/2.
So writing n(n + 1)/2 ≃ n² would be wrong under this definition, despite wide use.
∼ (\sim), usually to indicate proportionality between functions; the example from the line above becomes n(n + 1)/2 ∼ n².
≅ (\cong), usually to indicate congruence between figures, like △ABC ≅ △DEF.
≂ (\eqsim), usually to indicate that two quantities are equal up to constants.
⪅ (\lessapprox) and ⪆ (\gtrapprox), usually to indicate that either the inequality holds or the two values are approximately equal. A minimal compilable example using these commands appears after this list.
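The promised example: a minimal LaTeX document exercising each command from the list (the package amssymb supplies \eqsim, \lessapprox, and \gtrapprox; the concrete formulas are our own, chosen only to illustrate usage):

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% One instance of each symbol from the list above, in math mode:
$\pi \approx 3.14$, \quad $1 \not\approx 2$, \quad
$n(n+1)/2 \simeq n^2/2$, \quad $n(n+1)/2 \sim n^2$, \quad
$\triangle ABC \cong \triangle DEF$, \quad $x \eqsim y$, \quad
$a \lessapprox b$, \quad $c \gtrapprox d$.
\end{document}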
Unicode
Symbols used to denote items that are approximately equal are wavy or dotted equals signs, such as ≈, ≃, ≅, and ≒.
Science
Approximation arises naturally in scientific experiments. The predictions of a scientific theory can differ from actual measurements. This can be because there are factors in the real situation that are not included in the theory. For example, simple calculations may not include the effect of air resistance. Under these circumstances, the theory is an approximation to reality. Differences may also arise because of limitations in the measuring technique. In this case, the measurement is an approximation to the actual value.
The history of science shows that earlier theories and laws can be approximations to some deeper set of laws. Under the correspondence principle, a new scientific theory should reproduce the results of older, well-established, theories in those domains where the old theories work. The old theory becomes an approximation to the new theory.
Some problems in physics are too complex to solve by direct analysis, or progress could be limited by available analytical tools. Thus, even when the exact representation is known, an approximation may yield a sufficiently accurate solution while reducing the complexity of the problem significantly. Physicists often approximate the shape of the Earth as a sphere even though more accurate representations are possible, because many physical characteristics (e.g., gravity) are much easier to calculate for a sphere than for other shapes.
Approximation is also used to analyze the motion of several planets orbiting a star. This is extremely difficult due to the complex interactions of the planets' gravitational effects on each other. An approximate solution is effected by performing iterations. In the first iteration, the planets' gravitational interactions are ignored, and the star is assumed to be fixed. If a more precise solution is desired, another iteration is then performed, using the positions and motions of the planets as identified in the first iteration, but adding a first-order gravity interaction from each planet on the others. This process may be repeated until a satisfactorily precise solution is obtained.
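The pattern of iterating until successive answers stop changing is generic. As a toy illustration of that idea of successive approximation (deliberately much simpler than an actual planetary calculation), the Python sketch below iterates x = cos(x) until two consecutive approximations agree to 12 decimal places:

import math

# Successive approximation: refine a crude starting guess until the change is negligible.
x = 1.0
for step in range(1, 101):
    x, prev = math.cos(x), x          # next approximation built from the previous one
    if abs(x - prev) < 1e-12:
        break
print(step, round(x, 9))              # converges in well under 100 steps to about 0.739085133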
The use of perturbations to correct for the errors can yield more accurate solutions. Simulations of the motions of the planets and the star also yield more accurate solutions.
The most common versions of philosophy of science accept that empirical measurements are always approximations — they do not perfectly represent what is being measured.
Law
Within the European Union (EU), "approximation" refers to a process through which EU legislation is implemented and incorporated within Member States' national laws, despite variations in the existing legal framework in each country. Approximation is required as part of the pre-accession process for new member states, and as a continuing process when required by an EU Directive. Approximation is a key word generally employed within the title of a directive, for example the Trade Marks Directive of 16 December 2015 serves "to approximate the laws of the Member States relating to trade marks". The European Commission describes approximation of law as "a unique obligation of membership in the European Union".
| Mathematics | Basics | null |
336349 | https://en.wikipedia.org/wiki/Egyptian%20fraction | Egyptian fraction | An Egyptian fraction is a finite sum of distinct unit fractions, such as
1/2 + 1/3 + 1/16.
That is, each fraction in the expression has a numerator equal to 1 and a denominator that is a positive integer, and all the denominators differ from each other. The value of an expression of this type is a positive rational number; for instance the Egyptian fraction above sums to 43/48. Every positive rational number can be represented by an Egyptian fraction. Sums of this type, and similar sums also including 2/3 and 3/4 as summands, were used as a serious notation for rational numbers by the ancient Egyptians, and continued to be used by other civilizations into medieval times. In modern mathematical notation, Egyptian fractions have been superseded by vulgar fractions and decimal notation. However, Egyptian fractions continue to be an object of study in modern number theory and recreational mathematics, as well as in modern historical studies of ancient mathematics.
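One constructive way to see that every positive rational number between 0 and 1 has such a representation is the greedy method, which repeatedly subtracts the largest unit fraction that still fits; it is among the conversion methods in Fibonacci's Liber Abaci (see "Later usage" below) and is often called the Fibonacci–Sylvester algorithm. A minimal Python sketch:

from fractions import Fraction
from math import ceil

def greedy_egyptian(r):
    """Expand 0 < r < 1 into distinct unit fractions by the greedy method."""
    terms = []
    while r > 0:
        d = ceil(1 / r)               # smallest denominator with 1/d <= r
        terms.append(Fraction(1, d))
        r -= Fraction(1, d)           # the numerator strictly decreases, so the loop ends
    return terms

print(greedy_egyptian(Fraction(43, 48)))   # [Fraction(1, 2), Fraction(1, 3), Fraction(1, 16)]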
Applications
Beyond their historical use, Egyptian fractions have some practical advantages over other representations of fractional numbers.
For instance, Egyptian fractions can help in dividing food or other objects into equal shares. For example, if one wants to divide 5 pizzas equally among 8 diners, the Egyptian fraction
5/8 = 1/2 + 1/8
means that each diner gets half a pizza plus another eighth of a pizza, for example by splitting 4 pizzas into 8 halves, and the remaining pizza into 8 eighths. Exercises in performing this sort of fair division of food are a standard classroom example in teaching students to work with unit fractions.
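A two-line verification of this fair-division identity in exact rational arithmetic, using Python's standard fractions module:

from fractions import Fraction

# Each of the 8 diners receives half a pizza plus an eighth of a pizza.
share = Fraction(1, 2) + Fraction(1, 8)
print(share == Fraction(5, 8))   # True
print(8 * share)                 # 5, so the five pizzas are used up exactly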
Egyptian fractions can provide a solution to rope-burning puzzles, in which a given duration is to be measured by igniting non-uniform ropes which burn out after a unit time. Any rational fraction of a unit of time can be measured by expanding the fraction into a sum of unit fractions and then, for each unit fraction 1/x, burning a rope so that it always has x simultaneously lit points where it is burning. For this application, it is not necessary for the unit fractions to be distinct from each other. However, this solution may need an infinite number of re-lighting steps.
Early history
Egyptian fraction notation was developed in the Middle Kingdom of Egypt. Five early texts in which Egyptian fractions appear were the Egyptian Mathematical Leather Roll, the Moscow Mathematical Papyrus, the Reisner Papyrus, the Kahun Papyrus and the Akhmim Wooden Tablet. A later text, the Rhind Mathematical Papyrus, introduced improved ways of writing Egyptian fractions. The Rhind papyrus was written by Ahmes and dates from the Second Intermediate Period; it includes a table of Egyptian fraction expansions for rational numbers 2/n, as well as 84 word problems. Solutions to each problem were written out in scribal shorthand, with the final answers of all 84 problems being expressed in Egyptian fraction notation. Tables of expansions for 2/n similar to the one on the Rhind papyrus also appear on some of the other texts. However, as the Kahun Papyrus shows, vulgar fractions were also used by scribes within their calculations.
Notation
To write the unit fractions used in their Egyptian fraction notation, in hieroglyph script, the Egyptians placed the hieroglyph:
(er, "[one] among" or possibly re, mouth) above a number to represent the reciprocal of that number. Similarly in hieratic script they drew a line over the letter representing the number. For example:
The Egyptians had special symbols for 1/2, 2/3, and 3/4 that were used to reduce the size of numbers greater than 1/2 when such numbers were converted to an Egyptian fraction series. The remaining number after subtracting one of these special fractions was written as a sum of distinct unit fractions according to the usual Egyptian fraction notation.
The Egyptians also used an alternative notation modified from the Old Kingdom to denote a special set of fractions of the form 1/2^k (for k = 1, 2, ..., 6) and sums of these numbers, which are necessarily dyadic rational numbers. These have been called "Horus-Eye fractions" after a theory (now discredited) that they were based on the parts of the Eye of Horus symbol.
They were used in the Middle Kingdom in conjunction with the later notation for Egyptian fractions to subdivide a hekat, the primary ancient Egyptian volume measure for grain, bread, and other small quantities of volume, as described in the Akhmim Wooden Tablet. If any remainder was left after expressing a quantity in Eye of Horus fractions of a hekat, the remainder was written using the usual Egyptian fraction notation as multiples of a ro, a unit equal to 1/320 of a hekat.
Calculation methods
Modern historians of mathematics have studied the Rhind papyrus and other ancient sources in an attempt to discover the methods the Egyptians used in calculating with Egyptian fractions. In particular, study in this area has concentrated on understanding the tables of expansions for numbers of the form 2/n in the Rhind papyrus. Although these expansions can generally be described as algebraic identities, the methods used by the Egyptians may not correspond directly to these identities. Additionally, the expansions in the table do not match any single identity; rather, different identities match the expansions for prime and for composite denominators, and more than one identity fits the numbers of each type:
For small odd prime denominators p, the expansion 2/p = 2/(p + 1) + 2/(p(p + 1)) was used.
For larger prime denominators, an expansion of the form 2/p = 1/A + (2A - p)/(Ap) was used, where A is a number with many divisors (such as a practical number) between p/2 and p. The remaining term (2A - p)/(Ap) was expanded by representing the number 2A - p as a sum of divisors of A and forming a fraction d/(Ap) for each such divisor d in this sum. As an example, Ahmes' expansion 2/37 = 1/24 + 1/111 + 1/296 fits this pattern with A = 24 and 2A - p = 11 = 8 + 3, as 8/(24 x 37) = 1/111 and 3/(24 x 37) = 1/296 (this expansion is checked mechanically in the sketch following this list). There may be many different expansions of this type for a given p; however, as K. S. Brown observed, the expansion chosen by the Egyptians was often the one that caused the largest denominator to be as small as possible, among all expansions fitting this pattern.
For some composite denominators, factored as p x q, the expansion for 2/(pq) has the form of an expansion for 2/q with each denominator multiplied by p. This method appears to have been used for many of the composite numbers in the Rhind papyrus, but there are exceptions, notably 2/35, 2/91, and 2/95.
One can also expand 2/(pq) as 1/(pr) + 1/(qr), where r = (p + q)/2. For instance, Ahmes expands 2/35 = 1/30 + 1/42, where p = 5, q = 7, and r = 6. Later scribes used a more general form of this expansion, n/(pq) = 1/(pr) + 1/(qr), where r = (p + q)/n, which works when p + q is a multiple of n.
The final (prime) expansion in the Rhind papyrus, 2/101, does not fit any of these forms, but instead uses an expansion 2/p = 1/p + 1/(2p) + 1/(3p) + 1/(6p) that may be applied regardless of the value of p: 2/101 = 1/101 + 1/202 + 1/303 + 1/606. A related expansion was also used in the Egyptian Mathematical Leather Roll for several cases.
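These identities are easy to check mechanically with exact rational arithmetic. The following sketch is a modern illustration, not anything from the papyri; it uses Python's fractions module, and the function names and test cases are ours:

```python
from fractions import Fraction

def small_prime_rule(p):
    # 2/p = 2/(p + 1) + 2/(p(p + 1)); both terms reduce to unit
    # fractions when p is an odd prime.
    return [Fraction(2, p + 1), Fraction(2, p * (p + 1))]

def practical_number_rule(p, A, divisor_parts):
    # 2/p = 1/A + sum of d/(A*p) over divisors d of A whose sum is 2A - p.
    assert sum(divisor_parts) == 2 * A - p
    assert all(A % d == 0 for d in divisor_parts)
    return [Fraction(1, A)] + [Fraction(d, A * p) for d in divisor_parts]

# Small-prime rule: 2/5 = 1/3 + 1/15.
assert small_prime_rule(5) == [Fraction(1, 3), Fraction(1, 15)]

# Ahmes' 2/37 = 1/24 + 1/111 + 1/296, with A = 24 and 2A - p = 11 = 8 + 3.
terms = practical_number_rule(37, 24, [8, 3])
assert terms == [Fraction(1, 24), Fraction(1, 111), Fraction(1, 296)]
assert sum(terms) == Fraction(2, 37)

# Composite rule for 2/35 with p = 5, q = 7, r = (5 + 7)/2 = 6.
assert Fraction(1, 5 * 6) + Fraction(1, 7 * 6) == Fraction(2, 35)
```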
Later usage
Egyptian fraction notation continued to be used in Greek times and into the Middle Ages, despite complaints as early as Ptolemy's Almagest about the clumsiness of the notation compared to alternatives such as the Babylonian base-60 notation. Related problems of decomposition into unit fractions were also studied in 9th-century India by Jain mathematician Mahāvīra. An important text of medieval European mathematics, the Liber Abaci (1202) of Leonardo of Pisa (more commonly known as Fibonacci), provides some insight into the uses of Egyptian fractions in the Middle Ages, and introduces topics that continue to be important in modern mathematical study of these series.
The primary subject of the Liber Abaci is calculations involving decimal and vulgar fraction notation, which eventually replaced Egyptian fractions. Fibonacci himself used a complex notation for fractions involving a combination of a mixed radix notation with sums of fractions. Many of the calculations throughout Fibonacci's book involve numbers represented as Egyptian fractions, and one section of this book provides a list of methods for conversion of vulgar fractions to Egyptian fractions. If the number is not already a unit fraction, the first method in this list is to attempt to split the numerator into a sum of divisors of the denominator; this is possible whenever the denominator is a practical number, and Liber Abaci includes tables of expansions of this type for the practical numbers 6, 8, 12, 20, 24, 60, and 100.
The next several methods involve algebraic identities such as a/(ab - 1) = 1/b + 1/(b(ab - 1)).
For instance, Fibonacci represents the fraction 8/11 by splitting the numerator into a sum of two numbers, each of which divides one plus the denominator: 8/11 = 6/11 + 2/11. Fibonacci applies the algebraic identity above to each of these two parts, producing the expansion 8/11 = 1/2 + 1/22 + 1/6 + 1/66. Fibonacci describes similar methods for denominators that are two or three less than a number with many factors.
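The divisor-splitting step can be made concrete with a short sketch. The code below is a minimal illustration, assuming one simple strategy for choosing the divisors (Fibonacci's text does not prescribe one); the function name is invented, and the sketch does not guarantee distinct denominators for every input:

```python
from fractions import Fraction

def split_by_divisors(x, y):
    # Write x as a sum of distinct divisors of y + 1, largest first; each
    # divisor d contributes d/y = 1/b + 1/(b*y) with b = (y + 1) // d.
    divisors = [d for d in range(1, y + 2) if (y + 1) % d == 0]
    parts, remaining = [], x
    for d in sorted(divisors, reverse=True):
        if d <= remaining:
            parts.append(d)
            remaining -= d
    if remaining:
        raise ValueError("numerator is not a sum of distinct divisors of y + 1")
    result = []
    for d in parts:
        b = (y + 1) // d
        result += [Fraction(1, b), Fraction(1, b * y)]
    return result

print(split_by_divisors(8, 11))                  # [1/2, 1/22, 1/6, 1/66]
assert sum(split_by_divisors(8, 11)) == Fraction(8, 11)
```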
In the rare case that these other methods all fail, Fibonacci suggests a "greedy" algorithm for computing Egyptian fractions, in which one repeatedly chooses the unit fraction with the smallest denominator that is no larger than the remaining fraction to be expanded: that is, in more modern notation, we replace a fraction x/y by the expansion
x/y = 1/ceil(y/x) + ((-y) mod x)/(y ceil(y/x)), where ceil(y/x) denotes the ceiling of y/x, the least integer at least as large as y/x; since (-y) mod x < x, this method yields a finite expansion.
Fibonacci suggests switching to another method after the first such expansion, but he also gives examples in which this greedy expansion was iterated until a complete Egyptian fraction expansion was constructed: 4/13 = 1/4 + 1/18 + 1/468 and 17/29 = 1/2 + 1/12 + 1/348.
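In modern terms, the greedy method amounts to a few lines of exact rational arithmetic. A minimal sketch, assuming Python's fractions module (the function name is illustrative), reproduces both of Fibonacci's examples:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(x, y):
    # Repeatedly take the largest unit fraction not exceeding the remainder.
    # Termination follows because the remainder's numerator, (-y) mod x,
    # strictly decreases at each step.
    frac, denominators = Fraction(x, y), []
    while frac > 0:
        d = ceil(1 / frac)            # smallest d with 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

print(greedy_egyptian(4, 13))    # [4, 18, 468]
print(greedy_egyptian(17, 29))   # [2, 12, 348]
```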
Compared to ancient Egyptian expansions or to more modern methods, this method may produce expansions that are quite long, with large denominators, and Fibonacci himself noted the awkwardness of the expansions produced by this method. For instance, the greedy method expands 5/121 = 1/25 + 1/757 + 1/763309 + 1/873960180913 + 1/1527612795642093418846225, while other methods lead to the shorter expansion 5/121 = 1/33 + 1/121 + 1/363.
Sylvester's sequence 2, 3, 7, 43, 1807, ... can be viewed as generated by an infinite greedy expansion of this type for the number 1, where at each step we choose the denominator floor(y/x) + 1 instead of ceil(y/x), and sometimes Fibonacci's greedy algorithm is attributed to James Joseph Sylvester.
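A short sketch can generate the sequence from its closed recurrence s(k+1) = s(k)^2 - s(k) + 1, which is equivalent to the greedy choice just described; this is illustrative only:

```python
def sylvester(n):
    # First n terms of Sylvester's sequence: 2, 3, 7, 43, 1807, ...
    # Each term is one more than the product of all previous terms.
    terms, s = [], 2
    for _ in range(n):
        terms.append(s)
        s = s * s - s + 1
    return terms

print(sylvester(5))  # [2, 3, 7, 43, 1807]
```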
After his description of the greedy algorithm, Fibonacci suggests yet another method, expanding a fraction a/b by searching for a number c having many divisors, with b/2 < c < b, replacing a/b by ac/bc, and expanding ac as a sum of divisors of bc, similar to the method proposed by Hultsch and Bruins to explain some of the expansions in the Rhind papyrus.
Modern number theory
Although Egyptian fractions are no longer used in most practical applications of mathematics, modern number theorists have continued to study many different problems related to them. These include problems of bounding the length or maximum denominator in Egyptian fraction representations, finding expansions of certain special forms or in which the denominators are all of some special type, the termination of various methods for Egyptian fraction expansion, and showing that expansions exist for any sufficiently dense set of sufficiently smooth numbers.
One of the earliest publications of Paul Erdős proved that it is not possible for a harmonic progression to form an Egyptian fraction representation of an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator. The latest publication of Erdős, nearly 20 years after his death, proves that every integer has a representation in which all denominators are products of three primes.
The Erdős–Graham conjecture in combinatorial number theory states that, if the integers greater than 1 are partitioned into finitely many subsets, then one of the subsets has a finite subset of itself whose reciprocals sum to one. That is, for every integer r and every r-coloring of the integers greater than one, there is a finite monochromatic subset S of these integers whose reciprocals sum to one. The conjecture was proven in 2003 by Ernest S. Croot III.
Znám's problem and primary pseudoperfect numbers are closely related to the existence of Egyptian fractions of the form 1/x1 + 1/x2 + ... + 1/xk + 1/(x1 x2 ... xk) = 1. For instance, the primary pseudoperfect number 1806 is the product of the prime numbers 2, 3, 7, and 43, and gives rise to the Egyptian fraction 1 = 1/2 + 1/3 + 1/7 + 1/43 + 1/1806.
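This example is straightforward to verify with exact arithmetic; a minimal check (illustrative only) might read:

```python
from fractions import Fraction

primes = [2, 3, 7, 43]
product = 2 * 3 * 7 * 43   # 1806, a primary pseudoperfect number
total = sum(Fraction(1, p) for p in primes) + Fraction(1, product)
assert total == 1          # 1/2 + 1/3 + 1/7 + 1/43 + 1/1806 = 1
```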
Egyptian fractions are normally defined as requiring all denominators to be distinct, but this requirement can be relaxed to allow repeated denominators. However, this relaxed form of Egyptian fractions does not allow for any number to be represented using fewer fractions, as any expansion with repeated fractions can be converted to an Egyptian fraction of equal or smaller length by repeated application of the replacement 1/k + 1/k = 2/(k + 1) + 2/(k(k + 1)) if k is odd, or simply by replacing 1/k + 1/k by 2/k if k is even. This result was first proven by T. Takenouchi in 1921.
Graham and Jewett proved that it is similarly possible to convert expansions with repeated denominators to (longer) Egyptian fractions, via the replacement 1/k + 1/k = 1/k + 1/(k + 1) + 1/(k(k + 1)). This method can lead to long expansions with large denominators. Botts had originally used this replacement technique to show that any rational number has Egyptian fraction representations with arbitrarily large minimum denominators.
Any fraction x/y has an Egyptian fraction representation in which the maximum denominator is bounded by O(y (log y)^2 / log log y), and a representation with at most O(sqrt(log y)) terms. The number of terms must sometimes be at least proportional to log log y; for instance, this is true for the fractions in the sequence 1/2, 2/3, 6/7, 42/43, 1806/1807, ..., whose denominators form Sylvester's sequence. It has been conjectured that O(log log y) terms are always enough. It is also possible to find representations in which both the maximum denominator and the number of terms are small.
Graham characterized the numbers that can be represented by Egyptian fractions in which all denominators are nth powers. In particular, a rational number q can be represented as an Egyptian fraction with square denominators if and only if q lies in one of the two half-open intervals [0, pi^2/6 - 1) and [1, pi^2/6).
Martin showed that any rational number has very dense expansions, using a constant fraction of the denominators up to N for any sufficiently large N.
Engel expansion, sometimes called an Egyptian product, is a form of Egyptian fraction expansion in which each denominator is a multiple of the previous one: x = 1/a1 + 1/(a1 a2) + 1/(a1 a2 a3) + .... In addition, the sequence of multipliers ai is required to be nondecreasing. Every rational number has a finite Engel expansion, while irrational numbers have an infinite Engel expansion.
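Engel expansions of rationals can be computed with the standard recurrence u(1) = x, a(k) = ceiling of 1/u(k), u(k+1) = u(k)a(k) - 1, stopping when u reaches zero. A minimal sketch, with an invented function name, follows:

```python
from fractions import Fraction
from math import ceil

def engel_expansion(x):
    # Multipliers a1 <= a2 <= ... with
    # x = 1/a1 + 1/(a1*a2) + 1/(a1*a2*a3) + ...
    u, multipliers = Fraction(x), []
    while u > 0:
        a = ceil(1 / u)
        multipliers.append(a)
        u = u * a - 1
    return multipliers

print(engel_expansion(Fraction(3, 8)))   # [3, 8]: 3/8 = 1/3 + 1/(3*8)
```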
Anshel and Goldfeld studied numbers that have multiple distinct Egyptian fraction representations with the same number of terms and the same product of denominators. Unlike the ancient Egyptians, they allow denominators to be repeated in these expansions. They apply their results for this problem to the characterization of free products of Abelian groups by a small number of numerical parameters: the rank of the commutator subgroup, the number of terms in the free product, and the product of the orders of the factors.
The number of different n-term Egyptian fraction representations of the number one is bounded above and below by double exponential functions of n.
Open problems
Some notable problems remain unsolved with regard to Egyptian fractions, despite considerable effort by mathematicians.
The Erdős–Straus conjecture concerns the length of the shortest expansion for a fraction of the form 4/n. Does an expansion 4/n = 1/x + 1/y + 1/z exist for every n >= 2? It is known to be true for all n up to 10^17, and for all but a vanishingly small fraction of possible values of n, but the general truth of the conjecture remains unknown.
It is unknown whether an odd greedy expansion exists for every fraction with an odd denominator. If Fibonacci's greedy method is modified so that it always chooses the smallest possible odd denominator, under what conditions does this modified algorithm produce a finite expansion? An obvious necessary condition is that the starting fraction x/y have an odd denominator y, and it is conjectured but not known that this is also a sufficient condition (a sketch of the modified algorithm appears below). It is known that every x/y with odd y has an expansion into distinct odd unit fractions, constructed using a different method than the greedy algorithm.
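The sketch below is illustrative only: the term cap reflects the fact that termination is precisely what is unknown, the tie-breaking rule used to keep denominators distinct is one choice among several, and the function name is invented.

```python
from fractions import Fraction
from math import ceil

def odd_greedy(x, y, max_terms=100):
    # Always pick the smallest odd denominator whose unit fraction fits,
    # bumping past the previous denominator to keep the terms distinct.
    frac, denominators = Fraction(x, y), []
    while frac > 0:
        if len(denominators) >= max_terms:
            raise RuntimeError("no expansion found within the term limit")
        d = ceil(1 / frac)
        if d % 2 == 0:
            d += 1                                   # next odd candidate
        if denominators and d <= denominators[-1]:
            d = denominators[-1] + 2                 # enforce distinctness
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

print(odd_greedy(3, 7))   # [3, 11, 231]
```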
It is possible to use brute-force search algorithms to find the Egyptian fraction representation of a given number with the fewest possible terms or minimizing the largest denominator; however, such algorithms can be quite inefficient. The existence of polynomial time algorithms for these problems, or more generally the computational complexity of such problems, remains unknown.
Richard Guy's Unsolved Problems in Number Theory describes these problems in more detail and lists numerous additional open problems.
| Mathematics | Basics | null |
336557 | https://en.wikipedia.org/wiki/Blood%20test | Blood test | A blood test is a laboratory analysis performed on a blood sample that is usually extracted from a vein in the arm using a hypodermic needle, or via fingerprick. Multiple tests for specific blood components, such as a glucose test or a cholesterol test, are often grouped together into one test panel called a blood panel or blood work. Blood tests are often used in health care to determine physiological and biochemical states, such as disease, mineral content, pharmaceutical drug effectiveness, and organ function. Typical clinical blood panels include a basic metabolic panel or a complete blood count. Blood tests are also used in drug tests to detect drug abuse.
Extraction
A venipuncture is useful as it is a minimally invasive way to obtain cells and extracellular fluid (plasma) from the body for analysis. Blood flows throughout the body, acting as a medium that provides oxygen and nutrients to tissues and carries waste products back to the excretory systems for disposal. Consequently, the state of the bloodstream affects, or is affected by, many medical conditions. For these reasons, blood tests are the most commonly performed medical tests.
If only a few drops of blood are needed, a fingerstick is performed instead of a venipuncture.
Indwelling arterial, central venous and peripheral venous lines can also be used to draw blood.
Phlebotomists, laboratory practitioners, and nurses are typically in charge of extracting blood from a patient. However, in special circumstances or emergency situations, paramedics and physicians may extract the blood. Also, respiratory therapists are trained to extract arterial blood to examine arterial blood gases.
Types of tests
Biochemical analysis
A basic metabolic panel measures sodium, potassium, chloride, bicarbonate, blood urea nitrogen (BUN), magnesium, creatinine, glucose, and sometimes calcium. Tests that focus on cholesterol levels can determine LDL and HDL cholesterol levels, as well as triglyceride levels.
Some tests, such as those that measure glucose or a lipid profile, require fasting (or no food consumption) eight to twelve hours prior to the drawing of the blood sample.
For the majority of tests, blood is usually obtained from the patient's vein. Other specialized tests, such as the arterial blood gas test, require blood extracted from an artery. Blood gas analysis of arterial blood is primarily used to monitor carbon dioxide and oxygen levels related to pulmonary function, but is also used to measure blood pH and bicarbonate levels for certain metabolic conditions.
While the regular glucose test is taken at a certain point in time, the glucose tolerance test involves repeated testing to determine the rate at which glucose is processed by the body.
Blood tests are also used to identify autoimmune diseases and Immunoglobulin E-mediated food allergies (see also Radioallergosorbent test).
Normal ranges
Blood test results should always be interpreted using the reference ranges provided by the laboratory that performed the test. Example ranges are shown below.
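As a toy illustration of this rule, the comparison against a lab-supplied interval is a one-line check; the function, its name, and the example interval below are hypothetical, not clinical reference values:

```python
def flag_result(value, low, high):
    # Compare one analyte value to the reference interval supplied by the
    # laboratory that performed the test; the interval must come from that lab.
    if value < low:
        return "below reference interval"
    if value > high:
        return "above reference interval"
    return "within reference interval"

# Hypothetical example: a fasting glucose of 5.1 mmol/L against
# an assumed lab interval of 3.9-5.5 mmol/L.
print(flag_result(5.1, 3.9, 5.5))   # within reference interval
```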
Common abbreviations
Upon completion of a blood test analysis, patients may receive a report with blood test abbreviations. Examples of common blood test abbreviations are shown below.
Molecular profiles
Protein electrophoresis (general technique—not a specific test)
Western blot (general technique—not a specific test)
Liver function tests
Polymerase chain reaction (DNA). DNA profiling is today possible with even very small quantities of blood: this is commonly used in forensic science, but is now also part of the diagnostic process of many disorders.
Northern blot (RNA)
Sexually transmitted diseases
Cellular evaluation
Full blood count (or "Complete Blood Count")
Hematocrit
MCV ("Mean Corpuscular Volume")
Mean corpuscular hemoglobin concentration (MCHC)
Erythrocyte sedimentation rate (ESR)
Cross-matching. Determination of blood type for blood transfusion or transplants
Blood cultures are commonly taken if infection is suspected. Positive cultures and resulting sensitivity results are often useful in guiding medical treatment.
Future alternatives
Saliva tests
In 2008, scientists announced that more cost-effective saliva testing could eventually replace some blood tests, as saliva contains 20% of the proteins found in blood. Saliva testing may not be appropriate or available for all markers. For example, lipid levels cannot be measured with saliva testing.
Microemulsion
In February 2011, Canadian researchers at the University of Calgary's Schulich School of Engineering announced a microchip for blood tests. Dubbed a microemulsion, the chip captures a droplet of blood inside a layer of another substance and can control the exact size and spacing of the droplets. The new test could improve the efficiency, accuracy, and speed of laboratory tests while also reducing their cost.
SIMBAS
In March 2011, a team of researchers from UC Berkeley, DCU, and the University of Valparaíso developed a lab-on-a-chip that can diagnose diseases within 10 minutes without the use of external tubing and extra components. It is called the Self-powered Integrated Microfluidic Blood Analysis System (SIMBAS). It uses tiny trenches to separate blood cells from plasma (99 percent of blood cells were captured during experiments). The researchers used plastic components to reduce manufacturing costs.
| Biology and health sciences | Medical procedures | null |
336806 | https://en.wikipedia.org/wiki/Orthodontics | Orthodontics | Orthodontics is a dentistry specialty that addresses the diagnosis, prevention, management, and correction of mal-positioned teeth and jaws, as well as misaligned bite patterns. It may also address the modification of facial growth, known as dentofacial orthopedics.
Abnormal alignment of the teeth and jaws is very common; the worldwide prevalence of malocclusion has been estimated to be as high as 56%. However, conclusive scientific evidence for the health benefits of orthodontic treatment is lacking, although patients who have completed orthodontic treatment report a higher quality of life than untreated patients. The main reason suggested for the prevalence of these malocclusions is childhood diets with less fresh fruit and vegetables and overall softer foods, causing smaller jaws with less room for the teeth to erupt. Treatment may require several months to a few years and entails using dental braces and other appliances to gradually adjust tooth position and jaw alignment. In cases where the malocclusion is severe, jaw surgery may be incorporated into the treatment plan. Treatment usually begins before a person reaches adulthood, insofar as pre-adult bones may be adjusted more easily before adulthood.
History
Since the dawn of the human race, individuals have been grappling with the issue of overcrowded, irregular, and protruding teeth. Evidence from Greek and Etruscan materials suggests that attempts to treat this disorder date back to 1000 BC, showcasing primitive yet impressively well-crafted orthodontic appliances. In the 18th and 19th centuries, a range of devices for the "regulation" of teeth were described by various dentistry authors who occasionally put them into practice. As a modern science, orthodontics dates back to the mid-1800s. The field's influential contributors include Norman William Kingsley (1829–1913) and Edward Angle (1855–1930). Angle created the first basic system for classifying malocclusions, a system that remains in use today.
Beginning in the mid-1800s, Norman Kingsley published Oral Deformities, which is now credited as one of the first works to begin systematically documenting orthodontics. Being a major presence in American dentistry during the latter half of the 19th century, not only was Kingsley one of the early users of extraoral force to correct protruding teeth, but he was also one of the pioneers for treating cleft palates and associated issues.
During the era of orthodontics under Kingsley and his colleagues, the treatment was focused on straightening teeth and creating facial harmony. Ignoring occlusal relationships, it was typical to remove teeth for a variety of dental issues, such as malalignment or overcrowding. The concept of an intact dentition was not widely appreciated in those days, making bite correlations seem irrelevant.
In the late 1800s, the concept of occlusion was essential for creating reliable prosthetic replacement teeth. This idea was further refined and ultimately applied in various ways when dealing with healthy dental structures as well. As these concepts of prosthetic occlusion progressed, it became an invaluable tool for dentistry.
It was in 1890 that the work and impact of Dr. Edward H. Angle began to be felt, with his contribution to modern orthodontics particularly noteworthy. Initially focused on prosthodontics, he taught in Pennsylvania and Minnesota before directing his attention towards dental occlusion and the treatments needed to maintain it as a normal condition, thus becoming known as the "father of modern orthodontics".
By the beginning of the 20th century, orthodontics had become more than just the straightening of crooked teeth. The concept of ideal occlusion, as postulated by Angle and incorporated into a classification system, enabled a shift towards treating malocclusion, which is any deviation from normal occlusion. Having a full set of teeth on both arches was highly sought after in orthodontic treatment due to the need for exact relationships between them. Extraction as an orthodontic procedure was heavily opposed by Angle and those who followed him. As occlusion became the key priority, facial proportions and aesthetics were neglected; Angle postulated that perfect occlusion, achieved without extractions or external forces, was the best way to gain optimum facial aesthetics.
With the passing of time, it became quite evident that even an exceptional occlusion was not suitable when considered from an aesthetic point of view. Not only were there issues related to aesthetics, but it usually proved impossible to maintain a precise occlusal relationship achieved by forcing teeth together over extended durations with the use of robust elastics, something Angle and his students had previously suggested. Charles Tweed in America and Raymond Begg in Australia (who both studied under Angle) re-introduced tooth extraction into orthodontics during the 1940s and 1950s so they could improve facial esthetics while also ensuring better stability of occlusal relationships.
In the postwar period, cephalometric radiography started to be used by orthodontists for measuring changes in tooth and jaw position caused by growth and treatment. The x-rays showed that many Class II and III malocclusions were due to improper jaw relations as opposed to misaligned teeth. It became evident that orthodontic therapy could adjust mandibular development, leading to the formation of functional jaw orthopedics in Europe and extraoral force measures in the US. These days, both functional appliances and extraoral devices are applied around the globe with the aim of amending growth patterns and forms. Consequently, pursuing true, or at least improved, jaw relationships had become the main objective of treatment by the mid-20th century.
At the beginning of the twentieth century, orthodontics was in need of an upgrade. The American Journal of Orthodontics was created for this purpose in 1915; before it, there were no scientific objectives to follow, no precise classification system, and brackets lacked features.
Until the mid-1970s, braces were made by wrapping metal around each tooth. With advancements in adhesives, it became possible to instead bond metal brackets to the teeth.
In 1972, Lawrence F. Andrews gave an insightful definition of the ideal occlusion in permanent teeth, which has had meaningful effects on routinely administered orthodontic treatment. His six criteria are: 1. correct interarchal relationships; 2. correct crown angulation (tip); 3. correct crown inclination (torque); 4. no rotations; 5. tight contact points; and 6. a flat Curve of Spee (0.0–2.5 mm). Based on these principles, he developed a treatment system called the straight-wire appliance system, or the pre-adjusted edgewise system. Introduced in 1976, Larry Andrews' pre-adjusted edgewise appliance, more commonly known as the straight wire appliance, has since revolutionized fixed orthodontic treatment. The advantage of the design lies in its bracket and archwire combination, which requires only minimal wire bending from the orthodontist or clinician. It is aptly named after this feature: the angle of the slot and the thickness of the bracket base ultimately determine where each tooth is situated, with little need for extra manipulation.
Prior to the invention of a straight wire appliance, orthodontists were utilizing a non-programmed standard edgewise fixed appliance system, or Begg's pin and tube system. Both of these systems employed identical brackets for each tooth and necessitated the bending of an archwire in three planes for locating teeth in their desired positions, with these bends dictating ultimate placements.
Evolution of the current orthodontic appliances
When it comes to orthodontic appliances, they are divided into two types: removable and fixed. Removable appliances can be taken on and off by the patient as required. On the other hand, fixed appliances cannot be taken off as they remain bonded to the teeth during treatment.
Fixed appliances
Fixed orthodontic appliances are predominantly derived from the edgewise appliance approach, which typically begins with round wires before transitioning to rectangular archwires for improving tooth alignment. These rectangular wires promote precision in the positioning of teeth following initial treatment. In contrast, the Begg appliance was based solely on round wires and auxiliary springs. The Tip-Edge system, which emerged in the early 21st century, allowed for the utilization of rectangular archwires to precisely control tooth movement during the finishing stages after initial treatment with round wires. Thus, almost all modern fixed appliances can be considered variations on the edgewise appliance system.
Early 20th-century orthodontist Edward Angle made a major contribution to the world of dentistry. He created four distinct appliance systems that have been used as the basis for many orthodontic treatments today, barring a few exceptions. They are E-arch, pin and tube, ribbon arch, and edgewise systems.
E-arch
Edward H. Angle made a significant contribution to the dental field when he released the 7th edition of his book in 1907, which outlined his theories and detailed his technique. This approach was founded upon the iconic "E-Arch" or 'the-arch' shape as well as inter-maxillary elastics. This device was different from any other appliance of its period as it featured a rigid framework to which teeth could be tied effectively in order to recreate an arch form that followed pre-defined dimensions. Molars were fitted with braces, and a powerful labial archwire was positioned around the arch. The wire ended in a thread, and to move it forward, an adjustable nut was used, which allowed for an increase in circumference. By ligation, each individual tooth was attached to this expansive archwire.
Pin and tube appliance
Due to its limited range of motion, Angle was unable to achieve precise tooth positioning with an E-arch. In order to bypass this issue, he started using bands on other teeth combined with a vertical tube for each individual tooth. These tubes held a soldered pin, which could be repositioned at each appointment in order to move them in place. Dubbed the "bone-growing appliance", this contraption was theorized to encourage healthier bone growth due to its potential for transferring force directly to the roots. However, implementing it proved troublesome in reality.
Ribbon arch
Realizing that the pin and tube appliance was not easy to control, Angle developed a better option, the ribbon arch, which was much simpler to use. Most of its components were already prepared by the manufacturer, so it was significantly easier to manage than before. In order to attach the ribbon arch, the occlusal area of the bracket was opened. Brackets were only added to the eight incisors and the mandibular canines, as it would be impossible to insert the arch into both horizontal molar tubes and the vertical brackets of adjacent premolars. This limitation posed a considerable challenge to dental professionals, as it left them unable to correct an excessive Curve of Spee in the bicuspid region; despite the complexity of the situation, practitioners had to find a resolution. What made the ribbon arch instantly popular was that its archwire had remarkable spring qualities and could be utilized to accurately align misaligned teeth. However, a major drawback of this device was its inability to effectively control root position, since the archwire did not have enough resilience to generate the torque movements required for setting roots in their new positions.
Edgewise appliance
In an effort to rectify the issues with the ribbon arch, Angle shifted the orientation of its slot from vertical to horizontal. In addition, he swapped out the wire, replacing it with a precious metal wire rotated by 90 degrees in relation to the slot, henceforth known as Edgewise. Following extensive trials, it was concluded that slot dimensions of 22 × 28 mils were optimal for obtaining excellent control over crown and root positioning across all three planes of space. After debuting in 1928, this appliance quickly became one of the mainstays of multibanded fixed therapy, although ribbon arches continued to be utilized for another decade or so beyond this point.
Labiolingual
Prior to Angle, the idea of fitting attachments on individual teeth had not been thought of, and in his lifetime his concern for precisely positioning each tooth was not widely appreciated. In addition to using fingersprings for repositioning teeth with a range of removable devices, two main appliance systems were very popular in the early part of the 20th century. Labiolingual appliances use bands on the first molars joined with heavy lingual and labial archwires affixed with soldered fingersprings to shift single teeth.
Twin wire
Utilizing bands around both incisors and molars, the twin-wire appliance was designed to provide alignment between these teeth. Constructed with two 10-mil steel archwires, its delicate features were safeguarded by lengthy tubes stretching from the molars towards the canines. Despite this, it had limited capacity for movement without further modifications, rendering it obsolete in modern orthodontic practice.
Begg's Appliance
Returning to Australia in the 1920s, the renowned orthodontist, Raymond Begg, applied his knowledge of ribbon arch appliances, which he had learned from the Angle School. On top of this, Begg recognized that extracting teeth was sometimes vital for successful outcomes and sought to modify the ribbon arch appliance to provide more control when dealing with root positioning.
In the late 1930s, Begg developed his adaptation of the appliance, which took three forms. Firstly, a high-strength 16-mil round stainless steel wire replaced the original precious metal ribbon arch. Secondly, he kept the same ribbon arch bracket but inverted it so that it pointed toward the gums instead of away from them. Lastly, auxiliary springs were added to control root movement. This resulted in what would come to be known as the Begg Appliance. With this design, friction was decreased since contact between wire and bracket was minimal, and binding was minimized due to tipping and uprighting being used for anchorage control, which lessened contact angles between wires and corners of the bracket.
Tip-Edge System
Begg's influence is still seen in modern appliances, such as Tip-Edge brackets. This type of bracket incorporates a rectangular slot cutaway on one side to allow for crown tipping with no incisal deflection of an archwire, allowing teeth to be tipped during space closure and then uprighted through auxiliary springs or even a rectangular wire for torque purposes in finishing. At the initial stages of treatment, small-diameter steel archwires should be used when working with Tip-Edge brackets.
Contemporary edgewise systems
Throughout time, there has been a shift in which appliances are favored by dentists. In particular, during the 1960s the Begg appliance gained wide popularity due to its efficiency compared to edgewise appliances of that era; it could produce the same results with less investment on the dentist's part. Since then, however, advances in the technology and sophistication of edgewise appliances have led to the opposite conclusion: nowadays, edgewise appliances are more efficient than the Begg appliance, which explains why they are commonly used.
Automatic rotational control
At the beginning, Angle attached eyelets to the edges of archwires so that they could be held with ligatures and help manage rotations. Now, however, no extra ligature is needed due to either twin brackets or single brackets that have added wings touching underneath the wire (Lewis or Lang brackets). Both types of brackets simplify the process of obtaining moments that control movements along a particular plane of space.
Alteration in bracket slot dimensions
In modern dentistry, two types of edgewise appliances exist: the 18- and 22-mil slot varieties. While these appliances are used differently, the introduction of a 20-mil device with more precise features has been considered but not yet pursued.
Straight-wire bracket prescriptions
Rather than rely on the same bracket for all teeth, L.F. Andrews found a way to make different brackets for each tooth in the 1980s, thanks to the increased convenience of bonding. This adjustment enabled him to avoid having multiple bends in archwires that would have been needed to make up for variations in tooth anatomy. Ultimately, this led to what was termed a "straight-wire appliance" system – an edgewise appliance that greatly enhanced its efficiency. The modern edgewise appliance has slightly different construction than the original one. Instead of relying on faciolingual bends to accommodate variations among teeth, each bracket has a correspondingly varying base thickness depending on the tooth it is intended for. However, due to individual differences between teeth, this does not completely eliminate the need for compensating bends. Accurately placing the roots of many teeth requires angling brackets in relation to the long axis of the tooth. Traditionally, this mesiodistal root positioning necessitated using second-order, or tip, bends along the archwire. However, angling the bracket or bracket slot eliminates this need for bends.
Given the discrepancies in inclination of the facial surfaces of individual teeth, the edgewise appliance initially required placing a twist, otherwise known as third-order or torque bends, into segments of each rectangular archwire. These bends were necessary on all patients and wires, not just when moving roots facially or lingually but also to avoid unintentional movement of suitably placed teeth. Angulation of either brackets or slots can minimize the need for second-order, or tip, bends on archwires. Contemporary edgewise appliances come with brackets designed to adjust for facial inclinations, thereby eliminating or reducing any third-order bends. These brackets have angulation and torque values built in, so that each rectangular archwire can be fitted without inadvertently shifting any correctly positioned teeth. Without bracket angulation and torque, second-order or tip bends would still be required on each patient's archwire.
Methods
A typical treatment for incorrectly positioned teeth (malocclusion) takes from one to two years, with braces adjusted every four to ten weeks by orthodontists, university-trained dental specialists versed in the prevention, diagnosis, and treatment of dental and facial irregularities. Orthodontists offer a wide range of treatment options to straighten crooked teeth, fix irregular bites, and align the jaws correctly. There are many ways to adjust malocclusion. In growing patients, there are more options to treat skeletal discrepancies, either by promoting or restricting growth using functional appliances, orthodontic headgear, or a reverse-pull facemask. Most orthodontic work begins in the early permanent dentition stage, before skeletal growth is completed. If skeletal growth has completed, jaw surgery is an option. Sometimes teeth are extracted to aid the orthodontic treatment (teeth are extracted in about half of all cases, most commonly the premolars).
Orthodontic therapy may include the use of fixed or removable appliances. Most orthodontic therapy is delivered using appliances that are fixed in place, for example, braces that are adhesively bonded to the teeth. Fixed appliances may provide greater mechanical control of the teeth; optimal treatment outcomes are improved by using fixed appliances.
Fixed appliances may be used, for example, to rotate teeth if they do not fit the arch shape of the other teeth in the mouth, to adjust multiple teeth to different places, to change the tooth angle of teeth, or to change the position of a tooth's root. This treatment course is not preferred where a patient has poor oral hygiene (as decalcification, tooth decay, or other complications may result), where a patient is unmotivated (insofar as treatment takes several months and requires commitment to oral hygiene), or where malocclusions are mild.
The biology of tooth movement, and how advances in gene therapy and molecular biology technology may shape the future of orthodontic treatment, are active areas of research.
Braces
Braces are usually placed on the front side of the teeth, but they may also be placed on the side facing the tongue (called lingual braces). Brackets made out of stainless steel or porcelain are bonded to the center of the teeth using an adhesive. Wires are placed in a slot in the brackets, which allows for controlled movement in all three dimensions.
Apart from wires, forces can be applied using elastic bands, and springs may be used to push teeth apart or to close a gap. Several teeth may be tied together with ligatures, and different kinds of hooks can be placed to allow for connecting an elastic band.
Clear aligners are an alternative to braces, but insufficient evidence exists to determine their effectiveness.
Treatment duration
The time required for braces varies from person to person as it depends on the severity of the problem, the amount of room available, the distance the teeth must travel, the health of the teeth, gums, and supporting bone, and how closely the patient follows instructions. On average, however, once the braces are put on, they usually remain in place for one to three years. After braces are removed, most patients will need to wear a retainer all the time for the first six months, then only during sleep for many years.
Headgear
Orthodontic headgear, sometimes referred to as an "extra-oral appliance", is a treatment approach that requires the patient to have a device strapped onto their head to help correct malocclusion—typically used when the teeth do not align properly. Headgear is most often used along with braces or other orthodontic appliances. While braces correct the position of teeth, orthodontic headgear—which, as the name suggests, is worn on or strapped onto the patient's head—is most often added to orthodontic treatment to help alter the alignment of the jaw, although there are some situations in which such an appliance can help move teeth, particularly molars.
Whatever the purpose, orthodontic headgear works by exerting tension on the braces via hooks, a facebow, coils, elastic bands, metal orthodontic bands, and other appliances attached directly in the patient's mouth. It is most effective for children and teenagers because their jaws are still developing and can be more easily manipulated. (If an adult is fitted with headgear, it is usually to help correct the position of teeth that have shifted after other teeth have been extracted.) Thus, headgear is typically used to treat a number of jaw alignment or bite problems, such as overbite and underbite.
Palatal expansion
Palatal expansion can be best achieved using a fixed tissue-borne appliance. Removable appliances can push teeth outward but are less effective at maxillary sutural expansion. The effects of a removable expander may look the same as they push teeth outward, but they should not be confused with actually expanding the palate. Proper palate expansion can create more space for teeth as well as improve both oral and nasal airflow.
Jaw surgery
Jaw surgery may be required to fix severe malocclusions. The bone is broken during surgery and stabilized with titanium (or bioresorbable) plates and screws to allow for healing to take place. After surgery, regular orthodontic treatment is used to move the teeth into their final position.
During treatment
To reduce pain during the orthodontic treatment, low-level laser therapy (LLLT), vibratory devices, chewing adjuncts, brainwave music, or cognitive behavioral therapy can be used. However, the supporting evidence is of low quality, and the results are inconclusive.
Post treatment
After orthodontic treatment has been completed, there is a tendency for teeth to return, or relapse, back to their pre-treatment positions. Over 50% of patients have some reversion to pre-treatment positions within 10 years following treatment. To prevent relapse, the majority of patients will be offered a retainer once treatment has been completed and will benefit from wearing their retainers. Retainers can be either fixed or removable.
Removable retainers
Removable retainers are made from clear plastic and are custom-fitted for the patient's mouth. They fit tightly and hold all of the teeth in position. There are many brands of clear retainers, including the Zendura Retainer, the Essix Retainer, and the Vivera Retainer. A Hawley retainer is also a removable orthodontic appliance, made from a combination of plastic and metal that is custom-molded to fit the patient's mouth. Removable retainers will be worn for different periods of time, depending on the patient's need to stabilize the dentition.
Fixed retainers
Fixed retainers are a simple wire fixed to the tongue-facing part of the incisors using dental adhesive and can be specifically useful to prevent rotation in incisors. Other types of fixed retainers can include labial or lingual braces, with brackets fixed to the teeth.
Clear aligners
Clear aligners are another form of orthodontics commonly used today, involving removable plastic trays. There has been controversy about the effectiveness of aligners such as Invisalign or Byte; some consider them to be faster and less restrictive than the alternatives.
Training
There are several specialty areas in dentistry, but the specialty of orthodontics was the first to be recognized within dentistry. Specifically, the American Dental Association recognized orthodontics as a specialty in the 1950s. Each country has its own system for training and registering orthodontic specialists.
Australia
In Australia, to obtain an accredited three-year full-time university degree in orthodontics, one needs to be a qualified dentist (having completed an AHPRA-registered general dental degree) with a minimum of two years of clinical experience. Several universities in Australia and New Zealand offer orthodontic programs: the University of Adelaide, the University of Melbourne, the University of Sydney, the University of Queensland, the University of Western Australia, and the University of Otago. Orthodontic courses are accredited by the Australian Dental Council and reviewed by the Australian Society of Orthodontists (ASO). Prospective applicants should obtain information from the relevant institution before applying for admission. After completing a degree in orthodontics, specialists are required to be registered with the Australian Health Practitioner Regulation Agency (AHPRA) in order to practice.
Bangladesh
Dhaka Dental College in Bangladesh is one of the many schools recognized by the Bangladesh Medical and Dental Council (BM&DC) that offer post-graduation orthodontic courses. Before applying to any post-graduation training courses, an applicant must have completed the Bachelor of Dental Surgery (BDS) examination from any dental college. After application, the applicant must take an admissions test held by the specific college. If successful, selected candidates undergo training for six months.
Canada
In Canada, obtaining a dental degree, such as a Doctor of Dental Surgery (DDS) or Doctor of Medical Dentistry (DMD), would be required before being accepted by a school for orthodontic training. Currently, there are 10 schools in the country offering the orthodontic specialty. Candidates should contact the individual school directly to obtain the most recent pre-requisites before entry. The Canadian Dental Association expects orthodontists to complete at least two years of post-doctoral, specialty training in orthodontics in an accredited program after graduating from their dental degree.
United States
Similar to Canada, there are several colleges and universities in the United States that offer orthodontic programs. Every school has a different enrollment process, but every applicant is required to have graduated with a DDS or DMD from an accredited dental school. Entrance into an accredited orthodontics program is extremely competitive and begins by passing a national or state licensing exam.
The program generally lasts two to three years, and by the final year graduates are required to complete the American Board of Orthodontics (ABO) examination, which is broken down into two components: a written exam and a clinical exam. The written exam is a comprehensive exam that tests the applicant's knowledge of basic sciences and clinical concepts. The clinical exam consists of a Board Case Oral Examination (BCOE), a Case Report Examination (CRE), and a Case Report Oral Examination (CROE). Once certified, orthodontists must renew their certification every ten years. Orthodontic programs can award a Master of Science degree, a Doctor of Science degree, or a Doctor of Philosophy degree, depending on the school and individual research requirements.
United Kingdom
Throughout the United Kingdom, there are several Orthodontic Specialty Training Registrar posts available. The program is full-time for three years, and upon completion, trainees graduate with a degree at the Masters or Doctorate level. Training may take place within hospital departments that are linked to recognized dental schools. Obtaining a Certificate of Completion of Specialty Training (CCST) allows an orthodontic specialist to be registered under the General Dental Council (GDC). An orthodontic specialist can provide care within a primary care setting, but to work at a hospital as an orthodontic consultant, higher-level training is further required as a post-CCST trainee. To work within a university setting as an academic consultant, completing research toward obtaining a Ph.D. is also required.
| Biology and health sciences | Fields of medicine | Health |
336880 | https://en.wikipedia.org/wiki/Ankle | Ankle | The ankle, the talocrural region or the jumping bone (informal) is the area where the foot and the leg meet. The ankle includes three joints: the ankle joint proper or talocrural joint, the subtalar joint, and the inferior tibiofibular joint. The movements produced at this joint are dorsiflexion and plantarflexion of the foot. In common usage, the term ankle refers exclusively to the ankle region. In medical terminology, "ankle" (without qualifiers) can refer broadly to the region or specifically to the talocrural joint.
The main bones of the ankle region are the talus (in the foot), the tibia, and fibula (both in the leg). The talocrural joint is a synovial hinge joint that connects the distal ends of the tibia and fibula in the lower limb with the proximal end of the talus. The articulation between the tibia and the talus bears more weight than that between the smaller fibula and the talus.
Structure
Region
The ankle region is found at the junction of the leg and the foot. It extends downwards (distally) from the narrowest point of the lower leg and includes the parts of the foot closer to the body (proximal) to the heel and upper surface (dorsum) of the foot.
Ankle joint
The talocrural joint is the only mortise and tenon joint in the human body, the term likening the skeletal structure to the woodworking joint of the same name. The bony architecture of the ankle consists of three bones: the tibia, the fibula, and the talus. The articular surface of the tibia may be referred to as the plafond (French for "ceiling"). The medial malleolus is a bony process extending distally off the medial tibia. The distal-most aspect of the fibula is called the lateral malleolus. Together, the malleoli, along with their supporting ligaments, stabilize the talus underneath the tibia.
Because the motion of the subtalar joint provides a significant contribution to positioning the foot, some authors describe it as the lower ankle joint, and call the talocrural joint the upper ankle joint. Dorsiflexion and plantarflexion are the movements that take place in the ankle joint. When the foot is plantar flexed, the ankle joint also allows some movements of side-to-side gliding, rotation, adduction, and abduction.
The bony arch formed by the tibial plafond and the two malleoli is referred to as the ankle "mortise" (or talar mortise). The mortise is a rectangular socket. The ankle is composed of three joints: the talocrural joint (also called the talotibial, tibiotalar, or talar joint), the subtalar joint (also called the talocalcaneal joint), and the inferior tibiofibular joint. The joint surfaces of all bones in the ankle are covered with articular cartilage.
The distances between the bones in the ankle are as follows:
Talus - medial malleolus : 1.70 ± 0.13 mm
Talus - tibial plafond: 2.04 ± 0.29 mm
Talus - lateral malleolus: 2.13 ± 0.20 mm
Decreased distances indicate osteoarthritis.
Ligaments
The ankle joint is bound by the strong deltoid ligament and three lateral ligaments: the anterior talofibular ligament, the posterior talofibular ligament, and the calcaneofibular ligament.
The deltoid ligament supports the medial side of the joint; it is attached at the medial malleolus of the tibia and connects in four places: to the talar shelf of the calcaneus, the calcaneonavicular ligament, the navicular tuberosity, and the medial surface of the talus.
The anterior and posterior talofibular ligaments support the lateral side of the joint from the lateral malleolus of the fibula to the dorsal and ventral ends of the talus.
The calcaneofibular ligament is attached at the lateral malleolus and to the lateral surface of the calcaneus.
Though it does not span the ankle joint itself, the syndesmotic ligament makes an important contribution to the stability of the ankle. This ligament spans the syndesmosis, i.e. the articulation between the medial aspect of the distal fibula and the lateral aspect of the distal tibia. An isolated injury to this ligament is often called a high ankle sprain.
The bony architecture of the ankle joint is most stable in dorsiflexion. Thus, a sprained ankle is more likely to occur when the ankle is plantar-flexed, as ligamentous support is more important in this position. The classic ankle sprain involves the anterior talofibular ligament (ATFL), which is also the most commonly injured ligament during inversion sprains. Another ligament that can be injured in a severe ankle sprain is the calcaneofibular ligament.
Retinacula, tendons and their synovial sheaths, vessels, and nerves
A number of tendons pass through the ankle region. Bands of connective tissue called retinacula (singular: retinaculum) allow the tendons to exert force across the angle between the leg and foot without lifting away from the angle, a process called bowstringing.
The superior extensor retinaculum of the foot extends between the anterior (forward) surfaces of the tibia and fibula near their lower (distal) ends. It contains the anterior tibial artery and vein, the tendon of the tibialis anterior muscle within its tendon sheath, and the unsheathed tendons of the extensor hallucis longus and extensor digitorum longus muscles. The deep peroneal nerve passes under the retinaculum, while the superficial peroneal nerve is outside of it. The inferior extensor retinaculum of the foot is a Y-shaped structure. Its lateral attachment is on the calcaneus, and the band travels towards the anterior tibia, where it is attached and blends with the superior extensor retinaculum. Along that course, the band divides, and another segment attaches to the plantar aponeurosis. The tendons which pass through the superior extensor retinaculum are all sheathed along their paths through the inferior extensor retinaculum, and the tendon of the fibularis tertius muscle is also contained within the retinaculum.
The flexor retinaculum of the foot extends from the medial malleolus to the medial process of the calcaneus, and holds the following structures, in order from medial to lateral: the tendon of the tibialis posterior muscle, the tendon of the flexor digitorum longus muscle, the posterior tibial artery and vein, the tibial nerve, and the tendon of the flexor hallucis longus muscle.
The fibular retinacula hold the tendons of the fibularis longus and fibularis brevis along the lateral aspect of the ankle region. The superior fibular retinaculum extends from the deep transverse fascia of the leg and the lateral malleolus to the calcaneus. The inferior fibular retinaculum is a continuous extension from the inferior extensor retinaculum to the calcaneus.
Mechanoreceptors
Mechanoreceptors of the ankle send proprioceptive sensory input to the central nervous system (CNS). Muscle spindles are thought to be the main type of mechanoreceptor responsible for proprioceptive attributes from the ankle. The muscle spindle gives feedback to the CNS on the current length of the muscle it innervates and on any change in length that occurs.
It was hypothesized that muscle spindle feedback from the ankle dorsiflexors played the most substantial role in proprioception relative to the other muscular receptors that cross the ankle joint. However, due to the multi-planar range of motion at the ankle joint, no single group of muscles is responsible for this. This helps to explain the relationship between the ankle and balance.
In 2011, a relationship between proprioception of the ankle and balance performance was observed in the CNS, by using an fMRI machine to measure changes in brain activity when the receptors of the ankle are stimulated. This implicates the ankle directly in the ability to balance. Further research is needed to determine to what extent the ankle affects balance.
Function
Historically, the role of the ankle in locomotion has been discussed by Aristotle and Leonardo da Vinci. There is no question that ankle push-off is a significant force in human gait, but how much energy is used in leg swing as opposed to advancing the whole-body center of mass is not clear.
Clinical significance
Traumatic injury
Of all major joints, the ankle is the most commonly injured. If the outside surface of the foot is twisted under the leg during weight bearing, the lateral ligament, especially the anterior talofibular portion, is subject to tearing (a sprain) as it is weaker than the medial ligament and it resists inward rotation of the talocrural joint.
Fractures
Imaging
The initial evaluation of suspected ankle pathology is usually by projectional radiography ("X-ray").
Varus or valgus deformity, if suspected, can be measured with the frontal tibiotalar surface angle (TTS), formed by the mid-longitudinal tibial axis (such as through a line bisecting the tibia at 8 and 13 cm above the tibial plafond) and the talar surface. An angle of less than 84 degrees is regarded as talipes varus, and an angle of more than 94 degrees is regarded as talipes valgus.
For ligamentous injury, there are three main landmarks on X-rays: The first is the tibiofibular clear space, the horizontal distance from the lateral border of the posterior tibial malleolus to the medial border of the fibula, with greater than 5 mm being abnormal. The second is tibiofibular overlap, the horizontal distance between the medial border of the fibula and the lateral border of the anterior tibial prominence, with less than 10 mm being abnormal. The final measurement is the medial clear space, the distance between the lateral aspect of the medial malleolus and the medial border of the talus at the level of the talar dome, with a measurement greater than 4 mm being abnormal. Loss of any of these normal anatomic spaces can indirectly reflect ligamentous injury or occult fracture, and can be followed by MRI or CT.
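As a toy illustration, the thresholds just described can be encoded directly. The sketch below is hypothetical and purely illustrative (the function name and structure are ours); it is not a diagnostic tool and assumes the measurements have already been taken correctly:

```python
def assess_ankle_radiograph(tts_angle_deg, tib_fib_clear_space_mm,
                            tib_fib_overlap_mm, medial_clear_space_mm):
    # Flag measurements falling outside the normal ranges described above.
    findings = []
    if tts_angle_deg < 84:
        findings.append("talipes varus (TTS angle < 84 degrees)")
    elif tts_angle_deg > 94:
        findings.append("talipes valgus (TTS angle > 94 degrees)")
    if tib_fib_clear_space_mm > 5:
        findings.append("abnormal tibiofibular clear space (> 5 mm)")
    if tib_fib_overlap_mm < 10:
        findings.append("abnormal tibiofibular overlap (< 10 mm)")
    if medial_clear_space_mm > 4:
        findings.append("abnormal medial clear space (> 4 mm)")
    return findings or ["all measurements within the described normal limits"]

print(assess_ankle_radiograph(89, 6.1, 8.5, 3.0))
```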
Abnormalities
Clubfoot or talipes equinovarus, which occurs in one to two of every 1,000 live births, involves multiple abnormalities of the foot. Equinus refers to the downward deflection of the ankle, and is named for walking on the toes in the manner of a horse. Toe-walking does not actually occur, however, because the equinus is accompanied by an inward rotation of the foot (varus deformity), which, if untreated, results in walking on the sides of the feet. Treatment may involve manipulation and casting or surgery.
Ankle joint equinus, usually seen in adults, refers to restricted ankle joint range of motion (ROM). Calf muscle stretching exercises are often helpful for increasing ankle dorsiflexion and are used to manage clinical symptoms resulting from ankle equinus.
Occasionally a human ankle has a ball-and-socket ankle joint and fusion of the talo-navicular joint.
History
The word ankle or ancle is common, in various forms, to Germanic languages, probably connected in origin with the Latin angulus, or Greek ἀγκύλος (ankylos), meaning bent.
Other animals
Evolution
It has been suggested that dexterous control of toes has been lost in favour of a more precise voluntary control of the ankle joint.
| Biology and health sciences | Skeletal system | Biology |
337102 | https://en.wikipedia.org/wiki/Biopsy | Biopsy | A biopsy is a medical test commonly performed by a surgeon, an interventional radiologist, or an interventional cardiologist. The process involves the extraction of sample cells or tissues for examination to determine the presence or extent of a disease. The tissue is then fixed, dehydrated, embedded, sectioned, stained and mounted before it is generally examined under a microscope by a pathologist; it may also be analyzed chemically. When an entire lump or suspicious area is removed, the procedure is called an excisional biopsy. An incisional biopsy or core biopsy samples a portion of the abnormal tissue without attempting to remove the entire lesion or tumor. When a sample of tissue or fluid is removed with a needle in such a way that cells are removed without preserving the histological architecture of the tissue cells, the procedure is called a needle aspiration biopsy. Biopsies are most commonly performed for insight into possible cancerous or inflammatory conditions.
History
The Arab physician Abulcasis (936–1013) developed one of the earliest diagnostic biopsies. He used a needle to puncture the thyroid and then characterized many types of goiter.
Etymology
The term biopsy reflects the Greek words βίος (bios), "life," and ὄψις (opsis), "a sight."
The French dermatologist Ernest Besnier introduced the word to the medical community in 1879.
Medical use
Cancer
When cancer is suspected, a variety of biopsy techniques can be applied. An excisional biopsy is an attempt to remove an entire lesion. When the specimen is evaluated, in addition to diagnosis, the amount of uninvolved tissue around the lesion (the surgical margin of the specimen) is examined to see if the disease has spread beyond the area biopsied. "Clear margins" or "negative margins" means that no disease was found at the edges of the biopsy specimen. "Positive margins" means that disease was found, and a wider excision may be needed, depending on the diagnosis.
When intact removal is not indicated for a variety of reasons, a wedge of tissue may be taken in an incisional biopsy. In some cases, a sample can be collected by devices that "bite" a sample. Needles of various sizes can collect tissue within their lumen (core biopsy), while smaller-diameter needles collect cells and cell clusters (fine-needle aspiration biopsy).
Pathologic examination of a biopsy can determine whether a lesion is benign or malignant, and can help differentiate between different types of cancer. In contrast to a biopsy that merely samples a lesion, a larger excisional specimen called a resection may come to a pathologist, typically from a surgeon attempting to eradicate a known lesion from a patient. For example, a pathologist would examine a mastectomy specimen, even if a previous nonexcisional breast biopsy had already established the diagnosis of breast cancer. Examination of the full mastectomy specimen would confirm the exact nature of the cancer (subclassification of tumor and histologic "grading") and reveal the extent of its spread (pathologic "staging").
Liquid biopsy
There are two types of liquid biopsy (neither is truly a biopsy, as both are blood tests that do not require sampling of tissue): circulating tumor cell assays and cell-free circulating tumor DNA tests. These methods provide a non-invasive alternative to repeat invasive biopsies to monitor cancer treatment, test available drugs against the circulating tumor cells, evaluate the mutations in cancer and plan individualized treatments. Because cancer is a heterogeneous genetic disease, and excisional biopsies provide only a snapshot in time of some of the rapid, dynamic genetic changes occurring in tumors, liquid biopsies provide some advantages over tissue biopsy-based genomic testing. In addition, excisional biopsies are invasive, cannot be used repeatedly, and are ineffective in understanding the dynamics of tumor progression and metastasis. By detecting, quantifying and characterising vital circulating tumor cells or genomic alterations in CTCs and cell-free DNA in blood, liquid biopsy can provide real-time information on the stage of tumor progression, treatment effectiveness, and cancer metastasis risk. This technological development could make it possible to diagnose and manage cancer from repeated blood tests rather than from a traditional biopsy.
Circulating tumor cell tests are already available (for example, at maintrac) but not yet covered by insurance, and are under development by many pharmaceutical companies. These tests analyze circulating tumor cells (CTCs). Analysis of individual CTCs has demonstrated a high level of heterogeneity at the single-cell level for both protein expression and protein localization, and the CTCs reflected both the primary biopsy and the changes seen in the metastatic sites.
Analysis of cell-free circulating tumor DNA (cfDNA) has an advantage over circulating tumor cells assays in that there is approximately 100 times more cell-free DNA than there is DNA in circulating tumor cells. These tests analyze fragments of tumor-cell DNA that are continuously shed by tumors into the bloodstream. Companies offering cfDNA next generation sequencing testing include Personal Genome Diagnostics and Guardant Health. These tests are moving into widespread use when a tissue biopsy has insufficient material for DNA testing or when it is not safe to do an invasive biopsy procedure, according to a recent report of results on over 15,000 advanced cancer patients sequenced with the Guardant Health test.
A 2014 study of the blood of 846 patients with 15 different types of cancer in 24 institutions was able to detect the presence of cancer DNA in the body. They found tumor DNA in the blood of more than 80 percent of patients with metastatic cancers and about 47 percent of those with localized tumors. The test does not indicate the tumor site(s) or other information about the tumor. The test did not produce false positives.
Such tests may also be useful to assess whether malignant cells remain in patients whose tumors have been surgically removed. Up to 30 percent are expected to relapse because some tumor cells remain. Initial studies identified about half the patients who later relapsed, again without false positives.
Another potential use is to track the specific DNA mutations driving a tumor. Many new cancer medications block specific molecular processes. Such tests could allow easier targeting of therapy to tumors.
Precancerous conditions
For easily detected and accessed sites, any suspicious lesions may be assessed. Originally, this meant the skin or superficial masses; X-ray, then later CT, MRI, and ultrasound, along with endoscopy, extended the range.
Inflammatory conditions
A biopsy of the temporal arteries is often performed for suspected vasculitis.
In inflammatory bowel disease (Crohn's disease and ulcerative colitis), frequent biopsies are taken to assess the activity of the disease and to assess changes that precede malignancy.
Biopsy specimens are often taken from part of a lesion when the cause of a disease is uncertain or its extent or exact character is in doubt. Vasculitis, for instance, is usually diagnosed on biopsy.
Kidney disease: Biopsy and fluorescence microscopy are key in the diagnosis of alterations of renal function. Immunofluorescence plays a vital role in the diagnosis of crescentic glomerulonephritis.
Infectious disease: Lymph node enlargement may be due to a variety of infectious or autoimmune diseases.
Metabolic disease: Some conditions affect the whole body, but certain sites are selectively biopsied because they are easily accessed. Amyloidosis is a condition where degraded proteins accumulate in body tissues. To make the diagnosis, an easily accessed site such as the gingiva may be sampled.
Transplantation: Biopsies of transplanted organs are performed in order to determine that they are not being rejected or that the disease that necessitated the transplant has not recurred.
Fertility: A testicular biopsy is used for evaluating the fertility of men and for finding the cause of a possible infertility, e.g. when sperm quality is low but hormone levels are still within normal ranges.
Biopsied sites
Analysis of biopsied material
After the biopsy is performed, the sample of tissue that was removed from the patient is sent to the pathology laboratory. A pathologist specializes in diagnosing diseases (such as cancer) by examining tissue under a microscope. When the laboratory (see Histology) receives the biopsy sample, the tissue is processed and an extremely thin slice of tissue is removed from the sample and attached to a glass slide. Any remaining tissue is saved for use in later studies, if required.
The slide with the tissue attached is treated with dyes that stain the tissue, which allows the individual cells in the tissue to be seen more clearly. The slide is then given to the pathologist, who examines the tissue under a microscope, looking for any abnormal findings. The pathologist then prepares a report that lists any abnormal or important findings from the biopsy. This report is sent to the surgeon who originally performed the biopsy on the patient.
| Biology and health sciences | Medical procedures | null |
337353 | https://en.wikipedia.org/wiki/Safety%20data%20sheet | Safety data sheet | A safety data sheet (SDS), material safety data sheet (MSDS), or product safety data sheet (PSDS) is a document that lists information relating to occupational safety and health for the use of various substances and products. SDSs are a widely used type of fact sheet used to catalogue information on chemical species including chemical compounds and chemical mixtures. SDS information may include instructions for the safe use and potential hazards associated with a particular material or product, along with spill-handling procedures. The older MSDS formats could vary from source to source within a country depending on national requirements; however, the newer SDS format is internationally standardized.
An SDS for a substance is not primarily intended for use by the general consumer, focusing instead on the hazards of working with the material in an occupational setting. There is also a duty to properly label substances on the basis of physico-chemical, health, or environmental risk. Labels often include hazard symbols such as the European Union standard symbols. The same product (e.g. paints sold under identical brand names by the same company) can have different formulations in different countries. The formulation and hazards of a product using a generic name may vary between manufacturers in the same country.
Globally Harmonized System
The Globally Harmonized System of Classification and Labelling of Chemicals contains a standard specification for safety data sheets. The SDS follows an internationally agreed 16-section format; for substances especially, the SDS should be accompanied by an annex containing the exposure scenarios of the particular substance. The 16 sections (a machine-readable sketch follows the list below) are:
SECTION 1: Identification of the substance/mixture and of the company/undertaking
1.1. Product identifier
1.2. Relevant identified uses of the substance or mixture and uses advised against
1.3. Details of the supplier of the safety data sheet
1.4. Emergency telephone number
SECTION 2: Hazards identification
2.1. Classification of the substance or mixture
2.2. Label elements
2.3. Other hazards
SECTION 3: Composition/information on ingredients
3.1. Substances
3.2. Mixtures
SECTION 4: First aid measures
4.1. Description of first aid measures
4.2. Most important symptoms and effects, both acute and delayed
4.3. Indication of any immediate medical attention and special treatment needed
SECTION 5: Firefighting measures
5.1. Extinguishing media
5.2. Special hazards arising from the substance or mixture
5.3. Advice for firefighters
SECTION 6: Accidental release measures
6.1. Personal precautions, protective equipment and emergency procedures
6.2. Environmental precautions
6.3. Methods and material for containment and cleaning up
6.4. Reference to other sections
SECTION 7: Handling and storage
7.1. Precautions for safe handling
7.2. Conditions for safe storage, including any incompatibilities
7.3. Specific end use(s)
SECTION 8: Exposure controls/personal protection
8.1. Control parameters
8.2. Exposure controls
SECTION 9: Physical and chemical properties
9.1. Information on basic physical and chemical properties
9.2. Other information
SECTION 10: Stability and reactivity
10.1. Reactivity
10.2. Chemical stability
10.3. Possibility of hazardous reactions
10.4. Conditions to avoid
10.5. Incompatible materials
10.6. Hazardous decomposition products
SECTION 11: Toxicological information
11.1. Information on toxicological effects
SECTION 12: Ecological information
12.1. Toxicity
12.2. Persistence and degradability
12.3. Bioaccumulative potential
12.4. Mobility in soil
12.5. Results of PBT and vPvB assessment
12.6. Other adverse effects
SECTION 13: Disposal considerations
13.1. Waste treatment methods
SECTION 14: Transport information
14.1. UN number
14.2. UN proper shipping name
14.3. Transport hazard class(es)
14.4. Packing group
14.5. Environmental hazards
14.6. Special precautions for user
14.7. Transport in bulk according to Annex II of MARPOL and the IBC Code
SECTION 15: Regulatory information
15.1. Safety, health and environmental regulations/legislation specific for the substance or mixture
15.2. Chemical safety assessment
SECTION 16: Other information
16.2. Date of the latest revision of the SDS
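Because the sixteen top-level headings are fixed, an authored or parsed SDS can be checked for completeness with a simple lookup. The following is a minimal sketch, not a validator for any particular regulatory format; the structure and function names are illustrative:

```python
# The sixteen top-level GHS SDS section headings, keyed by section number.
SDS_SECTIONS = {
    1: "Identification of the substance/mixture and of the company/undertaking",
    2: "Hazards identification",
    3: "Composition/information on ingredients",
    4: "First aid measures",
    5: "Firefighting measures",
    6: "Accidental release measures",
    7: "Handling and storage",
    8: "Exposure controls/personal protection",
    9: "Physical and chemical properties",
    10: "Stability and reactivity",
    11: "Toxicological information",
    12: "Ecological information",
    13: "Disposal considerations",
    14: "Transport information",
    15: "Regulatory information",
    16: "Other information",
}

def missing_sections(found_headings):
    """Return the numbers of sections whose headings are absent from a parsed SDS."""
    return [n for n, title in SDS_SECTIONS.items() if title not in found_headings]
```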
National and international requirements
Canada
In Canada, the program known as the Workplace Hazardous Materials Information System (WHMIS) establishes the requirements for SDSs in workplaces and is administered federally by Health Canada under the Hazardous Products Act, Part II, and the Controlled Products Regulations.
European Union
Safety data sheets have been made an integral part of the system of Regulation (EC) No 1907/2006 (REACH). The original requirements of REACH for SDSs have been further adapted to take into account the rules for safety data sheets of the Global Harmonised System (GHS) and the implementation of other elements of the GHS into EU legislation that were introduced by Regulation (EC) No 1272/2008 (CLP) via an update to Annex II of REACH.
The SDS must be supplied in an official language of the Member State(s) where the substance or mixture is placed on the market, unless the Member State(s) concerned provide(s) otherwise (Article 31(5) of REACH).
The European Chemicals Agency (ECHA) has published a guidance document on the compilation of safety data sheets.
Germany
In Germany, safety data sheets must be compiled in accordance with REACH Regulation No. 1907/2006. The requirements concerning national aspects are defined in the Technical Rule for Hazardous Substances (TRGS) 220, "National aspects when compiling safety data sheets". One example of a national measure mentioned in SDS section 15 is the water hazard class (WGK), which is based on the regulations governing systems for handling substances hazardous to waters (AwSV).
The Netherlands
Dutch safety data sheets are known as veiligheidsinformatiebladen; the Chemiekaarten is a collection of safety data sheets for the most widely used chemicals. The Chemiekaarten boek is commercially available, but is also made available through educational institutes, such as the web site offered by the University of Groningen.
South Africa
This section contributes to a better understanding of the regulations governing SDS within the South African framework. As regulations may change, it is the responsibility of the reader to verify the validity of the regulations mentioned in text.
As globalisation increased and countries engaged in cross-border trade, the quantity of hazardous material crossing international borders amplified. Realising the detrimental effects of hazardous trade, the United Nations established a committee of experts specialising in the transportation of hazardous goods. The committee provides best practices governing the conveyance of hazardous materials and goods by land, including road and railway, as well as by air and by sea. These best practices are constantly updated to remain current and relevant.
There are various other international bodies who provide greater detail and guidance for specific modes of transportation, such as the International Maritime Organisation (IMO) by means of the International Maritime Dangerous Goods (IMDG) Code, the International Civil Aviation Organisation (ICAO) via the Technical Instructions for the safe transport of dangerous goods by air, and the International Air Transport Association (IATA), which provides regulations for the transport of dangerous goods.
These guidelines prescribed by the international authorities are applicable to the South African land, sea and air transportation of hazardous materials and goods. In addition to these rules and regulations of international best practice, South Africa has also implemented common laws, which are laws based on custom and practice. Common laws are a vital part of maintaining public order and form the basis of case law. Case laws, using the principles of common law, are interpretations and decisions of statutes made by courts. Acts of parliament are determinations and regulations by parliament which form the foundation of statutory law. Statutory laws are published in the government gazette or on the official website. Lastly, subordinate legislation comprises the bylaws issued by local authorities and authorised by parliament.
Statutory law gives effect to the Occupational Health and Safety Act of 1993 and the National Road Traffic Act of 1996. The Occupational Health and Safety Act details the necessary provisions for the safe handling and storage of hazardous materials and goods, whilst the transport act details the necessary provisions for the transportation of hazardous goods.
Relevant South African legislation includes the Hazardous Chemicals Agent regulations of 2021 under the Occupational Health and Safety Act of 1993, the Chemical Substance Act 15 of 1973, the National Road Traffic Act of 1996, and the Standards Act of 2008.
There has been selective incorporation of aspects of the Globally Harmonised System (GHS) of Classification and Labelling of Chemicals into South African legislation. At each point of the chemical value chain, there is a responsibility to manage chemicals in a safe and responsible manner. An SDS is therefore required by law, and is included in the requirements of the Occupational Health and Safety Act, 1993 (Act No. 85 of 1993), Regulation 1179, dated 25 August 1995.
The categories of information supplied in the SDS are listed in SANS 11014:2010, Dangerous goods standards – Classification and information. SANS 11014:2010 supersedes the first edition, SANS 11014-1:1994, and is an identical implementation of ISO 11014:2009.
United Kingdom
In the U.K., the Chemicals (Hazard Information and Packaging for Supply) Regulations 2002 – known as the CHIP Regulations – impose duties upon suppliers, and importers into the EU, of hazardous materials.
NOTE: Safety data sheets (SDS) are no longer covered by the CHIP regulations. The laws that require a SDS to be provided have been transferred to the European REACH Regulations.
The Control of Substances Hazardous to Health (COSHH) Regulations govern the use of hazardous substances in the workplace in the UK and specifically require an assessment of the use of a substance. Regulation 12 requires that an employer provides employees with information, instruction and training for people exposed to hazardous substances. This duty would be very nearly impossible without the data sheet as a starting point. It is important for employers therefore to insist on receiving a data sheet from a supplier of a substance.
The duty to supply information is not confined to informing only business users of products. SDSs for retail products sold by large DIY shops are usually obtainable on those companies' web sites.
Web sites of manufacturers and large suppliers do not always include the data sheets, even when the information is obtainable from retailers, but written or telephone requests for paper copies will usually be answered favourably.
United Nations
The United Nations (UN) defines certain details used in SDSs such as the UN numbers used to identify some hazardous materials in a standard form while in international transit.
United States
In the U.S., the Occupational Safety and Health Administration requires that SDSs be readily available to all employees for potentially harmful substances handled in the workplace under the Hazard Communication Standard. The SDS is also required to be made available to local fire departments and local and state emergency planning officials under Section 311 of the Emergency Planning and Community Right-to-Know Act. The American Chemical Society defines Chemical Abstracts Service Registry Numbers (CAS numbers) which provide a unique number for each chemical and are also used internationally in SDSs.
Reviews of material safety data sheets by the U.S. Chemical Safety and Hazard Investigation Board have detected dangerous deficiencies.
The board's Combustible Dust Hazard Study analyzed 140 data sheets of substances capable of producing combustible dusts. None of the SDSs contained all the information the board said was needed to work with the material safely, and 41 percent failed to even mention that the substance was combustible.
As part of its study of an explosion and fire that destroyed the Barton Solvents facility in Valley Center, Kansas, in 2007, the safety board reviewed 62 material safety data sheets for commonly used nonconductive flammable liquids. As in the combustible dust study, the board found all the data sheets inadequate.
In 2012, the US adopted the 16 section Safety Data Sheet to replace Material Safety Data Sheets. This became effective on 1 December 2013. These new Safety Data Sheets comply with the Globally Harmonized System of Classification and Labeling of Chemicals (GHS). By 1 June 2015, employers were required to have their workplace labeling and hazard communication programs updated as necessary – including all MSDSs replaced with SDS-formatted documents.
SDS authoring
Many companies offer the service of collecting, or writing and revising, data sheets to ensure they are up to date and available for their subscribers or users. Some jurisdictions impose an explicit duty of care that each SDS be regularly updated, usually every three to five years. However, when new information becomes available, the SDS must be revised without delay. If a full SDS is not feasible, then a reduced workplace label should be authored.
| Physical sciences | Basics: General | Chemistry |
337871 | https://en.wikipedia.org/wiki/Butte | Butte | In geomorphology, a butte () is an isolated hill with steep, often vertical sides and a small, relatively flat top; buttes are smaller landforms than mesas, plateaus, and tablelands. The word butte comes from the French word butte, meaning knoll (but of any size); its use is prevalent in the Western United States, including the southwest where mesa (Spanish for "table") is used for the larger landform. Due to their distinctive shapes, buttes are frequently landmarks in plains and mountainous areas. To differentiate the two landforms, geographers use the rule of thumb that a mesa has a top that is wider than its height, while a butte has a top that is narrower than its height.
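That rule of thumb reduces to a single comparison of two measurements in the same units. The following toy sketch simply encodes it; the function and argument names are illustrative:

```python
def classify_landform(top_width, height):
    """Geographers' rule of thumb from the text: a mesa's top is wider than
    the landform is tall; a butte's top is narrower than its height."""
    return "mesa" if top_width > height else "butte"

assert classify_landform(top_width=500, height=120) == "mesa"
assert classify_landform(top_width=80, height=200) == "butte"
```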
Formation
Buttes form by weathering and erosion when hard caprock overlies a layer of less resistant rock that is eventually worn away. The harder caprock resists erosion and protects the less resistant rock below from wind abrasion, leaving the landform standing isolated. As the top is further eroded by abrasion and weathering, the excess material that falls off adds to the scree or talus slope around the base. On a much smaller scale, the same process forms hoodoos.
Notable buttes
The Mitten Buttes of Monument Valley, on the Utah–Arizona state line, are two of the most distinctive and widely recognized buttes. Monument Valley and the Mittens provided backdrops for many western-themed films, including seven movies directed by John Ford. Another well-known and frequently photographed butte in northern Arizona is Thumb Butte, which overlooks the city of Prescott and is the most prominent and distinctive geologic landmark in the vicinity. The Devils Tower in northeastern Wyoming is a laccolithic butte composed of igneous rock rather than sandstone, limestone or other sedimentary rocks.
The term butte is sometimes applied more broadly to isolated, steep-sided hills with pointed or craggy, rather than flat, tops. Three notable formations that are either named butte or may be considered buttes even though they do not conform to the formal geographer's rule are Scotts Bluff in Nebraska, which is a collection of five bluffs; Crested Butte, a mountain in Colorado; and Elephant Butte, now an island in Elephant Butte Reservoir in New Mexico.
Among the well-known non-flat-topped buttes in the United States are Bear Butte, South Dakota, Black Butte, Oregon, and the Sutter Buttes in California. In many cases, buttes have been given other names that do not use the word butte, for example, Courthouse Rock, Nebraska. Also, some large hills that are technically not buttes have names using the word, examples of which are Kamiak Butte, Chelan Butte and Steptoe Butte in Washington state.
Gallery
| Physical sciences | Other erosional landforms | Earth science |
338046 | https://en.wikipedia.org/wiki/Dihedral%20angle | Dihedral angle | A dihedral angle is the angle between two intersecting planes or half-planes. It is a plane angle formed on a third plane, perpendicular to the line of intersection between the two planes or the common edge between the two half-planes. In higher dimensions, a dihedral angle represents the angle between two hyperplanes. In chemistry, it is the clockwise angle between half-planes through two sets of three atoms, having two atoms in common.
Mathematical background
When the two intersecting planes are described in terms of Cartesian coordinates by the two equations

$$a_1 x + b_1 y + c_1 z + d_1 = 0$$
$$a_2 x + b_2 y + c_2 z + d_2 = 0$$

the dihedral angle, $\varphi$, between them is given by:

$$\cos \varphi = \frac{\left|a_1 a_2 + b_1 b_2 + c_1 c_2\right|}{\sqrt{a_1^2 + b_1^2 + c_1^2}\,\sqrt{a_2^2 + b_2^2 + c_2^2}}$$

and satisfies $0 \le \varphi \le \pi/2$. It can easily be observed that the angle is independent of $d_1$ and $d_2$.

Alternatively, if $\mathbf{n}_1$ and $\mathbf{n}_2$ are normal vectors to the planes, one has

$$\cos \varphi = \frac{\left|\mathbf{n}_1 \cdot \mathbf{n}_2\right|}{\left|\mathbf{n}_1\right|\left|\mathbf{n}_2\right|}$$

where $\mathbf{n}_1 \cdot \mathbf{n}_2$ is the dot product of the vectors and $\left|\mathbf{n}_1\right|\left|\mathbf{n}_2\right|$ is the product of their lengths.
The absolute value is required in above formulas, as the planes are not changed when changing all coefficient signs in one equation, or replacing one normal vector by its opposite.
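As a small numerical sketch of the normal-vector form above (the function name and the NumPy dependency are choices of this sketch, not part of the source):

```python
import numpy as np

def plane_angle(n1, n2):
    """Dihedral angle in [0, pi/2] between planes with normal vectors n1 and n2,
    using cos(phi) = |n1 . n2| / (|n1| |n2|)."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(c, 0.0, 1.0))  # clip guards against round-off error

# The planes x = 0 and y = 0 (normals along x and y) meet at a right angle.
assert np.isclose(plane_angle((1, 0, 0), (0, 1, 0)), np.pi / 2)
```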
However, the absolute values can be and should be avoided when considering the dihedral angle of two half-planes whose boundaries are the same line. In this case, the half-planes can be described by a point $P$ of their intersection, and three vectors $\mathbf{b}_0$, $\mathbf{b}_1$ and $\mathbf{b}_2$ such that $P + \mathbf{b}_0$, $P + \mathbf{b}_1$ and $P + \mathbf{b}_2$ belong respectively to the intersection line, the first half-plane, and the second half-plane. The dihedral angle of these two half-planes is defined by

$$\cos \varphi = \frac{(\mathbf{b}_0 \times \mathbf{b}_1) \cdot (\mathbf{b}_0 \times \mathbf{b}_2)}{\left|\mathbf{b}_0 \times \mathbf{b}_1\right|\left|\mathbf{b}_0 \times \mathbf{b}_2\right|},$$

and satisfies $0 \le \varphi \le \pi$. In this case, switching the two half-planes gives the same result, and so does replacing $\mathbf{b}_0$ with $-\mathbf{b}_0$. In chemistry (see below), we define a dihedral angle such that replacing $\mathbf{b}_0$ with $-\mathbf{b}_0$ changes the sign of the angle, which can be between $-\pi$ and $\pi$.
In polymer physics
In some scientific areas such as polymer physics, one may consider a chain of points and links between consecutive points. If the points are sequentially numbered and located at positions $\mathbf{r}_1$, $\mathbf{r}_2$, $\mathbf{r}_3$, etc., then bond vectors are defined by $\mathbf{u}_1 = \mathbf{r}_2 - \mathbf{r}_1$, $\mathbf{u}_2 = \mathbf{r}_3 - \mathbf{r}_2$, and $\mathbf{u}_3 = \mathbf{r}_4 - \mathbf{r}_3$, and more generally $\mathbf{u}_i = \mathbf{r}_{i+1} - \mathbf{r}_i$. This is the case for kinematic chains or amino acids in a protein structure. In these cases, one is often interested in the half-planes defined by three consecutive points, and the dihedral angle between two consecutive such half-planes. If $\mathbf{u}_1$, $\mathbf{u}_2$ and $\mathbf{u}_3$ are three consecutive bond vectors, the intersection of the half-planes is oriented, which allows defining a dihedral angle that belongs to the interval $(-\pi, \pi]$. This dihedral angle is defined by

$$\cos \varphi = \frac{(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}{\left|\mathbf{u}_1 \times \mathbf{u}_2\right|\left|\mathbf{u}_2 \times \mathbf{u}_3\right|}, \qquad \sin \varphi = \frac{\mathbf{u}_2 \cdot \left((\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3)\right)}{\left|\mathbf{u}_2\right|\left|\mathbf{u}_1 \times \mathbf{u}_2\right|\left|\mathbf{u}_2 \times \mathbf{u}_3\right|}$$

or, using the function atan2,

$$\varphi = \operatorname{atan2}\left(\mathbf{u}_2 \cdot \left((\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3)\right),\; \left|\mathbf{u}_2\right|\,(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right).$$
This dihedral angle does not depend on the orientation of the chain (the order in which the points are considered): reversing this ordering consists of replacing each vector by its opposite vector, and exchanging the indices 1 and 3. Both operations do not change the cosine, but change the sign of the sine. Thus, together, they do not change the angle.
A simpler formula for the same dihedral angle is the following (the proof is given below):

$$\varphi = \operatorname{atan2}\left(\left|\mathbf{u}_2\right|\,\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3),\; (\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right),$$

or equivalently,

$$\tan \varphi = \frac{\left|\mathbf{u}_2\right|\,\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}{(\mathbf{u}_1 \times \mathbf{u}_2) \cdot (\mathbf{u}_2 \times \mathbf{u}_3)}.$$
This can be deduced from previous formulas by using the vector quadruple product formula, and the fact that a scalar triple product is zero if it contains twice the same vector:

$$(\mathbf{u}_1 \times \mathbf{u}_2) \times (\mathbf{u}_2 \times \mathbf{u}_3) = \left[\mathbf{u}_1 \cdot (\mathbf{u}_2 \times \mathbf{u}_3)\right]\mathbf{u}_2.$$
Given the definition of the cross product, this means that $\varphi$ is the angle in the clockwise direction of the fourth atom compared to the first atom, while looking down the axis from the second atom to the third. Special cases (one may say the usual cases) are $\varphi = \pi$, $\varphi = +\pi/3$ and $\varphi = -\pi/3$, which are called the trans, gauche+, and gauche− conformations.
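For concreteness, the simpler atan2 form above translates directly into code. A minimal sketch follows; the function name, the NumPy dependency, and the zigzag test chain are illustrative choices, not part of the source:

```python
import numpy as np

def dihedral(u1, u2, u3):
    """Signed dihedral angle in (-pi, pi] from three consecutive bond vectors,
    computed as atan2(|u2| u1.(u2 x u3), (u1 x u2).(u2 x u3))."""
    u1, u2, u3 = (np.asarray(u, dtype=float) for u in (u1, u2, u3))
    y = np.linalg.norm(u2) * np.dot(u1, np.cross(u2, u3))
    x = np.dot(np.cross(u1, u2), np.cross(u2, u3))
    return np.arctan2(y, x)

# A planar zigzag chain of four points is in the trans conformation (phi = pi).
pts = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0)]]
u = [pts[i + 1] - pts[i] for i in range(3)]
assert np.isclose(dihedral(*u), np.pi)
```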
In stereochemistry
In stereochemistry, a torsion angle is defined as a particular example of a dihedral angle, describing the geometric relation of two parts of a molecule joined by a chemical bond. Every set of three non-colinear atoms of a molecule defines a half-plane. As explained above, when two such half-planes intersect (i.e., a set of four consecutively-bonded atoms), the angle between them is a dihedral angle. Dihedral angles are used to specify the molecular conformation. Stereochemical arrangements corresponding to angles between 0° and ±90° are called syn (s), those corresponding to angles between ±90° and 180° anti (a). Similarly, arrangements corresponding to angles between 30° and 150° or between −30° and −150° are called clinal (c) and those between 0° and ±30° or ±150° and 180° are called periplanar (p).
The two types of terms can be combined so as to define four ranges of angle; 0° to ±30° synperiplanar (sp); 30° to 90° and −30° to −90° synclinal (sc); 90° to 150° and −90° to −150° anticlinal (ac); ±150° to 180° antiperiplanar (ap). The synperiplanar conformation is also known as the syn- or cis-conformation; antiperiplanar as anti or trans; and synclinal as gauche or skew.
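These four ranges depend only on the magnitude of the angle, so they reduce to a small lookup. The sketch below encodes them; since adjacent ranges share boundary angles, the assignment of exact boundaries here is an arbitrary choice of this sketch:

```python
def torsion_class(angle_deg):
    """Classify a torsion angle in degrees, -180 < angle <= 180,
    into the four combined IUPAC ranges described above."""
    a = abs(angle_deg)
    if a <= 30:
        return "synperiplanar (sp)"   # also syn- or cis-
    if a <= 90:
        return "synclinal (sc)"       # also gauche or skew
    if a <= 150:
        return "anticlinal (ac)"
    return "antiperiplanar (ap)"      # also anti or trans

assert torsion_class(60) == "synclinal (sc)"
assert torsion_class(180) == "antiperiplanar (ap)"
```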
For example, with n-butane two planes can be specified in terms of the two central carbon atoms and either of the methyl carbon atoms. The gauche (synclinal) conformation, with a dihedral angle of 60°, is less stable than the anti-conformation, with a dihedral angle of 180°.
For macromolecular usage the symbols T, C, G+, G−, A+ and A− are recommended (ap, sp, +sc, −sc, +ac and −ac respectively).
Proteins
A Ramachandran plot (also known as a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for backbone dihedral angles ψ against φ of amino acid residues in protein structure.
In a protein chain three dihedral angles are defined:
ω (omega) is the angle in the chain Cα − C' − N − Cα,
φ (phi) is the angle in the chain C' − N − Cα − C'
ψ (psi) is the angle in the chain N − Cα − C' − N (called φ′ by Ramachandran)
The figure at right illustrates the location of each of these angles (but it does not correctly show the way they are defined).
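As a sketch of how these definitions are applied in practice, the same bond-vector dihedral formula from the polymer-physics section can be evaluated over four consecutive backbone atoms; the atom orderings in the comments follow the definitions just given, while the function name and coordinate conventions are illustrative assumptions:

```python
import numpy as np

def torsion(p1, p2, p3, p4):
    """Signed dihedral (radians) defined by four consecutive atom positions,
    using the bond-vector formula from the polymer-physics section."""
    u1, u2, u3 = p2 - p1, p3 - p2, p4 - p3
    y = np.linalg.norm(u2) * np.dot(u1, np.cross(u2, u3))
    x = np.dot(np.cross(u1, u2), np.cross(u2, u3))
    return np.arctan2(y, x)

# Atom order for the three backbone torsions of residue i:
#   omega(i): CA(i-1), C'(i-1), N(i),  CA(i)
#   phi(i):   C'(i-1), N(i),    CA(i), C'(i)
#   psi(i):   N(i),    CA(i),   C'(i), N(i+1)
# e.g. phi_i = torsion(C_prev, N_i, CA_i, C_i), with coordinates as NumPy arrays.
```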
The planarity of the peptide bond usually restricts ω to be 180° (the typical trans case) or 0° (the rare cis case). The distance between the Cα atoms in the trans and cis isomers is approximately 3.8 and 2.9 Å, respectively. The vast majority of the peptide bonds in proteins are trans, though the peptide bond to the nitrogen of proline has an increased prevalence of cis compared to other amino-acid pairs.
The side chain dihedral angles are designated with χn (chi-n). They tend to cluster near 180°, 60°, and −60°, which are called the trans, gauche−, and gauche+ conformations. The stability of certain sidechain dihedral angles is affected by the values of φ and ψ. For instance, there are direct steric interactions between the Cγ of the side chain in the gauche+ rotamer and the backbone nitrogen of the next residue when ψ is near −60°. This is evident from statistical distributions in backbone-dependent rotamer libraries.
Geometry
Every polyhedron has a dihedral angle at every edge describing the relationship of the two faces that share that edge. This dihedral angle, also called the face angle, is measured as the internal angle with respect to the polyhedron. An angle of 0° means the face normal vectors are antiparallel and the faces overlap each other, which implies that it is part of a degenerate polyhedron. An angle of 180° means the faces are parallel, as in a tiling. An angle greater than 180° exists on concave portions of a polyhedron.
Every dihedral angle in an edge-transitive polyhedron has the same value. This includes the 5 Platonic solids, the 13 Catalan solids, the 4 Kepler–Poinsot polyhedra, the two quasiregular solids, and two quasiregular dual solids.
Law of cosines for dihedral angle
Given three faces of a polyhedron which meet at a common vertex P and have edges AP, BP and CP, the cosine of the dihedral angle between the faces containing APC and BPC is:

$$\cos \theta = \frac{\cos(\angle APB) - \cos(\angle APC)\cos(\angle BPC)}{\sin(\angle APC)\sin(\angle BPC)}.$$
This can be deduced from the spherical law of cosines, but can also be found by other means.
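As a quick worked check of this formula: at any vertex of a regular tetrahedron the three face angles all equal 60°, so $\cos\theta = \frac{\cos 60° - \cos 60°\cos 60°}{\sin 60°\sin 60°} = \frac{1/2 - 1/4}{3/4} = \frac{1}{3}$, giving $\theta = \arccos(1/3) \approx 70.53°$, the familiar dihedral angle of the regular tetrahedron.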
| Mathematics | Two-dimensional space | null |
338154 | https://en.wikipedia.org/wiki/Wound | Wound | A wound is any disruption of or damage to living tissue, such as skin, mucous membranes, or organs. Wounds can either be the sudden result of direct trauma (mechanical, thermal, chemical), or can develop slowly over time due to underlying disease processes such as diabetes mellitus, venous/arterial insufficiency, or immunologic disease. Wounds can vary greatly in their appearance depending on wound location, injury mechanism, depth of injury, timing of onset (acute vs chronic), and wound sterility, among other factors. Treatment strategies for wounds will vary based on the classification of the wound, therefore it is essential that wounds be thoroughly evaluated by a healthcare professional for proper management. In normal physiology, all wounds will undergo a series of steps collectively known as the wound healing process, which include hemostasis, inflammation, proliferation, and tissue remodeling. Age, tissue oxygenation, stress, underlying medical conditions, and certain medications are just a few of the many factors known to affect the rate of wound healing.
Classification
Wounds can be broadly classified as either acute or chronic based on time from initial injury and progression through normal stages of wound healing. Both wound types can further be categorized by cause of injury, wound severity/depth, and sterility of the wound bed. Several classification systems have been developed to describe wounds and guide their management. Some notable classification systems include the CDC's Surgical Wound Classification, the International Red Cross Wound Classification, the Tscherne classification, the Gustilo-Anderson classification of open fractures, and the AO soft tissue grading system.
Acute wounds
An acute wound is any wound which results from direct trauma and progresses through the four stages of wound healing along an expected timeline. The first stage, hemostasis, lasts from minutes to hours after initial injury. This stage is followed by the inflammatory phase which typically lasts 1 to 3 days. Proliferation is the third stage of wound healing and lasts from a few days up to a month. The fourth and final phase of wound healing, remodeling/scar formation, typically lasts 12 months but can continue as long as 2 years after the initial injury. Acute wounds can further be classified as either open or closed. An open wound is any injury whereby the integrity of the skin has been disrupted and the underlying tissue is exposed. A closed wound, on the other hand, is any injury in which underlying tissue has been damaged but the overlying skin is still intact.
Open wounds
Incisions or incised wounds – caused by a clean, sharp-edged object such as a knife, razor, or glass splinter.
Lacerations – irregular, tear-like wounds caused by some blunt trauma. Lacerations and incisions may appear linear (regular) or stellate (irregular). The term laceration is commonly misused in reference to incisions.
Abrasions (grazes) – superficial wounds in which the topmost layer of the skin (the epidermis) is scraped off. Abrasions are often caused by a sliding fall onto a rough surface such as asphalt, tree bark or concrete.
Avulsions – injuries in which a body structure is forcibly detached from its normal point of insertion; a type of amputation where the extremity is pulled off rather than cut off. When used in reference to skin avulsions, the term 'degloving' is also sometimes used as a synonym.
Puncture wounds – caused by an object puncturing the skin, such as a splinter, nail, knife or sharp tooth.
Penetration wounds – caused by an object such as a knife entering and coming out from the skin.
Gunshot wounds – caused by a bullet or similar projectile driving into or through the body. There may be two wounds, one at the site of entry and one at the site of exit, generally referred to as a "through-and-through."
Critical wounds – including large burns that have been split. These wounds can cause serious hydroelectrolytic and metabolic alterations, including fluid loss, electrolyte imbalances, and increased catabolism.
Closed wounds
Hematomas (or blood tumor) – caused by damage to a blood vessel that in turn causes blood to collect under the skin.
Hematomas that originate from internal blood vessel pathology are petechiae, purpura, and ecchymosis. The different classifications are based on size.
Hematomas that originate from an external source of trauma are contusions, also commonly called bruises.
Crush injury – caused by a great or extreme amount of force applied over a long period of time.
Fractures
Fractures can be classified as either open or closed, depending on whether the integrity of the overlying skin has been disrupted or preserved, respectively. Several classification systems have been developed to further characterize soft tissue injuries in the setting of an underlying fracture:
Tscherne classification – Used to describe external appearance of wounds in both open and closed fractures.
Gustilo-Anderson classification – Classifies open fractures based on wound size, extent of soft tissue loss, and degree of contamination.
Hannover Fracture scale – Used in open fractures as an extremity salvage assessment.
AO Classification – adapted from the Tscherne classification, provides separate grading system for skin, muscles/tendons, and neurovascular structures.
Chronic wounds
Any wound which is arrested or delayed during any of the normal stages of wound healing is considered to be a chronic wound. Most commonly, these are wounds which develop due to an underlying disease process such as diabetes mellitus or arterial/venous insufficiency. However, it is important to note that any acute wound has the potential to become a chronic wound if any of the normal stages of wound healing are interrupted. Chronic wounds are most commonly a result of disruption of the inflammatory phase of wound healing, however errors in any phase can result in a chronic wound. The exact duration of time which distinguishes a chronic wound from an acute wound is not clearly defined, although many clinicians agree that wounds which have not progressed for over three months are considered chronic wounds.
Common causes of chronic wounds
Diabetes mellitus – Wound healing impairment in the setting of diabetes is multifactorial. Hyperglycemia, neuropathy, microvascular complications, impaired immune and inflammatory responses, and psychological factors have all been implicated in the formation and propagation of diabetic wounds. Feet are the most common location of diabetic wounds, although any type of wound can be negatively impacted by diabetes. It has been estimated that up to 25% of patients with diabetes mellitus will be affected by non-healing wounds in their lifetime.
Venous/Arterial insufficiency – Impaired blood outflow (venous) or inflow (arterial) can both impair wound healing, thereby causing chronic wounds. Much like diabetes, venous/arterial insufficiency most commonly result in chronic wounds of the lower extremities. In chronic venous insufficiency, blood pooling impedes oxygen exchange and creates a chronic pro-inflammatory environment which both promote formation of venous ulcers. Peripheral artery disease, on the other hand, causes wounds due to poor blood inflow and typically affects the most distal extremities (fingers, toes).
Immunologic disease – The immune system plays a critical role in the inflammatory process; therefore, any disease of the immune system has the potential to impair the inflammatory phase of wound healing, thereby leading to a chronic wound. Patients suffering from diseases such as rheumatoid arthritis and lupus have been found to have larger wounds and prolonged time to heal when compared to the general population.
Pressure ulcer – Also known as decubitus ulcers or bedsores, this type of wound is a result of chronic pressure to the skin over a prolonged period. While most individuals have intact sensation and motor function which allow for frequent positional change to prevent the formation of such ulcers, older individuals are particularly susceptible to this type of chronic injury due to impaired neurosensory responses. Pressure ulcers can occur in as little as two hours of immobility in a bedridden patient or person who is otherwise unconscious/sedated (surgery, syncope, etc.). In the United States, pressure ulcers are graded using the National Pressure Injury Advisory Panel (NPIAP) system. In this system, ulcers are graded on wound depth with stage 1 being the least severe (erythema, intact skin) and stage 4 being full thickness damage through subcutaneous tissue down to muscle, tendon, or bone. Any ulcer that cannot be assessed due to overlying eschar is considered unstageable.
Wound sterility
Wound sterility, or degree of contamination of a wound, is a critical consideration when evaluating a wound. In the United States, the CDC's Surgical Wound Classification System is most commonly used for classification of a wound's sterility, specifically within a surgical setting. According to this classification system, four different classes of wound exist, each with their own postoperative risk of surgical site infection:
Class 1 – clean wound: a wound that is not infected and without signs of inflammation. This type of wound is typically closed. By definition, this type of wound excludes any wounds of the respiratory, genital, alimentary, or urinary tract.
Class 2 – clean-contaminated wound: a wound with a low level of contamination. May involve entry into the respiratory, genital, alimentary, or urinary tract.
Class 3 – contaminated wound: an open, accidental wound resulting from trauma outside of a sterile setting is automatically considered a contaminated wound. Additionally, any surgical wound where there is a major break in sterile technique or obvious contamination from the gastrointestinal tract is considered a contaminated wound.
Class 4 – dirty/infected: a wound with evidence of an existing clinical infection. Class 4 wounds are usually found in old traumatic wounds which were not adequately treated and will show evidence of devitalized tissue or gross purulence.
Presentation
Workup
Physical examination
Wound presentation will vary greatly based on a number of factors, each of which is important to consider in order to establish a proper diagnosis and treatment plan. In addition to collecting a thorough history, the following factors should be considered when evaluating any wound:
Size of wound: Should be accurately measured at time of initial presentation and regularly remeasured until wound resolution.
Wound location: Very useful consideration in many chronic wounds, such as diabetic foot ulcers, pressure ulcers, and venous ulcers. Acute wounds will be located in areas consistent with the mechanism of injury (e.g. diagonal chest wall bruising from seatbelt following car accident).
Wound bed: A healthy wound bed will appear pink due to healthy granulation tissue. Presence of a dark red wound bed which bleeds easily on contact or excess granulation tissue (i.e. hypergranulation tissue) may indicate the presence of an infection or non-healing wound.
Wound depth: The depth of a wound is often not apparent on visual inspection alone. Proper evaluation of wound depth includes use of a probe to measure wound depth and evaluate for undermining of wound edges or sinus/fistula formation.
Necrotic tissue, slough, eschar: Wounds may be covered with a layer of dead tissue which may appear cream/yellow in color (slough) or as a black, hardened tissue (eschar). Removing this tissue is critical for properly evaluating both the depth of a wound and quality of the wound bed, and promotes wound healing.
Wound edges: May provide clues to cause of specific wounds, such as gently sloping edges of venous ulcers or rolled edges of certain tumors.
Surrounding skin: Appearance of the surrounding skin can provide clues to underlying disease processes, such as redness/erythema due to cellulitis, maceration due to uncontrolled wound exudate, or eczematous changes due to a chronic irritation (e.g. allergic reaction to wound dressing).
Infection: Classic signs of infection are redness, warmth, swelling, odor, and pain out of proportion to wound appearance.
Pain: Pain can be nociceptive, neuropathic, or inflammatory, each of which can provide clues to the cause of a wound. Proper pain control is an important consideration in wound management, particularly in burn care where analgesia is often necessary prior to dressing changes.
A thorough wound evaluation, particularly evaluation of wound depth and removal of necrotic tissue, should be performed only by a licensed healthcare professional in order to avoid damage to nearby structures, infection, or worsening pain.
Diagnostics
Additional diagnostic tests may be needed during wound evaluation based on the cause, appearance, and age of a wound.
Wound culture: If there is concern for infection, a wound can be more carefully evaluated for presence of bacteria via surface swabs, deep tissue biopsy, or needle biopsy. Surface swabs are most commonly used due to low cost, ease of use, and minimal pain to patient. Although swab cultures have been shown to reliably identify the organisms causing an infection, swabs are only able to identify bacteria on the surface of a wound and can occasionally be contaminated by normal skin flora. Deep tissue biopsy is considered the gold standard for diagnosing wound infections due to being both more accurate and precise than swabs, however it is more invasive, more painful, and less cost effective than swabs and therefore is not the first choice for collecting wound cultures. Needle aspiration can only be implemented in wounds with underlying abscesses or fluid collections.
Imaging: X-ray is useful to assess for an underlying fracture which may not be apparent on physical examination alone. Ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) can all be used to identify fluid collections, necrotic tissue, or inflammation. Ultrasound is portable, low cost, quickly implemented, and does not expose patients to radiation, but is limited in diagnostic capabilities. CT is another quickly implemented option which generally provides more diagnostic information compared to ultrasound, however it is less cost-effective and exposes patients to radiation. MRI offers the greatest image resolution and can provide diagnostic information on the presence of soft tissue infection or bone infection. Like ultrasound, MRI does not expose patients to radiation; however, it is the slowest and most difficult to implement of all of these imaging methods.
Laboratory studies: Serum prealbumin levels may be useful in evaluating nutrition status in patients with chronic wounds or at risk for developing chronic wounds. Elevated erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) can confirm presence of an infection but alone are not diagnostic. Routine bloodwork such as a basic metabolic panel (BMP) or complete blood count (CBC) are not typically required but may be useful in select circumstances.
Ankle-brachial index/toe-brachial index (ABI/TBI): These tests can be used to assess blood supply to the lower extremities and their results may affect management of lower extremity wounds such as venous/arterial ulcers, diabetic foot ulcers, or pressure ulcers.
Management
The goal of wound care is to promote an environment that allows a wound to heal as quickly as possible, with emphasis on restoring both form and function of the wounded area. Although optimal treatment strategies vary greatly depending on the specific cause, size, and age of a particular wound, there are universal principles of wound management that apply to all wounds. After a thorough evaluation is performed, all wounds should be properly irrigated and debrided. Proper cleansing of a wound is critical to prevent infection and promote re-epithelialization. Further efforts should be made to eliminate/limit any contributing factors to the wound (e.g. diabetes, pressure, etc.) and optimize the wound's healing ability (i.e. optimize nutritional status). The end goal of wound management is closure of the wound which can be achieved by primary closure, delayed primary closure, or healing by secondary intention, each of which is discussed below. Pain control is a mainstay of wound management, as wound evaluation, wound cleansing, and dressing changes can be a painful process.
Irrigation
Proper cleansing of a wound is critical in preventing infection and promoting healing of any wound. Irrigation is defined as constant flow of a solution over the surface of a wound. The goal of irrigation is not only to remove debris and potential contaminants from a wound, but also to assist in visual inspection of a wound and hydrate the wound. Irrigation is typically achieved with either a bulb or syringe and needle/catheter. The preferred solution for irrigation is normal saline which is readily accessible in the emergency department, although recent studies have shown no difference in emergency department infection rates when comparing normal saline to potable tap water. Irrigation can also be achieved with a diluted 1% povidone iodine solution, but studies have again shown no difference in infection rates when compared to normal saline. Irrigation with antiseptic solutions, such as non-diluted povidone iodine, chlorhexidine, and hydrogen peroxide is not preferred since these solutions are toxic to tissue and inhibit wound healing. The exact volume of irrigation used will vary depending on the appearance of the wound, although some sources have reported 50–100 mL of irrigation per 1 cm of wound length as a guideline.
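Under that guideline, for example, a 5 cm laceration would call for roughly 250–500 mL of irrigating solution.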
Debridement
Debridement is defined as removal of devitalized or dead tissue, particularly necrotic tissue, eschar, or slough. Debridement is a critical aspect of wound care because devitalized tissue, particularly necrotic tissue, serves as nutrients for bacteria thereby promoting infection. Additionally, devitalized tissue creates a physical barrier over a wound which limits the effectiveness of any applied topical compounds and prevents re-epithelialization. Lastly, devitalized tissue, especially eschar, prevents accurate assessment of underlying tissue, making appropriate assessment of a wound impossible without adequate debridement. Debridement can be achieved in several ways:
Autolytic debridement: The most conservative type of debridement whereby the body's own natural defenses break down necrotic tissue via phagocytes and proteolytic enzymes. This method requires a moist environment and intact immune system.
Mechanical debridement: Achieved through use of mechanical force to remove devitalized tissue (e.g. wet-to-dry dressing, pressurized wound irrigation, pulse-lavage); however, this process will remove both healthy and non-healthy tissue and is therefore considered a non-selective debridement method.
Enzymatic debridement: A process of debridement in which enzymes such as proteinases or collagenases are applied topically to digest devitalized tissue. Depending on the agent, this process can be either selective or non-selective. Examples include trypsin, streptokinase-streptodornase combination, subtilisin, papain, and collagenase.
Surgical debridement: Also known as sharp debridement, this is a process in which devitalized tissue is removed through use of surgical instruments such as scalpels, curettes, or surgical scissors. Surgical debridement can be done in a hospital bed, in an outpatient clinic, or in an operating room depending on the particular wound, risk of bleeding, and anesthesia requirements.
Biological debridement: Also known as larval therapy, biological debridement is done through controlled application of sterile larvae (Lucilia sericata) to the wound bed. These larvae release proteolytic enzymes which dissolve necrotic tissue before then ingesting the now debrided tissue. Biologic debridement has the added benefit of being bactericidal since larvae will ingest bacteria as well as devitalized tissue. Despite the safety and effectiveness of this method, its applications are often limited due to patient's negative feelings towards larvae which are commonly associated with poor hygiene and perishable food.
Closure
The end goal of wound care is to re-establish the integrity of the skin, a structure which serves as a barrier to the external environment. The preferred method of closure is to reattach/reapproximate the wound edges together, a process known as primary closure/healing by primary intention. Wounds that have not been closed within several hours of the initial injury or wounds that are concerning for infection will often be left open and treated with dressings for several days before being closed 3–5 days later, a process known as delayed primary closure. The exact duration of time from initial injury in which delayed primary closure is preferred over primary closure is not clearly defined. Wounds that cannot be closed primarily due to substantial tissue loss can be healed by secondary intention, a process in which the wound is allowed to fill-in over time through natural physiologic processes. When healing by secondary intention, granulation tissue grows in from the wound edges slowly over time to restore integrity of the skin. Healing by secondary intention can take up to months, requires daily wound care, and leaves an unfavorable scar, thus primary closure is always preferred when possible. As an alternative, wounds that cannot be closed primarily can be addressed with skin grafting or flap reconstruction, typically done by a plastic surgeon. There are several methods that can be implemented to achieve primary closure of a wound, including suture, staples, skin adhesive, and surgical strips. Suture is the most frequently used for closure. There are many types of suture, but broadly they can be categorized as absorbable vs non-absorbable and synthetic vs natural. Absorbable sutures have the added benefit of not requiring removal and are often preferred in children for this reason. Staples are less time-consuming and more cost effective than suture but have a risk of worse scarring if left in place for too long. Adhesive glue and sutures have comparable cosmetic outcomes for minor lacerations <5 cm in adults and children. The use of adhesive glue involves considerably less time for the doctor and less pain for the person. The wound opens at a slightly higher rate but there is less redness. The risk for infections (1.1%) is the same for both. Adhesive glue should not be used in areas of high tension or repetitive movements, such as joints or the posterior trunk.
Dressings
After a wound is irrigated, debrided, and, if possible, closed, it should be dressed appropriately. The goals of a wound dressing are to act as a barrier to the outside environment, facilitate wound healing, promote hemostasis, and act as a form of mechanical debridement during dressing changes. The ideal wound dressing maintains a moist environment to optimize wound healing but is also capable of absorbing excess fluid so as to avoid skin maceration or bacterial growth. Several wound dressing options are available, each tailored to different kinds of wounds:
Gauze: Composed of woven or non-woven cotton, rayon, and polyester, gauze is highly absorbent, but removal can be uncomfortable.
Films: Films are made of translucent polyurethane, which adheres to skin and is semi-occlusive, allowing them to retain moisture within the dressing while still allowing exchange of gases such as oxygen and carbon dioxide. The translucent nature of this dressing makes monitoring wounds simple.
Hydrocolloids: Consist of an outer, water-impermeable layer and an inner layer made of colloid. When the inner colloid layer comes in contact with liquid, it becomes a gel allowing the dressing to maintain a moist environment while simultaneously absorbing exudate. Hydrocolloids cause minimal pain on removal but are at increased risk of skin maceration and bacterial growth.
Hydrogels: An insoluble, hydrophilic material with soothing properties which is useful in treating burn wounds, dry chronic wounds, and pressure ulcers. Like hydrocolloids, hydrogels are capable of retaining excess moisture, which can lead to skin maceration and bacterial growth.
Foams: A flexible material with a hydrophobic outer layer that repels liquid from the outside environment and a highly absorptive inner layer, making foams ideal for highly exuding wounds. Foams should not be used on drier wounds, which require their exudate to stay moist.
Alginates: Derived from seaweed, alginates can absorb up to 15–20 times their weight in liquid and are ideal for highly exudative wounds. Like hydrocolloids, alginates form a gel when they come in contact with fluid, making removal relatively painless.
Hydrofibers: A derivative of hydrocolloid dressings, hydrofibers are able to absorb up to 25 times their weight in fluid, making them the most absorbent dressing. They are much like alginate dressings in their absorptive capacity and tendency to form a gel upon contact with liquid.
Medicated dressings: Many dressings come impregnated with medication, typically antimicrobial agents or debriding chemicals. Silver, iodine, growth hormones, enzymes, and antibacterial agents are most common.
Negative-pressure wound therapy (NPWT): A unique type of dressing which consists of a foam dressing surrounded with an airtight film and then connected to power-assisted vacuum suction, creating a negative pressure environment over the wound. This negative pressure environment is thought to promote formation of granulation tissue and decrease inflammatory fluid. NPWT has the added benefit of requiring less frequent dressing changes, a process that is often painful for patients. Since its introduction, NPWT has been adopted broadly for chronic non-healing wounds but can also be applied to acute wounds that cannot be closed primarily due to swelling or concern for infection. This type of dressing is typically applied in the operating room but can be done at bedside with appropriate analgesia.
Maintenance and surveillance
Ideally, wound dressings should be changed daily to promote a clean environment and allow for daily evaluation of wound progression. Highly exudative wounds and infected wounds should be monitored closely and may require more frequent dressing changes. Negative pressure wound dressings can be changed less frequently, typically every 2–3 days. Wound progression over time can be monitored with transparent sheet tracings or photographs, each of which produces reliable measurements of wound surface area.
Alternative medicine
There is moderate evidence that honey is more effective than antiseptic followed by gauze for healing wounds infected after surgical operations. There is a lack of quality evidence relating to the use of honey on other types of wounds, such as minor acute wounds, mixed acute and chronic wounds, pressure ulcers, Fournier's gangrene, venous leg ulcers, diabetic foot ulcers and Leishmaniasis.
Therapeutic touch has been proposed as a complementary therapy in wound healing; however, there is no high-quality research supporting its use as an evidence-based clinical intervention. More than 400 species of plants have been identified as potentially useful for wound healing. Only three randomized controlled trials, however, have been done for the treatment of burns.
History
From the Classical Period to the Medieval Period, the body and the soul were believed to be intimately connected, based on several theories put forth by the philosopher Plato. Wounds on the body were believed to correlate with wounds to the soul and vice versa; wounds were seen as an outward sign of an inward illness. Thus, a man who was wounded physically in a serious way was said to be hindered not only physically but spiritually as well. If the soul was wounded, that wound may also eventually become physically manifest, revealing the true state of the soul. Wounds were also seen as writing on the "tablet" of the body. Wounds acquired in war, for example, told the story of a soldier in a form which all could see and understand, and the wounds of a martyr told the story of their faith.
Research
In humans and mice it has been shown that estrogen might positively affect the speed and quality of wound healing.
| Biology and health sciences | Injury | null |
338199 | https://en.wikipedia.org/wiki/Hasse%20diagram | Hasse diagram | In order theory, a Hasse diagram is a type of mathematical diagram used to represent a finite partially ordered set, in the form of a drawing of its transitive reduction. Concretely, for a partially ordered set (S, ≤) one represents each element of S as a vertex in the plane and draws a line segment or curve that goes upward from one vertex x to another vertex y whenever y covers x (that is, whenever x ≤ y and there is no z distinct from x and y with x ≤ z ≤ y). These curves may cross each other but must not touch any vertices other than their endpoints. Such a diagram, with labeled vertices, uniquely determines its partial order.
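The covering relation in this definition can be computed mechanically for any finite poset: a pair forms an edge of the Hasse diagram exactly when no third element lies strictly between its members. The Python sketch below is an illustration rather than part of the article; the function name covering_pairs and the divisibility example are assumptions made for demonstration.

```python
# Extract the covering relation (the edges of a Hasse diagram) from a
# finite poset. `leq` may be any partial-order predicate: reflexive,
# antisymmetric, and transitive on the given elements.

def covering_pairs(elements, leq):
    """Return all pairs (x, y) such that y covers x."""
    pairs = []
    for x in elements:
        for y in elements:
            if x == y or not leq(x, y):
                continue
            # y covers x iff no distinct z lies strictly between them.
            if not any(z != x and z != y and leq(x, z) and leq(z, y)
                       for z in elements):
                pairs.append((x, y))
    return pairs

# Example: the divisors of 12 ordered by divisibility.
divisors = [1, 2, 3, 4, 6, 12]
print(covering_pairs(divisors, lambda a, b: b % a == 0))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

Drawing these seven edges, with each pair's larger element placed higher, gives the usual diamond-shaped Hasse diagram of the divisibility order on 12.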
Hasse diagrams are named after Helmut Hasse (1898–1979); according to Garrett Birkhoff, they are so called because of the effective use Hasse made of them. However, Hasse was not the first to use these diagrams. One example that predates Hasse can be found in an 1895 work by Henri Gustave Vogt. Although Hasse diagrams were originally devised as a technique for making drawings of partially ordered sets by hand, they have more recently been created automatically using graph drawing techniques.
In some sources, the phrase "Hasse diagram" has a different meaning: the directed acyclic graph obtained from the covering relation of a partially ordered set, independently of any drawing of that graph.
Diagram design
Although Hasse diagrams are simple, as well as intuitive, tools for dealing with finite posets, it turns out to be rather difficult to draw "good" diagrams. The reason is that, in general, there are many different possible ways to draw a Hasse diagram for a given poset. The simple technique of just starting with the minimal elements of an order and then drawing greater elements incrementally often produces quite poor results: symmetries and internal structure of the order are easily lost.
The following example demonstrates the issue. Consider the power set of a 4-element set, ordered by inclusion (⊆). Below are four different Hasse diagrams for this partial order. Each subset has a node labelled with a binary encoding that shows whether a certain element is in the subset (1) or not (0):
The first diagram makes clear that the power set is a graded poset. The second diagram has the same graded structure, but by making some edges longer than others, it emphasizes that the 4-dimensional cube is a combinatorial union of two 3-dimensional cubes, and that a tetrahedron (abstract 3-polytope) likewise merges two triangles (abstract 2-polytopes). The third diagram shows some of the internal symmetry of the structure. In the fourth diagram the vertices are arranged in a 4×4 grid.
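The binary encoding used in these diagrams also makes the example easy to generate programmatically: two subsets form a covering pair exactly when their bitmasks differ in a single bit. The following sketch is illustrative and not taken from the article.

```python
# Generate the vertices and Hasse-diagram edges of the power set of a
# 4-element set, with subsets encoded as 4-bit strings as in the text.

n = 4
nodes = [format(mask, f"0{n}b") for mask in range(2 ** n)]

# An edge joins a subset to each superset that adds exactly one element,
# i.e. sets exactly one bit that was previously 0.
edges = [(format(mask, f"0{n}b"), format(mask | (1 << i), f"0{n}b"))
         for mask in range(2 ** n)
         for i in range(n)
         if not mask & (1 << i)]

print(len(nodes), "subsets,", len(edges), "covering edges")
# 16 subsets, 32 covering edges
```

These 16 vertices and 32 edges form exactly the graph drawn four different ways in the diagrams described above; only the vertex placement differs.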
Upward planarity
If a partial order can be drawn as a Hasse diagram in which no two edges cross, its covering graph is said to be upward planar. A number of results on upward planarity and on crossing-free Hasse diagram construction are known:
If the partial order to be drawn is a lattice, then it can be drawn without crossings if and only if it has order dimension at most two. In this case, a non-crossing drawing may be found by deriving Cartesian coordinates for the elements from their positions in the two linear orders realizing the order dimension, and then rotating the drawing counterclockwise by a 45-degree angle (see the sketch following this list).
If the partial order has at most one minimal element, or it has at most one maximal element, then it may be tested in linear time whether it has a non-crossing Hasse diagram.
It is NP-complete to determine whether a partial order with multiple sources and sinks can be drawn as a crossing-free Hasse diagram. However, finding a crossing-free Hasse diagram is fixed-parameter tractable when parametrized by the number of articulation points and triconnected components of the transitive reduction of the partial order.
If the y-coordinates of the elements of a partial order are specified, then a crossing-free Hasse diagram respecting those coordinate assignments can be found in linear time, if such a diagram exists. In particular, if the input poset is a graded poset, it is possible to determine in linear time whether there is a crossing-free Hasse diagram in which the height of each vertex is proportional to its rank.
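As an informal illustration of the first result in this list, the sketch below derives coordinates for a lattice of order dimension two from two realizing linear extensions and then applies the 45-degree counterclockwise rotation, here folded into the integer transform (x, y) → (x − y, x + y), which is that rotation up to a uniform scale. The function name and the example realizer for the divisors of 12 are assumptions for demonstration purposes.

```python
# Dominance-drawing construction for a two-dimensional lattice: place
# each element at its pair of positions in the two realizing linear
# orders, then rotate 45 degrees so the order relation points upward.

def hasse_coordinates(order1, order2):
    """order1, order2: linear extensions whose intersection is the poset."""
    pos1 = {e: i for i, e in enumerate(order1)}
    pos2 = {e: i for i, e in enumerate(order2)}
    # (x, y) -> (x - y, x + y): a scaled 45-degree counterclockwise
    # rotation, so x + y becomes the height of each vertex.
    return {e: (pos1[e] - pos2[e], pos1[e] + pos2[e]) for e in order1}

# The divisibility order on the divisors of 12 has order dimension two;
# these two linear extensions realize it (an assumed example).
coords = hasse_coordinates([1, 2, 4, 3, 6, 12], [1, 3, 2, 6, 4, 12])
print(coords)
# {1: (0, 0), 2: (-1, 3), 4: (-2, 6), 3: (2, 4), 6: (1, 7), 12: (0, 10)}
```

In the resulting placement every covering edge points upward and, for this dimension-two lattice, no two edges cross.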
Use in UML notation
In software engineering and object-oriented design, the classes of a software system and the inheritance relations between them are often depicted using a class diagram, a form of Hasse diagram in which the edges connecting classes are drawn as solid line segments with an open triangle at the superclass end.
| Mathematics | Order theory | null |
338314 | https://en.wikipedia.org/wiki/Personal%20watercraft | Personal watercraft | A personal watercraft (PWC), also called Jet Ski or water scooter, is a primarily recreational watercraft that is designed to hold only a small number of occupants, who sit or stand on top of the craft, not within the craft as in a boat.
Prominent brands of PWCs include Jet Skis and Sea-Doos.
PWCs have two style categories. The first and the most popular is a compact runabout, typically holding no more than two or three people, who mainly sit on top of the watercraft as one does when riding an ATV or snowmobile. The second style is a "stand-up" type, typically built for only one occupant who operates the watercraft standing up as in riding a motorized scooter; it is used more for doing tricks, racing, and in competitions. Both styles have an inboard engine driving a pump-jet that has a screw-shaped impeller to create thrust for propulsion and steering. Most are designed for two or three people, though four-passenger models exist. Many of today's models are built for more extended use and have the fuel capacity to make long cruises, in some cases even beyond .
Personal watercraft are often referred by the trademarked brand names of Kawasaki (Jet Ski), Yamaha (WaveRunner), Bombardier (Sea-Doo), Elaqua (E-PWC) and Honda (AquaTrax).
Personal watercraft boat conversion kits exist as Waveboats.
The United States Coast Guard defines a personal watercraft, amongst other criteria, as a jet-drive boat less than long. There are many larger "jetboats" not classed as PWCs, some more than long.
History
Water scooters—as they were originally termed—were first developed in the United Kingdom and Europe in the mid-1950s, with models such as the British 200cc propeller-driven Vincent Amanda, and the German Wave Roller. Two thousand Vincent Amandas were exported to Australia, Asia, Europe and the United States.
The Sea Skimmer was introduced in 1961 as a highly maneuverable version of a propelled surfboard. It was long, powered by an inboard/outboard motor and reached speeds up to . The rider lay on the boat, controlling the speed with hand throttles and using the feet as rudders. Originally manufactured in Kansas City, the operation moved to Boynton, Florida, in 1962, and the name was changed to Aqua-Skimmer. Aqua-Skimmer ceased operations in 1962 and sold its inventory to the military. Renamed the Aqua Dart (Aqua Dart Inc.), the craft was modified for military requirements and saw service in 1962 river reconnaissance missions in Vietnam and other military missions until the 1970s.
In the 1960s, the idea was developed further by Clayton Jacobson II of Lake Havasu City, Arizona, USA. Originally a motocross enthusiast, Jacobson designed his craft in the mid-1960s; it was powered by an internal pump-jet rather than an outboard motor, was made entirely of aluminum, and had a fixed, upright handle. Jacobson eventually quit his job in banking to devote himself to developing the idea, and had a working prototype by 1965. It differed slightly from modern personal watercraft but had definite similarities. He completed a second prototype a year later, made of fiberglass.
The first Clayton-type PWC to reach the market was designed by Bombardier in the late 1960s. Bombardier's original designs were not very popular and Bombardier left the business before 1970.
In Greece, an inventor named Dimitrios T. Moraitidis built a prototype and filed a patent with the government of the Kingdom of Greece on 5 June 1970, with serial number 40056. He never exploited the invention commercially. He died on March 5, 2022.
Stand-up PWCs were first produced by the Japanese company Kawasaki (under the Jet Ski brand) in 1972, and appeared on the US market in 1973. These were mass-produced boats to be used by only one rider. While they are still produced today, the more popular design is the sit-down variety of PWC. These sit-down runabouts have been produced by Kawasaki (Jetski), Bombardier (Sea-Doo), Yamaha (WaveRunner), Honda (AquaTrax), Polaris (Sealion) and Arctic Cat (Tigershark). As of 2010, the major manufacturers of PWCs were Kawasaki, Bombardier and Yamaha. Both Yamaha and Kawasaki continue to sell stand-up models but it is a small percentage of the overall market.
Electric PWCs were commercialized in the early 2020s. Electric watercraft are increasing in popularity as gasoline engines produce greenhouse gases and can eject motor oil and gasoline directly into waterways.
Sports
PWC racing competitions take place around the world. There are several disciplines: closed-circuit speed races, offshore speed races, endurance races, freestyle, and freeride events. For all these types of events, with the exception of freestyle, there are at least two categories: sit-down (saddle) jets and stand-up jets. For speed races, craft are generally classified according to the degree of authorized modifications: minor modifications fall into the so-called "stock" category, intermediate modifications into the so-called "limited" category, and more extensive modifications into the category known as "F1". In freestyle and freeride, these categories do not exist; rather, competitors are classified according to the type of watercraft used (stand-up or sit-down).
The sport is ruled by the World Powerboating Federation (Union Internationale Motonautique, U.I.M.) recognised by the International Olympic Committee. The current official world series, established in 1996, is the Aquabike World Championship. The sport is also established at the national level and is ruled by each national federation's member of the U.I.M. Aquabike World Championship is known among the motorsports with most different national entries for each competition, reaching up to 32 nationalities and 140 riders registered to compete in Italy in 2018.
Other private competitions also exist, such as P1 AquaX, which is a personal watercraft racing series, first launched in the UK in May 2011 by London-based sports promoter Powerboat P1. The series attracted a mix of new and current racers to a new type of racing and in 2013, P1 rolled out a second series in the USA. Such was the uptake that the original format needed revising to cope with the influx of new riders and by the end of 2015 over 400 riders from 11 countries had registered to compete in an AquaX event.
In the United States, the main sanctioning bodies are the International Jet Sport Boating Association (IJSBA) and Pro Watercross (PWX). As of 2022, the sport was experiencing considerable fragmentation and conflict due to poor management of the sanctioning bodies and non-constructive competition between organizations. The IJSBA World Finals competition is traditionally held in Lake Havasu City, Arizona, in early October. The Pro Watercross World Finals are typically held in Naples, Florida, in November.
Non-recreational uses
PWCs are small, fast, easily handled, fairly easy to use, and affordable, and their propulsion systems do not have external propellers, making them in some respects safer than small motorboats for swimmers and wildlife. For these reasons, they are used for fishing, one of the PWC industry's fastest-growing segments.
Lifeguards use PWCs equipped with rescue platforms to rescue water users from trouble, as well as flood survivors, and carry them to safety. Police and rangers use them to enforce laws in coastal waters, lakes and rivers. A PWC combined with a wash-reduction system, carrying waterproof loudspeaker equipment and GPS for instructions and distance measurement, has reportedly been used by assistant coaches for rowing sports on the River Tyne.
Further, PWCs are used by the U.S. Navy as surface targets. When equipped with GPS, electronic compass, radar reflector, and a radio modem, the PWC can be controlled remotely with a two-way link. Its small shipboard footprint allows it to be stored in and deployed from the smallest of vessels, and it has been used for target practice for armaments of sizes from cannon to small arms.
Hazards
Apart from the obvious hazards of collisions and mechanical breakdowns common to all vehicles, operating or riding a PWC can involve a risk of orifice injuries. These injuries are typical of the kinds of injuries that waterskiers experience as a result of falling into the water at speed. Such injuries can occur from simply falling in the water at speed or they can occur from the output end of the pump jet. A rider who falls (or is ejected) off the back can land directly in the path of the PWC's high-pressure jet of water. Unless a rider is appropriately dressed in garments made out of a strong, thick substance like neoprene (as is commonly found in wetsuits), the jet may penetrate any orifice it reaches. All major PWC manufacturers warn about this risk and recommend that passengers wear wet suit bottoms or equivalent protection. The American Waterski Racing Association recommends that all of their racers wear wet suit bottoms for this same reason.
Such orifice injuries can result in permanent disability or death. For example, in 2006, the California Court of Appeal for the First Appellate District upheld a $3.7 million Napa County jury verdict against Polaris Industries arising out of one such incident (which had devastating effects on the victim's lower abdomen).
It is also possible for multiple riders on the same PWC to sustain orifice injuries in a single accident, as actually occurred in a 2007 accident at Mission Bay which resulted in a San Diego County jury verdict affirmed in full on appeal in 2014.
Another noteworthy risk of injury is known as off throttle steering, which results from the lack of steering capability while off throttle in certain models of PWCs. This can result in death or serious bodily injuries.
While also rare, spinal injuries can occur while surf jumping and, potentially, wake jumping. The PWC manufacturers' owner's manuals all include warnings regarding jumping at excessive heights, or operating a PWC if there is a prior history of back injury. The current on-product labels say "Jumping wakes or waves can increase the risk of spinal/backbone injuries (paralysis)". The current Kawasaki owner's manual provides: "Slow down before crossing waves. Do not ride if you have a back condition. High speed operation in choppy or rough water may cause back injuries."
Another rare, but unique injury risk with jetboats, is being sucked into the intake side of the pump jet. Current PWC products contain on-product warnings that state: "Keep away from Intake Grate while the engine is on. Items such as long hair, loose clothing, or PFD straps can become entangled in moving parts and result in severe injury or drowning".
There have been fatal accidents involving PWCs. In a notable case, U.S. astronaut Alan G. Poindexter died in 2012 from injuries sustained in a Jet Ski accident in Florida.
| Technology | Naval transport | null |
1005294 | https://en.wikipedia.org/wiki/Kentrosaurus | Kentrosaurus | Kentrosaurus is a genus of stegosaurid dinosaur from the Late Jurassic of the Lindi Region of Tanzania. The type species is K. aethiopicus, named and described by German palaeontologist Edwin Hennig in 1915. Often thought to be a "primitive" member of the Stegosauria, it has been found by several recent cladistic analyses to be more derived than many other stegosaurs, and a close relative of Stegosaurus from the North American Morrison Formation within the Stegosauridae.
Fossils of K. aethiopicus have been found only in the Tendaguru Formation, dated to the late Kimmeridgian and early Tithonian ages, about 152 million years ago. Hundreds of bones were unearthed by German expeditions to German East Africa between 1909 and 1912. Although no complete skeletons are known, the remains provided a nearly complete picture of the build of the animal. In the Tendaguru Formation, it coexisted with a variety of dinosaurs such as the carnivorous theropods Elaphrosaurus and Veterupristisaurus, giant herbivorous sauropods Giraffatitan and Tornieria, and the dryosaurid Dysalotosaurus.
Kentrosaurus generally measured around in length as an adult, and weighed about . It walked on all fours with straight hindlimbs. It had a small, elongated head with a beak used to bite off plant material that would be digested in a large gut. It had a row, probably double, of small plates running down its neck and back. These plates gradually merged into spikes on the hip and tail. The longest spikes were at the tail end and were used to actively defend the animal. There was also a long spike on each shoulder. The thigh bones come in two different types, suggesting that one sex was larger and more stout than the other.
Discovery and naming
The first fossils of Kentrosaurus were discovered by the German Tendaguru Expedition in 1909, recognised as belonging to a stegosaur by expedition leader Werner Janensch on 24 July 1910, and described by German palaeontologist Edwin Hennig in 1915. The name Kentrosaurus was coined by Hennig and comes from the Greek κέντρον (kentron), meaning "sharp point" or "prickle", and σαῦρος (sauros), meaning "lizard"; Hennig added the specific name aethiopicus to denote the provenance from Africa. Soon after its description, a controversy arose over the stegosaur's name, which is very similar to that of the ceratopsian Centrosaurus. Under the rules of biological nomenclature, which forbid homonymy, two animals may not be given the same name. Hennig renamed his stegosaur Kentrurosaurus, "pointed-tail lizard", in 1916, while Hungarian paleontologist Franz Nopcsa renamed the genus Doryphorosaurus, "lance-bearing lizard", the same year. If a renaming had been necessary, Hennig's would have had priority. However, because the spelling is different, both Doryphorosaurus and Kentrurosaurus are unneeded replacement names; Kentrosaurus remains the valid name for the genus, with Kentrurosaurus and Doryphorosaurus being its junior objective synonyms.
Although no complete individuals were found, some material was discovered in association, including a nearly complete tail, hip, several dorsal vertebrae and some limb elements of one individual. These form the core of a mount in the Museum für Naturkunde by Janensch. The mount was dismantled during the museum renovation in 2006/2007, and re-mounted in an improved pose by Research Casting International. Some other material, including a braincase and spine, was thought to have been misplaced or destroyed during World War II. However, all the supposedly lost cranial material was later found in a drawer of a basement cupboard.
From 1909 onwards, Kentrosaurus remains were uncovered in four quarries in the Mittlere Saurierschichten (Middle Saurian Beds) and one quarry in the Obere Saurierschichten (Upper Saurian Beds). During four field seasons, the German Expedition found over 1200 bones of Kentrosaurus, belonging to about fifty individuals, many of which were destroyed during the Second World War. Today, almost all remaining material is housed in the Museum für Naturkunde Berlin (roughly 350 remaining specimens), while the museum of the Institute for Geosciences of the University of Tübingen houses a composite mount, roughly 50% of it being original bones.
In the original description, Hennig did not designate a holotype specimen. However, in a detailed monography on the osteology, systematic position and palaeobiology of Kentrosaurus in 1925, Hennig picked the most complete partial skeleton, today inventorised as MB.R.4800.1 through MB.R.4800.37, as a lectotype (see syntype). This material includes a nearly complete series of tail vertebrae, several vertebrae of the back, a sacrum with five sacral vertebrae and both ilia, both femora and an ulna, and is included in the mounted skeleton at the Museum für Naturkunde in Berlin, Germany. The type locality is Kindope, Tanzania, north of Tendaguru hill.
Unaware that Hennig had already defined a lectotype, Peter Galton selected two dorsal vertebrae, specimens MB.R.1930 and MB.R.1931, from the material figured in Hennig's 1915 description, as 'holotypes'. This definition of a holotype is not valid, because Hennig's selection has priority. In 2011, Heinrich Mallison clarified that all the material known to Hennig in 1915, i.e. all the bones discovered before 1912, when Hermann Heck concluded the last German excavations, are paralectotypes, and that MB.R.4800 is the correct lectotype.
Description
Kentrosaurus was a small stegosaur. It had the typical dinosaurian body bauplan, characterised by a small head, a long neck, short forelimbs and long hindlimbs, and a long, horizontal and muscular tail. Typical stegosaurid traits included the elongation and flatness of the head, the powerful build of the forelimbs, erect and pillar-like hindlimbs and an array of plates and spikes running along both sides of the top mid-line of the animal.
Size and posture
Kentrosaurus aethiopicus was a relatively small stegosaur, reaching in length and in body mass. Some specimens suggest that relatively larger individuals could have existed. These specimens are comparable to some Stegosaurus specimens in the development of the olecranon process.
The long tail of Kentrosaurus results in a position of the center of mass that is unusually far back for a quadrupedal animal. It rests just in front of the hip, a position usually seen in bipedal dinosaurs. However, the femora are straight in Kentrosaurus, as opposed to typical bipeds, indicating a straight and vertical limb position. Thus, the hindlimbs, though powered by massive thigh muscles attached to a long ilium, did not support the animal alone, and the very robust forelimbs took up 10 to 15% of the bodyweight. The center of mass was not heavily modified by the osteoderms (bony structures in skin) in Kentrosaurus or Stegosaurus, which allowed the animals to stay mobile despite their armament. The hindlimbs’ thigh muscles were very powerful, allowing Kentrosaurus to reach a tripod stance on its hindlegs and tail.
Skull and dentition
Eight specimens from the skull, mandible, and teeth have been collected and described from the Tendaguru Formation, most of them being isolated elements. Two quadrates (bones from the jaw joint) were referred to Kentrosaurus, but they instead belong to a juvenile brachiosaurid.
The long and narrow skull was small in proportion to the body. It had a small antorbital fenestra, the hole between the nose and eye common to most archosaurs, including modern birds, though lost in extant crocodylians. The skull's low position suggests that Kentrosaurus may have been a browser of low-growing vegetation. This interpretation is supported by the absence of premaxillary teeth and their likely replacement by a horny beak or rhamphotheca. The presence of a beak extended along much of the jaws may have precluded the presence of cheeks in stegosaurs. Due to its phylogenetic position, it is unlikely that Kentrosaurus had an extensive beak like Stegosaurus and it instead probably had a beak restricted to the jaw tips. Other researchers have interpreted these ridges as modified versions of similar structures in other ornithischians which might have supported fleshy cheeks, rather than beaks.
There are two nearly complete braincases known from Kentrosaurus, though they exhibit some taphonomic distortion. The frontals and parietals are flat and broad, with the latter bearing two transversely concave ventral sides with a ridge running down the middle that divides them. The lateral surface of the frontals forms part of the orbit (eye socket) and the medial side creates the anterior part of the endocranial cavity (braincase). The basioccipital (where the skull articulated with the cervical vertebrae) forms the posterior floor of the endocranial cavity and the occipital condyle, which is large and spherical in Kentrosaurus. The rest of the braincase is formed by the presphenoid, which composes the anterior end. The overall braincase morphology is very similar to those of Tuojiangosaurus, Huayangosaurus, and Stegosaurus. However, the occipital condyle lies closer to the basisphenoid tubera (bone at the front of the braincase) in Kentrosaurus and Huayangosaurus than in Tuojiangosaurus and some specimens of Stegosaurus. Because the braincases of dinosaurs mold the brain more closely than those of other reptiles, endocasts of Kentrosaurus can be reconstructed from the preserved fossils. The brain is relatively short, deep, and small, with strong cerebral and pontine flexures and a steeply inclined posterodorsal edge when compared to those of other ornithischians. There is a small dorsal projection in the endocast corresponding to an unossified (lacking bone) region between the top of the supraoccipital (bone at the top-back of the braincase) and the overlying parietal, a region that was likely covered in cartilage. This characteristic is seen in other ornithischians. Because of the prominent flexures, many aspects of the brain can only be interpreted from the preserved structures.
In the mandible (lower jaw), only an incomplete right dentary is known from Kentrosaurus. The deep dentary is almost identical in shape to that of Stegosaurus, albeit much smaller. Similarly, the tooth is a typical stegosaurian tooth: small, with a widened base and vertical grooves creating five ridges. The dentary has 13 preserved alveoli on the dorsomedial side and they are slightly convex in lateral and dorsal views. On the surface adjacent to the alveoli, there is a shallow groove bearing small foramina (small openings in bone) that is similar to grooves on the dentary of the Cretaceous neornithischian Hypsilophodon, with one foramen per tooth position. Stegosaurian teeth were small, triangular, and flat; wear facets show that they did grind their food. A single complete cheek tooth is preserved, with a large crown and long root. The crown notably has fewer marginal denticles and a prominent cingulum compared to Stegosaurus, Tuojiangosaurus, and Huayangosaurus.
Postcrania
The neck was composed of 13 cervical (neck) vertebrae, the first being the atlas, which was strongly fused to the occipital region of the skull and followed by the axis. The other 11 cervicals had hourglass-shaped centra (the base of a vertebra) and rounded ventral keels. The diapophyses are large and strongly angled posteriorly and parallel to each other. The spinous processes got larger towards the posterior end, while the postzygapophyses became smaller and less horizontal, giving the anterior part of the neck considerable lateral mobility. The dorsal column consists of 13 dorsal (back) vertebrae which are tall and have short centra. They have a neural arch more than twice as high as the centrum, the vertebral body, and almost completely occupied by the extremely spacious neural canal, a trait unique to Kentrosaurus. The diapophyses too were laterally elongated, creating a Y-shape in anterior view. The sacrum (part of pelvis with vertebrae) consists of 6 fused centra, the first being a loose sacrodorsal, while the rest of the centra's transverse processes (extensions of bone) are fused to the dorsal parts of the sacral ribs into a solid sacral plate. The ribs also fuse to the ilium (the upper part of the pelvis), creating a fully ankylosed and solid sacrum. The ilium is notable in that its preacetabular process (front blade) widens laterally, to the front outer side, and does not taper, unlike in all other stegosaurs. Another characteristic is that the length of the ilium equals, or is greater than, that of the thigh bone. The caudal (tail) vertebrae are 29 in number, though caudals 27–29 are coossified for attachment to the thagomizer (tail spikes). The caudal vertebrae are unique, as they have a combination of transverse processes up to the 28th vertebra and rod-shaped processes on the posterior caudals. These posterior caudal processes have narrow bases that do not touch the plate formed by the fusion of the processes of the sacral vertebrae. Kentrosaurus can be distinguished from other members of the Stegosauria by a number of processes of the vertebrae, which in the tail do not run sub-parallel, as in most dinosaurs. In the front third of the tail, they point backwards, the usual direction. In the middle tail, however, they are almost vertical, and further back they are hook-shaped and point obliquely forward. The chevrons, bones pointing downward from the underside of the tail vertebrae, have the shape of an inverted T.
The scapula (shoulder blade) is sub-rectangular, with a robust blade. Though it is not always perfectly preserved, the acromion ridge is slightly smaller than in Stegosaurus. The blade is relatively straight, although it curves towards the back. There is a small bump on the back of the blade that would have served as the base of the triceps muscle. The coracoid is sub-circular. The fore limbs were much shorter than the stocky hind limbs, which resulted in an unusual posture. The humerus (upper arm bone), as in other stegosaurs, has greatly expanded proximal and distal ends that were attachment points between the coracoid and ulna-radius (forearm bones) respectively. The radius was larger than the ulna and had a wedge-shaped proximal end. The manus (hand) was small and had five digits, two of which bore only a single phalanx. The hindlimbs were much larger and are likewise similar to those of other stegosaurs. The femur (thigh bone) is the longest element in the body, with the largest known femur measuring 665 mm from the proximal to distal end. The tibia (shin bone) was wide and robust, while the fibula was skinny and thin without a greatly expanded distal end. The pes (foot) terminated in 3 toes, all of which had hoof-like unguals (claws).
Armour
Typically for a stegosaur, Kentrosaurus had extensive osteoderm (bony structures in the skin) covering, including small plates (probably located on the neck and anterior trunk), and spikes of various shapes. The spikes of Kentrosaurus are very elongated, with one specimen having a bone core length of 731 millimetres. The plates have a thickened section in the middle, as if they were modified spines. The spikes and plates were likely covered by horn. Aside from a few exceptions they were not found in close association with other skeletal remains. Thus, the exact position of most osteoderms is uncertain. A pair of closely spaced spikes was found articulated with a tail tip, and a number of spikes were found apparently regularly spaced in pairs along the path of an articulated tail.
Hennig and Janensch, while grouping the dermal armour elements into four distinct types, recognised an apparently continuous change of shape among them, shorter and flatter plates at the front gradually merging into longer and more pointed spikes towards the rear, suggesting an uninterrupted distribution along the entire body, in fifteen pairs. Because each type of osteoderm was found in mirrored left and right versions, it seems probable that all types of osteoderms were distributed in two rows along the back of the animal, a marked contrast to the better-known North American Stegosaurus, which had one row of plates on the neck, trunk and tail, and two rows of spikes on the tail tip. There is one type of spike that differs from all others in being strongly, and not only slightly, asymmetrical, and having a very broad base. Because of its bone morphology, classic reconstructions placed it on the hips, at the iliac blade, while many recent reconstructions place it on the shoulder, because a similarly shaped spike is known to have existed on the shoulder in the Chinese stegosaurs Gigantspinosaurus and Huayangosaurus.
Classification and species
Like the spikes and shields of ankylosaurs, the bony plates and spines of stegosaurians evolved from the low-keeled osteoderms characteristic of basal thyreophorans. Galton (2019) interpreted plates of an armored dinosaur from the Lower Jurassic (Sinemurian-Pliensbachian) Lower Kota Formation of India as fossils of a member of Ankylosauria; the author argued that this finding indicates a probable early Early Jurassic origin for both Ankylosauria and its sister group Stegosauria.
The vast majority of stegosaurian dinosaurs thus far recovered belong to the Stegosauridae, which lived in the later part of the Jurassic and early Cretaceous, and which were defined by Paul Sereno as all stegosaurians more closely related to Stegosaurus than to Huayangosaurus. This group is widespread, with members across the Northern Hemisphere, Africa and possibly South America. The South American remains come from Chubut, Argentina and consist only of a partial humerus, but the anatomy of the humerus is very similar to that of Kentrosaurus and both date to the Late Jurassic. In a phylogenetic analysis, the Chubut stegosaurid was recovered in polytomy with Kentrosaurus as basal stegosaurids, further suggesting that they are closely related.
In Hennig's 1915 description, Kentrosaurus was assigned to the family Stegosauridae due to the preservation of dermal armor and features like posterodorsally angled neural spines on the caudal vertebrae. This is confirmed by modern cladistic analyses, although in 1915 Stegosauridae was a far more inclusive concept that included some taxa now classified as ankylosaurs. A subsequent narrowing of this concept caused Kentrosaurus, until the 1980s seen as a typical "primitive" stegosaurian, to be placed in a more derived, higher position in the stegosaur evolutionary tree. However, recent analyses have consistently found Kentrosaurus to be in Stegosauridae, though typically as one of the most basal genera in the family. Kentrosaurus has many traits not seen in other stegosaurids but seen in basal stegosaurians, such as the presence of a parascapular spine and maxillary teeth with only seven denticles at the margin.
The type and sole accepted species of Kentrosaurus is Kentrosaurus aethiopicus, named by Hennig in 1915. Fragmentary fossil material from Wyoming, named Stegosaurus longispinus by Charles Gilmore in 1914, was in 1993 classified as a North American species of Kentrosaurus, as K. longispinus. However, this action was not accepted by the paleontological community, and S. longispinus has been assigned to its own genus, Alcovasaurus, differing from Kentrosaurus in having more elongated tail spikes and in the structure of the pelvis and vertebrae. Cladogram of the phylogenetic analysis of Stegosauridae conducted by Maidment et al. (2019), which recovers a distinct Alcovasaurus:
Paleobiology
Feeding
Like all ornithischians, Kentrosaurus was a herbivore. The fodder was barely chewed and swallowed in large chunks. One hypothesis on stegosaurid diet holds that they were low-level browsers, eating foliage and low-growing fruit from various non-flowering plants. Kentrosaurus was capable of eating at heights of up to when on all fours. It may also have been possible for it to rear up on its hindlegs to reach vegetation higher in trees.
With its centre of mass close to the hindlimbs, the animal could potentially support itself as it stood up. The hips were likely capable of allowing a vertical trunk rotation of about 60 degrees, and the tail would probably either have been lifted fully, not blocking this movement, or have had enough curvature to rest on the ground; thus it could have provided additional support, though precisely because of this flexibility it is not certain whether much support was actually provided: it was not stiff enough to function as a "third leg", as had been suggested by Robert Thomas Bakker. In this pose, Kentrosaurus could have fed at heights of .
Sexual dimorphism
Differences in the proportions, not the size, of the femora (thighbones) led Holly Barden and Susannah Maidment to conclude that Kentrosaurus probably showed sexual dimorphism: the femora occur in two morphs, one more robust than the other. The occurrence ratio of the robust morph to the gracile one was 2:1, and the more numerous robust animals were likely the females. Because of this ratio, it was considered reasonable to assume that Kentrosaurus males mated with more than one female, a behaviour also found in other vertebrates.
A problem with this ratio is that the specimens studied died in the same place, but probably not in a sudden mass death, and so do not represent a single herd or contemporary population. The results may also have been distorted by a greater chance of robust animals being fossilised or discovered. In an earlier study, Galton in 1982 suggested that individual differences in the sacral rib count of both Kentrosaurus and Dacentrurus might be an indication of dimorphism: females would have had an extra pair of sacral ribs, with the first sacral vertebra also connected to the ilium in addition to the subsequent four sacrals.
Reproduction and growth
As the plates and spikes would have been obstacles during copulation, it is possible that pairs mated back-to-back with the female staying still in a lordosis posture as the male maneuvers his penis into her cloaca. The shoulder spikes would have made the female unable to lie on her side during mating as is proposed for Stegosaurus.
In 2013, a study by Ragna Redelstorff and colleagues concluded that the bone histology of Kentrosaurus indicated a higher growth rate than reported for Stegosaurus and Scutellosaurus, in view of the relatively rapid deposition of highly vascularised fibrolamellar bone. As Stegosaurus was larger than Kentrosaurus, this contradicts the general rule that larger dinosaurs grew more quickly than smaller ones.
Defense
Because the tail had at least forty caudal vertebrae, it was highly mobile. It could possibly swing through an arc of 180 degrees, covering the entire half circle behind it. Swing speeds at the tail end may have been as high as 50 km/h. Continuous rapid swings would have allowed the spikes to slash open the skin of an attacker or to stab the soft tissues and break the ribs or facial bones. More directed blows would have resulted in the sides of the spikes fracturing even sturdy longbones of the legs by blunt trauma. These attacks would have crippled small and medium-sized theropods and may even have done some damage to large ones. Earlier interpretations of the defensive behaviour of Kentrosaurus included the suggestion that the animal might have charged to the rear, to run through attackers with its spines, in the way of modern porcupines.
Though Kentrosaurus likely stood with forelimbs erect like in other dinosaurs, it is hypothesised that the animal adopted a sprawling posture when defending itself. Its neck was flexible enough to allow it to keep sight of predators, as it could reach the sides of its body with its snout and look over the back. In addition, the posterior position of the center of mass may not have been advantageous for rapid locomotion, but meant that the animal could quickly rotate around the hips by pushing sideways with the arms, keeping the tail pointed at the attacker. Kentrosaurus was nevertheless not invulnerable. A quick predator could have made it to the tail base (where the impact speed would be much lower) when the tail passed and the neck and upper-part of the body would have been unprotected by the tail swings. A successful predation of Kentrosaurus may have required group hunting. Compared to the more robust spikes of Stegosaurus, the thinner spikes of Kentrosaurus were at greater risk of bending.
Paleoecology
Kentrosaurus lived in what is now Tanzania in the Late Jurassic Tendaguru Formation. The main Kentrosaurus quarries were located in the Middle Saurian Beds dating from the upper Kimmeridgian. Some remains were found in the Upper Saurian Beds dating from the Tithonian. Since 2012, the boundary between the Kimmeridgian and Tithonian has been dated at 152.1 million years ago.
The Tendaguru ecosystem primarily consisted of three types of environment: shallow, lagoon-like marine environments, tidal flats and low coastal environments; and vegetated inland environments. The marine environment existed above the fair weather wave base and behind siliciclastic and ooid barriers. It appeared to have had little change in salinity levels and experienced tides and storms. The coastal environments consisted of brackish coastal lakes, ponds and pools. These environments had little vegetation and were probably visited by herbivorous dinosaurs mostly during droughts. The well vegetated inlands were dominated by conifers. Overall, the Late Jurassic Tendaguru climate was subtropical to tropical with seasonal rains and pronounced dry periods. During the Early Cretaceous, the Tendaguru became more humid. The Tendaguru Beds are similar to the Morrison Formation of North America except in its marine interbeds.
Kentrosaurus would have coexisted with fellow ornithischians like Dysalotosaurus lettowvorbecki; the sauropods Giraffatitan brancai, Dicraeosaurus hansemanni and D. sattleri, Janenschia africana, Tendaguria tanzaniensis and Tornieria africanus; theropods "Allosaurus" tendagurensis, "Ceratosaurus" roechlingi, "Torvosaurus" ingens, Elaphrosaurus bambergi, Veterupristisaurus milneri and Ostafrikasaurus crassiserratus; and the pterosaur Tendaguripterus recki. Other organisms that inhabited the Tendaguru included corals, echinoderms, cephalopods, bivalves, gastropods, decapods, sharks, neopterygian fish, crocodilians and small mammals like Brancatherulum tendagurensis.
| Biology and health sciences | Ornitischians | Animals |
1005314 | https://en.wikipedia.org/wiki/Hypsilophodon | Hypsilophodon | Hypsilophodon (meaning "high-crested tooth") is a neornithischian dinosaur genus from the Early Cretaceous period of England. It has traditionally been considered an early member of the group Ornithopoda, but recent research has put this into question.
The first remains of Hypsilophodon were found in 1849; the type species, Hypsilophodon foxii, was named in 1869. Abundant fossil discoveries were made on the Isle of Wight, giving a good impression of the build of the species. It was a small, agile bipedal animal with an herbivorous or possibly omnivorous diet, measuring long and weighing . It had a pointed head equipped with a sharp beak used to bite off plant material, much like modern-day parrots.
Some outdated studies have given rise to a number of misconceptions about Hypsilophodon, including that it was an armoured, arboreal animal, and that it could be found in areas outside of the Isle of Wight. However, research from the following years has shown these ideas to be incorrect.
Discovery and history
First specimens and the debate of distinctiveness
The first specimen of Hypsilophodon was recovered in 1849, when workers dug up what became known as the Mantell-Bowerbank block from an outcrop of the Wessex Formation, part of the Wealden Group, about one hundred yards west of Cowleaze Chine, on the south-west coast of the Isle of Wight. The larger half of the block (including seventeen vertebrae, parts of ribs and a coracoid, some of the pelvis, and assorted hindleg remains) was given to naturalist James Scott Bowerbank, and the remainder (including eleven caudal vertebrae and most of the rest of the hindlegs) to Gideon Mantell. After Mantell's death, his portion was acquired by the British Museum; Bowerbank's was acquired later, bringing both halves back together. Richard Owen studied both halves and, in 1855, published a short article on the specimen, considering it to be a young Iguanodon rather than a new taxon. This was unquestioned until 1867, when Thomas Henry Huxley compared the vertebrae and metatarsals of the specimen more closely to those of known Iguanodon, and concluded that it must be a different animal entirely. The next year, he saw a fossil skull discovered by William Fox on exhibition at the Norwich Meeting of the British Association. Fox, who had also found his fossil in the Cowleaze Chine area, along with several other specimens, considered it to belong to a juvenile Iguanodon, or to represent a new, small species in the genus. Huxley noticed its unique dentition and edentulous premaxilla, reminiscent of but obviously distinct from that of Iguanodon. He concluded this specimen, too, represented a distinct animal from Iguanodon. After losing track of the specimen for some months, Huxley requested Fox grant him permission to study the specimen to a more extensive degree. The request was granted, and Huxley began work on his new species.
Huxley first announced the new species in 1869 in a lecture; the text of this, published the same year, forms the official naming article, because it contained a sufficient description. The species was named Hypsilophodon foxii, and the holotype was the Fox skull (which today has the inventory number NHMUK PV R 197). The next year, Huxley published the expanded full description article. Within the same block of stone as the Fox skull, the centrum of a dorsal vertebra had been preserved. This allowed comparison with the Mantell-Bowerbank block, confirming it to belong to the same species. Further supporting this, Fox had confirmed that the block was found in the same geological bed as his material. As such, Huxley described this specimen in addition to the skull and centrum. It would become the paratype; its two pieces are now registered in the Natural History Museum as specimen NHMUK PV OR 28707, NHMUK PV OR 39560–1. Later in the same year, Huxley classified Hypsilophodon taxonomically, considering it to belong to the family Iguanodontidae, related to Iguanodon and Hadrosaurus. There would later be a persistent misunderstanding as to the meaning of the generic name, which is often translated directly from the Greek as "high-ridged tooth". In reality, Huxley, analogous to the way the name of the related genus Iguanodon ("iguana-tooth") had been formed, intended to name the animal after an extant herbivorous lizard, choosing for this role Hypsilophus and combining its name with Greek ὀδών, odon, "tooth". Hypsilophodon thus means "Hypsilophus-tooth". The Greek ὑψίλοφος, hypsilophos, means "high-crested" and refers to the back frill of the lizard, not to the teeth of Hypsilophodon itself, which are not high-ridged in any case. The specific name foxii honours Fox.
Immediate reception to Huxley's proposal of a new genus, distinct from Iguanodon, was mixed. The issue of distinctiveness was seen as important as more information on the form of Iguanodon was in demand, and the cranial anatomy in particular was of importance. If the Cowleaze Chine material was a distinct genus, it ceased being useful in this respect. Whilst some palaeontologists such as William Boyd Dawkins and Harry Seeley supported distinction, Fox rejected Huxley's proposal of a distinct genus and subsequently took back his skull and gave it to Owen to study. In an attempt to clarify the situation, John Whitaker Hulke returned to the Hypsilophodon fossil bed on the Isle of Wight to obtain more material. He remarked that the whole of the skeleton seemed to be present, but that fragility limited excavation. He published a description of his new specimens in 1873, and based on his examination of the new teeth fossils echoed Fox's sentiments of doubt. Owen followed this with a study comparing at length the teeth of known Iguanodon with those from Fox's specimens. He agreed there were differences, but found them lacking in sufficient distinctiveness to be considered a distinct genus. As such, he renamed the species Iguanodon foxii. But Hulke had by then shifted his opinion, having obtained two new, more informative specimens. Building on Huxley's comments on the Mantell-Bowerbank block, he gave focus to vertebral characters. As a result of his study, he concluded Hypsilophodon was a distinct genus related to Iguanodon. He published these findings in a supplementary note, also in 1874. Finally, in 1882 he published a full osteology of the species, considering it of great importance to properly document the taxon as such a wealth of specimens had been discovered and comparison with American dinosaurs was necessary. Fox had by this point died, and no further argument against generic distinctiveness had occurred in the intervening time.
Later research
Later, the number of specimens was increased by Reginald Walter Hooley. In 1905, Baron Franz Nopcsa dedicated a study to Hypsilophodon, and in 1936 William Elgin Swinton did the same, on the occasion of the mounting of two restored skeletons in the British Museum of Natural History. Most known Hypsilophodon specimens were discovered between 1849 and 1921 and are in the possession of the Natural History Museum, which acquired the collections of Mantell, Fox, Hulke and Hooley. These represent about twenty individual animals. Apart from the holotype and paratype, the most significant specimens are: NHMUK PV R 5829, the skeleton of a large animal; NHMUK PV R 5830 and NHMUK PV R 196/196a, both skeletons of juvenile animals; and NHMUK PV R 2477, a block with a skull together with two separate vertebral columns. Although this was the largest find, new ones continue to be made.
Modern research of Hypsilophodon began with the studies of Peter Malcolm Galton, starting with his thesis of 1967. He and James Jensen briefly described a left femur, AMNH 2585, in 1975, and in 1979 formally coined a second species, Hypsilophodon wielandi, for the specimen. The femur was diagnosed with two supposed minor differences from that of H. foxii. The specimen was found in 1900 in the Black Hills of South Dakota, United States, by George Reber Wieland, who the species was named after. Geologically, it comes from the Lakota Sandstone. This species was seen at the time as indicative of a probable late land bridge between North America and Europe, and of the dinosaur fauna of both continents being similar. Spanish Palaeontologist José Ignacio Ruiz-Omeñaca proposed that H. wielandi was not a species of Hypsilophodon but instead related to or synonymous with "Camptosaurus" valdensis from England, both species being dryosaurids. Galton refuted this in his contribution to a 2012 book, noting the femurs of the two species to be quite different, and that of H. wielandi to be unlike those of dryosaurs. He, as well as other studies before and after Ruiz-Omeñaca's proposal, considered H. wielandi a dubious basal ornithopod, with H. foxii the only species in the genus. Galton elaborated on the invalidity of the species in 2009, noting that the two supposed diagnostic characters were variable in both H. foxii and Orodromeus makelai, making the species dubious. He speculated that it may belong to Zephyrosaurus, from a similar time and place, as no femur was known from that taxon.
Fossils from other locations, especially from the mainland of southern Great Britain, Portugal and Spain, were once referred to Hypsilophodon. However, in 2009 Galton concluded that the specimens from Great Britain proper were either indeterminate or belonged to Valdosaurus, and that the fossils from the rest of Europe were those of related but different species. This leaves the finds on the Isle of Wight, off the south coast of England, as the only known authentic Hypsilophodon fossils. The fossils have been found in the Hypsilophodon Bed, a one-metre-thick marl layer surfacing in a 1,200-metre-long strip along Cowleaze Chine parallel to the southwest coast of Wight, part of the upper Wessex Formation and dating to the late Barremian, about 126 million years old. Galton in 2009 considered reports that Hypsilophodon was present in the later Vectis Formation to be unsubstantiated.
Description
Hypsilophodon was a relatively small dinosaur, though not quite as small as, for example, Compsognathus. A maximum length of is often stated for Hypsilophodon. This has its origin in a 1974 study by Galton, in which he extrapolated a length of based on specimen BMNH R 167, a thigh bone. However, in 2009, Galton concluded that this femur in fact belonged to Valdosaurus and downsized Hypsilophodon to a maximum known length of , the largest specimen being NHMUK PV R 5829 with a femur length of 202 millimetres. Typical specimens are about long. In 2010, Gregory S. Paul estimated a weight of for an animal in length.
Like most small dinosaurs, Hypsilophodon was bipedal: it ran on two legs. Its entire body was built for running. Numerous anatomical features aided this, such as a lightweight, minimized skeleton; a low, aerodynamic posture; long legs; and a stiff tail, immobilized by ossified tendons, for balance. In light of this, Galton in 1974 concluded it would have been among the ornithischians best adapted to running. Despite living in the Cretaceous, the last of the periods in which non-avian dinosaurs walked the earth, Hypsilophodon had a number of seemingly "primitive" features. For example, there were five digits on each hand and four on each foot. In Hypsilophodon the fifth finger had gained a specialised function: being opposable, it could serve to grasp food items.
Cranial anatomy
In an example of primitive anatomy, although it had a beak like most ornithischians, Hypsilophodon still had five pointed triangular teeth in the front of the upper jaw, the premaxilla. Most herbivorous dinosaurs had, by the Early Cretaceous, become sufficiently specialized that the front teeth had been altogether lost (although there is some debate as to whether these teeth may have had a specialized function in Hypsilophodon). Further back, the upper jaw carried up to eleven teeth in the maxilla; the lower jaw had up to sixteen teeth. The number was variable, depending on the size of the animal. The teeth towards the back were fan-shaped.
The skull of Hypsilophodon was short and relatively large. The snout was triangular in outline and sharply pointed, ending in an upper beak of which the cutting edge was markedly lower than the maxillary tooth row. The eye socket was very large. A palpebral with a length equal to half the diameter of the eye socket overshadowed its top section. A sclerotic ring of fifteen small bone plates supported the outer eye surface. The back of the skull was rather high, with a very large and high jugal and quadratojugal closing off a highly positioned small infratemporal fenestra.
Postcranial anatomy
The vertebral column consisted of nine cervical vertebrae, fifteen or sixteen dorsal vertebrae, five or six sacral vertebrae and about forty-eight vertebrae of the tail. Much of the back and the tail was stiffened by long ossified tendons connecting the spines on top of the vertebrae. The processes on the underside of the tail vertebrae, the chevrons, were also connected by ossified tendons, which however were of a different form: they were shorter, and split and frayed at one end, with the point of the sharp other end lying within the diverging end of the subsequent tendon. Furthermore, there were several counterdirectional rows of these, resulting in a herring-bone pattern completely immobilising the tail end.
A long-lived misconception concerning the anatomy of Hypsilophodon has been that it was armoured. This was first suggested by Hulke in 1874, after the find of a bone plate in the neck region. If so, Hypsilophodon would have been the only known armoured ornithopod. As Galton pointed out in 2008, the putative armour instead appears to be from the torso, an example of internal intercostal plates associated with the rib cage. It consists of thin mineralized circular plates growing from the back end of the middle rib shaft and overlapping the front edge of the subsequent rib. Such plates are better known from Talenkauen and Thescelosaurus, and were probably cartilaginous in origin.
Phylogeny
Huxley originally assigned Hypsilophodon to the Iguanodontidae. In 1882 Louis Dollo named a separate Hypsilophodontidae. By the middle of the twentieth century that had become the accepted classification, but in the early twenty-first century it became clear through cladistic analysis that hypsilophodontids formed an unnatural, paraphyletic group of successive off-shoots from throughout Neornithischia. In the modern view, Hypsilophodon thus is simply a basal ornithopod.
In 2014, Norman resolved a monophyletic Hypsilophodontia (avoiding the name "Hypsilophodontidae" due to its complicated history). Hypsilophodon was recovered as the sister taxon to the clade containing Tenontosaurus and Rhabdodontidae.
In 2017, Daniel Madzia, Clint Boyd, and Martin Mazuch recovered Hypsilophodon outside of Ornithopoda altogether, placing it in a more basal position as the sister taxon to Cerapoda; several other "hypsilophodontids" have undergone similar reclassifications. The following cladogram is reproduced from this study:
In one analysis in her 2022 review of iguanodontian phylogenetic relationships, Karen E. Poole recovered a large Hypsilophodontidae as the sister taxon of Iguanodontia, which consisted of several "traditional" hypsilophodontids, as well as Thescelosauridae. The Bayesian topology of her phylogenetic analyses is shown in the cladogram below:
In 2023, Longrich et al. described Vectidromeus as a new coeval genus of ornithopod closely related to Hypsilophodon. They suggested that Vectidromeus and Hypsilophodon represented the only members of the Hypsilophodontidae, since other taxa previously assigned to the group had subsequently been moved to other clades.
Paleobiology
Due to its small size, Hypsilophodon fed on low-growing vegetation; in view of its pointed snout, it most likely preferred high-quality plant material, such as young shoots and roots, in the manner of modern deer. The structure of its skull, with the teeth set far back into the jaw, strongly suggests that it had cheeks, an advanced feature that would have facilitated the chewing of food. There were twenty-three to twenty-seven maxillary and dentary teeth with vertical ridges in the animal's upper and lower jaws. Because the tooth row of the lower jaw, its teeth curving outwards, fitted within that of the upper jaw, with its teeth curving inwards, the teeth appear to have been self-sharpening: the occlusion wore down the teeth and provided a simple chewing mechanism. As in almost all dinosaurs, and certainly all the ornithischians, the teeth were continuously replaced in an alternate arrangement, with two replacement waves moving from the back to the front of the jaw. The Zahnreihen spacing, the average distance in tooth position between teeth of the same eruption stage, was rather low in Hypsilophodon, about 2.3. Such a dentition would have allowed it to process relatively tough plants.
Early paleontologists modelled the body of this small, bipedal, herbivorous dinosaur in various ways. In 1882 Hulke suggested that Hypsilophodon was quadrupedal but also, in view of its grasping hand, able to climb rocks and trees in order to seek shelter. In 1912 this line of thought was further pursued by Austrian paleontologist Othenio Abel. Concluding that the first toe of the foot could function as an opposable hallux, Abel stated that Hypsilophodon was a fully arboreal animal, and even that an arboreal lifestyle was primitive for the dinosaurs as a whole. Though this hypothesis was doubted by Nopcsa, it was adopted by the Danish researcher Gerhard Heilmann, who in 1916 proposed that a quadrupedal Hypsilophodon lived like the modern tree-kangaroo Dendrolagus. By 1926 Heilmann had changed his mind, denying that the first toe was opposable because the first metatarsal was firmly connected to the second, but in 1927 Abel refused to accept this. In 1936 Abel was supported by Swinton, who claimed that even a forward-pointing first metatarsal might carry a movable toe. As Swinton was a very influential populariser of dinosaurs, this remained the accepted view for over three decades, with most books typically illustrating Hypsilophodon sitting on a tree branch. However, Peter M. Galton in 1969 performed a more accurate analysis of the musculo-skeletal structure, showing that the body posture was horizontal. In 1971 Galton refuted Abel's arguments in detail, showing that the first toe had been incorrectly reconstructed and that neither the curvature of the claws, nor the level of mobility of the shoulder girdle or the tail, could be seen as adaptations for climbing, concluding that Hypsilophodon was a bipedal running form. This convinced the paleontological community that Hypsilophodon remained firmly on the ground.
The level of parental care in this dinosaur has not been determined, as nests have not been found, although neatly arranged nests are known from related species, suggesting that some care was taken before hatching. The Hypsilophodon fossils were probably accumulated in a single mass mortality event, so it has been considered likely that the animals moved in large groups. For these reasons, the hypsilophodonts, particularly Hypsilophodon, have often been referred to as the "deer of the Mesozoic". Some indications about the reproductive habits are provided by possible sexual dimorphism: Galton considered it likely that individuals with five instead of six sacral vertebrae (in some specimens the vertebra that would normally count as the first of the sacrum bears a rib not touching the pelvis) represented females.
| Biology and health sciences | Ornithischians | Animals |
1005316 | https://en.wikipedia.org/wiki/Camarasaurus | Camarasaurus | Camarasaurus was a genus of quadrupedal, herbivorous dinosaurs and is the most common North American sauropod fossil. Its fossil remains have been found in the Morrison Formation, dating to the Late Jurassic epoch (Kimmeridgian to Tithonian stages), between 155 and 145 million years ago.
Camarasaurus presented a distinctive cranial profile of a blunt snout and an arched skull that was remarkably square, typical of basal Macronarians.
The name means "chambered lizard", referring to the hollow chambers, known as pleurocoels, in its cervical vertebrae (Greek καμάρα (kamara) meaning "vaulted chamber", or anything with an arched cover, and σαῦρος (sauros) meaning "lizard").
Camarasaurus contains four species that are commonly recognized as valid: Camarasaurus grandis, Camarasaurus lentus, Camarasaurus lewisi, and Camarasaurus supremus. C. supremus, the type species, is the largest and geologically youngest of the four. Camarasaurus is the type genus of Camarasauridae, which also includes its European close relative Lourinhasaurus.
Camarasaurus was named in 1877 by Edward Drinker Cope, during the period of scientific rivalry between him and Othniel Charles Marsh known as the Bone Wars. Soon after, Marsh named a genus Morosaurus, but it was subsequently shown to be synonymous with Camarasaurus.
History
Initial discovery
The first record of Camarasaurus comes from the spring of 1877 when Mr. Oramel William Lucas of Cañon City, Colorado discovered some large vertebrae at Garden Park, which he sent to Edward Drinker Cope who was based in Philadelphia, Pennsylvania. The original material sent consisted of a partial cervical vertebra, which would become the taxon's namesake, three dorsal vertebrae, and four caudal vertebrae. This specimen is now thought to have been composed of several individuals. From these initial fragmentary remains, Cope made his original description of Camarasaurus supremus (“supreme chambered lizard”) and founded the genus; these remains are now in the American Museum of Natural History under the catalogue number AMNH 560. After receiving the original bones, Cope employed collectors who gathered more of the material which was described in 1921 by Henry Osborn and Charles Mook.
The amount of material was great; it comprised several jumbled partial skeletons. It was not all prepared at once, but a considerable amount of it was cleaned up by Jacob Geismar under Cope's direction throughout the 1870s to 1890s. In 1877 a reconstruction of the skeleton of Camarasaurus was painted by Dr. John Ryder on several canvasses, under the direction of Professor Cope, who would use them in lectures to impress his audience. This reconstruction, the first ever made of a sauropod dinosaur, was natural size, represented the remains of a number of individuals, and measured over fifty feet in length. Cope's collectors sent in more material from 1877 to 1878, and as Cope received more material, he would name taxa based on these newly sent remains. Most of these additional taxa are now considered dubious or synonymous with Camarasaurus. By the end of collecting in Garden Park, at least four individuals and several hundred bones had been found, representing nearly every part of the skeleton.
Como Bluff finds and Morosaurus
The next Camarasaurus discovery came later in 1877, when a fragmentary posterior skull and a partial postcranial skeleton were discovered and collected in Quarry 1, Como Bluff, Wyoming by crews working for Othniel Charles Marsh. This skeleton was the best preserved single individual of Camarasaurus at the time, and it was named as a new species of Apatosaurus in 1877. The specimen was not fully collected until 1879 and contains the majority of a juvenile's skeleton (holotype YPM 1901). Meanwhile, crews working for Edward Cope in Garden Park collected a fragmentary specimen consisting of a femur and two caudal vertebrae, which Cope named as a new species of Amphicoelias, Amphicoelias latus, in 1877. This species was tentatively synonymized with C. supremus in 1921. In 1998, Kenneth Carpenter argued that the stratigraphic position of the find suggested it was more likely to be synonymous with C. grandis, but in a 2005 study of the biostratigraphic distribution of Camarasaurus, Takehito Ikejiri retained it in synonymy with C. supremus. In 1878, a sauropod sacrum was discovered with several other jumbled sauropod postcranial elements, again at Como Bluff. The remains were also sent to Marsh, and in 1878 the sacrum was assigned to a new genus and species, Morosaurus impar ("unpaired stupid lizard"). Morosaurus would receive several new species throughout the late 19th century, even becoming part of a new family in 1892, the Morosauridae. A majority of Morosaurus species are now considered dubious, including the type species, or have been reclassified. In 1889, a new species of Morosaurus was named based on a partial skull and skeleton from Como Bluff. Morosaurus lentus was the name given to the skeleton (holotype YPM 1910), which was mounted in the Yale Peabody Museum fossil hall in 1930.
Second Dinosaur Rush finds
In the late 1890s, the American Museum of Natural History and the Field Museum found additional Morosaurus material at Como Bluff and Fruita, respectively. Mostly consisting of limb material, the new Morosaurus material led to new reconstructions of sauropod manus and pes structure. The AMNH made an important discovery in 1899 at their Bone Cabin Quarry in Wyoming: the first complete Camarasaurus skull and mandible, with associated cervical vertebrae. A major reassessment of Morosaurus and Camarasaurus came in 1901, when Elmer Riggs concluded that of the five Morosaurus species named by Marsh, only three were valid. Morosaurus grandis, Morosaurus lentus, and Morosaurus agilis (now known as Smitanosaurus) were accepted as valid, with Morosaurus impar synonymous with M. grandis. Possible synonymy between Morosaurus and Camarasaurus was also suggested by Riggs. In 1905, the first mounted skeleton of a sauropod, a Brontosaurus, was erected at the AMNH; the skull of the mount was notoriously based on material that was likely from a Camarasaurus from Como Bluff.
The Carnegie Museum had an important Camarasaurus discovery in 1909 of a nearly complete skeleton of a juvenile, now under specimen number CM 11338. The specimen was notably found articulated in a death pose and is prominently displayed in the Carnegie Museum hall. Earl Douglass discovered the specimen and it was collected from 1909 to 1910 by a Carnegie Museum crew working at Dinosaur National Monument. This skeleton was not described until 1925, by Charles W. Gilmore, who referred it to Camarasaurus lentus. The skeleton is one of the best sauropod specimens known, with almost every element preserved in articulation, including the fragile cervical vertebrae.
Another Camarasaurus skeleton was found in 1918, again at Dinosaur National Monument by Carnegie crews; this specimen can be viewed at the National Museum of Natural History. The specimen, known as USNM V 13786, was traded to the USNM in 1935, and preparation work started on the specimen in 1936 at the Texas Centennial Exposition in Dallas, where it could be viewed by visitors of the event. Preparation work continued until 1947, when the skeleton was mounted in a death pose in the fossil hall. The USNM's Camarasaurus was also referred to C. lentus.
In 1919, W. J. Holland named Uintasaurus douglassi based on another sauropod specimen from DNM that was discovered by the Carnegie Museum in 1909. The type specimen was incomplete, consisting of five anterior cervical vertebrae, and is a synonym of Camarasaurus lentus. Additional Camarasaurus material was found near Black Mesa in western Oklahoma during the 1940s and has been referred to Camarasaurus supremus; the material consists of many large vertebrae and some skull elements.
Resurgent discoveries
No major discoveries came for Camarasaurus until 1967, when James Jensen collected a well preserved and articulated partial postcranial skeleton, including the majority of the vertebral column, at Uncompahgre Hill in western Colorado; it was deposited at Brigham Young University under specimen number BYU 9740. The skeleton was not fully prepared until years later, and was described in 1988 as a new genus and species of camarasaurid, Cathetosaurus lewisi. C. lewisi's original description was brief, but in 1996 the skeleton was given a full osteology and placed as a species of Camarasaurus by John McIntosh and colleagues. In their paper, they determined that C. supremus, C. grandis, C. lentus, and C. lewisi were valid. In 2013, Octavio Mateus and Emanuel Tschopp argued that C. lewisi was actually its own genus based on a specimen found at Howe Quarry in 1992 that they referred to the species. Further research by Tschopp concluded that the Howe Quarry specimen was most likely to represent Camarasaurus after all. As of 2019, most researchers considered C. lewisi to be a species of Camarasaurus.
In 1992, another substantial and articulated skeleton of Camarasaurus was collected, this time by Jeffrie Parker and colleagues near the AMNH's Bone Cabin Quarry at Como Bluff. This skeleton was referred to Camarasaurus grandis and is one of the most complete specimens assigned to the species; it now resides at the Gunma Museum of Natural History in Japan under specimen number GMNH-PV 101. 1992 saw yet another Camarasaurus skeleton discovery further north at Howe Quarry, Wyoming, by crews working for the Sauriermuseum Aathal in Switzerland. The skeleton is one of the best known, with nearly every element articulated and skin impressions from the skull and hindlimb. The specimen, SMA 002, has not yet been fully identified, but has been suggested to be a specimen of C. lewisi. In 1996, several fragmentary remains of Camarasaurus were described from western South Dakota and New Mexico, extending the northeastern and southern range of the genus, with the New Mexican remains coming from the Summerville Formation. The northernmost specimen of Camarasaurus was discovered in 2005 in the Snowy Mountains region of central Montana and consists of a nearly complete skull and several postcranial elements.
Fossil record
Camarasaurus fossils are very common. Over 500 specimens are known, including many isolated bones and about 50 partial skeletons. It is found in a wide area over the western United States, from as far north as Montana to as far south as New Mexico, in rocks of the Morrison Formation. Due to this abundance, Camarasaurus is a very well-known sauropod. A juvenile specimen of Camarasaurus, CM 11338, is the most complete sauropod skeleton ever discovered. Numerous skulls are known. Even though complete necks are rarely found in sauropods, five specimens of Camarasaurus preserve all or nearly all of the cervical vertebrae. Most identifiable specimens of Camarasaurus belong to one of two species, C. grandis and C. lentus; C. lewisi and C. supremus are rarer.
Description
Camarasaurus is among the most common and frequently well-preserved sauropod dinosaurs uncovered and has been well described in numerous publications. Similar to other Macronarians, it had the typical large naris, long forelimbs, and short tail compared to the contemporary Diplodocids. Camarasaurus was a medium-sized sauropod compared to contemporary species in the same formation, but in the Tithonian reached large sizes with C. supremus. The maximum size of the most common species, C. lentus, was about 15 m (49 ft) in length. The largest species, C. supremus, reached a maximum length of 18 m (59 ft) to 23 m (75 ft) and a maximum estimated weight of 47 metric tons (51.8 tons). In 2016, Gregory S. Paul estimated its weight at 23 metric tons (25.4 tons), whereas in 2020, John Foster estimated its weight at 42.3 metric tons (46.6 tons).
The arched skull of Camarasaurus was remarkably square and the blunt snout had many fenestrae. The robust skull of Camarasaurus preserves much better than those of many other sauropods, unlike the gracile skulls of the Diplodocids that are also found in the Morrison Formation. The 19-cm-long (7.5-in) teeth were shaped like chisels (spatulate) and arranged evenly along the jaw. The strength of the teeth indicates that Camarasaurus probably ate coarser plant material than the slender-toothed diplodocids.
A specimen of Camarasaurus called SMA 0002 (which has also been assigned to Cathetosaurus) from Wyoming's Howe-Stephens Quarry, referred to as "E.T.", shows evidence of soft tissue. Along the jaw line, ossified remains of what appear to have been the animal's gums have been recovered, indicating that it had deep-set teeth covered by gums, with only the tips of the crowns protruding. The teeth were, upon death, pushed further out from their sockets as the gums retracted, dried, and tightened through decay. The examinations of the specimen also indicate that the teeth were covered by tough outer scales and possibly a beak of some variety, though this is not known for certain.
The neck of Camarasaurus was of only moderate length by sauropod standards. It was composed of 12 vertebrae. Most of the cervical neural spines were bifurcated, with more vertebrae developing bifurcated neural spines as the animal grew. As in other sauropods, the vertebrae of the neck and torso contained chambers that in life were filled by air sacs connected to the respiratory system. The air sacs could take up more than half of the space inside the vertebrae, making them as highly pneumatic as the bones of birds. It is these chambers that give Camarasaurus its name, "chambered lizard".
The tail of Camarasaurus was composed of 53 vertebrae.
Classification and species
Camarasaurus is the type genus of the family Camarasauridae, members of which are medium-sized Macronarian sauropods that mostly date to the Late Jurassic. Camarasaurids had shorter forelimbs than hindlimbs, large scapulocoracoids, and longer tails than necks. When Edward Cope described Camarasaurus in 1877, he believed it was a dinosaur closely related to Cetiosaurus, Bothriospondylus, Ornithopsis, Anchisaurus (Megadactylus), and Pneumatosteus, but did not name a group for these taxa until the description of Amphicoelias, when he erected Camarasauridae. Camarasaurus is the only taxon uncontroversially regarded as a valid genus of camarasaurid. It contains four species: C. grandis, C. lentus, C. lewisi, and C. supremus. C. lewisi may represent a distinct genus, Cathetosaurus. Lourinhasaurus, the type species of which was formerly assigned to Camarasaurus, is regarded as a camarasaurid by most studies, though it has also been considered to be a basal eusauropod.
A simplified cladogram of basal Macronaria after Tan et al (2020) is shown below:
Camarasaurus is considered to be a basal macronarian, more closely related to the common ancestor of all macronarians than to more derived forms like Brachiosaurus.
Species
Camarasaurus is regarded as containing four valid species by most researchers: C. grandis, C. lentus, C. lewisi, and C. supremus. C. supremus, the species named by Cope in 1877, is the type species. C. grandis was named in 1877 and C. lentus in 1889. The fourth species, C. lewisi, is of uncertain affinities. It was originally described as a distinct genus, Cathetosaurus, in 1988, but reclassified as a species of Camarasaurus in 1996. Some researchers have suggested that Cathetosaurus should be reinstated as a distinct genus, whereas others have suggested that C. lewisi may be synonymous with another Camarasaurus species.
C. supremus, as its name suggests, is the largest known species of Camarasaurus and one of the most massive sauropods known from the Late Jurassic Morrison Formation. Except for its huge size, it was nearly indistinguishable from C. lentus. C. supremus was not typical of the genus as a whole; it is known only from the latest, uppermost parts of the formation and is extremely uncommon. C. grandis, C. lentus, and C. lewisi were all smaller, as well as occurring in the earlier stages of the Morrison.
Stratigraphic evidence suggests that the chronological sequence aligns with the physical differences between the species, describing an evolutionary progression within the Morrison Formation. C. grandis is the oldest species and occurred in the lowest rock layers of the Morrison. C. lewisi only briefly coexisted with C. grandis in the lowest strata of the upper Morrison before going extinct, though this apparent short range may reflect a lack of C. lewisi specimens. C. lentus appeared later, co-existing with C. grandis for several million years, possibly due to different ecological niches as suggested by differences in the spinal anatomy of the two species. At a later stage, C. grandis disappeared from the rock record, leaving only C. lentus. Then C. lentus, too, disappeared; at the same time, C. supremus appeared in the uppermost layers. This immediate succession of species, as well as the very close similarity between the two, suggests that C. supremus may have evolved directly from C. lentus, representing a larger, later-surviving population of animals.
Synonyms and dubious species
Amphicoelias latus was named by Edward Cope in 1877 based on a right femur and four caudal vertebrae found at Garden Park, and is synonymous with either C. supremus or C. grandis.
Caulodon diversidens was also named by Cope in 1877, based on teeth, now considered dubious, that can only be placed as Macronarian or as synonymous with Camarasaurus supremus.
Caulodon leptoganus was named in 1878 by Cope based on two partial teeth and is also considered to be unclassifiable beyond Macronaria, or as synonymous with Camarasaurus supremus.
Morosaurus impar was named by Marsh in 1878 as the type species of Morosaurus, and the material consisted only of a sacrum and possibly additional postcranial material found at Como Bluff. It is now considered a synonym of C. grandis.
Morosaurus robustus was named on the basis of an ilium by Marsh in 1878 collected at Como Bluff. It is now considered a synonym of C. grandis.
Camarasaurus leptodirus was another of Cope's Garden Park sauropods, named in 1879 based on three partial cervical vertebrae; it has been suggested to be a synonym of C. supremus.
Diplodocus lacustris was named by Othniel Marsh in 1884 on the basis of several teeth, a premaxilla, and a maxilla from Morrison, Colorado that were collected by Arthur Lakes and Benjamin Mudge in 1877. Although the teeth and dentary of D. lacustris are Flagellicaudatan, the skull material is likely that of a Camarasaurus.
Pleurocoelus montanus was also named by Marsh in 1896 as a new species of Pleurocoelus, the material consisting of several vertebral centra and assorted postcrania of a juvenile individual from Como Bluff. It is generally regarded as a synonym of C. grandis.
Uintasaurus douglassi was named in 1919 by W. J. Holland for five anterior cervical vertebrae from Dinosaur National Monument; the species was later regarded as a synonym of Camarasaurus lentus.
Camarasaurus annae was named by Tage Ellinger based on an anterior dorsal vertebra in 1950. This species is generally considered a synonym of C. lentus.
Reassigned species
Morosaurus agilis was named in 1889 by Marsh based on a partial skull and three vertebrae from Garden Park, Colorado. The species remained in taxonomic uncertainty until 2020, when it was placed in a new genus, Smitanosaurus, and reclassified as a dicraeosaurid.
Camarasaurus becklesii was described as Pelorosaurus becklesii in 1852 by Gideon Mantell based on a partial forelimb from Sussex, United Kingdom. It was placed in Morosaurus by Marsh in 1889 and in Camarasaurus by von Huene in 1932, until in 2015 it was placed in its own genus, Haestasaurus.
Morosaurus marchei was named by Sauvage in 1898 based on an incomplete distal caudal vertebra and a tooth from the Upper Jurassic strata of the Alcobaça Formation of Portugal. Lapparent & Zbyszewski referred the holotype vertebra to Megalosaurus insignis, and Madsen et al. (1995) referred it to Megalosauria. The referred tooth was identified as belonging to Turiasauria in 2017.
Camarasaurus alenquerensis was named as a species of Apatosaurus in 1957 by Albert-Félix de Lapparent and Georges Zbyszewski based on a partial postcranial skeleton from Lourinhã, Portugal. It was placed in Camarasaurus by John McIntosh in 1990, but was granted a new genus in 1998, Lourinhasaurus.
Paleobiology
Feeding
Previously, scientists have suggested that Camarasaurus and other sauropods may have swallowed gastroliths (stones) to help grind the food in the stomach, regurgitating or passing them when they became too smooth. More recent analysis, however, of the evidence for stomach stones suggests this was not the case. The strong, robust teeth of Camarasaurus were more developed than those of most sauropods and were replaced on average every 62 days (M. D'Emic et al.), indicating that Camarasaurus may have masticated food in its mouth to some degree before swallowing. Other findings indicate that Camarasaurus spp. preferred vegetation different from other sauropods, allowing them to share the same environment without competing.
Growth
Long-bone histology enables researchers to estimate the age that a specific individual reached. A study by Griebeler et al. (2013) examined long-bone histological data and concluded that the Camarasaurus sp. CM 36664 weighed , reached sexual maturity at 20 years and died at age 26.
Metabolism
Eagle et al. performed clumped isotope thermometry on the enamel covering the teeth of various Jurassic sauropods, including Camarasaurus. Temperatures of were obtained, which is comparable to that of modern mammals. Camarasaurus grew in size quickly to limit the time it would be vulnerable to predation. This would imply it had a relatively high metabolic rate as a juvenile.
Paleopathology
A Camarasaurus pelvis recovered from Dinosaur National Monument in Utah shows gouging attributed to Allosaurus, and the ilium of the C. lewisi holotype bears large theropod bite marks.
In 1992, a partial C. grandis skeleton was discovered at the Bryan Small Stegosaurus Quarry of the Morrison Formation near Canon City, Colorado. This specimen preserved a partial right humerus, cataloged as DMNH 2908, and associated vertebrae from the back and tail. In 2001, Lorie McWhinney, Kenneth Carpenter, and Bruce Rothschild published a description of a pathology observed on the humerus. They noted a juxtacortical lesion, 25 by 18 cm wide, made of bone that resembled woven fibers. Although woven bone forms normally in accessory dental bone, in other locations it is a sign of injury or illness. The woven bone's "undulating fibrous bundles" were observed oriented in the direction of the m. brachialis. The lesion's fusion and lack of porosity at its near and far ends indicate that the periostitis was inactive or healed. McWhinney and the other researchers argued that this injury would have been a continuous source of hardship for the animal. It would have exerted pressure on the muscles. This pressure would have compressed the muscles' blood vessels and nerves, reducing the range of motion of both the limb's flexor and extensor muscles. Because of the lesion's position on the humerus, this effect would have hindered the m. brachialis, the m. brachioradialis, and to a lesser degree the m. biceps brachii. The researchers inferred that the inflammation of the muscles and periosteum would have caused additional complications in the lower region of the forelimb as well. The lesion would also have caused long-term fasciitis and myositis. The cumulative effect of these pathological processes would have had moderate to severe effects on the ability of the limb to move and "made everyday activities such as foraging for food and escaping predators harder to accomplish." To help determine the cause of the pathology, McWhinney and the other researchers performed a CT scan in 3-mm increments. The CT scan found that the mass had a consistent radiodensity and was separated from the cortex of the bone by a radiolucent line. No evidence was found of a stress fracture or of infectious processes like osteomyelitis or infectious periostitis. They also ruled out osteochondroma, because the axis of the spur is 25° relative to the vertical axis of the humerus, whereas an osteochondroma would have formed at 90° to the axis of the humerus. Other candidates identified by the scientists for the origin of the spur-bearing lesion included:
Hypertrophic osteoarthropathy – although this was ruled out by the presence of the spur-like process
Osteoid osteoma – but this would not explain the spur or osteoblastic reaction
Shin splints or tibial stress syndrome – a possible origin, as many symptoms would be held in common, but shin splints would not explain the spur.
Myositis ossificans traumatica (circumscripta) – Possible, but unlikely source.
Avulsion injury – McWhinney and the other researchers considered an avulsion injury caused by "repetitive overexertion of the muscles" to be the most likely source for the lesion on the humerus. The researchers believed the lesion to have originated with the avulsion of the m. brachialis causing the formation of "a downward-sloping elliptical mass". The bone spur was caused by an osteoblastic response following a tear at the base of the m. brachioradialis caused by its flexor motion.
Paleoecology
Habitat
The Morrison Formation, situated along the eastern flank of the Rocky Mountains, is a fossil-rich stretch of Late Jurassic rock. A large number of dinosaur species can be found here, including relatives of Camarasaurus such as Diplodocus, Apatosaurus, and Brachiosaurus, but camarasaurs are the most abundant of the dinosaurs in the formation. Camarasaurus fossils have been found in almost every major locality and have one of the greatest known distributions of Morrison dinosaurs, with fossils found in localities from New Mexico to Montana and Utah to Oklahoma. According to radiometric dating, the Morrison sedimentary layers range from 156.3 million years ago (Mya) at the base to 146.8 Mya at the top, which places it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. Its environment is interpreted as semiarid with distinct wet and dry seasons.
Dinosaur and trace fossils are found particularly in the Morrison Basin, which stretches from New Mexico to Alberta and Saskatchewan and was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. Eroded material from their east-facing drainage basins was carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains. The formation is similar in age to the Lourinhã Formation in Portugal and the Cañadón Calcáreo Formation in Argentina; camarasaurid fossils have been found in both formations. In 1877, the Morrison Formation became the center of the Bone Wars, a fossil-collecting rivalry between early paleontologists Othniel Charles Marsh and Edward Drinker Cope, with Camarasaurus itself being discovered and named by the latter during the conflict.
Paleofauna
The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs such as Maraapunisaurus, Amphicoelias, Barosaurus, Diplodocus, Apatosaurus, Brontosaurus, and Brachiosaurus. Dinosaurs living alongside Camarasaurus included the herbivorous ornithischians Camptosaurus, Gargoyleosaurus, Dryosaurus, Stegosaurus, and Nanosaurus. Predators in this paleoenvironment included the theropods Saurophaganax, Torvosaurus, Ceratosaurus, Marshosaurus, Stokesosaurus, Ornitholestes, and Allosaurus, which accounted for up to 75% of theropod specimens, and was at the top trophic level of the Morrison food web. Camarasaurus is commonly found at the same sites as Allosaurus, Apatosaurus, Stegosaurus, and Diplodocus.
Other organisms in this region included bivalves, snails, ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs, and several species of pterosaurs such as Harpactognathus and Mesadactylus. Early mammals present were docodonts (such as Docodon), multituberculates, symmetrodonts, and triconodonts. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns, and ferns (gallery forests), to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum.
| Biology and health sciences | Sauropods | Animals |
1005426 | https://en.wikipedia.org/wiki/Erythrite | Erythrite | Erythrite, also known as red cobalt, and previously as cobalt ochre, is a secondary hydrated cobalt arsenate mineral with the formula Co3(AsO4)2·8H2O. Erythrite and annabergite, chemical formula Ni3(AsO4)2·8H2O, or nickel arsenate, form a complete series with the general formula (Co,Ni)3(AsO4)2·8H2O.
Erythrite crystallizes in the monoclinic system and forms prismatic crystals. The color is crimson to pink and occurs as a secondary coating known as cobalt bloom on cobalt arsenide minerals. Well-formed crystals are rare, with most of the mineral manifesting in crusts or small reniform aggregates.
Erythrite was first described in 1832 for an occurrence in Grube Daniel, Schneeberg, Saxony, and takes its name from the Greek έρυθρος (erythros), meaning red. Historically, erythrite itself has not been an economically important mineral, but the prospector may use it as a guide to associated cobalt and native silver.
Erythrite occurs as a secondary mineral in the oxide zone of Co–Ni–As bearing mineral deposits. It occurs in association with cobaltite, skutterudite, symplesite, roselite-beta, scorodite, pharmacosiderite, adamite, morenosite, retgersite, and malachite.
Notable localities are Cobalt, Ontario; La Cobaltera, Chile, Schneeberg, Saxony, Germany; Joachimsthal, Czech Republic; Cornwall, England; Bou Azzer, Morocco; the Blackbird mine, Lemhi County, Idaho; Sara Alicia mine, near Alamos, Sonora, Mexico; Mt. Cobalt, Queensland and the Dome Rock copper mine, Mingary, South Australia.
Other varieties
The nickel variety, annabergite, occurs as a light green nickel bloom on nickel arsenides. In addition, iron, magnesium and zinc can also substitute for cobalt in the structure, creating three other minerals: parasymplesite (Fe), hörnesite (Mg) and köttigite (Zn).
| Physical sciences | Minerals | Earth science |
1005690 | https://en.wikipedia.org/wiki/Spirogyra | Spirogyra | Spirogyra (common names include water silk, mermaid's tresses, and blanket weed) is a genus of filamentous charophyte green algae of the order Zygnematales, named for the helical or spiral arrangement of the chloroplasts that is characteristic of the genus. Spirogyra species, of which there are more than 500, are commonly found in freshwater habitats. Spirogyra measures approximately 10 to 150 micrometres in width (though not usually more than 60) and may grow to several centimetres in length.
Distribution
Spirogyra can be found on every continent on Earth, including Antarctica. It is a freshwater alga, found in rivers, ponds, and other such bodies of water.
Taxonomy
The genus Spirogyra was named by German naturalist Johann Heinrich Friedrich Link in 1820. The lectotype, Spirogyra porticalis, was designated in 1952 by Paul C. Silva.
Reproduction
Spirogyra can reproduce both sexually and asexually. In vegetative reproduction, fragmentation takes place, and Spirogyra simply undergoes intercalary cell division to extend the length of the new filaments.
Sexual reproduction is of two types:
Scalariform conjugation requires association of two or more different filaments lined side by side, either partially or throughout their length. One cell each from opposite lined filaments emits tubular protuberances known as conjugation tubes, which elongate and fuse to make a passage called the conjugation canal. The cytoplasm of the cell acting as the male travels through this tube and fuses with the female cytoplasm, and the gametes fuse to form a zygospore.
In lateral conjugation, gametes are formed in a single filament. Two adjoining cells near the common transverse wall give out protuberances known as conjugation tubes, which further form the conjugation canal upon contact. The male cytoplasm migrates through the conjugation canal, fusing with the female. The rest of the process proceeds as in scalariform conjugation.
The essential difference is that scalariform conjugation occurs between two filaments and lateral conjugation occurs between two adjacent cells on the same filament.
Usage
Spirogyra species are being researched for their potential in bioremediation, specifically in stemming toxic runoff from mines (where they are often found occurring naturally) and from municipal wastewater. Spirogyra has also been investigated as a potential biofuel.
Spirogyra species, such as S. varians, are also being researched for potential pharmaceutical usage due to their high nutrient densities.
Species
The following species are currently accepted. For a more comprehensive and up-to-date list of accepted species, view the pages on AlgaeBase or WoRMS.
Spirogyra abbreviata Zheng
Spirogyra acanthophora (Skuja) Czurda
Spirogyra acumbentis Vodenicarov
Spirogyra adjerensis Gauthier-Lièvre
Spirogyra adnata (Vaucher) Kützing
Spirogyra adornata Ling
Spirogyra aequinoctialis G.S.West
Spirogyra affinis (Hassall) Petit
Spirogyra africana (F.E.Fritsch) Czurda
Spirogyra ahmedabadensis Kamat
Spirogyra alpina Kützing
Spirogyra alternata Kützing
Spirogyra amplectens Skuja
Spirogyra ampliata L.Liu
Spirogyra anchora Skuja
Spirogyra angolensis Welwitsch
Spirogyra angulata Nipkow
Spirogyra anomala Bhashyakarla Rao
Spirogyra anzygoapora Singh
Spirogyra aphanosculpta Skuja
Spirogyra aplanospora Randhawa
Spirogyra arcta (C.Agardh) Endlichter
Spirogyra arcuata Liu
Spirogyra areolata Lagerheim
Spirogyra arizonensis Rickert & Hoshaw
Spirogyra arthuri Woodhead & Tweed
Spirogyra articulata Transeau
Spirogyra asiatica Czurda
Spirogyra atasiana Czurda
Spirogyra atrobrunnea Gauthier-Lièvre
Spirogyra aubvillei Gauthier-Lièvre
Spirogyra australica Czurda
Spirogyra australiensis K.Möbius
Spirogyra austriaca Czurda
Spirogyra azygospora Singh
Spirogyra baileyi Schmidle
Spirogyra batekiana Gauthier-Lièvre
Spirogyra bellis (Hassall) P.Crouan & H.Crouan
Spirogyra bicalyptrata Czurda
Spirogyra bichromatophora (Randhawa) Transeau
Spirogyra biformis C.-C.Jao
Spirogyra biharensis A.M.Verma & B.Kumari
Spirogyra bii Kadlubowska
Spirogyra bireticulata Liu
Spirogyra borealis Zheng & Ling
Spirogyra borgeana Transeau
Spirogyra borgei Kadlubowska
Spirogyra borkuense Gauthier-Lièvre
Spirogyra borysthenica Kasanowsky & Smirnoff [Smirnov]
Spirogyra bourrellyana Gauthier-Lièvre
Spirogyra braziliensis (Nordstedt) Transeau
Spirogyra britannica Godward
Spirogyra brunnea Czurda
Spirogyra buchetii Petit
Spirogyra bullata C.-C.Jao
Spirogyra calcarea Transeau
Spirogyra calchaquiesiae B.Tracanna
Spirogyra californica Stancheva, J.D.Hall, McCourt & Sheath
Spirogyra calospora Cleve
Spirogyra canaliculata Segar
Spirogyra cardinia S.H.Lewis
Spirogyra caroliniana G.E.Dillard
Spirogyra castanacea G.C.Couch
Spirogyra cataeniformis (Hassall) Kützing
Spirogyra catenaeformis (Hassall) Kützing
Spirogyra cavata Vodenicarov
Spirogyra chakiaensis (Bhashyakarla Rao) Kreiger
Spirogyra chandigarhensis
Spirogyra chekiangensis C.-C.Jao
Spirogyra chenii C.-C.Jao
Spirogyra chungkingensis C.-C.Jao
Spirogyra chuniae C.-C.Jao
Spirogyra circumlineata Transeau
Spirogyra clavata Segar
Spirogyra cleveana Transeau
Spirogyra colligata Hodgetts
Spirogyra columbiana Czurda
Spirogyra communis (Hassall) Kützing
Spirogyra condensata (Vaucher) Dumortier
Spirogyra congolensis Gauthier-Lièvre
Spirogyra conspicua Gay
Spirogyra convoluta
Spirogyra corrugata Woodhead & Tweed
Spirogyra costata Kadlubowska
Spirogyra costulata Kadlubowska
Spirogyra coumbiana Czurda
Spirogyra crassa (Kützing) Kützing
Spirogyra crassispina C.-C.Jao
Spirogyra crassiuscula (Wittrock & Nordstedt) Transeau
Spirogyra crassivallicularis C.-C.Jao
Spirogyra crassoidea (Transeau) Transeau
Spirogyra crenulata Singh
Spirogyra croasdaleae Blum
Spirogyra cyanosporum
Spirogyra cylindrica Czurda
Spirogyra cylindrosperma (West & G.S.West) Krieger
Spirogyra cylindrospora West & G.S.West
Spirogyra czubinskii Kadlubowska
Spirogyra czurdae Misra
Spirogyra czurdiana Kadlubowska
Spirogyra dacimina (O.F.Müller) Kützing
Spirogyra daedalea Lagerheim
Spirogyra daedaleoides Czurda
Spirogyra danica Kadlubowska
Spirogyra decimina (O.F.Müller) Dumortier
Spirogyra densa Kützing
Spirogyra denticulata Transeau
Spirogyra dentireticulata C.-C.Jao
Spirogyra desikacharyensis Rattan
Spirogyra dialyderma Ling & Zheng
Spirogyra dicephala C.-C.Jao & H.Z.Zhu
Spirogyra dictyospora C.-C.Jao
Spirogyra diluta H.C.Wood
Spirogyra dimorpha Geitler
Spirogyra discoidea Transeau
Spirogyra distenta Transeau
Spirogyra diversizygotica (V.I.Polyanskij) L.A.Rundina
Spirogyra djalonensis Gauthier-Lièvre
Spirogyra djiliense Gauthier-Lièvre
Spirogyra dodgeana
Spirogyra drilonensis Petkoff
Spirogyra dubia Kützing
Spirogyra echinata Tiffany
Spirogyra echinospora Blum
Spirogyra eillipsospora Transeau
Spirogyra elegans Wollny
Spirogyra elegantissima Y.J.Ling & Y.M.Zheng
Spirogyra ellipsospora Transeau
Spirogyra elliptica C.-C.Jao
Spirogyra elongata (H.C.Wood) H.C.Wood
Spirogyra elongata (Vaucher) Dumortier
Spirogyra emilianensis Bonhomme
Spirogyra endogranulata O.Bock & W.Bock
Spirogyra exilis West & G.S.West
Spirogyra fallax (Hansgirg) Wille
Spirogyra fassula Zheng
Spirogyra favosa Y.-X.Wei & Y.-K.Yung
Spirogyra fennica Cedercreutz
Spirogyra ferruginea H.W.Liang
Spirogyra flavescens (Hassall) Kützing
Spirogyra flavicans Kützing
Spirogyra fluviatilis Hilse
Spirogyra formosa (Transeau) Czurda
Spirogyra fossa C.-C.Jao
Spirogyra fossulata C.-C.Jao & Hu
Spirogyra foveolata (Transeau) Czurda
Spirogyra franconica O.Bock & W.Bock
Spirogyra frankliniana Tiffany
Spirogyra frigida F.Gay
Spirogyra fritschiana Czurda
Spirogyra fukienica Wei
Spirogyra fuzhouensis H.-J.Hu
Spirogyra gallica Petit
Spirogyra gaterslebensis Reith
Spirogyra gauthier-lievrae Kadlubowska
Spirogyra gauthieri Gayral
Spirogyra gharbensis Gauthier-Lièvre
Spirogyra ghosei Singh
Spirogyra gibberosa C.-C.Jao
Spirogyra glabra Czurda
Spirogyra globulispora Gauthier-Lièvre
Spirogyra gobonensis Gauthier-Lièvre
Spirogyra goetzei Schmidle
Spirogyra gracilis Kützing
Spirogyra granulata C.-C.Jao
Spirogyra gratiana Transeau
Spirogyra groenlandica Rosenvinge
Spirogyra guangchowensis Zhu & Zhong
Spirogyra guineense Gauthier-Lièvre
Spirogyra gujaratensis Kamat
Spirogyra gurdaspurensis Rattan
Spirogyra haimenensis C.-C.Jao
Spirogyra hartigii (Kützing) De Toni
Spirogyra hassalii (Jenner) Petit
Spirogyra hassallii (Jenner ex Hassall) P.Crouan & H.Crouan
Spirogyra hatillensis Transeau
Spirogyra heeriana Nägeli ex Kützing
Spirogyra henanensis (L.J.Bi) L.J.Bi
Spirogyra herbipolensis O.Bock & W.Bock
Spirogyra heterospora Liu
Spirogyra hoehnei O.Borge
Spirogyra hoggarica (Gauthier-Lièvre) Gauthier-Lièvre
Spirogyra hollandiae Taft
Spirogyra hopeiensis C.-C.Jao
Spirogyra hunanensis C.-C.Jao
Spirogyra hungarica Langer
Spirogyra hyalina Cleve
Spirogyra hymerae Britton & B.H.Smith
Spirogyra inconstans Collins
Spirogyra incrassata Czurda
Spirogyra indica Krieger
Spirogyra inflata (Vaucher) Dumortier
Spirogyra insignis (Hassall) Kützing
Spirogyra insueta Zhu & Zhong
Spirogyra intermedia Rabenhorst
Spirogyra intorta C.-C.Jao
Spirogyra ionia Wade
Spirogyra irregularis Nägeli ex Kützing
Spirogyra ivorensis Gauthier-Lièvre
Spirogyra iyengarii Kadlubowska
Spirogyra jaoensis Randhawa
Spirogyra jaoi S.H.Ley
Spirogyra jassiensis (Teodoresco) Czurda
Spirogyra jatobae Transeau
Spirogyra jogensis Iyengar
Spirogyra jugalis (Dillwyn) Kützing
Spirogyra juliana Stancheva, J.D.Hall, McCourt & Sheath
Spirogyra kaffirita Transeau
Spirogyra kamatii Kamat
Spirogyra karnalae Randhawa
Spirogyra kolae Hajdu
Spirogyra koreana J.-H.Kim, Y.H.Kim, & I.K.Lee
Spirogyra krubergii V.J.Poljanski
Spirogyra kundaensis Singh
Spirogyra kuusamoensis Hirn
Spirogyra labbei Gauthier-Lièvre
Spirogyra labyrinthica Transeau
Spirogyra lacustris Czurda
Spirogyra lagerheimii Wittrock
Spirogyra laka Kützing
Spirogyra lallandiae Taft
Spirogyra lambertiana Transeau
Spirogyra lamellata (Bhashyakarla Rao) Krieger
Spirogyra lamellosa C.-C.Jao
Spirogyra lapponica Lagerheim
Spirogyra latireticulata Zheng & Ling
Spirogyra latviensis Czurda
Spirogyra laxa Kützing
Spirogyra laxistrata C.-C.Jao
Spirogyra lenticularis Transeau
Spirogyra lentiformis L.J.Bi
Spirogyra lians Transeau
Spirogyra libyca Gauthier-Lièvre
Spirogyra lismorensis Playfair
Spirogyra lodziensis Kadlubowska
Spirogyra longifissa Wei
Spirogyra lubrica Kützing
Spirogyra lucknowensis (Prasad & Dutta) Kadlubowska
Spirogyra lushanensis L.C.Li
Spirogyra luteospora Czurda
Spirogyra lutetiana Petit
Spirogyra lymerae Britton & Smith
Spirogyra macrospora (C.B.Rao) Krieger
Spirogyra maghrebiana Gauthier-Lièvre
Spirogyra major Kützing
Spirogyra majuscula Kützing
Spirogyra malmeana Hirn
Spirogyra manormae Randhawa
Spirogyra maravillosa Transeau
Spirogyra marchica H.Krieger
Spirogyra margalefii Aboal & Llimona
Spirogyra margaritata Wollny
Spirogyra marocana Gauthier-Lièvre
Spirogyra maxima (Hassall) Wittrock
Spirogyra megaspora Transeau
Spirogyra meinningensis
Spirogyra meridionalis W.J.Zhu & Zhong
Spirogyra miamiana Taft
Spirogyra microdictyon C.-C.Jao & Hu
Spirogyra microgranulata C.-C.Jao
Spirogyra micropunctata Transeau
Spirogyra microspora C.-C.Jao
Spirogyra mienningensis L.-C.Li
Spirogyra minor (Schmidle) Transeau
Spirogyra minuticrassoidea Yamagishi
Spirogyra minutifossa C.-C.Jao
Spirogyra mirabilis (Hassall) Kützing
Spirogyra miranda Kadlubowska
Spirogyra mirifica Zheng & Ling
Spirogyra mithalaensis A.M.Verma & B.Kumari
Spirogyra moebii Transeau
Spirogyra monodiana Gauthier-Lièvre
Spirogyra montserrati Margalef
Spirogyra multiconjugata N.C.Ferrer & E.J.Cáceres
Spirogyra multiformis Kadlubowska
Spirogyra multistrata Zheng & Ling
Spirogyra multitrata Zheng & Ling
Spirogyra mutabilis C.-C.Jao & H.J.Hu
Spirogyra narcissiana Transeau
Spirogyra natchita Transeau
Spirogyra nawaschinii Kasanowsky
Spirogyra neglecta (Hassall) Kützing
Spirogyra neorhizobranchialis C.-C.Jao & Zheng
Spirogyra nitida (O.F.Müller) Leiblein
Spirogyra nodifera O.Bock & W.Bock
Spirogyra notabilis Taft
Spirogyra nova-angliae Transeau
Spirogyra novae-angliae Transeau
Spirogyra nyctigama Taft
Spirogyra oblata C.-C.Jao
Spirogyra oblonga Liu
Spirogyra obovata C.-C.Jao
Spirogyra occidentalis (Transeau) Czurda
Spirogyra oligocarpa C.-C.Jao
Spirogyra olivascens Rabenhorst
Spirogyra ollicola C.-C.Jao & Zhong
Spirogyra oltmannsii Huber-Pestalozzi
Spirogyra orientalis West & G.S.West
Spirogyra orthospira Nägeli
Spirogyra ouarsenica Gauthier-Lièvre
Spirogyra oudhensis Randhawa
Spirogyra ovigera Montagne
Spirogyra pachyderma Gauthier-Lièvre
Spirogyra palghatensis Erady
Spirogyra paludosa Czurda
Spirogyra papulata C.-C.Jao
Spirogyra paradoxa Bhashyakarla Rao
Spirogyra paraguayensis O.Borge
Spirogyra parva (Hassall) Kützing
Spirogyra parvispora H.C.Wood
Spirogyra parvula (Transeau) Czurda
Spirogyra pascheriana Czurda
Spirogyra patliputri A.M.Verma & B.Kumari
Spirogyra peipeingensis C.-C.Jao
Spirogyra pellucida (Hassall) Kützing
Spirogyra perforans Transeau
Spirogyra plena (West & G.S.West) Czurda
Spirogyra poljanskii Kadlubowska
Spirogyra polymorpha Kirchner
Spirogyra polytaeniata Strasburger
Spirogyra porangabae Transeau
Spirogyra porticalis (O.F.Müller) Dumortier – type
Spirogyra pratensis Transeau
Spirogyra princeps (Vaucher) Link ex Meyen
Spirogyra proavita Langer
Spirogyra prolifica Kamat
Spirogyra propria Transeau
Spirogyra protecta H.C.Wood
Spirogyra pseudoaedaloides Kadlubowska
Spirogyra pseudobellis W.J.Zhu & Zhong
Spirogyra pseudocorrugata Gauthier-Lièvre
Spirogyra pseudocylindrica Prescott
Spirogyra pseudogibberosa Gauthier-Lièvre
Spirogyra pseudogranulata S.-H.Ley
Spirogyra pseudojuergensii H.Silva
Spirogyra pseudomaiuscula Gauthier-Lièvre
Spirogyra pseudomajuscula Gauthier-Lièvre
Spirogyra pseudomaxima Kadlubowska
Spirogyra pseudoneglecta Czurda
Spirogyra pseudonodifera O.Bock & W.Bock
Spirogyra pseudoplena Liu
Spirogyra pseudopulchrata C.-C.Jao
Spirogyra pseudoreticulata Kreiger
Spirogyra pseudorhizopus L.J.Bi
Spirogyra pseudosahnii Kadlubowska
Spirogyra pseudospreeiana C.-C.Jao
Spirogyra pseudosubreticulata Rickert & Hoshaw
Spirogyra pseudotenuissima O.Bock & W.Bock
Spirogyra pseudotetrapla Kadlubowska
Spirogyra pseudotexensis Bourrelly
Spirogyra pseudovarians Czurda
Spirogyra pseudovenusta Liu & Wei
Spirogyra pseudowoodii V.J.Poljanski
Spirogyra pulchella (H.C.Wood) H.C.Wood
Spirogyra pulchra Alexenko
Spirogyra pulchrifigurata C.-C.Jao
Spirogyra puncticulata C.-C.Jao
Spirogyra punctulata C.-C.Jao
Spirogyra quadrata (Hassall) P.Petit
Spirogyra quadrilaminata C.-C.Jao
Spirogyra quezelii Gauthier-Lièvre
Spirogyra quilonensis Kothari
Spirogyra quinina Kützing
Spirogyra quinquilaminata C.-C.Jao
Spirogyra randhawae Krieger
Spirogyra rattanii Kadlubowska
Spirogyra rectangularis Transeau
Spirogyra rectispira Merriman
Spirogyra regularis (Cedercreutz) Krieger
Spirogyra reinhardii V.Chmielevsky
Spirogyra reticulata Nordstedt
Spirogyra reticulatum Randhawa
Spirogyra reticuliana Randhawa
Spirogyra rhizobrachialis C.-C.Jao
Spirogyra rhizobranchialis C.-C.Jao
Spirogyra rhizoides Randhawa
Spirogyra rhizopus C.-C.Jao
Spirogyra rhodopea Petkoff
Spirogyra rivularis (Hassall) Rabenhorst
Spirogyra robusta (Nygaard) Czurda
Spirogyra rugosa (Transeau) Czurda
Spirogyra rugulosa Iwanoff
Spirogyra rupestris Schmidle
Spirogyra sahnii Randhawa
Spirogyra salina Aleem
Spirogyra sanjingensis Y.Wang & Z.Wang
Spirogyra sarmae M.Singh & M.Srivastava
Spirogyra schmidtii West & G.S.West
Spirogyra schweickerdtii Cholonky
Spirogyra scripta Nygaard
Spirogyra scrobiculata (Stockmayer) Czurda
Spirogyra sculpta Gauthier-Lièvre
Spirogyra semiornata C.-C.Jao
Spirogyra senegalensis Gauthier-Lièvre
Spirogyra setiformis (Roth) Martens ex Meneghini
Spirogyra shantungensis L.-C.Li
Spirogyra shanxiensis Zheng & Ling
Spirogyra shenzaensis Zheng
Spirogyra siamensis Transeau
Spirogyra siberica Skvortzov
Spirogyra silesiaca Kadlubowska
Spirogyra sinensis L.-C.Li
Spirogyra singularis Nordstedt
Spirogyra skujae Randhawa
Spirogyra skvortzowii Willi Kreiger
Spirogyra smithii Transeau
Spirogyra speciosa Liu
Spirogyra sphaerica (Misra) Willi Krieger
Spirogyra sphaerocarpa C.-C.Jao
Spirogyra sphaerospora Hirn
Spirogyra spinescens Kirjakov
Spirogyra splendida G.S West
Spirogyra spreeiana Rabenhorst
Spirogyra subaffinis F.E.Fritsch & M.F.Rich
Spirogyra subbullata Kadlubowska
Spirogyra subcolligata L.J.Bi
Spirogyra subcrassa Woronchin
Spirogyra subcrassiuscula L.J.Bi
Spirogyra subcylindrospora C.-C.Jao
Spirogyra subechinata Godward
Spirogyra subfossulata C.-C.Jao
Spirogyra subglabra Zheng & Ling
Spirogyra sublambertiana Zhao
Spirogyra subluteospora C.-C.Jao & Hu
Spirogyra submajuscula Ling & Zheng
Spirogyra submargaritata Godward
Spirogyra submarina (Collins) Transeau
Spirogyra submaxima Transeau
Spirogyra subobovata Chian
Spirogyra subpapulatata C.-C.Jao
Spirogyra subpellucida C.-C.Jao
Spirogyra subpolytaeniata C.-C.Jao
Spirogyra subpratensis Woronichin
Spirogyra subreflexa Liang & Wang
Spirogyra subreticulata F.E.Fritsch
Spirogyra subsalina Cedercreutz
Spirogyra subsalsa Kützing
Spirogyra subsalso-punctatulata Kadlubowska
Spirogyra subtropica Chian
Spirogyra suburbana C.-C.Jao
Spirogyra subvelata Kreiger
Spirogyra sulcata Blum
Spirogyra sundanensis Gauthier-Lièvre
Spirogyra superba L.Liu
Spirogyra supervarians Transeau
Spirogyra szechwanensis C.-C.Jao
Spirogyra taftiana Transeau
Spirogyra taiyuanensis Ling
Spirogyra tandae Randhawa
Spirogyra taylorii C.-C.Jao
Spirogyra tenuior (Transeau) Krieger
Spirogyra tenuispina Rundina
Spirogyra tenuissima (Hassall) Kützing
Spirogyra teodoresci Transeau
Spirogyra ternata Ripart
Spirogyra tetrapla Transeau
Spirogyra tibetensis C.-C.Jao
Spirogyra tjibodensis Faber
Spirogyra tolosana Comère
Spirogyra torta Blum
Spirogyra trachycarpa Skuja
Spirogyra transeauiana C.-C.Jao
Spirogyra triplicata (Collins) Transeau
Spirogyra trochainii Gauthier-Lièvre
Spirogyra tropica Kützing
Spirogyra tsingtaoensis L.-C.Li
Spirogyra tuberculata Lagerheim
Spirogyra tuberculosa Liang & Wang
Spirogyra tucumaniae B.Tracanna
Spirogyra tumida C.-C.Jao
Spirogyra turfosa F.Gay
Spirogyra tuwensis R.J.Patel & C.K.Asok Kumar
Spirogyra ugandense Gauthier-Lièvre
Spirogyra unduliseptum Randhawa
Spirogyra urbana C.-C.Jao & Zhong
Spirogyra van-zantenii Cholonky
Spirogyra variabilis C.-C.Jao & Hu
Spirogyra varians (Hassall) Kützing
Spirogyra variaspora Rickert & Hoshaw
Spirogyra variformis Transeau
Spirogyra varshaii Prasad & Dutta
Spirogyra vasishtii Rattan
Spirogyra velata Nordstedt
Spirogyra venkataramanii Rattan
Spirogyra venosa Kadlubowska
Spirogyra venusta C.-C.Jao
Spirogyra vermiculata C.-C.Jao & H.J.Hu
Spirogyra verrucogranulata I.C A.Dias & C.E.de M.Bicudo
Spirogyra verrucosa (C.B.Rao) Krieger
Spirogyra verruculosa C.-C.Jao
Spirogyra voltaica Gauthier-Lièvre
Spirogyra wangii L.C.Li
Spirogyra weberi Kützing
Spirogyra weishuiensis Ling & Zheng
Spirogyra weletischii West & G.S.West
Spirogyra welwitschii West & G.S.West
Spirogyra westii Transeau
Spirogyra willei Skuja
Spirogyra wittrockii Alexenko
Spirogyra wollnyi De Toni
Spirogyra wrightiana Transeau
Spirogyra wuchanensis C.-C.Jao & H.-J.Hu
Spirogyra wuhanensis C.-C.Jao & H.-J.Hu
Spirogyra xiaoganensis Liu
Spirogyra xinxiangensis L.J.Bi
Spirogyra yexianensis L.J.Bi
Spirogyra yuin S.Skinner & Entwisle
Spirogyra yunnanensis L.-C.Li
Trivia
American jazz fusion band Spyro Gyra was named after this genus of algae.
It is also the subject of the Brazilian Samba rock song "Spirogyra story" by Jorge Ben.
Gallery
| Biology and health sciences | Green algae | Plants |
1005946 | https://en.wikipedia.org/wiki/Squalene | Squalene | Squalene is an organic compound. It is a triterpene with the formula C30H50. It is a colourless oil, although impure samples appear yellow. It was originally obtained from shark liver oil (hence its name, as Squalus is a genus of sharks). An estimated 12% of bodily squalene in humans is found in sebum. Squalene has a role in topical skin lubrication and protection.
Most plants, fungi, and animals produce squalene as biochemical precursor in sterol biosynthesis, including cholesterol and steroid hormones in the human body. It is also an intermediate in the biosynthesis of hopanoids in many bacteria.
Squalene is an important ingredient in some vaccine adjuvants: The Novartis and GlaxoSmithKline adjuvants are called MF59 and AS03, respectively.
Role in triterpenoid synthesis
Squalene is a biochemical precursor to both steroids and hopanoids. For sterols, the squalene conversion begins with oxidation (via squalene monooxygenase) of one of its terminal double bonds, resulting in 2,3-oxidosqualene. It then undergoes an enzyme-catalysed cyclisation to produce lanosterol, which can be elaborated into other steroids such as cholesterol and ergosterol in a multistep process by the removal of three methyl groups, the reduction of one double bond by NADPH and the migration of the other double bond. In many plants, this is then converted into stigmasterol, while in many fungi, it is the precursor to ergosterol.
The biosynthetic pathway is found in many bacteria, and most eukaryotes, though has not been found in Archaea.
Production
Biosynthesis
Squalene is biosynthesised by coupling two molecules of farnesyl pyrophosphate. The condensation requires NADPH and the enzyme squalene synthase.
Industry
Synthetic squalene is prepared commercially from geranylacetone.
Shark conservation
In 2020, conservationists raised concerns about the potential slaughter of sharks to obtain squalene for a COVID-19 vaccine.
Environmental and other concerns over shark hunting have motivated its extraction from other sources. Biosynthetic processes use genetically engineered yeast or bacteria.
Uses
As an adjuvant in vaccines
Immunologic adjuvants are substances, administered in conjunction with a vaccine, that stimulate the immune system and increase the response to the vaccine. Squalene is not itself an adjuvant, but it has been used in conjunction with surfactants in certain adjuvant formulations.
An adjuvant using squalene is Seqirus' proprietary MF59, which is added to influenza vaccines to help stimulate the human body's immune response through production of CD4 memory cells. It is the first oil-in-water influenza vaccine adjuvant to be commercialised in combination with a seasonal influenza virus vaccine. It was developed in the 1990s by researchers at Ciba-Geigy and Chiron; both companies were subsequently acquired by Novartis. The influenza vaccine business of Novartis was later acquired by CSL, which created the company Seqirus. MF59 is present in the form of an emulsion and is added to make the vaccine more immunogenic. However, the mechanism of action remains unknown. MF59 is capable of switching on a number of genes that partially overlap with those activated by other adjuvants. How these changes are triggered is unclear; to date, no receptors responding to MF59 have been identified. One possibility is that MF59 affects cell behaviour by changing lipid metabolism, namely by inducing accumulation of neutral lipids within the target cells. An influenza vaccine called FLUAD, which used MF59 as an adjuvant, was approved for use in the US in people 65 years of age and older, beginning with the 2016–2017 flu season.
A 2009 meta-analysis assessed data from 64 clinical trials of influenza vaccines with the squalene-containing adjuvant MF59 and compared them to the effects of vaccines with no adjuvant. The analysis reported that the adjuvanted vaccines were associated with slightly lower risks of chronic diseases, but that neither type of vaccine altered the rate of autoimmune diseases; the authors concluded that their data "supports the good safety profile associated with MF59-adjuvanted influenza vaccines and suggests there may be a clinical benefit over non-MF59-containing vaccines".
Safety
Toxicology studies indicate that in the concentrations used in cosmetics, squalene has low acute toxicity, and is not a significant contact allergen or irritant.
The World Health Organization and the US Department of Defense have both published extensive reports that emphasise that squalene is naturally occurring, even in oils of human fingerprints. The WHO goes further to explain that squalene has been present in over 22 million flu vaccines given to patients in Europe since 1997 without significant vaccine-related adverse events.
Controversies
Attempts to link squalene to Gulf War syndrome have been debunked.
| Physical sciences | Terpenes and terpenoids | Chemistry |
1005984 | https://en.wikipedia.org/wiki/Baby%20transport | Baby transport | Various methods of transporting children have been used in different cultures and times. These methods include baby carriages (prams in British English), infant car seats, portable bassinets (carrycots), strollers (pushchairs), slings, backpacks, baskets and bicycle carriers.
The large, heavy prams (short for perambulator), which had become popular during the Victorian era, were replaced by lighter designs during the latter half of the 1900s.
Baskets, slings and backpacks
Infant carrying likely emerged early in human evolution as the emergence of bipedalism would have necessitated some means of carrying babies who could no longer cling to their mothers and/or simply sit on top of their mother's back. On-the-body carriers are designed in various forms such as baby sling, backpack carriers, and soft front or hip carriers, with varying materials and degrees of rigidity, decoration, support and confinement of the child. Slings, soft front carriers, and "baby carriages" are typically used for infants who lack the ability to sit or to hold their head up. Frame backpack carriers (a modification of the frame backpack), hip carriers, slings, mei tais and a variety of other soft carriers are used for older children.
Images of children being carried in slings can be seen in Egyptian artwork dating back to the time of the Pharaohs, and have been used in many indigenous cultures. One of the earliest European artworks showing baby wearing is a fresco by Giotto painted in around 1306 AD, which depicts Mary carrying Jesus in a sling. Baby wearing in a sling was well known in Europe in medieval times, but was mainly seen as a practice of marginalised groups such as beggars and Romani people. A cradleboard is a Native American baby carrier used to keep babies secure and comfortable and at the same time allowing the mothers freedom to work and travel. The cradleboards were attached to the mother's back straps from the shoulder or the head. For travel, cradleboards could be hung on a saddle or travois. Ethnographic tradition indicates that it was common practice to cradleboard newborn children until they were able to walk, although many mothers continued to swaddle their children well past the first birthday. Bound and wrapped on a cradleboard, a baby can feel safe and secure. Soft materials such as lichens, moss and shredded bark were used for cushioning and diapers. Cradleboards were either cut from flat pieces of wood or woven from flexible twigs like willow and hazel, and cushioned with soft, absorbent materials. The design of most cradleboards is a flat surface with the child wrapped tightly to it. It is usually only able to move its head.
On-the-body baby carrying started being known in western countries in the 1960s, with the advent of the structured soft pack in the mid-1960s. Around the same time, the frame backpack quickly became a popular way to carry older babies and toddlers. In the early 1970s, the wrap was reintroduced in Germany. The two ringed sling was invented by Rayner and Fonda Garner in 1981 and popularized by Dr William Sears starting in around 1985. In the early 1990s, the modern pouch carrier was created in Hawaii. While the Chinese mei tai has been around in one form or another for centuries, it did not become popular in the west until it was modernized with padding and other adjustments. It first became popular and well known in mid-2003.
Portable cradles, including cradleboards, baskets, and bassinets, have been used by many cultures to carry young infants.
Wheeled transport methods
Wheeled devices are generally divided into prams, used for newborn babies in which the infant normally lies down facing the pusher, and the strollers, which are used for the small child up to about three years old in a sitting position facing forward.
History
William Kent developed an early stroller in 1733, when the Duke of Devonshire asked him to build a means of transport that would carry his children. Kent obliged by constructing a shell-shaped basket on wheels that the children could sit in. This was richly decorated and meant to be pulled by a goat or small pony. Benjamin Potter Crandall sold baby carriages in the US in the 1830s which have been described as the "first baby carriages manufactured in the US". His son, Jesse Armour Crandall, was issued a number of patents for improvements and additions to the standard models. These included adding a brake to carriages, a model which folded, and designs for parasols and an umbrella hanger. Another early manufacturer was the F.A. Whitney Carriage Company. By 1840, the baby carriage had become extremely popular. Queen Victoria bought three carriages from Hitchings Baby Store.
The carriages of those days were built of wood or wicker and held together by expensive brass joints. These sometimes became heavily ornamented works of art. Models were also named after royalty: Princess and Duchess being popular names, as well as Balmoral and Windsor.
In June 1889, an African American man named William H. Richardson patented his idea of the first reversible stroller. The bassinet was designed so it could face out or in towards the parent. He also made structural changes to the carriage. Until then the axle did not allow each wheel to move separately. Richardson's design allowed this, which increased maneuverability of the carriages. As the 1920s began, prams were now available to all families and were becoming safer, with larger wheels, brakes, deeper prams, and lower, sturdier frames.
In 1965, Owen Maclaren, an aeronautical engineer, worked on complaints his daughter made about travelling from England to America with her heavy pram. Using his knowledge of aeroplanes, Maclaren designed a stroller with an aluminium frame and created the first true umbrella stroller. He then went on to found Maclaren, which manufactured and sold his new design. The design took off and soon "strollers" were easier to transport and used everywhere.
In the 1970s, however, the trend was more towards a more basic version, not fully sprung, and with a detachable body known as a "carrycot". Now, prams are very rarely used, being large and expensive when compared with "buggies" (see below). One of the longer lived and better known brands in the UK is Silver Cross, first manufactured in Hunslet, Leeds, in 1877, and later Guiseley from 1936 until 2002 when the factory closed. Silver Cross was then bought by the toy company David Halsall and Sons who relocated the head office to Skipton and expanded into a range of new, modern baby products including pushchairs and "travel systems". They continue to sell the traditional Silver Cross coach prams which are manufactured at a factory in Bingley in Yorkshire.
Since the 1980s, the stroller industry has developed with new features, safer construction and more accessories.
Prams
Larger and heavier prams, or perambulators, had been used since their introduction in the Victorian era; prams were also used for infants, often sitting up. The term carrycot became more common in the UK after the introduction of lighter units with detachable baby carriers in the 1970s.
As they developed through the years suspension was added, making the ride smoother for both the baby and the person pushing it.
The word pram is etymologically a shortening of its now less common synonym perambulator.
Strollers
"Strollers" or "pushchairs/buggies" (British English) are used for small children up to about three years old in a sitting position facing forward.
"Pushchair" was the popularly used term in the UK between its invention and the early 1980s, when a more compact design known as a "buggy" became the trend, popularised by the conveniently collapsible aluminium-framed Maclaren buggy designed and patented by the British aeronautical designer Owen Maclaren in 1965. "Buggy" is the usual term in the UK (sometimes "pushchair"); in American English, buggy usually refers to a four-wheeled vehicle known as a quad or quad bike in the UK. "Stroller" is the usual term in the USA. Newer versions can be configured to carry a baby lying down like a low pram and then be reconfigured to carry the child in the forward-facing position.
A variety of twin pushchairs are manufactured, some designed for babies of a similar age (such as twins) and some for those with a small age gap. Triple pushchairs are a fairly recent addition, due to the number of multiple births being on the increase. Safety guidelines for standard pushchairs apply. Most triple buggies have a weight limit of 50 kg and recommended use for children up to the age of four years.
A travel system is typically a set consisting of a chassis with a detachable baby seat and/or carrycot. Thus a travel system can be switched between a pushchair and a pram. Another benefit of a travel system is that the detached chassis (generally an umbrella closing chassis) when folded will usually be smaller than other types, to transport it in a car trunk or boot. Also, the baby seat will snap into a base meant to stay in an automobile, becoming a car seat. This allows undisturbed movement of the baby into or out of a car and a reduced chance of waking a sleeping baby.
Another modern design allows the lower part of the stroller to be elongated, transforming the stroller into a kick scooter. Steering occurs by leaning towards either side. Depending on the model, it can be equipped with a foot brake and/or a handbrake, and moderate scooter-like speeds can be reached. The first stroller of this kind was the so-called "Roller Buggy", developed by industrial designer Valentin Vodev in 2005. In 2012 the manufacturer Quinny became interested in the concept and teamed up with a Belgian studio to design another model.
The modern infant car seat is a relative latecomer. It is used to carry a child within a car. Such car seats are required by law in many countries to safely transport young children.
In contemporary culture, with four-figure systems or sleek jogging strollers common in some circles, strollers often serve as not only an infant transport device but also a highly visible symbol of everything from class to parenting philosophy.
Others
Bicycles can be fitted with a bicycle trailer or a children's bicycle seat to carry small children. An older child can ride his own bike, or ride a one-wheel trailer bike with an integrated seat and handle bars.
A "travel system" includes a car seat base, an infant car seat, and a baby stroller. The car seat base is installed in a car. The infant car seat snaps into the car seat base when traveling with a baby. From the car, the infant car seat can be hand carried and snapped onto the stroller.
Gallery
| Technology | Other | null |
1006035 | https://en.wikipedia.org/wiki/Unix%20time | Unix time | Unix time is a date and time representation widely used in computing. It measures time by the number of non-leap seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch. For example, at midnight on 1 January 2010, Unix time was 1262304000.
Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases. In modern computing, values are sometimes stored with higher granularity, such as microseconds or nanoseconds.
Definition
Unix time is currently defined as the number of non-leap seconds which have passed since 00:00:00 UTC on Thursday, 1 January 1970, which is referred to as the Unix epoch. Unix time is typically encoded as a signed integer.
The Unix time number 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31536000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as −31536000. Every day in Unix time consists of exactly 86400 seconds.
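Because leap seconds are ignored, a Unix time number can be derived from a UTC instant with plain calendar arithmetic. The following minimal sketch, using only Python's standard library (the function name unix_time is illustrative, not a standard API), reproduces the two examples above:

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def unix_time(dt: datetime) -> int:
    """Unix time of a UTC instant: whole days since the epoch times 86,400,
    plus the seconds elapsed since midnight UTC (leap seconds ignored)."""
    delta = dt - EPOCH
    return delta.days * 86400 + delta.seconds

print(unix_time(datetime(1971, 1, 1, tzinfo=timezone.utc)))  # 31536000
print(unix_time(datetime(1969, 1, 1, tzinfo=timezone.utc)))  # -31536000
```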
Unix time is sometimes referred to as Epoch time. This can be misleading since Unix time is not the only time system based on an epoch and the Unix epoch is not the only epoch used by other time systems.
Leap seconds
Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the Earth in relation to the Sun. International Atomic Time (TAI), in which every day is precisely 86400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation at a rate of roughly one second per year. In Unix time, every day contains exactly 86400 seconds. Each leap second uses the timestamp of a second that immediately precedes or follows it.
On a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of 31 December 1970, the day leading into the 1 January 1971 example above, the time representations progress as follows:

1970-12-31T23:59:58 UTC → 31535998
1970-12-31T23:59:59 UTC → 31535999
1971-01-01T00:00:00 UTC → 31536000
1971-01-01T00:00:01 UTC → 31536001
When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:

1998-12-31T23:59:59.00 UTC → 915148799.00
1998-12-31T23:59:60.00 UTC → 915148800.00
1998-12-31T23:59:60.50 UTC → 915148800.50
1999-01-01T00:00:00.00 UTC → 915148800.00
1999-01-01T00:00:01.00 UTC → 915148801.00
Unix time numbers are repeated in the second immediately following a positive leap second. For example, the Unix time number 1483228800 is ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.
A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.
When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.
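As an illustration of why such a table is needed, the sketch below corrects the naive difference of two Unix time numbers. The table shown is deliberately incomplete, listing only the Unix times of the leap seconds inserted at the ends of June 2012, June 2015, and December 2016, and the function name is illustrative:

```python
# Unix times at which a positive leap second was inserted (incomplete:
# ends of June 2012, June 2015, and December 2016 only).
LEAP_INSERTIONS = [1341100800, 1435708800, 1483228800]

def elapsed_seconds(t0: int, t1: int) -> int:
    """True seconds elapsed between two Unix times: the naive difference,
    corrected by the number of leap seconds inserted in the interval."""
    leaps = sum(t0 < leap <= t1 for leap in LEAP_INSERTIONS)
    return (t1 - t0) + leaps

# The Unix "minute" spanning the 2016 leap second really lasted 61 seconds.
print(elapsed_seconds(1483228740, 1483228800))  # 61
```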
A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
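A minimal sketch of that quotient-and-modulus algorithm in Python (decode is an illustrative name, not a standard function); note that Python's divmod floors toward negative infinity, matching the modulo convention described above so that negative inputs also work:

```python
from datetime import date, timedelta

def decode(n: int):
    """Convert a Unix time number to (UTC date, time of day)."""
    days, secs = divmod(n, 86400)          # days since epoch, seconds in day
    d = date(1970, 1, 1) + timedelta(days=days)
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return d, "%02d:%02d:%02d" % (h, m, s)

print(decode(915148800))  # (datetime.date(1999, 1, 1), '00:00:00')
print(decode(-1))         # (datetime.date(1969, 12, 31), '23:59:59')
```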
Non-synchronous Network Time Protocol-based variant
Commonly a Mills-style Unix clock is implemented with leap second handling not synchronous with the change of the Unix time number. Across a positive leap second, the time number initially decreases where the leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper.
This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.
In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.
The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
Variant that counts leap seconds
Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds; some Linux systems are configured this way. Time kept in this fashion is sometimes referred to as "TAI" (although timestamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that do not count leap seconds).
Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have.
In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based timestamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see section UTC basis below).
This system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system, in which the epoch was 1970-01-01T00:00:00 TAI rather than 1970-01-01T00:00:10 TAI, was proposed for inclusion in ISO C's <time.h>, but only the UTC part was accepted in 2011. A tai_clock does, however, exist in C++20.
Representing the number
A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant.
The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. A signed 32-bit value covers about 68 years before and after the 1970-01-01 epoch. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 2038-01-19T03:14:07Z this representation will overflow in what is known as the year 2038 problem.
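The 32-bit boundary dates quoted above can be verified directly, using the leap-second-free day of exactly 86,400 seconds that Unix time assumes:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Extremes of a signed 32-bit time_t.
print(epoch + timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07+00:00
print(epoch - timedelta(seconds=2**31))      # 1901-12-13 20:45:52+00:00
```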
In some newer operating systems, time_t has been widened to 64 bits. This expands the times representable to about 292 billion years in both directions, which is over twenty times the present age of the universe.
There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type.
The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned.
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
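Python exposes the same composite idea through time.time_ns(); splitting the nanosecond count mirrors the tv_sec and tv_nsec fields of struct timespec:

```python
import time

# Current time as integer nanoseconds since the Unix epoch (Python 3.7+).
ns = time.time_ns()

# Split into whole seconds and the sub-second remainder in nanoseconds,
# analogous to struct timespec's tv_sec and tv_nsec fields.
tv_sec, tv_nsec = divmod(ns, 1_000_000_000)
print(tv_sec, tv_nsec)
```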
UTC basis
The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.
The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time.
The meaning of Unix time values below (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use.
The possibility of ending the use of leap seconds in civil time has been under consideration. A likely means to execute this change is to define a new time scale, called International Time, that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same.
History
The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. Timestamps stored this way could only represent a range of a little over two and a quarter years. The epoch being counted from was changed with Unix releases to prevent overflow, with midnight on 1 January 1971 and 1 January 1972 both being used as epochs during Unix's early development. Early definitions of Unix time also lacked timezones.
The current epoch of 1 January 1970 00:00:00 UTC was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The precision was changed to count in seconds in order to avoid short-term overflow.
When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other.
The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year.
The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.
Usage
Unix time is widely adopted in computing beyond its original application as the system time for Unix. Unix time is available in almost all system programming APIs, including those provided by both Unix-based and non-Unix operating systems. Almost all modern programming languages provide APIs for working with Unix time or converting them to another data structure. Unix time is also used as a mechanism for storing timestamps in a number of file systems, file formats, and databases.
The C standard library uses Unix time for all date and time functions, and Unix time is sometimes referred to as time_t, the name of the data type used for timestamps in C and C++. C's Unix time functions are defined as the system time API in the POSIX specification. The C standard library is used extensively in all modern desktop operating systems, including Microsoft Windows and Unix-like systems such as macOS and Linux, where it is a standard programming interface.
iOS provides a Swift API which defaults to using an epoch of 1 January 2001 but can also be used with Unix timestamps. Android uses Unix time alongside a timezone for its system time API.
Windows does not use Unix time for storing time internally but does use it in system APIs, which are provided in C++ and implement the C standard library specification. Unix time is used in the PE format for Windows executables.
Unix time is typically available in major programming languages and is widely used in desktop, mobile, and web application programming. Java provides an Instant object which holds a Unix timestamp in both seconds and nanoseconds. Python provides a time library which uses Unix time. JavaScript provides a Date library which provides and stores timestamps in milliseconds since the Unix epoch and is implemented in all modern desktop and mobile web browsers as well as in JavaScript server environments like Node.js.
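For example, Python's standard time module can reproduce both the whole-second count used by C's time() and the millisecond count returned by JavaScript's Date.now():

```python
import time

print(int(time.time()))         # whole seconds, like C's time(NULL)
print(time.time_ns() // 10**6)  # milliseconds, like JavaScript's Date.now()
```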
Filesystems designed for use with Unix-based operating systems tend to use Unix time. APFS, the file system used by default across all Apple devices, and ext4, which is widely used on Linux and Android devices, both use Unix time in nanoseconds for file timestamps. Several archive file formats can store timestamps in Unix time, including RAR and tar. Unix time is also commonly used to store timestamps in databases, including in MySQL and PostgreSQL.
Limitations
Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.
Range of representable times
Unix time by design does not require a specific size for the storage, but most common implementations of Unix time use a signed integer with the same size as the word size of the underlying hardware. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, this means that many programs using Unix time are using signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2147483647, and the minimum value is −2147483648, making it impossible to represent dates before 13 December 1901 (at 20:45:52 UTC) or after 19 January 2038 (at 03:14:07 UTC). The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901. The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901.
Date range cutoffs are not an issue with 64-bit representations of Unix time, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, or 292 billion years in either direction of the 1970 epoch.
Alternatives
Unix time is not the only standard for time that counts away from an epoch. On Windows, the FILETIME type stores time as a count of 100-nanosecond intervals that have elapsed since 0:00 GMT on 1 January 1601. Windows epoch time is used to store timestamps for files and in protocols such as the Active Directory Time Service and Server Message Block.
The Network Time Protocol used to coordinate time between computers uses an epoch of 1 January 1900, counted in an unsigned 32-bit integer for seconds and another unsigned 32-bit integer for fractional seconds; the seconds counter rolls over every 2^32 seconds (about once every 136 years).
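Converting between these epochs and Unix time is a matter of constant offsets: 11,644,473,600 seconds separate 1601 from 1970, and 2,208,988,800 seconds separate 1900 from 1970. A sketch of both conversions (function names are illustrative, and NTP era rollovers are ignored):

```python
FILETIME_EPOCH_OFFSET = 11_644_473_600  # seconds from 1601-01-01 to 1970-01-01
NTP_EPOCH_OFFSET = 2_208_988_800        # seconds from 1900-01-01 to 1970-01-01

def filetime_to_unix(filetime: int) -> float:
    # FILETIME counts 100-nanosecond intervals since 1601.
    return filetime / 10**7 - FILETIME_EPOCH_OFFSET

def ntp_seconds_to_unix(ntp_seconds: int) -> int:
    # NTP's 32-bit seconds field counts from 1900 (era 0 only).
    return ntp_seconds - NTP_EPOCH_OFFSET
```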
Many applications and programming languages provide methods for storing time with an explicit timezone. There are also a number of time format standards which exist to be readable by both humans and computers, such as ISO 8601.
Notable events in Unix time
Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated, such as 2^30 (1,073,741,824), which occurred at 13:37:04 UTC on Saturday, 10 January 2004.
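The 2^30 date is easy to verify with the same leap-second-free arithmetic used above:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=2**30))  # 2004-01-10 13:37:04+00:00
```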
The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format within the digits of Unix time (119731017) took place.
At 01:46:40 UTC on Sunday, 9 September 2001, the Unix billennium (Unix time number 1,000,000,000) was celebrated. The name billennium is a portmanteau of billion and millennium. Some programs which stored timestamps using a text representation encountered sorting errors, as in a text sort, times after the turnover starting with a 1 digit erroneously sorted before earlier times starting with a 9 digit. Affected programs included the popular Usenet reader KNode and e-mail client KMail, part of the KDE desktop environment. Such bugs were generally cosmetic in nature and quickly fixed once problems became apparent. The problem also affected many Filtrix document-format filters provided with Linux versions of WordPerfect; a patch was created by the user community to solve this problem, since Corel no longer sold or supported that version of the program.
At 23:31:30 UTC on Friday, 13 February 2009, the decimal representation of Unix time reached 1,234,567,890 seconds. Google celebrated this with a Google Doodle. Parties and other celebrations were held around the world, among various technical subcultures, to celebrate the 1,234,567,890th second.
In popular culture
Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".
| Technology | Computer architecture concepts | null |
1006179 | https://en.wikipedia.org/wiki/Avenue%20%28landscape%29 | Avenue (landscape) | In landscaping, an avenue (from the French), alameda (from the Portuguese and Spanish), or allée (from the French), is a straight path or road with a line of trees or large shrubs running along each side, which is used, as its Latin source venire ("to come") indicates, to emphasize the "coming to," or arrival at a landscape or architectural feature. In most cases, the trees planted in an avenue will be all of the same species or cultivar, so as to give uniform appearance along the full length of the avenue.
The French term allée is used for avenues planted in parks and landscape gardens, as well as boulevards such as the Grande Allée in Quebec City, Canada, and Karl-Marx-Allee in Berlin.
History
The avenue is one of the oldest implements in the history of gardens. An Avenue of Sphinxes still leads to the tomb of the pharaoh Hatshepsut. Avenues similarly defined by guardian stone lions lead to the Ming tombs in China. British archaeologists have adopted highly specific criteria for "avenues" within the context of British archaeology.
In French formal garden Baroque landscape design style, avenues of trees that were centered upon the dwelling radiated across the landscape. See the avenues in the Gardens of Versailles or Het Loo. Other late 17th-century French and Dutch landscapes, in that intensely ordered and flat terrain, fell naturally into avenues; Meindert Hobbema, in The Avenue at Middelharnis (1689) presents such an avenue in farming country, neatly flanked at regular intervals by rows of young trees that have been rigorously limbed up; his central vanishing point mimics the avenue's propensity to draw the spectator forwards along it.
In Austria-Hungary, the fashion for establishing representative avenues appeared as early as the Renaissance and reached its peak in the Baroque period. Avenues lined the access roads to chateaus and manors, as well as pilgrimage routes and Stations of the Cross. The manorial landscape architecture was followed by "folk landscaping" with wayside chapels, crosses and shrines accompanied by trees. Later, in 1752, Maria Theresa decreed that trees be planted along the new imperial roads for economic, aesthetic, orientation and safety reasons. Most avenues were created during the reigns of Maria Theresa and Joseph II. At the turn of the 18th and 19th centuries, new landscaping ideas came from England, and formal aesthetics were replaced by the aesthetics of the natural landscape. During the Napoleonic Wars, pyramidal poplars became a new element, popular due to their fast growth and distinctive shape. In the middle of the 19th century, when the construction of imperial roads continued and a network of non-state side roads was created, the law ordered the planting of avenues along them as well, especially of fruit trees and mulberries. As many baroque alleys aged and were felled, fruit tree alleys became increasingly popular. With the development of motoring, the oldest avenues often hindered the widening and modernization of rural roads and became the subject of dispute between conservationists and traffic safety requirements.
Design
To enhance the approach to mansions or manor houses, avenues were planted along the entrance drive. Sometimes the avenues are in double rows on each side of a road. Trees preferred for avenues were selected for their height and speed of growth, such as poplar, beech, lime, and horse chestnut. In the American antebellum era South, the southern live oak was typically used, because the trees created a beautiful shade canopy.
Sometimes tree avenues were designed to direct the eye toward a distinctive architectural building or feature, such as a chapel, gazebo, or architectural folly.
Street name
Origin
Avenue as a street name in French, Spanish (avenida) and other languages implies a large straight street in a city, often created as part of a large scheme of urban planning such as Baron Haussmann's remodelling of Paris or the L'Enfant Plan for Washington D.C.; "avenues" will typically be the main roads. This pattern is very often followed in the United States, indeed all the Americas, but in the United Kingdom this sense is less strong and the name is used more randomly, mostly for suburban streets developed in the 20th century, though Western and Eastern Avenues in London are main traffic arteries out of the city, if not very straight.
Cities
In cities which have a grid-based naming system, such as the borough of Manhattan in New York City, there may be a convention that the streets called avenues run parallel in one direction – roughly north–south in the case of Manhattan – while "streets" run at 90 degrees to them across the avenues; roughly east–west in Manhattan. In Washington, DC the avenues radiate from the centre running diagonally across the grid of streets, which follows typical French usage of the name (in France "boulevards" are often main roads running round the city centre). In Phoenix, Arizona, "the avenues" can colloquially mean "the west side of town", due to the numbered north–south-running roads being called "Avenues" in the western part of the city, separated from the eastern "Streets" by a "Central Avenue". Similarly, "the avenues" in San Francisco, California refers to the Richmond District and the Sunset District, the two neighborhoods on the Pacific coast, north and south of Golden Gate Park, respectively.
In Anglophone urban or suburban settings, "avenue" is one of the usual suite of words used in street names, along with "boulevard", "circle", "court", "drive", "lane", "place", "road", "street", "terrace", "way", "gate" and so on, any of which may carry connotations as to the street's size, importance, or function. Avenues were usually lined with trees when first built, although many avenues have lost their trees to make way for overhead wiring, parking or to allow light into properties.
Notable avenues
Paseo del Prado, Madrid, Spain
Avenue des Champs-Élysées, Paris, France
Avenida da Liberdade, Lisbon, Portugal
Paseo de la Reforma, Mexico City, Mexico
Fifth Avenue, Manhattan, New York City, United States
Madison Avenue, Manhattan, New York City, United States
Michigan Avenue, Chicago, Illinois, United States
La Brea Avenue, Los Angeles, California, United States
Holland Park Avenue, London, United Kingdom
9 de Julio Avenue, Buenos Aires, Argentina
Paulista Avenue, São Paulo, Brazil
Kurfürstendamm (Ku'Damm), Berlin, Germany
Wutong Avenue, Nanjing, China
Gallery
| Technology | Road infrastructure | null |
1008170 | https://en.wikipedia.org/wiki/Oryza%20sativa | Oryza sativa | Oryza sativa, having the common name Asian cultivated rice, is the much more common of the two rice species cultivated as a cereal, the other species being O. glaberrima, African rice. It was first domesticated in the Yangtze River basin in China 13,500 to 8,200 years ago.
Oryza sativa belongs to the genus Oryza and the BOP clade in the grass family Poaceae. With a genome consisting of 430 Mbp across 12 chromosomes, it is renowned for being easy to genetically modify and is a model organism for the study of the biology of cereals and monocots.
Description
O. sativa has an erect stalk that grows tall, with a smooth surface. The leaves are lanceolate and grow from a ligule.
Classification
The generic name Oryza is a classical Latin word for rice, while the specific epithet sativa means "cultivated".
Oryza sativa contains two major subspecies: the sticky, short-grained japonica or sinica variety, and the nonsticky, long-grained indica variety. Japonica was domesticated in the Yangtze Valley 9,000–6,000 years ago, and its varieties can be cultivated in dry fields (in Japan it is cultivated mainly submerged), in temperate East Asia, upland areas of Southeast Asia, and high elevations in South Asia, while indica was domesticated around the Ganges 8,500–4,500 years ago, and its varieties are mainly lowland rices, grown mostly submerged, throughout tropical Asia. Rice grain occurs in a variety of colors, including white, brown, black (purple when cooked), and red.
A third subspecies, which is broad-grained and thrives under tropical conditions, was identified based on morphology and initially called javanica, but is now known as tropical japonica. Examples of this variety include the medium-grain 'Tinawon' and 'Unoy' cultivars, which are grown in the high-elevation rice terraces of the Central Cordillera Mountains of northern Luzon, Philippines.
Glaszmann (1987) used isozymes to sort O. sativa into six groups: japonica, aromatic, indica, aus, rayada, and ashina.
Garris et al. (2004) used simple sequence repeats to sort O. sativa into five groups: temperate japonica, tropical japonica and aromatic comprise the japonica varieties, while indica and aus comprise the indica varieties. The Garris scheme has held up against newer analyses as of 2019, though one 2014 article argues that rayada is distinct enough to be its own group under japonica.
Genetics
One intensively studied gene regulates the overall architecture and growth habit of the plant; some of its epialleles increase rice yield. An accurate and usable simple sequence repeat marker set was developed and used to generate a high-density map. A multiplex high-throughput marker-assisted selection system has been developed but, as with other crop HTMAS systems, has proven difficult to customize, costly (both directly and for the equipment), and inflexible. Other molecular breeding tools have produced rice blast resistant cultivars. DNA microarrays have been used to advance understanding of hybrid vigor in rice, QTL sequencing has been used to elucidate seedling vigor, and genome-wide association study (GWAS) by whole-genome sequencing (WGS) has been used to investigate various agronomic traits.
In total, 641 copy number variations are known. Exome capture often reveals new single nucleotide polymorphisms in rice, due to its large genome and high degree of DNA repetition.
Resistance to the rice blast fungus Magnaporthe grisea is provided by various resistance genes. O. sativa uses the plant hormones abscisic acid and salicylic acid to regulate immune responses. Salicylic acid broadly stimulates, and abscisic acid suppresses, immunity to M. grisea; success depends on the balance between their levels.
O. sativa has a large number of insect resistance genes specifically for the brown planthopper. To date, 15 R genes have been cloned and characterized.
| Biology and health sciences | Poales | null |
3048284 | https://en.wikipedia.org/wiki/Moons%20of%20Pluto | Moons of Pluto | The dwarf planet Pluto has five natural satellites. In order of distance from Pluto, they are Charon, Styx, Nix, Kerberos, and Hydra. Charon, the largest, is mutually tidally locked with Pluto, and is massive enough that Pluto and Charon are sometimes considered a binary dwarf planet.
History
The innermost and largest moon, Charon, was discovered by James Christy on 22 June 1978, nearly half a century after Pluto was discovered. This led to a substantial revision in estimates of Pluto's size, which had previously assumed that the observed mass and reflected light of the system were all attributable to Pluto alone.
Two additional moons were imaged by astronomers of the Pluto Companion Search Team preparing for the New Horizons mission and working with the Hubble Space Telescope on 15 May 2005, which received the provisional designations S/2005 P 1 and S/2005 P 2. The International Astronomical Union officially named these moons Nix (Pluto II, the inner of the two moons, formerly P 2) and Hydra (Pluto III, the outer moon, formerly P 1), on 21 June 2006. Kerberos, announced on 20 July 2011, was discovered while searching for Plutonian rings. The discovery of Styx was announced on 7 July 2012 while looking for potential hazards for New Horizons.
Charon
Charon is about half the diameter of Pluto and is massive enough (nearly one eighth of the mass of Pluto) that the system's barycenter lies between them, approximately 960 km above Pluto's surface. Charon and Pluto are also tidally locked, so that they always present the same face toward each other. The IAU General Assembly in August 2006 considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned.
Like Pluto, Charon is a perfect sphere to within measurement uncertainty.
Circumbinary moons
Pluto's four small circumbinary moons orbit Pluto at two to four times the distance of Charon, ranging from Styx at 42,700 kilometres to Hydra at 64,800 kilometres from the barycenter of the system. They have nearly circular prograde orbits in the same orbital plane as Charon.
All are much smaller than Charon. Nix and Hydra, the two larger, are roughly 42 and 55 kilometers on their longest axis respectively, and Styx and Kerberos are 7 and 12 kilometers respectively. All four are irregularly shaped.
Characteristics
The Pluto system is highly compact and largely empty: prograde moons could stably orbit Pluto out to 53% of the Hill radius (the gravitational zone of Pluto's influence) of 6 million km, or out to 69% for retrograde moons. However, only the inner 3% of the region where prograde orbits would be stable is occupied by satellites, and the region from Styx to Hydra is packed so tightly that there is little room for further moons with stable orbits within this region.
An intense search conducted by New Horizons confirmed that no moons larger than 4.5 km in diameter exist out to distances up to 180,000 km from Pluto (6% of the stable region for prograde moons), assuming Charon-like albedos of 0.38 (for smaller distances, this threshold is still smaller).
The orbits of the moons are confirmed to be circular and coplanar, with inclinations differing less than 0.4° and eccentricities less than 0.005.
The discovery of Nix and Hydra suggested that Pluto could have a ring system. Small-body impacts could eject debris off of the small moons which can form into a ring system. However, data from a deep-optical survey by the Advanced Camera for Surveys on the Hubble Space Telescope, by occultation studies, and later by New Horizons, suggest that no ring system is present.
Resonances
Styx, Nix, and Hydra are thought to be in a 3-body Laplace orbital resonance with orbital periods in a ratio of 18:22:33. The ratios should be exact when orbital precession is taken into account. Nix and Hydra are in a simple 2:3 resonance. Styx and Nix are in a 9:11 resonance, while the resonance between Styx and Hydra has a ratio of 6:11. The Laplace resonance also means that the ratios of synodic periods are such that there are 5 Styx–Hydra conjunctions and 3 Nix–Hydra conjunctions for every 2 conjunctions of Styx and Nix. If λ denotes the mean longitude and Φ the libration angle, then the resonance can be formulated as Φ = 3λ_Styx − 5λ_Nix + 2λ_Hydra. As with the Laplace resonance of the Galilean satellites of Jupiter, triple conjunctions never occur. Φ librates about 180° with an amplitude of at least 10°.
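The integer ratios quoted above are internally consistent, and this can be verified with exact arithmetic: taking mean motions proportional to 1/18, 1/22, and 1/33, the Laplace combination 3λ_Styx − 5λ_Nix + 2λ_Hydra has zero net drift, and the pairwise conjunction rates come out in the stated 5:3:2 proportion. A minimal sketch of that check:

```python
from fractions import Fraction

# Mean motions proportional to 1/period, using the 18:22:33 period ratios
n_styx, n_nix, n_hydra = (Fraction(1, p) for p in (18, 22, 33))

# Laplace resonance condition: 3*n_Styx - 5*n_Nix + 2*n_Hydra = 0
assert 3 * n_styx - 5 * n_nix + 2 * n_hydra == 0

# Synodic (conjunction) frequencies for each pair
syn_sh = n_styx - n_hydra   # Styx–Hydra
syn_nh = n_nix - n_hydra    # Nix–Hydra
syn_sn = n_styx - n_nix     # Styx–Nix

# Per 2 Styx–Nix conjunctions: expect 5 Styx–Hydra and 3 Nix–Hydra conjunctions
print(syn_sh / syn_sn * 2, syn_nh / syn_sn * 2)  # -> 5 3
```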
All of the outer circumbinary moons are also close to mean motion resonance with the Charon–Pluto orbital period: taking that period as 1, Styx, Nix, Kerberos, and Hydra are in a 1:3:4:5:6 sequence of near resonances, with Styx approximately 5.4% from its resonance, Nix approximately 2.7%, Kerberos approximately 0.6%, and Hydra approximately 0.3%. It may be that these orbits originated as forced resonances when Charon was tidally boosted into its current synchronous orbit, and then were released from resonance as Charon's orbital eccentricity was tidally damped. The Pluto–Charon pair creates strong tidal forces, with the gravitational field at the outer moons varying by 15% peak to peak.
However, it was calculated that a resonance with Charon could boost either Nix or Hydra into its current orbit, but not both: boosting Hydra would have required a near-zero Charonian eccentricity of 0.024, whereas boosting Nix would have required a larger eccentricity of at least 0.05. This suggests that Nix and Hydra were instead captured material, formed around Pluto–Charon, and migrated inward until they were trapped in resonance with Charon. The existence of Kerberos and Styx may support this idea.
Rotation
Prior to the New Horizons mission, Nix, Hydra, Styx, and Kerberos were predicted to rotate chaotically or tumble. However, New Horizons imaging found that they had not tidally spun down to the near-synchronous spin state in which chaotic rotation or tumbling would be expected, and that all four moons are at high obliquity. Either they were born that way, or they were tipped by a spin precession resonance.
Styx may be experiencing intermittent and chaotic obliquity variations.
Mark R. Showalter speculated that "Nix can flip its entire pole. It could actually be possible to spend a day on Nix in which the sun rises in the east and sets in the north. It is almost random-looking in the way it rotates."
Only one other moon, Saturn's moon Hyperion, is known to tumble, though it is likely that Haumea's moons do so as well.
Origin
It is suspected that Pluto's satellite system was created by a massive collision, similar to the Theia impact thought to have created the Moon. In both cases, the high angular momentum of the system can only be explained by such a scenario. The nearly circular orbits of the smaller moons suggest that they were also formed in this collision, rather than being captured Kuiper Belt objects. This and their near orbital resonances with Charon (see above) suggest that they formed closer to Pluto than they are at present and migrated outward as Charon reached its current orbit. Their grey color is different from that of Pluto, one of the reddest bodies in the Solar System; this is thought to be due to a loss of volatiles during the impact or subsequent coalescence, leaving the surfaces of the moons dominated by water ice. However, such an impact should have created additional debris (more moons), yet no further moons or rings were discovered by New Horizons, ruling out any more moons of significant size orbiting Pluto. An alternative hypothesis is that the collision happened at about 2,000 miles per hour, not powerful enough to destroy Charon or Pluto; instead, the two bodies remained attached to each other for up to ten hours before separating again. Pluto's faster rotation at the time, once every three hours, would have created a centrifugal force stronger than the gravitational attraction between the two bodies, causing Charon to separate from Pluto while remaining gravitationally bound to it. The same process could have created the other four known moons from material that escaped Pluto and Charon.
List
Pluto's moons are listed here by orbital period, from shortest to longest. Charon, which is massive enough to have collapsed into a spheroid under its own gravitation, is highlighted in light purple. As the system barycenter lies far above Pluto's surface, Pluto's barycentric orbital elements have been included as well. All elements are with respect to the Pluto-Charon barycenter. The mean separation distance between the centers of Pluto and Charon is 19,596 km.
Scale model of the Pluto system
Mutual events
Transits occur when one of Pluto's moons passes between Pluto and the Sun. This occurs when one of the satellites' orbital nodes (the points where their orbits cross Pluto's ecliptic) lines up with Pluto and the Sun. This can only occur at two points in Pluto's orbit; coincidentally, these points are near Pluto's perihelion and aphelion. Occultations occur when Pluto passes in front of and blocks one of Pluto's satellites.
Charon has an angular diameter of 4 degrees of arc as seen from the surface of Pluto; the Sun appears much smaller, only 39 to 65 arcseconds. By comparison, the Moon as viewed from Earth has an angular diameter of only 31 minutes of arc, or just over half a degree. Charon thus appears to have about eight times the Moon's angular diameter, and hence roughly 64 times its apparent area. This is due to Charon's proximity to Pluto rather than its size: Charon's radius is just over one-third that of the Moon, but the Moon is about 20 times as distant from Earth's surface as Charon is from Pluto's. This proximity further ensures that a large proportion of Pluto's surface can experience an eclipse. Because Pluto always presents the same face towards Charon due to tidal locking, only the Charon-facing hemisphere experiences solar eclipses by Charon.
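These apparent sizes follow from elementary geometry. In the sketch below, only the 19,596 km Pluto–Charon separation comes from this article (see the list section above); the radii of Charon (~606 km), Pluto (~1,188 km), the Moon (~1,737 km), the Earth (~6,371 km), and the Earth–Moon distance (~384,400 km) are assumed nominal values:

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Full angular diameter, in degrees, of a sphere seen from distance_km."""
    return math.degrees(2 * math.atan(radius_km / distance_km))

# Distances measured from the observer's surface to the body's centre
charon = angular_diameter_deg(606, 19_596 - 1_188)    # Charon seen from Pluto's surface
moon = angular_diameter_deg(1_737, 384_400 - 6_371)   # Moon seen from Earth's surface

print(f"Charon from Pluto: {charon:.1f} deg")  # ~3.8 deg, i.e. about 4 degrees
print(f"Moon from Earth:  {moon:.2f} deg")     # ~0.53 deg, just over half a degree
```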
The smaller moons can cast shadows elsewhere. The angular diameters of the four smaller moons as seen from Pluto are uncertain: Nix's is 3–9 minutes of arc and Hydra's is 2–7 minutes. Both are much larger than the Sun's angular diameter, so these moons can cause total solar eclipses.
Eclipses by Styx and Kerberos are more difficult to estimate, as both moons are very irregular, with angular dimensions of 76.9 × 38.5 to 77.8 × 38.9 arcseconds for Styx, and 67.6 × 32.0 to 68.0 × 32.2 arcseconds for Kerberos. Styx therefore produces no annular eclipses, its widest axis being more than 10 arcseconds larger than the Sun at its largest; Kerberos, although slightly larger, cannot produce total eclipses, as its largest minor axis is a mere 32 arcseconds. Eclipses by Kerberos and Styx will consist almost entirely of partial and hybrid eclipses, with total eclipses being extremely rare.
The next period of mutual events due to Charon will begin in October 2103, peak in 2110, and end in January 2117. During this period, solar eclipses will occur once each Plutonian day, with a maximum duration of 90 minutes.
Exploration
The Pluto system was visited by the New Horizons spacecraft in July 2015. Images with resolutions of up to 330 meters per pixel were returned of Nix and up to 1.1 kilometers per pixel of Hydra. Lower-resolution images were returned of Styx and Kerberos.
| Physical sciences | Solar System | Astronomy |
3049231 | https://en.wikipedia.org/wiki/Jacob%20sheep | Jacob sheep | The Jacob is a British breed of domestic sheep. It combines two characteristics unusual in sheep: it is piebald—dark-coloured with areas of white wool—and it is often polycerate or multi-horned. It most commonly has four horns. The origin of the breed is not known; broken-coloured polycerate sheep were present in England by the middle of the seventeenth century, and were widespread a century later. A breed society was formed in 1969, and a flock book was published from 1972.
The Jacob was kept for centuries as a "park sheep", to ornament the large estates of landowners. In modern times it is reared mainly for wool, meat and skins.
History
The origins of the Jacob are not known. It has been bred in the British Isles for several hundred years. Sheep of this kind, little different from the modern breed, were shown in paintings from about 1760 at Tabley House in Cheshire, and – by George Stubbs – at Wentworth Woodhouse in Yorkshire.
In the de Tabley family, the tradition was that the piebald sheep had come ashore in Ireland from a wrecked ship of the Spanish Armada in 1588, and been brought to England by Sir John Byrne on his marriage.
Among the many accounts of ancient breeds of piebald sheep is the story of Jacob from the Book of Genesis (Genesis 30:31–43). Jacob took every speckled and spotted sheep from his father-in-law's (Laban's) flock and bred them. Due to the resemblance to the animal described in Genesis, the Jacob sheep was named for the Biblical figure of Jacob sometime in the 20th century.
In 2009, a study which used endogenous retrovirus markers to investigate the history of sheep domestication found the Jacob to be more closely linked to sheep from Africa and South-west Asia than to other British breeds, though all domestic breeds can be traced back to an origin in the Fertile Crescent.
Some believe that the modern breed is actually the same one mentioned in the Bible (although there is little genetic evidence), having accompanied the westward expansion of human civilisation through Northern Africa, Sicily, Spain and eventually England. Elisha Gootwine, a sheep expert at the Israeli Agriculture Ministry, says that the resemblance of a British breed to the Bible story is a coincidence, that the breed was not indigenous to ancient Israel, and that "Jacob Sheep are related to Jacob the same as the American Indians are related to India".
The Jacob was referred to as the "Spanish sheep" for much of its early recorded history. It has been bred in England for at least 350 years, and spotted sheep were widespread in England by the mid-18th century. The British landed gentry used Jacobs as ornamental sheep on their estates and continued to import them, which probably kept the breed extant.
A breed society, the Jacob Sheep Society, was formed in July 1969. Mary Cavendish, dowager Duchess of Devonshire, who had a flock of Jacob sheep at Chatsworth House in Derbyshire, was the first president of the society. From 1972 onwards, the society published a flock book.
Jacobs were first exported to North America in the early 20th century. Some individuals acquired them from zoos in the 1960s and 1970s, but the breed remained rare in America until the 1980s; registration began in 1985. The first North American association for the breed, the Jacob Sheep Breeders Association, was established in 1988. The Jacob was introduced to Israel in 2016, when a small flock of about 120 head was shipped there from Canada by a couple who believed the breed is the same one mentioned in Genesis.
Conservation status
In 2012 the total Jacob population in the UK was reported to the DAD-IS database of the FAO as 5638, of which 2349 were registered breeding ewes. In 2017, the Rare Breeds Survival Trust listed the Jacob in Category 6 ("Other UK Native Breeds") of its watchlist, in which categories 1–5 are for various degrees of conservation risk, and category 6 is for breeds which have more than 3000 breeding females registered in the herd-book. Small numbers of Jacobs are reported from four other countries: the Czech Republic, Germany, the Netherlands and the United States, with conservation status in those countries ranging from critical to endangered-maintained.
Characteristics
The Jacob is a small, multi-horned, piebald sheep that resembles a goat in its conformation. However, it is not the only breed that can produce polycerate or piebald offspring. Other polycerate breeds include the Hebridean, Icelandic, Manx Loaghtan, and the Navajo-Churro, and other piebald breeds include the Finnsheep, Shetland Sheep and the West African Dwarf.
Mature rams (males) weigh about , while ewes (females) weigh about . The body frame is long, with a straight back and a rump that slopes toward the base of the tail. The rams have short scrotums free of wool which hold the testicles closer to the body than those of modern breeds, while the ewes have small udders free of wool that are also held closer to the body than those of modern breeds. The head is slender and triangular, and clear of wool forward of the horns and on the cheeks. The tail is long and woolly, extending almost to the hock if it has not been docked. Jacob owners do not usually dock the tail completely, even for market sheep, but instead leave several inches (several centimetres) to cover the anus and vulva. The legs are medium-length, slender, free of wool below the knees, and preferably white with or without coloured patches. The hooves are black or striped. It is not unusual for Jacobs to be cow-hocked. They provide a lean carcass with little external fat, with a high yield of meat compared to more improved breeds.
Horns
The most distinguishing features of the Jacob are their four horns, although they may have as few as two or as many as six. Both sexes are always horned, and the rams tend to have larger and more impressive horns. Two-horned rams typically have horizontal double-curled horns. Four-horned rams have two vertical centre horns which may be or more in length, and two smaller side horns, which grow down along the sides of the head. The horns on the ewe are smaller in diameter, shorter in length and appear more delicate than those of the ram. British Jacobs most often have two horns, while American Jacobs are more often polycerate. Polled (hornless) sheep are not registrable, since this trait is considered an indication of past cross-breeding, and as such there is no such thing as a polled purebred Jacob.
The horns are normally black, but may be black and white striped; white horns are undesirable. Ideally, horns are smooth and balanced, strongly attached to the skull, and grow in a way that does not impede the animal's sight or grazing abilities. Rams have larger horns than ewes. The horns in two-horned sheep, and the lower horns in four-horned animals, grow in a spiral shape. The rostral set of horns usually extend upwards and outwards, while the caudal set of horns curls downwards along the side of the head and neck. On polycerate animals it is preferred that there is a fleshy gap between the two pairs of horns. Partial or deformed horns that are not firmly attached to the skull, often referred to as "scurs", are not unusual but are considered undesirable.
Markings
Each Jacob has distinctive markings that enable the shepherd to identify specific sheep from a distance. Desirable colour traits include an animal which is approximately 60% white, with the remaining 40% consisting of a random pattern of black or "lilac" (brownish-gray) spots or patches. The skin beneath the white fleece is pink, while skin beneath coloured spots is darkly pigmented. Both rams and ewes exhibit black markings, some of which are breed specific and some of which are random.
Breed specific markings include large, symmetrical dark patches incorporating the ears, eyes and cheeks, and a dark cape over the dorsal part of the neck and shoulders. The face should have a white blaze extending from the poll to the muzzle. The muzzle itself should be dark. The classic Jacob face is often referred to as "badger-faced", consisting of black cheeks and muzzle with a white blaze running down the front of the face. In addition to these markings, random spots may occur on the rest of the body and legs (including the carpi, hocks, and pasterns). Certain markings are common in particular lines: large muzzle markings, lack of leg markings, lack of muzzle markings, etc.
The lilac color is caused by a recessive variant of the MLPH gene.
Diseases
Several rare or unusual diseases have been identified in Jacob sheep.
The condition known as split eyelid is a congenital defect common to several polycerate British breeds, and is genetically linked to the multi-horned trait. In mild cases, the eyelid shows a "peak" but does not impair vision or cause discomfort. Extreme cases (Grade 3 or higher) result in a complete separation of the upper eyelid in the middle.
In 1994, an unusual form of asymmetric occipital condylar dysplasia was found in two Jacob lambs; a possible link to the multi-horn trait has been suggested.
In 2008, researchers in Texas identified the hexosaminidase A deficiency known in humans as Tay–Sachs disease in four Jacob lambs. Subsequent testing found some fifty carriers of the genetic defect among Jacobs in the United States. The discovery offers hope of a possible pathway to effective treatment in humans. In 2022, two babies diagnosed with the disease were treated with gene therapy developed from this research and further follow up studies are being conducted.
Husbandry
The Jacob is generally considered to be an "unimproved" or "heirloom" breed (one that has survived with little human selection). Such breeds have been left to mate amongst themselves, often for centuries, and therefore retain much of their original wildness and physical characteristics. American breeders have not subjected Jacobs to extensive cross-breeding or selective breeding, other than for fleece characteristics. Like other unimproved breeds, significant variability is present among individuals within a flock. In contrast, the British Jacob has been selected for greater productivity of meat, and therefore tends to be larger, heavier and have a more uniform appearance. As a result, the American Jacob has retained nearly all of the original phenotypic characteristics of its Old World ancestors while its British counterpart has lost many of its unimproved physical characteristics through cross-breeding and selective breeding. The British Jacob has thus diverged from the American Jacob as a result of artificial selection.
Jacobs are typically hardy, low-maintenance animals with a naturally high resistance to parasites and hoof problems. Jacobs do not show much flocking behaviour. They can be skittish if not used to people, although with daily handling they will become tame and make good pets. They require shelter from extreme temperatures, but the shelter can be open and simple. They tend to thrive in extremes of heat and cold and have good or excellent foraging capabilities. They can secure adequate nutrition with minimal to no supplementation, even in the presence of suboptimal soil conditions.
Due to their low tail dock and generally unimproved anatomy, Jacob ewes are widely reputed to be easy-lambing. Jacobs are seasonal breeders, with ewes generally cycling in the cooler months of the autumn. They will begin to cycle during the first autumn following their birth and most often the ewe's first lamb is a single. Subsequent gestations will typically bear one or two lambs in the spring, and triplets are not unusual. The lambs will exhibit their spotting and horn characteristics at birth, with the horn buds more readily apparent on ram lambs. Lambs may be weaned at two months of age, but many shepherds do not separate lambs and allow the ewe to wean the lamb at about 4 months of age. Jacob ewes are instinctively attentive mothers and are protective of their lambs. They are included in commercial flocks in England because of their ease of lambing and strong mothering instincts.
Use
Wool and skins
Jacobs are shorn once a year, most often in the spring. The average weight of the fleece is . The wool is medium to coarse: staple length is about and fibre diameter about (Bradford count ).
In general, the fleece is light, soft, springy and open, with little lanolin (grease); there may be some kemp. In some sheep (particularly British Jacobs, which have denser fleeces), the black wool grows longer or shorter than the white wool. This is called "quilted fleece", and is an undesirable trait.
While other British and Northern European multi-horned sheep have a fine inner coat and a coarse, longer outer coat, Jacobs have a medium grade fleece and no outer coat. Lambs of the more primitive lines are born with a coat of guard hair that is protective against rain and cold; this birth coat is shed at 3–6 months.
Some individual sheep may develop a natural "break", or marked thinning, of the fleece in springtime, which can lead to a natural shedding of the fleece, particularly around the neck and shoulders. The medium-fine grade wool has a high lustre, and is highly sought after by handspinners. The colours may be separated or blended after shearing and before spinning to produce various shades of yarn from a single fleece, from nearly white to nearly black. Tanned Jacob sheepskins also command high market prices.
| Biology and health sciences | Sheep | Animals |
3049420 | https://en.wikipedia.org/wiki/E1cB-elimination%20reaction | E1cB-elimination reaction | The E1cB elimination reaction is a type of elimination reaction which occurs under basic conditions, where the hydrogen to be removed is relatively acidic, while the leaving group (such as -OH or -OR) is a relatively poor one. Usually a moderate to strong base is present. E1cB is a two-step process, the first step of which may or may not be reversible. First, a base abstracts the relatively acidic proton to generate a stabilized anion. The lone pair of electrons on the anion then moves to the neighboring atom, thus expelling the leaving group and forming a double or triple bond. The name of the mechanism - E1cB - stands for Elimination Unimolecular conjugate Base. Elimination refers to the fact that the mechanism is an elimination reaction and will lose two substituents. Unimolecular refers to the fact that the rate-determining step of this reaction only involves one molecular entity. Finally, conjugate base refers to the formation of the carbanion intermediate, which is the conjugate base of the starting material.
E1cB should be thought of as being on one end of a continuous spectrum, which includes the E1 mechanism at the opposite end and the E2 mechanism in the middle. The E1 mechanism usually has the opposite characteristics: the leaving group is a good one (like -OTs or -Br), while the hydrogen is not particularly acidic and a strong base is absent. Thus, in the E1 mechanism, the leaving group leaves first to generate a carbocation. Due to the presence of an empty p orbital after departure of the leaving group, the hydrogen on the neighboring carbon becomes much more acidic, allowing it to then be removed by the weak base in the second step. In an E2 reaction, the presence of a strong base and a good leaving group allows proton abstraction by the base and the departure of the leaving group to occur simultaneously, leading to a concerted transition state in a one-step process.
Mechanism
There are two main requirements for a reaction to proceed down an E1cB mechanistic pathway: the compound must have an acidic hydrogen on its β-carbon and a relatively poor leaving group on the α-carbon.
The first step of an E1cB mechanism is the deprotonation of the β-carbon, resulting in the formation of an anionic intermediate such as a carbanion. The greater the stability of this intermediate, the more the reaction will favor the E1cB pathway. The intermediate can be stabilized through induction or through delocalization of the electron lone pair by resonance. In general, an electron-withdrawing group on the substrate, a strong base, a poor leaving group, and a polar solvent favor the E1cB mechanism. An example of an E1cB mechanism with a stable intermediate can be seen in the degradation of ethiofencarb - a carbamate insecticide that has a relatively short half-life in Earth's atmosphere. Upon deprotonation of the amine, the resulting amide is relatively stable because it is conjugated with the neighboring carbonyl.
In addition to containing an acidic hydrogen on the β-carbon, a relatively poor leaving group is also necessary. A bad leaving group is necessary because a good leaving group will leave before the ionization of the molecule. As a result, the compound will likely proceed through an E2 pathway. Some examples of compounds that contain poor leaving groups and can undergo the E1cB mechanism are alcohols and fluoroalkanes.
It has also been suggested that the E1cB mechanism is more common among alkenes eliminating to alkynes than among alkanes eliminating to alkenes. One possible explanation is that the sp2 hybridization creates slightly more acidic protons. The mechanism is not, however, limited to carbon-based eliminations; it has been observed with other heteroatoms, such as nitrogen in the elimination of a phenol derivative from ethiofencarb.
Distinguishing E1cB-elimination reactions from E1- and E2-elimination reactions
All elimination reactions involve the removal of two substituents from a pair of adjacent atoms in a compound, forming an alkene, an alkyne, or an analogous heteroatom-containing product (such as a carbonyl or cyano group). The E1cB mechanism is just one of three types of elimination reaction; the other two are the E1 and E2 reactions. Although the mechanisms are similar, they vary in the timing of the deprotonation of the β-carbon and the loss of the leaving group. E1 stands for unimolecular elimination, and E2 stands for bimolecular elimination.
In an E1 mechanism, the molecule contains a good leaving group that departs before deprotonation of the β-carbon. This results in the formation of a carbocation intermediate, which is then deprotonated at the β-carbon to form a new pi bond. The molecule involved must have a very good leaving group, such as bromide or chloride, and a relatively less acidic β-carbon.
In an E2-elimination reaction, the deprotonation of the β-carbon and the loss of the leaving group occur simultaneously in one concerted step. Molecules that undergo E2 elimination have more acidic β-carbons than those that undergo E1 mechanisms, but their β-carbons are not as acidic as those of molecules that undergo E1cB mechanisms. The key difference between the E2 and E1cB pathways is a distinct carbanion intermediate as opposed to a single concerted step. Studies using different halogen leaving groups have shown how the pathways differ: in one example, chlorine stabilizes the anion better than fluorine, which makes fluorine the leaving group even though chlorine is a much better leaving group. This provides evidence that the carbanion is formed, because the observed products would not be possible through a concerted E2 mechanism.
The following table summarizes the key differences between the three elimination reactions; however, the best way to identify which mechanism is playing a key role in a particular reaction involves the application of chemical kinetics.
Chemical kinetics of E1cB-elimination mechanisms
When trying to determine whether or not a reaction follows the E1cB mechanism, chemical kinetics are essential. The best way to identify the E1cB mechanism involves the use of rate laws and the kinetic isotope effect. These techniques can also help further differentiate between E1cB, E1, and E2-elimination reactions.
Rate law
The rate law that governs E1cB mechanisms is relatively simple to determine. Consider a reaction scheme in which the substrate is deprotonated by the base with rate constant k1, the carbanion is reprotonated by the conjugate acid with rate constant k−1, and the carbanion expels the leaving group with rate constant k2. Assuming a steady-state carbanion concentration, the rate law for an E1cB mechanism is

rate = k1k2[substrate][base] / (k−1[conjugate acid] + k2)
From this equation, it is clear that second-order kinetics (first order in substrate and first order in base) will be exhibited.
The kinetics of E1cB mechanisms can vary slightly based on the rate of each step. As a result, the E1cB mechanism can be broken down into three categories (a numerical sketch of the limiting cases follows this list):
E1cB_anion is when the carbanion is stable and/or a strong base is used in excess of the substrate, making deprotonation irreversible, followed by rate-determining loss of the leaving group (k1[base] ≫ k2).
E1cB_rev is when the first step is reversible but the formation of product is slower than reforming the starting material; this again results from a slow second step (k−1[conjugate acid] ≫ k2).
E1cB_irr is when the first step is slow, but once the anion is formed the product quickly follows (k2 ≫ k−1[conjugate acid]). This leads to an irreversible first step, but unlike E1cB_anion, deprotonation is rate determining.
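To make the limiting cases concrete, here is a minimal numerical sketch of the steady-state rate law above. All rate constants and concentrations are arbitrary illustrative values, not data for any real reaction:

```python
def e1cb_rate(k1, k_minus1, k2, substrate, base, conj_acid):
    """Steady-state E1cB rate law: k1*k2*[S][B] / (k-1*[BH+] + k2)."""
    return k1 * k2 * substrate * base / (k_minus1 * conj_acid + k2)

# Arbitrary illustrative concentrations (mol/L)
S, B, BH = 0.1, 0.1, 0.01

# Limit k2 >> k-1*[BH+] (irreversible deprotonation, as in E1cB_irr):
# the rate collapses to k1*[S][B]
print(e1cb_rate(k1=1e3, k_minus1=1e2, k2=1e6,
                substrate=S, base=B, conj_acid=BH))   # ~10 = 1e3 * 0.1 * 0.1

# Limit k-1*[BH+] >> k2 (reversible first step, as in E1cB_rev):
# the rate collapses to (k1*k2/k-1) * [S][B]/[BH+]
print(e1cb_rate(k1=1e3, k_minus1=1e6, k2=1e2,
                substrate=S, base=B, conj_acid=BH))   # ~0.1
```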
Kinetic isotope effect
Deuterium
Deuterium exchange and the deuterium kinetic isotope effect can help distinguish among E1cB_rev, E1cB_anion, and E1cB_irr. If the solvent is protic and contains deuterium in place of hydrogen (e.g., CH3OD), then the exchange of protons into the starting material can be monitored. If the recovered starting material contains deuterium, then the reaction is most likely undergoing an E1cB_rev type mechanism: in this mechanism, protonation of the carbanion (either by the conjugate acid or by solvent) is faster than loss of the leaving group, so after the carbanion is formed it quickly removes a proton from the solvent to re-form the starting material.
If the reactant contains deuterium at the β position, a primary kinetic isotope effect indicates that deprotonation is rate determining. Of the three E1cB mechanisms, this result is consistent only with the E1cB_irr mechanism, since the isotope is already removed in E1cB_anion and leaving-group departure is rate determining in E1cB_rev.
Fluorine-19 and carbon-11
Another way the kinetic isotope effect can help distinguish E1cB mechanisms involves the use of 19F. Fluorine is a relatively poor leaving group, and it is often employed in E1cB mechanisms. Fluorine kinetic isotope effects are also applied in the labeling of radiopharmaceuticals and other compounds in medical research. This experiment is very useful in determining whether or not the loss of the leaving group is the rate-determining step in the mechanism, and can help distinguish between E1cB_irr and E2 mechanisms. 11C can also be used to probe the nature of the transition state structure: it can be used to study the formation of the carbanion as well as its lifetime, which can not only show that the reaction is a two-step E1cB mechanism (as opposed to the concerted E2 mechanism) but can also address the lifetime and stability of the transition state structure, further distinguishing between the three different types of E1cB mechanisms.
Aldol reactions
The best-known reaction that undergoes E1cB elimination is the aldol condensation reaction under basic conditions. This involves the deprotonation of a compound containing a carbonyl group, which results in the formation of an enolate. The enolate is the very stable conjugate base of the starting material, and is one of the intermediates in the reaction. This enolate then acts as a nucleophile and can attack an electrophilic aldehyde. The aldol product is then deprotonated, forming another enolate, followed by the elimination of water in an E1cB dehydration reaction. Aldol reactions are a key reaction in organic chemistry because they provide a means of forming carbon-carbon bonds, allowing for the synthesis of more complex molecules.
Photo-induced E1cB
A photochemical version of E1cB has been reported by Lukeman et al. In this report, a photochemically induced decarboxylation reaction generates a carbanion intermediate, which subsequently eliminates the leaving group. The reaction is unique among forms of E1cB in that it does not require a base to generate the carbanion. The carbanion formation step is irreversible, and the reaction should thus be classified as E1cB_irr.
In biology
The E1cB-elimination reaction is an important reaction in biology. For example, the penultimate step of glycolysis involves an E1cB mechanism. This step involves the conversion of 2-phosphoglycerate to phosphoenolpyruvate, facilitated by the enzyme enolase.
| Physical sciences | Organic reactions | Chemistry |
3049753 | https://en.wikipedia.org/wiki/Porphyry%20copper%20deposit | Porphyry copper deposit | Porphyry copper deposits are copper ore bodies that are formed from hydrothermal fluids that originate from a voluminous magma chamber several kilometers below the deposit itself. Predating or associated with those fluids are vertical dikes of porphyritic intrusive rocks from which this deposit type derives its name. In later stages, circulating meteoric fluids may interact with the magmatic fluids. Successive envelopes of hydrothermal alteration typically enclose a core of disseminated ore minerals in often stockwork-forming hairline fractures and veins. Because of their large volume, porphyry orebodies can be economic at copper concentrations as low as 0.15% and can have economic amounts of by-products such as molybdenum, silver, and gold. In some mines, those metals are the main product.
The first mining of low-grade copper porphyry deposits from large open pits coincided roughly with the introduction of steam shovels, the construction of railroads, and a surge in market demand near the start of the 20th century. Some mines exploit porphyry deposits that contain sufficient gold or molybdenum, but little or no copper.
Porphyry copper deposits are currently the largest source of copper ore. Most of the known porphyry deposits are concentrated in: western South and North America and Southeast Asia and Oceania – along the Pacific Ring of Fire; the Caribbean; southern central Europe and the area around eastern Turkey; scattered areas in China, the Mideast, Russia, and the CIS states; and eastern Australia. Only a few are identified in Africa, in Namibia and Zambia; none are known in Antarctica. The greatest concentration of the largest copper porphyry deposits is in northern Chile. Almost all mines exploiting large porphyry deposits produce from open pits.
Geological overview
Geological background and economic significance
Porphyry copper deposits represent an important resource and the dominant source of copper that is mined today to satisfy global demand. Via compilation of geological data, it has been found that the majority of porphyry deposits are Phanerozoic in age and were emplaced at depths of approximately 1 to 6 kilometres with vertical thicknesses on average of 2 kilometres. Throughout the Phanerozoic an estimated 125,895 porphyry copper deposits were formed; however, 62% of them (78,106) have been removed by uplift and erosion. Thus, 38% (47,789) remain in the crust, of which there are 574 known deposits that are at the surface. It is estimated that the Earth's porphyry copper deposits contain approximately 1.7×10¹¹ tonnes of copper, equivalent to more than 8,000 years of global mine production.
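As a quick sanity check, the two figures just quoted imply a global mine-production rate obtainable by simple division; the result (~21 Mt of copper per year) is close to recent annual world mine output, an outside figure not stated in the article:

```python
# Consistency check on the quoted figures (illustrative only)
crustal_cu_tonnes = 1.7e11   # estimated Cu in porphyry deposits (from the text)
years_of_supply = 8_000      # "more than 8,000 years" (from the text)

implied_annual_production = crustal_cu_tonnes / years_of_supply
print(f"{implied_annual_production / 1e6:.0f} Mt/yr")  # ~21 Mt/yr
```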
Porphyry deposits represent an important resource of copper; however, they are also important sources of gold and molybdenum – with porphyry deposits being the dominant source of the latter. In general, porphyry deposits are characterized by low grades of ore mineralization, a porphyritic intrusive complex that is surrounded by a vein stockwork and hydrothermal breccias. Porphyry deposits are formed in arc-related settings and are associated with subduction zone magmas. Porphyry deposits are clustered in discrete mineral provinces, which implies that there is some form of geodynamic control or crustal influence affecting the location of porphyry formation. Porphyry deposits tend to occur in linear, orogen-parallel belts (such as the Andes in South America).
There also appear to be discrete time periods in which porphyry deposit formation was concentrated or preferred. Formation of copper-molybdenum porphyry deposits is broadly concentrated in three time periods: Palaeocene-Eocene, Eocene-Oligocene, and middle Miocene-Pliocene. Porphyry and epithermal gold deposits generally date from the middle Miocene to the Recent, although notable exceptions are known. Most large-scale porphyry deposits are less than 20 million years old, with notable exceptions such as the 438-million-year-old Cadia-Ridgeway deposit in New South Wales. This relatively young age reflects the low preservation potential of this type of deposit, as they are typically located in zones of highly active tectonic and geological processes such as deformation, uplift, and erosion. The skewed distribution towards ages of less than 20 million years may, however, be at least partially an artifact of exploration methodology and model assumptions: large, world-class examples of much older porphyry copper deposits have been found in areas that were previously only partially explored or under-explored, partly because of their perceived older host-rock ages.
Magmas and mantle processes
In general, the majority of large porphyry deposits are associated with calc-alkaline intrusions, although some of the largest gold-rich deposits are associated with high-K calc-alkaline magma compositions. Numerous world-class porphyry copper-gold deposits are hosted by high-K or shoshonitic intrusions, such as Bingham copper-gold mine in USA, Grasberg copper-gold mine in Indonesia, Northparkes copper-gold mine in Australia, Oyu Tolgoi copper-gold mine in Mongolia and Peschanka copper-gold prospect in Russia.
The magmas responsible for porphyry formation are conventionally thought to be generated by the partial melting of the upper part of post-subduction, stalled slabs that are altered by seawater. Shallow subduction of young, buoyant slabs can result in the production of adakitic lavas via partial melting. Alternatively, metasomatised mantle wedges can produce highly oxidized conditions that result in sulfide minerals releasing ore minerals (copper, gold, molybdenum), which are then able to be transported to upper crustal levels. Mantle melting can also be induced by transitions from convergent to transform margins, as well as the steepening and trenchward retreat of the subducted slab. However, the more recent view is that most subducted slabs undergo dehydration at the blueschist-eclogite transition rather than partial melting.
After dehydration, solute-rich fluids are released from the slab and metasomatise the overlying mantle wedge of MORB-like asthenosphere, enriching it with volatiles and large ion lithophile elements (LILE). The current belief is that the generation of andesitic magmas is multistage, and involves crustal melting and assimilation of primary basaltic magmas, magma storage at the base of the crust (underplating by dense, mafic magma as it ascends), and magma homogenization. The underplated magma will add a lot of heat to the base of the crust, thereby inducing crustal melting and assimilation of lower-crustal rocks, creating an area with intense interaction of the mantle magma and crustal magma. This progressively evolving magma will become enriched in volatiles, sulfur, and incompatible elements – an ideal combination for the generation of a magma capable of generating an ore deposit. From this point forward in the evolution of a porphyry deposit, ideal tectonic and structural conditions are necessary to allow the transport of the magma and ensure its emplacement in upper-crustal levels.
Tectonic and structural controls
Although porphyry deposits are associated with arc volcanism, they are not the typical products in that environment. It is believed that tectonic change acts as a trigger for porphyry formation. There are five key factors that can give rise to porphyry development: 1) compression impeding magma ascent through crust, 2) a resultant larger shallow magma chamber, 3) enhanced fractionation of the magma along with volatile saturation and generation of magmatic-hydrothermal fluids, 4) compression restricts offshoots from developing into the surrounding rock, thus concentrating the fluid into a single stock, and 5) rapid uplift and erosion promotes decompression and efficient, eventual deposition of ore.
Porphyry deposits are commonly developed in regions that are zones of low-angle (flat-slab) subduction. A subduction zone that transitions from normal to flat and then back to normal subduction produces a series of effects that can lead to the generation of porphyry deposits. Initially, there will be decreased alkalic magmatism, horizontal shortening, hydration of the lithosphere above the flat-slab, and low heat flow. Upon a return to normal subduction, the hot asthenosphere will once again interact with the hydrated mantle, causing wet melting, crustal melting will ensue as mantle melts pass through, and lithospheric thinning and weakening due to the increased heat flow. The subducting slab can be lifted by aseismic ridges, seamount chains, or oceanic plateaus – which can provide a favourable environment for the development of a porphyry deposit. This interaction between subduction zones and the aforementioned oceanic features can explain the development of multiple metallogenic belts in a given region; as each time the subduction zone interacts with one of these features it can lead to ore genesis. Finally, in oceanic island arcs, ridge subduction can lead to slab flattening or arc reversal; whereas, in continental arcs it can lead to periods of flat slab subduction.
Arc reversal has been shown to slightly pre-date the formation of porphyry deposits in the south-west Pacific, after a collisional event. Arc reversal occurs due to collision between an island arc and either another island arc, a continent, or an oceanic plateau. The collision may result in the termination of subduction and thereby induce mantle melting.
Porphyry deposits do not generally have any requisite structural controls for their formation; although major faults and lineaments are associated with some. The presence of intra-arc fault systems are beneficial, as they can localize porphyry development. Furthermore, some authors have indicated that the occurrence of intersections between continent-scale traverse fault zones and arc-parallel structures are associated with porphyry formation. This is actually the case of Chile's Los Bronces and El Teniente porphyry copper deposits each of which lies at the intersection of two fault systems.
It has been proposed that "misoriented" deep-seated faults that were inactive during magmatism are important zones where porphyry copper-forming magmas stagnate allowing them to achieve their typical igneous differentiation. At a given time differentiated magmas would burst violently out of these fault-traps and head to shallower places in the crust where porphyry copper deposits would be formed.
Characteristics
Characteristics of porphyry copper deposits include:
The orebodies are associated with multiple intrusions and dikes of diorite to quartz monzonite composition with porphyritic textures.
Breccia zones with angular or locally rounded fragments are commonly associated with the intrusives. The sulfide mineralization typically occurs between or within fragments. These breccia zones are typically hydrothermal in nature, and may be manifested as pebble dikes.
The deposits typically have an outer epidote–chlorite mineral alteration zone.
A quartz–sericite alteration zone typically occurs closer to the center and may overprint the outer zone.
A central potassic zone of secondary biotite and orthoclase alteration is commonly associated with most of the ore.
Fractures are often filled or coated by sulfides, or by quartz veins with sulfides. Closely spaced fractures of several orientations are usually associated with the highest grade ore.
The upper portions of porphyry copper deposits may be subjected to supergene enrichment. This involves the metals in the upper portion being dissolved and carried down to below the water table, where they precipitate.
Porphyry copper deposits are typically mined by open-pit methods.
Notable examples
Mexico
Cananea
La Caridad
Santo Tomas
Canada
Highland Valley
Gibraltar Mine
Chile
Cerro Colorado
Chuquicamata
Collahuasi
Escondida
El Abra
El Salvador
El Teniente
Los Pelambres
Radomiro Tomić
Peru
Toquepala
Cerro Verde, southeast of the city of Arequipa
United States
Ajo, Arizona
Bagdad, Arizona
Berkeley Pit, Butte, Montana
Bingham Canyon Mine, Utah
Lavender Pit, Bisbee, Arizona
Morenci, Arizona
Pebble Mine, Alaska
Safford Mine, Safford, Arizona
San Manuel, Arizona
Sierrita, Arizona
Resolution Copper, Superior, Arizona
El Chino, Santa Rita, New Mexico
Ely, Nevada
Ray Mine, Arizona
Indonesia
Batu Hijau, Sumbawa
Grasberg, West Papua at >3 billion tonnes at 1 ppm Au, is one of the world's largest and richest porphyry deposits of any type
Tujuh Bukit, Java, still under exploration, but likely to be bigger than Batu Hijau
Sungai Mak and Cabang Kiri, Gorontalo, at 292 million tonnes at 0.50 ppm gold and 0.47% copper
Australia
Cadia-Ridgeway Mine, New South Wales, copper-gold deposit mined by open pit and block caving.
Northparkes copper porphyry deposit, New South Wales, with 63 million tonnes at 1.1% Cu and 0.5 ppm Au.
Papua New Guinea
Ok Tedi
Panguna/Bougainville Copper
Wafi-Golpu project/Wafi-Golpu mine
Other
Coclesito, Panama
Majdanpek mine, Serbia
Oyu Tolgoi is one of the world's largest and richest Cu porphyry deposits, Mongolia
La Caridad, Sonora, Mexico
Dizon, Philippines
Saindak Copper Gold Project, Pakistan
Porphyry-type ore deposits for other metals
Copper is not the only metal that occurs in porphyry deposits. There are also porphyry ore deposits mined primarily for molybdenum, many of which contain very little copper. Examples of porphyry molybdenum deposits are the Climax, Urad, Mt. Emmons, and Henderson deposits in central Colorado; the White Pine and Pine Grove deposits in Utah; the Questa deposit in northern New Mexico; and Endako in British Columbia.
The US Geological Survey has classed the Chorolque and Catavi tin deposits in Bolivia as porphyry tin deposits.
Some porphyry copper deposits in oceanic crust environments, such as those in the Philippines, Indonesia, and Papua New Guinea, are sufficiently rich in gold that they are called copper-gold porphyry deposits.
| Physical sciences | Igneous rocks | Earth science |
7251905 | https://en.wikipedia.org/wiki/Principle%20of%20covariance | Principle of covariance | In physics, the principle of covariance emphasizes the formulation of physical laws using only those physical quantities whose measurements observers in different frames of reference can unambiguously correlate.
Mathematically, the physical quantities must transform covariantly, that is, under a certain representation of the group of coordinate transformations between admissible frames of reference of the physical theory. This group is referred to as the covariance group.
The principle of covariance does not require invariance of the physical laws under the group of admissible transformations, although in most cases the equations are actually invariant. However, in the theory of weak interactions, the equations are not invariant under reflections (but are, of course, still covariant).
Covariance in Newtonian mechanics
In Newtonian mechanics the admissible frames of reference are inertial frames with relative velocities much smaller than the speed of light. Time is then absolute and the transformations between admissible frames of reference are Galilean transformations which (together with rotations, translations, and reflections) form the Galilean group. The covariant physical quantities are Euclidean scalars, vectors, and tensors. An example of a covariant equation is Newton's second law,

F = m dv/dt,

where the covariant quantities are the mass m of a moving body (scalar), the velocity v of the body (vector), the force F acting on the body (vector), and the invariant time t.
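As an explicit check of this covariance (a standard textbook exercise added here for illustration, not part of the original article), one can verify that Newton's second law keeps its form under a Galilean boost with constant relative velocity V:

```latex
\begin{align*}
  x' &= x - Vt, \qquad t' = t  && \text{(Galilean boost, constant relative velocity } V\text{)}\\
  \dot{x}' &= \dot{x} - V      && \text{(velocities shift by the constant } V\text{)}\\
  \ddot{x}' &= \ddot{x}        && \text{(accelerations are identical)}\\
  F' &= m\,\ddot{x}'           && \text{(with } F' = F \text{ and } m' = m\text{: same form in the boosted frame)}
\end{align*}
```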
Covariance in special relativity
In special relativity the admissible frames of reference are all inertial frames. The transformations between frames are the Lorentz transformations which (together with rotations, translations, and reflections) form the Poincaré group. The covariant quantities are four-scalars, four-vectors, etc., of Minkowski space (and also more complicated objects like bispinors and others). An example of a covariant equation is the Lorentz force equation of motion of a charged particle in an electromagnetic field (a generalization of Newton's second law),

mc du^a/ds = (q/c) F^{ab} u_b,

where m and q are the mass and charge of the particle (invariant 4-scalars); ds is the invariant interval (4-scalar); u^a = dx^a/ds is the 4-velocity (4-vector); and F^{ab} is the electromagnetic field strength tensor (4-tensor).
Covariance in general relativity
In general relativity, the admissible frames of reference are all reference frames. The transformations between frames are all arbitrary (invertible and differentiable) coordinate transformations. The covariant quantities are scalar fields, vector fields, tensor fields, etc., defined on spacetime considered as a manifold. The main example of a covariant equation is the Einstein field equations.
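For reference, the Einstein field equations in their standard form (with the cosmological constant Λ) read as follows; this block is added for illustration, with the usual sign conventions assumed since the article does not specify them:

```latex
\begin{equation*}
  G_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu}
  \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
\end{equation*}
```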
| Physical sciences | Theory of relativity | Physics |
7255802 | https://en.wikipedia.org/wiki/Macropodiformes | Macropodiformes | The Macropodiformes, also known as macropods, are one of the three suborders of the large marsupial order Diprotodontia. They may in fact be nested within one of the other suborders, Phalangeriformes. Kangaroos, wallabies and allies, bettongs, potoroos and rat-kangaroos are all members of this suborder.
Classification
Superfamily Macropodoidea
Family †Balbaridae: (basal quadrupedal kangaroos)
Genus †Galanarla
Genus †Nambaroo
Genus †Wururoo
Genus †Ganawamaya
Genus †Balbaroo
Family Hypsiprymnodontidae: (musky rat-kangaroo)
Subfamily Hypsiprymnodontinae
Genus Hypsiprymnodon
Musky rat-kangaroo, Hypsiprymnodon moschatus
†Hypsiprymnodon bartholomaii
†Hypsiprymnodon philcreaseri
†Hypsiprymnodon dennisi
†Hypsiprymnodon karenblackae
Subfamily †Propleopinae
Genus †Ekaltadeta
†Ekaltadeta ima
†Ekaltadeta jamiemulveneyi
Genus †Propleopus
†Propleopus oscillans
†Propleopus chillagoensis
†Propleopus wellingtonensis
Genus †Jackmahoneyi
†Jackmahoneyi toxoniensis
Family Potoroidae: (bettongs, potoroos, and rat-kangaroos)
Genus Wakiewakie
Genus Purtia
Genus ?†Palaeopotorous
Genus †Gumardee
Genus †Milliyowi
Genus †Ngamaroo
Subfamily Potoroinae
Genus Aepyprymnus
Rufous rat-kangaroo, Aepyprymnus rufescens
Genus Bettongia
Eastern bettong, Bettongia gaimardi
Boodie, Bettongia lesueur
Woylie, Bettongia penicillata
Northern bettong, Bettongia tropica
†Bettongia moyesi
Genus †Caloprymnus
†Desert rat-kangaroo, Caloprymnus campestris
Genus Potorous
Long-footed potoroo, Potorous longipes
†Broad-faced potoroo, Potorous platyops
Long-nosed potoroo, Potorous tridactylus
Gilbert's potoroo, Potorous gilbertii
Family Macropodidae: (kangaroos, wallabies and allies)
Genus †Wabularoo
Genus †Bulungamaya
Genus Ganguroo
Genus Cookeroo
Genus †Watutia
Subfamily Lagostrophinae
Genus Lagostrophus
Banded hare-wallaby, Lagostrophus fasciatus
Genus †Troposodon
Subfamily †Sthenurinae
Genus †Wanburoo
Genus †Rhizosthenurus
Genus Hadronomas
Tribe Sthenurini
Genus †Sthenurus
Genus Eosthenurus
Genus Metasthenurus
Tribe Simosthenurini
Genus Archaeosimos
Genus Simosthenurus
Genus †Procoptodon
Subfamily Macropodinae
Genus †Dorcopsoides
Genus †Kurrabi
Genus †Prionotemnus
Genus †Congruus
Genus Protemnodon
Genus †Baringa
Genus †Bohra
Genus †Synaptodon
Genus †Fissuridon
Genus †Silvaroo
Genus Dendrolagus: tree-kangaroos
Genus Dorcopsis: forest wallabies
Genus Dorcopsulus
Genus Lagorchestes: hare-wallabies
Genus Macropus
Genus Onychogalea
Genus Petrogale: rock-wallabies
Genus Setonix
Genus Thylogale
Genus Wallabia
| Biology and health sciences | Diprotodontia | Animals |
14591538 | https://en.wikipedia.org/wiki/Suction%20filtration | Suction filtration | Vacuum filtration is a fast filtration technique used to separate solids from liquids.
Principle
As water flows through the aspirator, it draws out the air contained in the vacuum flask and the Büchner flask. There is therefore a difference in pressure between the exterior and the interior of the flasks: the contents of the Büchner funnel are sucked towards the vacuum flask. The filter, which is placed at the bottom of the Büchner funnel, separates the solids from the liquids.
The solid residue, which remains at the top of the Büchner funnel, is therefore recovered more efficiently: it is much drier than it would be with a simple filtration.
The rubber conical seal ensures the apparatus is hermetically closed, preventing the passage of air between the Büchner funnel and the vacuum flask. It maintains the vacuum in the apparatus and also avoids physical points of stress (glass against glass).
Diagram annotations
Filter
Büchner funnel
Conic seal
Büchner flask
Air tube
Vacuum flask
Water tap
Aspirator
Uses
Filtration is a unit operation commonly used in both laboratory and production settings. This apparatus, adapted for laboratory work, is often used to isolate the product of a synthesis reaction when the product is a solid in suspension. The product is then recovered faster, and the solid is drier than in the case of a simple filtration. Besides isolating a solid, filtration is also a purification step: impurities soluble in the solvent are eliminated in the filtrate (liquid).
This apparatus is often used to purify a liquid as well. When a synthesised product is filtered, the insolubles (catalysts, impurities, by-products of the reaction, salts, ...) remain in the filter. In this case, vacuum filtration is also more efficient than a simple filtration: more liquid is recovered, and the yield is therefore better.
Practical aspects
It is often necessary to secure the Büchner flask and, incidentally, the vacuum flask. The rigidity of the vacuum tubing and the difference in height between the different parts of the apparatus (as visible in the diagram) make such an apparatus relatively unstable.
Therefore, a three-pronged clamp should be used to hold the Büchner flask. This clamp should be placed such that two prongs surround the part of the flask connected to the vacuum tube, with the remaining prong resting on the other side.
If it is also necessary to hold the vacuum flask, either a jaw clamp or a three-pronged clamp can be used, depending on the apparatus and its stability. The choice of clamp is left to the judgement of the operator.
Before closing the tap, it is necessary to "break the vacuum" (letting air into the apparatus at any point, for example by removing the funnel); otherwise water from the aspirator will be drawn up into the apparatus. The vacuum flask prevents this water from reaching the Büchner flask.
| Physical sciences | Other separations | Chemistry |
14591843 | https://en.wikipedia.org/wiki/Stadimeter | Stadimeter | A stadimeter is an optical device for estimating the range to an object of known height by measuring the angle between the top and bottom of the object as observed from the device. It is similar to a sextant in that it uses mirrors to measure an angle between two objects, but differs in that the height of the object is dialed in. It is one of several types of optical rangefinders; it does not require a large instrument, and so was ideal for hand-held implementations or installation in a submarine's periscope. A stadimeter is a type of analog computer.
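The range computation behind the device is elementary trigonometry: an object of known height h that subtends a vertical angle θ lies at range approximately h/tan θ (≈ h/θ for small angles). The sketch below illustrates that principle only; the mast height and angle are invented values, not from the text:

```python
import math

def stadimeter_range(height_m, angle_arcmin):
    """Range to an object of known height, given its subtended vertical angle."""
    theta = math.radians(angle_arcmin / 60.0)  # arcminutes -> radians
    return height_m / math.tan(theta)

# Illustrative values only: a 30 m masthead subtending 10 arcminutes
print(f"{stadimeter_range(30.0, 10.0):.0f} m")  # ~10,300 m
```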
Development and use
The hand held stadimeter was developed by Bradley Allen Fiske (1854–1942), an officer in the United States Navy. It was designed for gunnery purposes, but its first sea tests, conducted in 1895, showed that it was equally useful for fleet sailing and for navigation. It was normally kept on the bridge and used from there and on the bridge wings to keep warships at the proper distance from one another when steaming in formation and for use in convoys.
The United States Navy Bureau of Ships contracted on several occasions for orders of hand-held stadimeters, starting shortly after the instrument's development in the late 1890s. By the early 1900s it, along with the sextant, spyglass, maneuvering board, parallel motion protractor and other navigation tools, was part of the standard gear for the navigation officer aboard US warships.
During World War II the Mark 5 version was developed to function more like a sextant, with a single pivoting arm replacing the linear screw worm drive that set the height of the object. The primary benefit of this development was speed: multiple objects of differing heights could be measured much faster, because the slow-moving worm drive, which had to be readjusted for each object height before a sight could be taken, was replaced by a much faster adjustable arc arm for setting the object's height.
Today it is still used aboard US Navy warships at times when using active radar is inadvisable.
| Technology | Surveying tools | null |
2208458 | https://en.wikipedia.org/wiki/Edaphology | Edaphology | Edaphology (from Greek edaphos 'ground' + -logia 'study of') is concerned with the influence of soils on living beings, particularly plants.
It is one of two main divisions of soil science, the other being pedology. Edaphology includes the study of how soil influences humankind's use of land for plant growth as well as people's overall use of the land. General subfields within edaphology are agricultural soil science (known by the term agrology in some regions) and environmental soil science. Pedology deals with pedogenesis, soil morphology, and soil classification.
History
The history of edaphology is not simple, as the two main alternative terms for soil science—pedology and edaphology—were initially poorly distinguished. Friedrich Albert Fallou originally conceived pedology in the 19th century as a fundamental science separate from the applied science of agrology, a predecessor term for edaphology, a distinction retained in the current understanding of edaphology. During the 20th century, the term edaphology was "driven out of [pedology-centric] soil science" but remained in use to address edaphic problems in other disciplines. In the case of Russian soil scientists, edaphology was used as an equivalent term to pedology, and in Spain, soil scientists adopted edaphology in preference to the term pedology. In the 21st century, edaphology is recognized by soil scientists as a branch of soil science necessary and complementary to the pedology branch.
Xenophon (431–355 BC), and Cato (234–149 BC), were early edaphologists. Xenophon noted the beneficial effect of turning a cover crop into the earth. Cato wrote De Agri Cultura ("On Farming"), which recommended tillage, crop rotation, and the use of legumes in the rotation to build soil nitrogen. He also devised the first soil capability classification for specific crops.
Jan Baptist van Helmont (1577–1644) performed a famous experiment, growing a willow tree in a pot of soil and supplying only rainwater for five years. The weight gained by the tree was greater than the weight loss of the soil. He concluded that the willow was made of water. Although only partly correct, his experiment reignited interest in edaphology.
Areas of study
Agricultural soil science
Agricultural soil science is the application of soil chemistry, physics, and biology dealing with the production of crops. In terms of soil chemistry, it places particular emphasis on plant nutrients of importance to farming and horticulture, especially with regard to soil fertility and fertilizer components.
Physical edaphology is strongly associated with crop irrigation and drainage.
Soil husbandry is a strong tradition within agricultural soil science. Beyond preventing soil erosion and degradation in cropland, soil husbandry seeks to sustain the agricultural soil resource through the use of soil conditioners and cover crops.
Environmental soil science
Environmental soil science studies our interaction with the pedosphere beyond crop production. Fundamental and applied aspects of the field address vadose zone functions, septic drain field site assessment and function, land treatment of wastewater and stormwater, erosion control, soil contamination with metals and pesticides, remediation of contaminated soils, restoration of wetlands, soil degradation, and environmental nutrient management. It also studies soil in the context of land-use planning, global warming, and acid rain.
Industrialization and edaphology
Industrialization has affected the way that soil interacts with plants in various ways. Increased mechanical production has led to higher amounts of heavy metals within soils, and these heavy metals have also been found in crops. Meanwhile, the increased use of synthetic fertilizers and pesticides has decreased the nutrient availability of soils.
Changes in agricultural practices as a result of industrialization, such as monocropping and tilling, have also impacted aspects of edaphology. Monocropping techniques are efficient for harvesting and business strategies but lead to a decrease in biodiversity, and decreased biodiversity has been shown to decrease the nutrients available in soils. Furthermore, monocropping leads to an increased dependency on chemical fertilizer. Intensive tilling, meanwhile, disturbs the community of microorganisms that live within soil. These microorganisms help maintain soil moisture and air circulation, both of which are critical to plant growth.
| Physical sciences | Soil science | Earth science |
2208839 | https://en.wikipedia.org/wiki/Shrubland | Shrubland | Shrubland, scrubland, scrub, brush, or bush is a plant community characterized by vegetation dominated by shrubs, often also including grasses, herbs, and geophytes. Shrubland may either occur naturally or be the result of human activity. It may be the mature vegetation type in a particular region and remain stable over time, or it may be a transitional community that occurs temporarily as the result of a disturbance, such as fire. A stable state may be maintained by regular natural disturbance such as fire or browsing.
Shrubland may be unsuitable for human habitation because of the danger of fire. The term was coined in 1903.
Shrubland species generally show a wide range of adaptations to fire, such as heavy seed production, lignotubers, and fire-induced germination.
Botanical structural form
In botany and ecology a shrub is defined as a much-branched woody plant less than 8 m high, usually with many stems. Tall shrubs are mostly 2–8 m high, small shrubs 1–2 m high and subshrubs less than 1 m high.
A descriptive system widely adopted in Australia describes different types of vegetation by structural characteristics based on plant life-form, together with the height and foliage cover of the tallest stratum or dominant species.
For shrubs 2–8 m high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-shrubs
mid-dense foliage cover (30–70%) — open-shrubs
sparse foliage cover (10–30%) — tall shrubland
very sparse foliage cover (<10%) — tall open shrubland
For shrubs less than 2 m high, the following structural forms are categorized:
dense foliage cover (70–100%) — closed-heath or closed low shrubland—(North America)
mid-dense foliage cover (30–70%) — open-heath or mid-dense low shrubland—(North America)
sparse foliage cover (10–30%) — low shrubland
very sparse foliage cover (<10%) — low open shrubland
Biome plant group
Similarly, shrubland is a category that is used to describe a type of biome plant group. In this context, shrublands are dense thickets of evergreen sclerophyll shrubs and small trees, called:
Chaparral in California
Matorral in Chile, Mexico, and Spain
Maquis in France and elsewhere around the Mediterranean
Macchia in Italy
Fynbos in South Africa
Eastern Suburbs Banksia Scrub in Sydney
Kwongan in Southwest Australia
Cedar scrub in Texas Hill Country
Caatinga in northeastern Brazil
In some places, shrubland is the mature vegetation type. In other places, it is the result of degradation of former forest or woodland by logging or overgrazing, or disturbance by major fires.
A number of World Wildlife Fund biomes are characterized as shrublands, including the following:
Desert scrublands
Xeric or desert scrublands occur in the world's deserts and xeric shrublands ecoregions or in fast-draining sandy soils in more humid regions. These scrublands are characterized by plants with adaptations to the dry climate, which include small leaves to limit water loss, thorns to protect them from grazing animals, succulent leaves or stems, storage organs to store water, and long taproots to reach groundwater.
Mediterranean scrublands
Mediterranean scrublands occur naturally in the Mediterranean scrub biome, located in the five Mediterranean climate regions of the world. Scrublands are most common near the seacoast and have often adapted to the wind and salt air of the ocean. Low, soft-leaved scrublands around the Mediterranean Basin are known as garrigue in France, phrygana in Greece, tomillares in Spain, and batha in Israel. Northern coastal scrub and coastal sage scrub occur along the California coast, strandveld in the Western Cape of South Africa, coastal matorral in central Chile, and sand-heath and kwongan in Southwest Australia.
Interior scrublands
Interior scrublands occur naturally in semi-arid areas with nutrient-poor soils, such as on the matas of Portugal, which are underlain by Cambrian and Silurian schists. Florida scrub is another example of interior scrublands.
Dwarf shrubs
Some vegetation types are formed of dwarf-shrubs, low-growing or creeping shrubs. They include the maquis and the garrigues of Mediterranean climates and the acid-loving dwarf shrubs of heathland and moorland.
| Physical sciences | Biomes | null |
2208941 | https://en.wikipedia.org/wiki/Hydrogen%20selenide | Hydrogen selenide | Hydrogen selenide is an inorganic compound with the formula H2Se. This hydrogen chalcogenide is the simplest and most commonly encountered hydride of selenium. H2Se is a colorless, flammable gas under standard conditions. It is the most toxic selenium compound with an exposure limit of 0.05 ppm over an 8-hour period. Even at extremely low concentrations, this compound has a very irritating smell resembling that of decayed horseradish or "leaking gas", but smells of rotten eggs at higher concentrations.
Structure and properties
H2Se adopts a bent structure with a H−Se−H bond angle of 91°. Consistent with this structure, three IR-active vibrational bands are observed: 2358, 2345, and 1034 cm−1.
The properties of H2S and H2Se are similar, although the selenide is more acidic, with a first pKa of 3.89; the second pKa has been reported as 11, or as 15.05 ± 0.02 at 25 °C.
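Given a pKa, the Henderson–Hasselbalch relation gives the fraction of the acid present in deprotonated form at a given pH. A minimal sketch (the helper below is illustrative, not from any cited source), using the first pKa quoted above:

def fraction_deprotonated(pH, pKa):
    # Henderson-Hasselbalch: [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# With pKa1 = 3.89, H2Se in neutral water is almost entirely HSe-:
print(round(fraction_deprotonated(7.0, 3.89), 4))  # ~0.9992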
Preparation
Industrially, it is produced by treating elemental selenium at T > 300 °C with hydrogen gas. A number of routes to H2Se have been reported, which are suitable for both large and small scale preparations. In the laboratory, H2Se is usually prepared by the action of water on Al2Se3, concomitant with formation of hydrated alumina. A related reaction involves the acid hydrolysis of FeSe.
Al2Se3 + 6 H2O ⇌ 2 Al(OH)3 + 3 H2Se
H2Se can also be prepared by means of different methods based on the in situ generation in aqueous solution using boron hydride, Marsh test and Devarda's alloy. According to the Sonoda method, H2Se is generated from the reaction of H2O and CO on Se in the presence of Et3N. H2Se can be purchased in cylinders.
Reactions
Elemental selenium can be recovered from H2Se through a reaction with aqueous sulfur dioxide (SO2).
2 H2Se + SO2 ⇌ 2 H2O + 2 Se + S
Its decomposition is used to prepare the highly pure element.
Applications
H2Se is commonly used in the synthesis of Se-containing compounds. It adds across alkenes. Illustrative is the synthesis of selenoureas from cyanamides:
H2Se gas is used to dope semiconductors with selenium.
Safety
Hydrogen selenide is hazardous, being the most toxic selenium compound and far more toxic than its congener hydrogen sulfide. The threshold limit value is 0.05 ppm. The gas acts as an irritant at concentrations higher than 0.3 ppm, which is the main warning sign of exposure; below 1 ppm, this is "insufficient to prevent exposure", while at 1.5 ppm the irritation is "intolerable". Exposure at high concentrations, even for less than a minute, causes the gas to attack the eyes and mucous membranes; this causes cold-like symptoms for at least a few days afterwards. In Germany, the limit in drinking water is 0.008 mg/L, and the US EPA recommends a maximum contamination of 0.01 mg/L.
Despite being extremely toxic, no human fatalities have yet been reported. It is suspected that this is due to the gas' tendency to oxidise to form red selenium in mucous membranes; elemental selenium is less toxic than selenides are.
| Physical sciences | Hydrogen compounds | Chemistry |
2209688 | https://en.wikipedia.org/wiki/Einstein%20coefficients | Einstein coefficients | In atomic, molecular, and optical physics, the Einstein coefficients are quantities describing the probability of absorption or emission of a photon by an atom or molecule. The Einstein A coefficients are related to the rate of spontaneous emission of light, and the Einstein B coefficients are related to the absorption and stimulated emission of light. Throughout this article, "light" refers to any electromagnetic radiation, not necessarily in the visible spectrum.
These coefficients are named after Albert Einstein, who proposed them in 1916.
Spectral lines
In physics, one thinks of a spectral line from two viewpoints.
An emission line is formed when an atom or molecule makes a transition from a particular discrete energy level E2 of an atom to a lower energy level E1, emitting a photon of a particular energy and wavelength. A spectrum of many such photons will show an emission spike at the wavelength associated with these photons.
An absorption line is formed when an atom or molecule makes a transition from a lower discrete energy state E1 to a higher discrete energy state E2, with a photon being absorbed in the process. These absorbed photons generally come from background continuum radiation (the full spectrum of electromagnetic radiation) and a spectrum will show a drop in the continuum radiation at the wavelength associated with the absorbed photons.
The two states must be bound states in which the electron is bound to the atom or molecule, so the transition is sometimes referred to as a "bound–bound" transition, as opposed to a transition in which the electron is ejected out of the atom completely ("bound–free" transition) into a continuum state, leaving an ionized atom, and generating continuum radiation.
A photon with an energy equal to the difference E2 − E1 between the energy levels is released or absorbed in the process. The frequency ν at which the spectral line occurs is related to the photon energy by Bohr's frequency condition E2 − E1 = hν, where h denotes the Planck constant.
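As a small numerical illustration of the frequency condition (a minimal sketch under the stated constants; the 10.2 eV gap used as an example is the well-known hydrogen Lyman-alpha transition):

PLANCK_H = 6.62607015e-34   # Planck constant, J s
EV_TO_J = 1.602176634e-19   # J per eV

def transition_frequency(delta_E_eV):
    # Bohr frequency condition: nu = (E2 - E1) / h
    return delta_E_eV * EV_TO_J / PLANCK_H

print(f"{transition_frequency(10.2):.3e} Hz")  # ~2.466e+15 Hz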
Emission and absorption coefficients
An atomic spectral line refers to emission and absorption events in a gas in which n2 is the density of atoms in the upper-energy state for the line, and n1 is the density of atoms in the lower-energy state for the line.
The emission of atomic line radiation at frequency ν may be described by an emission coefficient ε with units of energy/(time × volume × solid angle). ε dt dV dΩ is then the energy emitted by a volume element dV in time dt into solid angle dΩ. For atomic line radiation,

ε = (hν / 4π) n2 A21

where A21 is the Einstein coefficient for spontaneous emission, which is fixed by the intrinsic properties of the relevant atom for the two relevant energy levels.
The absorption of atomic line radiation may be described by an absorption coefficient κ with units of 1/length. The expression κ' dx gives the fraction of intensity absorbed for a light beam at frequency ν while traveling distance dx. The absorption coefficient is given by

κ' = (hν / 4π) (n1 B12 − n2 B21)

where B12 and B21 are the Einstein coefficients for photon absorption and induced emission respectively. Like the coefficient A21, these are also fixed by the intrinsic properties of the relevant atom for the two relevant energy levels. For thermodynamics and for the application of Kirchhoff's law, it is necessary that the total absorption be expressed as the algebraic sum of two components, described respectively by B12 and B21, which may be regarded as positive and negative absorption, which are, respectively, the direct photon absorption, and what is commonly called stimulated or induced emission.
The above equations have ignored the influence of the spectroscopic line shape. To be accurate, the above equations need to be multiplied by the (normalized) spectral line shape, in which case the units will change to include a 1/Hz term.
Under conditions of thermodynamic equilibrium, the number densities and , the Einstein coefficients, and the spectral energy density provide sufficient information to determine the absorption and emission rates.
Equilibrium conditions
The number densities n2 and n1 are set by the physical state of the gas in which the spectral line occurs, including the local spectral radiance (or, in some presentations, the local spectral radiant energy density). When that state is either one of strict thermodynamic equilibrium, or one of so-called "local thermodynamic equilibrium", then the distribution of atomic states of excitation (which includes n2 and n1) determines the rates of atomic emissions and absorptions to be such that Kirchhoff's law of equality of radiative absorptivity and emissivity holds. In strict thermodynamic equilibrium, the radiation field is said to be black-body radiation and is described by Planck's law. For local thermodynamic equilibrium, the radiation field does not have to be a black-body field, but the rate of interatomic collisions must vastly exceed the rates of absorption and emission of quanta of light, so that the interatomic collisions entirely dominate the distribution of states of atomic excitation. Circumstances occur in which local thermodynamic equilibrium does not prevail, because the strong radiative effects overwhelm the tendency to the Maxwell–Boltzmann distribution of molecular velocities. For example, in the atmosphere of the Sun, the great strength of the radiation dominates. In the upper atmosphere of the Earth, at altitudes over 100 km, the rarity of intermolecular collisions is decisive.
In the cases of thermodynamic equilibrium and of local thermodynamic equilibrium, the number densities of the atoms, both excited and unexcited, may be calculated from the Maxwell–Boltzmann distribution, but for other cases (e.g. lasers) the calculation is more complicated.
Einstein coefficients
In 1916, Albert Einstein proposed that there are three processes occurring in the formation of an atomic spectral line. The three processes are referred to as spontaneous emission, stimulated emission, and absorption. With each is associated an Einstein coefficient, which is a measure of the probability of that particular process occurring. Einstein considered the case of isotropic radiation of frequency ν and spectral energy density ρ(ν). Paul Dirac derived the coefficients in a 1927 paper titled "The Quantum Theory of the Emission and Absorption of Radiation".
Various formulations
Hilborn has compared the various formulations used by different authors in deriving the Einstein coefficients. For example, Herzberg works with irradiance and wavenumber; Yariv works with energy per unit volume per unit frequency interval, as is the case in the more recent (2008) formulation. Mihalas & Weibel-Mihalas work with radiance and frequency, as do Chandrasekhar, and Goody & Yung; Loudon uses angular frequency and radiance.
Spontaneous emission
Spontaneous emission is the process by which an electron "spontaneously" (i.e. without any outside influence) decays from a higher energy level to a lower one. The process is described by the Einstein coefficient A21 (s−1), which gives the probability per unit time that an electron in state 2 with energy E2 will decay spontaneously to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. Due to the energy-time uncertainty principle, the transition actually produces photons within a narrow range of frequencies called the spectral linewidth. If ni is the number density of atoms in state i, then the change in the number density of atoms in state 2 per unit time due to spontaneous emission will be

dn2/dt = −A21 n2

The same process results in an increase in the population of state 1:

dn1/dt = A21 n2
Stimulated emission
Stimulated emission (also known as induced emission) is the process by which an electron is induced to jump from a higher energy level to a lower one by the presence of electromagnetic radiation at (or near) the frequency of the transition. From the thermodynamic viewpoint, this process must be regarded as negative absorption. The process is described by the Einstein coefficient B21 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 2 with energy E2 will decay to state 1 with energy E1, emitting a photon with an energy E2 − E1 = hν. The change in the number density of atoms in state 1 per unit time due to induced emission will be

dn1/dt = B21 n2 ρ(ν)

where ρ(ν) denotes the spectral energy density of the isotropic radiation field at the frequency of the transition (see Planck's law).
Stimulated emission is one of the fundamental processes that led to the development of the laser. Laser radiation is, however, very far from the present case of isotropic radiation.
Photon absorption
Absorption is the process by which a photon is absorbed by the atom, causing an electron to jump from a lower energy level to a higher one. The process is described by the Einstein coefficient B12 (m3 J−1 s−2), which gives the probability per unit time per unit energy density of the radiation field per unit frequency that an electron in state 1 with energy E1 will absorb a photon with an energy E2 − E1 = hν and jump to state 2 with energy E2. The change in the number density of atoms in state 1 per unit time due to absorption will be

dn1/dt = −B12 n1 ρ(ν)
Detailed balancing
The Einstein coefficients are fixed probabilities per time associated with each atom, and do not depend on the state of the gas of which the atoms are a part. Therefore, any relationship that we can derive between the coefficients at, say, thermodynamic equilibrium will be valid universally.
At thermodynamic equilibrium, we will have a simple balancing, in which the net change in the number of any excited atoms is zero, being balanced by loss and gain due to all processes. With respect to bound-bound transitions, we will have detailed balancing as well, which states that the net exchange between any two levels will be balanced. This is because the probabilities of transition cannot be affected by the presence or absence of other excited atoms. Detailed balance (valid only at equilibrium) requires that the change in time of the number of atoms in level 1 due to the above three processes be zero:

0 = A21 n2 + B21 n2 ρ(ν) − B12 n1 ρ(ν)
Along with detailed balancing, at temperature T we may use our knowledge of the equilibrium energy distribution of the atoms, as stated in the Maxwell–Boltzmann distribution, and the equilibrium distribution of the photons, as stated in Planck's law of black-body radiation, to derive universal relationships between the Einstein coefficients.
From the Boltzmann distribution we have for the number of excited atomic species i:

ni = (n gi / Z) e^(−Ei/kT)

where n is the total number density of the atomic species, excited and unexcited, k is the Boltzmann constant, T is the temperature, gi is the degeneracy (also called the multiplicity) of state i, and Z is the partition function. From Planck's law of black-body radiation at temperature T we have for the spectral radiance (radiance is energy per unit time per unit solid angle per unit projected area, when integrated over an appropriate spectral interval) at frequency ν

Bν(T) = F(ν) / (e^(hν/kT) − 1)

where

F(ν) = 2hν³/c²

where c is the speed of light and h is the Planck constant. For the isotropic radiation considered here, the spectral energy density is related to the radiance by ρ(ν) = (4π/c) Bν(T).
Substituting these expressions into the equation of detailed balancing and remembering that E2 − E1 = hν yields

A21 g2 e^(−hν/kT) + B21 g2 e^(−hν/kT) ρ(ν) = B12 g1 ρ(ν)

or

ρ(ν) = (A21/B21) / [ (g1 B12)/(g2 B21) e^(hν/kT) − 1 ]

The above equation must hold at any temperature, so comparison with Planck's law for the spectral energy density, ρ(ν) = (8πhν³/c³) / (e^(hν/kT) − 1), from the exponential factors one gets

g1 B12 = g2 B21

and from the prefactor

A21/B21 = 8πhν³/c³

Therefore, the three Einstein coefficients are interrelated by

A21/B21 = 8πhν³/c³ and B21/B12 = g1/g2

When this relation is inserted into the original equation, one can also find a relation between A21 and B12, involving Planck's law.
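As a numerical consistency check of these interrelations — a minimal sketch, not part of the original derivation — one can confirm that detailed balance with A21/B21 = 8πhν³/c³ and g1 B12 = g2 B21 reproduces Planck's spectral energy density:

import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI units

def planck_rho(nu, T):
    # Spectral energy density of black-body radiation
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

def rho_from_balance(nu, T, g1=1.0, g2=1.0):
    # rho implied by 0 = A21 n2 + B21 n2 rho - B12 n1 rho
    B21 = 1.0                                   # arbitrary scale; only ratios matter
    A21 = 8 * np.pi * h * nu**3 / c**3 * B21
    B12 = (g2 / g1) * B21
    n2_over_n1 = (g2 / g1) * np.exp(-h * nu / (k * T))  # Boltzmann populations
    return A21 * n2_over_n1 / (B12 - B21 * n2_over_n1)

nu, T = 5.0e14, 6000.0
print(np.isclose(planck_rho(nu, T), rho_from_balance(nu, T)))  # True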
Oscillator strengths
The oscillator strength f12 is defined by the following relation to the cross section σν for absorption:

σν = (e² / (4 ε0 me c)) f12 φν   (in SI units)

where e is the electron charge, me is the electron mass, and φν and φω are normalized distribution functions in frequency and angular frequency respectively.
This allows all three Einstein coefficients to be expressed in terms of the single oscillator strength associated with the particular atomic spectral line:
Dipole approximation
The values of the A and B coefficients can be calculated using quantum mechanics, where the dipole approximation in time-dependent perturbation theory is used. While the calculation of the B coefficient can be done easily, that of the A coefficient requires using results of second quantization. This is because the theory developed by the dipole approximation and time-dependent perturbation theory gives a semiclassical description of the electronic transition, which goes to zero as the perturbing fields go to zero. The A coefficient, which governs spontaneous emission, should not go to zero as the perturbing fields go to zero. The result for the transition rates between different electronic levels as a result of spontaneous emission is given as (in SI units):

A21 = ω0³ |d21|² / (3π ε0 ħ c³)

where ω0 is the angular frequency of the transition and d21 is the matrix element of the electric dipole moment operator between the two states.
For the B coefficient, straightforward application of the dipole approximation in time-dependent perturbation theory yields (in SI units):

B12 = π |d12|² / (3 ε0 ħ²)

with the B coefficient here defined with respect to the energy density per unit angular frequency, ρ(ω).
Note that the transition-rate formula depends on the dipole moment operator. For higher-order approximations, it involves the quadrupole moment and other similar terms.
Here, the B coefficients are chosen to correspond to the ρ(ω) energy distribution function. Often these different definitions of the B coefficients are distinguished by a superscript, for example, Bω corresponding to the angular-frequency distribution and Bν to the frequency distribution. The formulas for the B coefficients vary inversely with the energy distribution chosen, so that the transition rate is the same regardless of convention.
Hence, the A and B coefficients calculated in the dipole approximation, with the B coefficients corresponding to the ρ(ω) energy distribution function, are related by

A21 / B21 = ħ ω0³ / (π² c³)

and the degeneracy relation g1 B12 = g2 B21 continues to hold.
Derivation of Planck's law
It follows from the theory that, in equilibrium,

n2 (A21 + B21 ρ(ω0)) = n1 B12 ρ(ω0)

where n1 and n2 are the numbers of atoms occupying the energy levels E1 and E2 respectively, with E2 − E1 = ħω0. Note that, from the application of time-dependent perturbation theory, the fact that only radiation whose angular frequency ω is close to the transition value ω0 can produce stimulated emission or absorption is used.

The Maxwell–Boltzmann distribution involving E1 and E2 ensures

n2 / n1 = (g2 / g1) e^(−ħω0/kT)

Solving for ρ(ω0) under the equilibrium condition using the above equations and ratios, while generalizing ω0 to ω, we get:

ρ(ω) = (ħ ω³ / (π² c³)) · 1 / (e^(ħω/kT) − 1)

which is the angular-frequency energy distribution from Planck's law.
| Physical sciences | Atomic physics | Physics |
2210398 | https://en.wikipedia.org/wiki/Chrysopelea | Chrysopelea | Chrysopelea, commonly known as the flying snake or gliding snake, is a genus of snakes that belongs to the family Colubridae. They are found in Southeast Asia, and are known for their ability to glide between trees. Flying snakes are mildly venomous, though the venom is dangerous only to their small prey. There are five species within the genus.
Gliding
Chrysopelea climbs using ridge scales along its underside, pushing against the rough bark of tree trunks, allowing it to move vertically up a tree. Upon reaching the end of a branch, the snake continues moving until its tail dangles from the end of the branch. It then makes a J-shape bend, leans forward to select the level of inclination it wishes to use to control its glide path, and selects a desired landing area. Once it decides on a destination, it propels itself by thrusting its body up and away from the tree, sucking in its abdomen and flaring out its ribs to turn its body into a "pseudo concave wing", all the while making a continual serpentine motion of lateral undulation parallel to the ground to stabilise its direction in midair in order to land safely.
The combination of forming a C-shape, flattening its abdomen and making a motion of lateral undulation in the air makes it possible for the snake to glide in the air, where it also manages to save energy compared to travel on the ground and dodge earth-bound predators. The concave wing that the snake creates in flattening itself nearly doubles the width of its body from the back of the head to the anal vent, which is close to the end of the snake's tail, causing the cross section of the snake's body to resemble the cross section of a frisbee or flying disc. When a flying disc spins in the air, the designed cross sectional concavity causes increased air pressure under the centre of the disc, causing lift for the disc to fly, and the snake continuously moves in lateral undulation to create the same effect of increased air pressure underneath its arched body to glide.
Flying snakes are able to glide better than flying squirrels and other gliding animals, despite the lack of limbs, wings, or any other wing-like projections, gliding as far as 100 meters through the forests and jungles they inhabit. Their destination is mostly predicted by ballistics; however, they can exercise some in-flight attitude control by "slithering" in the air.
Their ability to glide has been an object of interest for physicists and the United States Department of Defense in recent years, and studies continue to be made on what other, more subtle, factors contribute to their gliding. According to recent research conducted by the University of Chicago, scientists discovered a negative correlation between size and gliding ability, in which smaller flying snakes were able to glide longer distances horizontally.
According to research performed by Professor Jake Socha at Virginia Tech, these snakes can change the shape of their body in order to produce aerodynamic forces so they can glide in the air. Scientists are hopeful that this research will lead to the design of robots that can glide in the air from one place to another.
Distribution
Their range is in Southeast Asia (the mainland (Vietnam, Cambodia, Thailand, Myanmar, and Laos), Indonesia, and the Philippines), southernmost China, India, and Sri Lanka.
Diet
Chrysopelea are diurnal, which means they hunt during the day. Their diets are variable depending on their range, but they are known to eat lizards, rodents, frogs, birds, and bats. They are mildly venomous snakes, but their tiny, fixed rear fangs make them dangerous only to their small prey.
Venom
The genus is considered mildly venomous, with a few confirmed cases of medically significant envenomation. Chrysopelea species are not included in lists of snakes considered venomous to people.
Taxonomy
Chrysopelea is one of five genera belonging to the vine snake subfamily Ahaetuliinae, within which Chrysopelea is most closely related to Dendrelaphis.
Species
There are five recognized species of flying snake, found from western India to the Indonesian archipelago. Knowledge of their behavior in the wild is limited, but they are thought to be highly arboreal, rarely descending from the canopy. The smallest species reach about in length and the largest grow to .
| Biology and health sciences | Snakes | Animals |
2210759 | https://en.wikipedia.org/wiki/Finite%20strain%20theory | Finite strain theory | In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials and other fluids and biological soft tissue.
Displacement field
Deformation gradient tensor
The deformation gradient tensor F is related to both the reference and current configuration, as seen by the unit vectors of the two configurations; therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of the mapping, F has the inverse F−1, where F−1 is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant J = det F must be nonsingular, i.e. J ≠ 0.
The material deformation gradient tensor F is a second-order tensor that represents the gradient of the mapping function or functional relation x = χ(X, t), which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector X, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function χ(X, t), i.e. a differentiable function of X and time t, which implies that cracks and voids do not open or close during the deformation. Thus we have,

dx = F dX
Relative displacement vector
Consider a particle or material point P with position vector X in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle in the new configuration is given by the vector position x. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point Q neighboring P, with position vector X + dX. In the deformed configuration this particle has a new position given by the position vector x + dx. Assuming the line segments joining the particles P and Q in the undeformed and deformed configurations, respectively, to be very small, we can express them as dX and dx. Thus from Figure 2 we have

dx = dX + du

where du is the relative displacement vector, which represents the relative displacement of Q with respect to P in the deformed configuration.
Taylor approximation
For an infinitesimal element dX, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point P, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle Q as

du = (∇X u) dX

Thus, the previous equation can be written as

dx = dX + du = (I + ∇X u) dX = F dX
Time-derivative of the deformation gradient
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of F is

dF/dt = ∂v(X, t)/∂X

where v is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,

dF/dt = l F

where l = ∂v/∂x is the spatial velocity gradient and v(x, t) is the spatial (Eulerian) velocity at x. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give

F = e^(l t) F0

assuming F = F0 at t = 0. There are several methods of computing the exponential above.
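One common route is a numerical matrix exponential. A minimal sketch (the shearing-rate matrix below is just an illustrative choice):

import numpy as np
from scipy.linalg import expm

l = np.array([[0.0, 0.1, 0.0],    # constant spatial velocity gradient, 1/s
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
F0 = np.eye(3)                    # deformation gradient at t = 0

def F_at(t):
    # F(t) = exp(l t) F0, valid only while l is constant in time
    return expm(l * t) @ F0

print(F_at(2.0))                  # simple shear with F[0, 1] = 0.2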
Related quantities often used in continuum mechanics are the rate of deformation tensor d and the spin tensor w defined, respectively, as:

d = ½ (l + lᵀ),  w = ½ (l − lᵀ)
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is

d(F−1)/dt = −F−1 l

The above relation can be verified by taking the material time derivative of F−1 F = I and noting that dF/dt = l F.
Polar decomposition of the deformation gradient tensor
The deformation gradient F, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,

F = R U = V R

where the tensor R is a proper orthogonal tensor, i.e., R−1 = Rᵀ and det R = +1, representing a rotation; the tensor U is the right stretch tensor; and V the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor R, respectively. U and V are both positive definite, i.e. x · U · x > 0 and x · V · x > 0 for all non-zero x, and symmetric tensors, i.e. U = Uᵀ and V = Vᵀ, of second order.
This decomposition implies that the deformation of a line element dX in the undeformed configuration onto dx in the deformed configuration, i.e., dx = F dX, may be obtained either by first stretching the element by U, i.e. dx′ = U dX, followed by a rotation R, i.e., dx = R dx′; or equivalently, by applying a rigid rotation R first, i.e., dx′ = R dX, followed later by a stretching V, i.e., dx = V dx′ (see Figure 3).
Due to the orthogonality of R,

V = R U Rᵀ

so that U and V have the same eigenvalues or principal stretches λi, but different eigenvectors or principal directions Ni and ni, respectively. The principal directions are related by

ni = R Ni
This polar decomposition, which is unique as is invertible with a positive determinant, is a corollary of the singular-value decomposition.
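Numerically, this connection makes the SVD a convenient way to compute the polar decomposition. A minimal sketch (illustrative F; assumes det F > 0 so that R is proper orthogonal):

import numpy as np

def polar_decompose(F):
    # SVD F = W diag(S) Xt gives R = W Xt, U = X diag(S) Xt, V = W diag(S) Wt
    W, S, Xt = np.linalg.svd(F)
    R = W @ Xt
    U = Xt.T @ np.diag(S) @ Xt   # right (material) stretch tensor
    V = W @ np.diag(S) @ W.T     # left (spatial) stretch tensor
    return R, U, V

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])
R, U, V = polar_decompose(F)
print(np.allclose(F, R @ U), np.allclose(F, V @ R))  # True True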
Transformation of a surface and volume element
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as

n da = J F−T N dA

where da is an area of a region in the deformed configuration, dA is the same area in the reference configuration, n is the outward normal to the area element in the current configuration while N is the outward normal in the reference configuration, F is the deformation gradient, and J = det F.
The corresponding formula for the transformation of the volume element is

dv = J dV
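A quick numerical sketch of Nanson's relation (illustrative numbers; the helper is not from any particular text):

import numpy as np

def nanson(F, N, dA):
    # n da = J F^{-T} N dA, returned as the single area vector n*da
    J = np.linalg.det(F)
    return J * np.linalg.inv(F).T @ (N * dA)

F = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.1, 0.0],
              [0.0, 0.0, 0.9]])
print(nanson(F, np.array([0.0, 0.0, 1.0]), 1.0))  # [0. 0. 1.1]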
Fundamental strain tensors
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change (Rᵀ R = R Rᵀ = I) we can exclude the rotation by multiplying the deformation gradient tensor F by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
Cauchy strain tensor (right Cauchy–Green deformation tensor)
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:

C = Fᵀ F
Physically, the Cauchy–Green tensor gives us the square of the local change in distances due to deformation, i.e.

|dx|² = dX · C · dX
Invariants of C are often used in the expressions for strain energy density functions. The most commonly used invariants are

I1 = tr C = λ1² + λ2² + λ3²
I2 = ½ [ (tr C)² − tr(C²) ] = λ1² λ2² + λ2² λ3² + λ3² λ1²
I3 = det C = J² = λ1² λ2² λ3²

where J = det F is the determinant of the deformation gradient and the λi are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
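A minimal numerical sketch of these invariants (illustrative F):

import numpy as np

def cauchy_green_invariants(F):
    # Principal invariants of C = F^T F
    C = F.T @ F
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    I3 = np.linalg.det(C)        # equals det(F)**2 = J**2
    return I1, I2, I3

F = np.diag([1.1, 0.95, 1.0])
print(cauchy_green_invariants(F))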
Finger strain tensor
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., C−1, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
Green strain tensor (left Cauchy–Green deformation tensor)
Reversing the order of multiplication in the formula for the right Cauchy–Green deformation tensor leads to the left Cauchy–Green deformation tensor, which is defined as:

B = F Fᵀ
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of B are also used in the expressions for strain energy density functions. The conventional invariants are defined as

I1 = tr B,  I2 = ½ [ (tr B)² − tr(B²) ],  I3 = det B = J²

where J = det F is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:

Ī1 = J−2/3 I1,  Ī2 = J−4/3 I2,  and J
Piola strain tensor (Cauchy deformation tensor)
Earlier, in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, c = B−1. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
Spectral representation
If there are three distinct principal stretches λi, the spectral decompositions of C and B are given by

C = Σi λi² Ni ⊗ Ni  and  B = Σi λi² ni ⊗ ni

Furthermore,

U = Σi λi Ni ⊗ Ni,  V = Σi λi ni ⊗ ni

Observe that

V = R U Rᵀ = Σi λi (R Ni) ⊗ (R Ni)

Therefore, the uniqueness of the spectral decomposition also implies that ni = R Ni. The left stretch (V) is also called the spatial stretch tensor while the right stretch (U) is called the material stretch tensor.

The effect of F acting on Ni is to stretch the vector by λi and to rotate it to the new orientation ni, i.e.,

F Ni = λi (R Ni) = λi ni

In a similar vein,

F−T Ni = (1/λi) ni
Examples
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of λ. If the volume remains constant, the contraction in the other two directions is such that λ1 λ2 λ3 = 1, or λ2 = λ3 = λ−1/2. Then:

F = diag(λ, λ−1/2, λ−1/2)  and  B = C = diag(λ², λ−1, λ−1)
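A short numerical sketch of this deformation (the stretch ratio is chosen arbitrarily):

import numpy as np

def uniaxial_incompressible_F(lam):
    # Stretch lam along direction 1; lateral stretches 1/sqrt(lam) keep det F = 1
    return np.diag([lam, lam ** -0.5, lam ** -0.5])

F = uniaxial_incompressible_F(2.0)
print(np.isclose(np.linalg.det(F), 1.0))  # True: volume preserved
print(F.T @ F)                            # C = diag(lam^2, 1/lam, 1/lam)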
Simple shear
Rigid body rotation
Derivatives of stretch
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress–strain relations of many solids, particularly hyperelastic materials. These derivatives are

∂λi/∂C = (1/(2λi)) Ni ⊗ Ni,  i = 1, 2, 3

and follow from the observation that C : (Ni ⊗ Ni) = λi², so that ∂(λi²)/∂C = Ni ⊗ Ni.
Physical interpretation of deformation tensors
Let be a Cartesian coordinate system defined on the undeformed body and let be another system defined on the deformed body. Let a curve in the undeformed body be parametrized using . Its image in the deformed body is .
The undeformed length of the curve is given by

lX = ∫01 |dX/ds| ds

After deformation, the length becomes

lx = ∫01 |dx/ds| ds = ∫01 sqrt( (dX/ds) · Fᵀ F · (dX/ds) ) ds

Note that the right Cauchy–Green deformation tensor is defined as C = Fᵀ F. Hence,

lx = ∫01 sqrt( (dX/ds) · C · (dX/ds) ) ds

which indicates that changes in length are characterized by C.
Finite strain tensors
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain for large deformations is the Lagrangian finite strain tensor, also called the Green–Lagrangian strain tensor or Green–St-Venant strain tensor, defined as

E = ½ (C − I) = ½ (Fᵀ F − I)

or as a function of the displacement gradient tensor

E = ½ [ ∇X u + (∇X u)ᵀ + (∇X u)ᵀ ∇X u ]

The Green–Lagrangian strain tensor is a measure of how much C differs from I.
The Eulerian finite strain tensor, or Eulerian–Almansi finite strain tensor, referenced to the deformed configuration (i.e. the Eulerian description), is defined as

e = ½ (I − B−1) = ½ (I − F−T F−1)

or as a function of the displacement gradients we have

e = ½ [ ∇x u + (∇x u)ᵀ − (∇x u)ᵀ ∇x u ]
Seth–Hill family of generalized strain tensors
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle–Ericksen tensors) can be expressed as

E(m) = (1/(2m)) (U^(2m) − I) = (1/(2m)) (C^m − I)

For different values of m we have:

the Green–Lagrangian strain tensor, E(1) = ½ (U² − I)
the Biot strain tensor, E(1/2) = U − I
the logarithmic strain, natural strain, true strain, or Hencky strain, E(0) = ln U
the Almansi strain, E(−1) = ½ (I − U−2)
The second-order approximation of these tensors is

E(m) = ε + ½ (∇u)ᵀ ∇u − (1 − m) εᵀ ε

where ε is the infinitesimal strain tensor.
Many other different definitions of tensors are admissible, provided that they all satisfy the conditions that:
E vanishes for all rigid-body motions
the dependence of E on the displacement gradient tensor is continuous, continuously differentiable and monotonic
it is also desired that E reduces to the infinitesimal strain tensor ε as the norm of the displacement gradient tensor tends to zero, i.e. |∇u| → 0
An example is the set of tensors

E(n) = (Uⁿ − U−n) / (2n)

which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at m = 0 for any value of n.
Physical interpretation of the finite strain tensor
The diagonal components of the Lagrangian finite strain tensor are related to the normal strain, e.g.

E11 = e(1) + ½ e(1)²

where e(1) is the normal strain or engineering strain in the direction X1.
The off-diagonal components of the Lagrangian finite strain tensor are related to shear strain, e.g.

2 E12 = sqrt(1 + 2 E11) sqrt(1 + 2 E22) sin φ12

where φ12 is the change in the angle between two line elements that were originally perpendicular with directions X1 and X2, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor
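A numerical sketch of that approximation (the displacement gradient below is an arbitrary small example):

import numpy as np

def green_lagrange(F):
    # E = (F^T F - I) / 2
    return 0.5 * (F.T @ F - np.eye(3))

gradu = 1e-4 * np.array([[1.0, 2.0, 0.0],
                         [0.0, -1.0, 0.0],
                         [0.0, 0.0, 0.5]])
F = np.eye(3) + gradu
eps = 0.5 * (gradu + gradu.T)             # infinitesimal strain tensor
print(np.allclose(green_lagrange(F), eps, atol=1e-7))  # True: they differ at O(|gradu|^2)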
Compatibility conditions
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
Compatibility of the deformation gradient
The necessary and sufficient conditions for the existence of a compatible field F over a simply connected body are

∇ × F = 0
Compatibility of the right Cauchy–Green deformation tensor
The necessary and sufficient conditions for the existence of a compatible field C over a simply connected body involve quantities that can be shown to be the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for C-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
Compatibility of the left Cauchy–Green deformation tensor
General sufficiency conditions for the left Cauchy–Green deformation tensor in three-dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional fields were found by Janet Blume.
| Physical sciences | Solid mechanics | Physics |
16112368 | https://en.wikipedia.org/wiki/Earth%20ellipsoid | Earth ellipsoid | An Earth ellipsoid or Earth spheroid is a mathematical figure approximating the Earth's form, used as a reference frame for computations in geodesy, astronomy, and the geosciences. Various different ellipsoids have been used as approximations.
It is a spheroid (an ellipsoid of revolution) whose minor axis (shorter diameter), which connects the geographical North Pole and South Pole, is approximately aligned with the Earth's axis of rotation. The ellipsoid is defined by the equatorial axis (a) and the polar axis (b); their radial difference is slightly more than 21 km, or 0.335% of a (which is not quite 6,400 km).
Many methods exist for determination of the axes of an Earth ellipsoid, ranging from meridian arcs up to modern satellite geodesy or the analysis and interconnection of continental geodetic networks. Amongst the different set of data used in national surveys are several of special importance: the Bessel ellipsoid of 1841, the international Hayford ellipsoid of 1924, and (for GPS positioning) the WGS84 ellipsoid.
Types
There are two types of ellipsoid: mean and reference.
A data set which describes the global average of the Earth's surface curvature is called the mean Earth Ellipsoid. It refers to a theoretical coherence between the geographic latitude and the meridional curvature of the geoid. The latter is close to the mean sea level, and therefore an ideal Earth ellipsoid has the same volume as the geoid.
While the mean Earth ellipsoid is the ideal basis of global geodesy, for regional networks a so-called reference ellipsoid may be the better choice. When geodetic measurements have to be computed on a mathematical reference surface, this surface should have a similar curvature as the regional geoid; otherwise, reduction of the measurements will get small distortions.
This is the reason for the "long life" of former reference ellipsoids like the Hayford or the Bessel ellipsoid, despite the fact that their main axes deviate by several hundred meters from the modern values. Another reason is a judicial one: the coordinates of millions of boundary stones should remain fixed for a long period. If their reference surface changes, the coordinates themselves also change.
However, for international networks, GPS positioning, or astronautics, these regional reasons are less relevant. As knowledge of the Earth's figure becomes increasingly accurate, the International Union of Geodesy and Geophysics (IUGG) usually adapts the axes of the Earth ellipsoid to the best available data.
Reference ellipsoid
In geodesy, a reference ellipsoid is a mathematically defined surface that approximates the geoid, which is the truer, imperfect figure of the Earth, or other planetary body, as opposed to a perfect, smooth, and unaltered sphere, which factors in the undulations of the bodies' gravity due to variations in the composition and density of the interior, as well as the subsequent flattening caused by the centrifugal force from the rotation of these massive objects (for planetary bodies that do rotate).
Because of their relative simplicity, reference ellipsoids are used as a preferred surface on which geodetic network computations are performed and point coordinates such as latitude, longitude, and elevation are defined.
In the context of standardization and geographic applications, a geodesic reference ellipsoid is the mathematical model used as foundation by spatial reference system or geodetic datum definitions.
Ellipsoid parameters
In 1687 Isaac Newton published the Principia in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of a flattened ("oblate") ellipsoid of revolution, generated by an ellipse rotated around its minor diameter; a shape which he termed an oblate spheroid.
In geophysics, geodesy, and related areas, the word 'ellipsoid' is understood to mean 'oblate ellipsoid of revolution', and the older term 'oblate spheroid' is hardly used. For bodies that cannot be well approximated by an ellipsoid of revolution a triaxial (or scalene) ellipsoid is used.
The shape of an ellipsoid of revolution is determined by the shape parameters of the generating ellipse. The semi-major axis of the ellipse, a, becomes the equatorial radius of the ellipsoid; the semi-minor axis of the ellipse, b, becomes the distance from the centre to either pole. These two lengths completely specify the shape of the ellipsoid.
In geodesy publications, however, it is common to specify the semi-major axis (equatorial radius) a and the flattening f, defined as:

f = (a − b) / a

That is, f is the amount of flattening at each pole, relative to the radius at the equator. This is often expressed as a fraction 1/m, with m = 1/f then being the "inverse flattening". A great many other ellipse parameters are used in geodesy, but they can all be related to one or two of the set a, b and f.
A great many ellipsoids have been used to model the Earth in the past, with different assumed values of a and f as well as different assumed positions of the center and different axis orientations relative to the solid Earth. Starting in the late twentieth century, improved measurements of satellite orbits and star positions have provided extremely accurate determinations of the Earth's center of mass and of its axis of revolution; and those parameters have been adopted also for all modern reference ellipsoids.
The ellipsoid WGS-84, widely used for mapping and satellite navigation, has f close to 1/300 (more precisely, 1/298.257223563, by definition), corresponding to a difference of the major and minor semi-axes of approximately 21 km (more precisely, 21.3846857548205 km). For comparison, Earth's Moon is even less elliptical, with a flattening of less than 1/825, while Jupiter is visibly oblate at about 1/15 and one of Saturn's triaxial moons, Telesto, is highly flattened, with f between 1/3 and 1/2 (meaning that the polar diameter is between 50% and 67% of the equatorial).
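These WGS-84 numbers are easy to reproduce from the two defining constants. A minimal sketch:

a = 6378137.0              # WGS-84 semi-major axis, m (defining constant)
inv_f = 298.257223563      # WGS-84 inverse flattening (defining constant)
f = 1.0 / inv_f
b = a * (1.0 - f)          # from f = (a - b) / a
print(b, a - b)            # b ~ 6356752.3142 m, a - b ~ 21384.6858 m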
Determination
Arc measurement is the historical method of determining the ellipsoid.
Two meridian arc measurements will allow the derivation of two parameters required to specify a reference ellipsoid.
For example, if the measurements were hypothetically performed exactly over the equator plane and either geographical pole, the radii of curvature so obtained would be related to the equatorial radius and the polar radius, respectively a and b (see: Earth polar and equatorial radius of curvature). Then, the flattening would readily follow from its definition:

f = (a − b) / a
For two arc measurements each at arbitrary average latitudes φ1 and φ2, the solution starts from an initial approximation for the equatorial radius a and for the flattening f. The theoretical Earth's meridional radius of curvature can be calculated at the latitude of each arc measurement as:

M(φ) = a (1 − e²) / (1 − e² sin²φ)^(3/2)

where e² = 2f − f².
Then discrepancies between empirical and theoretical values of the radius of curvature can be formed as the differences between the measured radii and M(φi). Finally, corrections for the initial equatorial radius δa and the flattening δf can be solved by means of a system of linear equations formulated via linearization of M:

δMi = (∂M/∂a) δa + (∂M/∂f) δf

where the partial derivatives are evaluated at the initial approximations for a and f.
Longer arcs with multiple intermediate-latitude determinations can completely determine the ellipsoid that best fits the surveyed region. In practice, multiple arc measurements are used to determine the ellipsoid parameters by the method of least squares adjustment. The parameters determined are usually the semi-major axis, a, and any of the semi-minor axis, b, the flattening, or the eccentricity.
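A minimal sketch of the meridional radius of curvature used above (WGS-84 values inserted purely for illustration):

import math

def meridional_radius(a, f, lat_deg):
    # M(phi) = a (1 - e^2) / (1 - e^2 sin^2 phi)^(3/2), with e^2 = 2f - f^2
    e2 = 2 * f - f * f
    s = math.sin(math.radians(lat_deg))
    return a * (1 - e2) / (1 - e2 * s * s) ** 1.5

a, f = 6378137.0, 1 / 298.257223563
print(round(meridional_radius(a, f, 0.0)))   # ~6335439 m at the equator
print(round(meridional_radius(a, f, 90.0)))  # ~6399594 m at the poles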
Regional-scale systematic effects observed in the radius of curvature measurements reflect the geoid undulation and the deflection of the vertical, as explored in astrogeodetic leveling.
Gravimetry is another technique for determining Earth's flattening, as per Clairaut's theorem.
Modern geodesy no longer uses simple meridian arcs or ground triangulation networks, but the methods of satellite geodesy, especially satellite gravimetry.
Geodetic coordinates
Historical Earth ellipsoids
The reference ellipsoid models listed below have had utility in geodetic work and many are still in use. The older ellipsoids are named for the individual who derived them and the year of development is given. In 1887 the English surveyor Colonel Alexander Ross Clarke CB FRS RE was awarded the Gold Medal of the Royal Society for his work in determining the figure of the Earth. The international ellipsoid was developed by John Fillmore Hayford in 1910 and adopted by the International Union of Geodesy and Geophysics (IUGG) in 1924, which recommended it for international use.
At the 1967 meeting of the IUGG held in Lucerne, Switzerland, the ellipsoid called GRS-67 (Geodetic Reference System 1967) in the listing was recommended for adoption. The new ellipsoid was not recommended to replace the International Ellipsoid (1924), but was advocated for use where a greater degree of accuracy is required. It became a part of the GRS-67 which was approved and adopted at the 1971 meeting of the IUGG held in Moscow. It is used in Australia for the Australian Geodetic Datum and in the South American Datum 1969.
The GRS-80 (Geodetic Reference System 1980), as approved and adopted by the IUGG at its Canberra, Australia meeting of 1979, is based on the equatorial radius (semi-major axis of the Earth ellipsoid) a, total mass GM, dynamic form factor J2 and angular velocity of rotation ω, making the inverse flattening 1/f a derived quantity. The minute difference in 1/f seen between GRS-80 and WGS-84 results from an unintentional truncation in the latter's defining constants: while the WGS-84 was designed to adhere closely to the GRS-80, incidentally the WGS-84 derived flattening turned out to differ slightly from the GRS-80 flattening because the normalized second-degree zonal harmonic gravitational coefficient, which was derived from the GRS-80 value for J2, was truncated to eight significant digits in the normalization process.
An ellipsoidal model describes only the ellipsoid's geometry and a normal gravity field formula to go with it. Commonly an ellipsoidal model is part of a more encompassing geodetic datum. For example, the older ED-50 (European Datum 1950) is based on the Hayford or International Ellipsoid. WGS-84 is peculiar in that the same name is used for both the complete geodetic reference system and its component ellipsoidal model. Nevertheless, the two concepts—ellipsoidal model and geodetic reference system—remain distinct.
Note that the same ellipsoid may be known by different names. It is best to mention the defining constants for unambiguous identification.
| Physical sciences | Earth science basics: General | Earth science |
9428033 | https://en.wikipedia.org/wiki/Chinese%20giant%20salamander | Chinese giant salamander | The Chinese giant salamander (Andrias davidianus) is one of the largest salamanders and one of the largest amphibians in the world. It is fully aquatic, and is endemic to rocky mountain streams and lakes in the Yangtze river basin of central China. It has also been introduced to Kyoto Prefecture in Japan, and possibly to Taiwan. It is considered critically endangered in the wild due to habitat loss, pollution, and overcollection, as it is considered a delicacy and used in traditional Chinese medicine. On farms in central China, it is extensively farmed and sometimes bred, although many of the salamanders on the farms are caught in the wild. It has been listed as one of the top-10 "focal species" in 2008 by the Evolutionarily Distinct and Globally Endangered project.
The Chinese giant salamander is considered to be a "living fossil". Although protected under Chinese law and CITES Appendix I, the wild population has declined by more than an estimated 80% since the 1950s. Although traditionally recognized as one of two living species of Andrias salamander in Asia, the other being the Japanese giant salamander, evidence indicates that the Chinese giant salamander may be composed of at least five cryptic species, further compounding each individual species' endangerment.
Taxonomy
The correct scientific name of this species has been argued to be Andrias scheuchzeri (in which case Andrias davidianus would be a junior synonym) – a name otherwise restricted to an extinct species described from Swiss fossils. It has also been given the moniker of "living fossil" for being part of the family Cryptobranchidae which dates back 170 million years. It is one of only five to six known extant species of the family, the others being the slightly smaller, but otherwise very similar Japanese giant salamander (Andrias japonicus), the slightly larger South China giant salamander (A. sligoi), the Jiangxi giant salamander (Andrias jiangxiensis), the Qimen giant salamander (Andrias cheni), and the far smaller North American hellbender (Cryptobranchus alleganiensis).
A 2018 study of mitochondrial DNA revealed that there are five wild clades of the Chinese giant salamander, as well as two only known from captives (their possible wild range was previously unknown). They diverged from each other 4.71–10.25 million years ago and should possibly be recognized as cryptic species. Despite this deep divergence, they can hybridize among each other, and also with the Japanese giant salamander. One of these clades was identified in 2019 as Andrias sligoi, a species described in 1924 by Edward George Boulenger and later synonymized with A. davidianus, with the study supporting its revival as a distinct taxon. Another then-undescribed species was also identified that formerly inhabited rivers originating from the Huangshan mountains in eastern China; this was described as Andrias cheni in 2023. In 2022, one of the captive-only clades was described as Andrias jiangxiensis, and was found to have maintained genetically pure wild populations in Jiangxi Province, in contrast to most of the other clades.
Description
It has a large head, small eyes and dark wrinkly skin. Its flat, broad head has a wide mouth, round, lidless eyes, and a line of paired tubercles that run around its head and throat. Its color is typically dark brown with a mottled or speckled pattern, but it can also be other brownish tones, dark reddish, or black. Albinos, which are white or orange, have been recorded. All species of giant salamanders produce a sticky, white skin secretion that repels predators.
The average adult salamander weighs and is in length. It can reach up to in weight and in length, making it the second-largest amphibian species, after the South China giant salamander (Andrias sligoi). The longest recently documented Chinese giant salamander, kept at a farm in Zhangjiajie, was in 2007. At , both this individual, and a long, individual found in a remote cave in Chongqing in December 2015, surpassed the species' typically reported maximum weight.
The giant salamander is known to vocalize, making barking, whining, hissing, or crying sounds. Some of these vocalizations bear a striking resemblance to the crying of a young human child, and as such, it is known in the Chinese language as the "infant fish" (娃娃鱼 / 鲵 - Wáwáyú/ ní).
Behavior
Diet
The Chinese giant salamander has been recorded feeding on insects, millipedes, horsehair worms, amphibians (both frogs and salamanders), freshwater crabs, shrimp, fish (such as Saurogobio and Cobitis), and Asiatic water shrews. Presumably ingested by mistake, plant material and gravel have also been found in their stomachs. Cannibalism is frequent; in a study of 79 specimens from the Qinling–Dabashan range, the stomach contents of five included remains of other Chinese giant salamanders, and this made up 28% of the combined weight of all food items in the study. The most frequent items in the same study were freshwater crabs (found in 19 specimens), which made up 23% of the combined weight of all food items.
It has very poor eyesight, so it depends on special sensory nodes that run in a line on the body from head to tail. It is capable of sensing the slightest vibrations around it with the help of these nodes. Based on a captive study, most activity occurs from early evening to early night. Most individuals stop feeding at water temperatures above and feeding ceases almost entirely at . Temperatures of are lethal to Chinese giant salamanders.
Adult Chinese giant salamanders and maturing individuals with nonexistent or shrinking gill slits have developed a system for bidirectional flow suction feeding under water. They start by moving toward their prey very slowly and then, once close enough, abruptly gape their mouths open. The gaping motion causes a great increase in the velocity of the water straight ahead of them compared to water coming in from the sides of the mouth. This is possible because of their large, wide, and flat upper and lower jaws. This process causes the prey, along with a copious amount of water, to shoot back into their mouths. They then close their mouths, but leave a small gap between the upper and lower lips so that the captured water can escape.
The Chinese giant salamander catches its prey on land with an asymmetrical bite, in such a way that the force created by the jaws is maximized in the anterior region where the prey is located. After capture, they use their bite to subdue and kill their prey, both on land and in water. They lack a bone which usually lies along the upper cheek region of most salamanders, giving them a much stronger bite force. The bite force of the adult Chinese giant salamander is much stronger than that of the maturing Chinese giant salamander due to differences in cranial structure.
The Chinese giant salamander's esophagus is made up of four different layers, one of which is a strong muscular tissue used to help move food through to the stomach. The outermost layer has ciliated cells that move mucus from mucous glands over the surface of the esophagus to lubricate it and reduce friction from large foods such as whole crabs. The ciliated structure and flexibility of the Chinese giant salamander's esophagus are hypothesized to be the reason why it is capable of swallowing such large foods.
Chinese giant salamanders are also capable of fasting for several years if they need to. This is possible because of their metabolic reserves as well as their liver, which is capable of upregulating and downregulating certain proteins according to how long they have been fasting.
Breeding and lifecycle
Both sexes maintain a territory, averaging for males and for females. The reproductive cycle is initiated when the water temperature reaches and mating occurs between July and September. The female lays 400–500 eggs in an underwater breeding cavity, which is guarded by the male until the eggs hatch after 50–60 days. They have a variety of different courtship displays, including knocking bellies, leaning side-to-side, riding, mouth-to-mouth posturing, chasing, rolling over, inviting, and cohabiting. When laid, the eggs measure in diameter, but they increase to about double that size by absorbing water. When hatching, the larvae are about long, and external gills remain until a length of about at an age of 3 years. The external gills start to slowly decrease in size around 9 to 16 months; the rate at which this occurs depends on dissolved oxygen levels, breeding density, water temperature, and individual differences. Maturity is reached at an age of 5 to 6 years and a length of . The maximum age reached by Chinese giant salamanders is unknown, but it is at least 60 years based on captive individuals. Undocumented claims have been made of 200-year-old Chinese giant salamanders, but these are considered unreliable.
Distribution and habitat
The Chinese giant salamander species complex comprises five clades, several of which may merit recognition as distinct species. Their native ranges differ, but release of Chinese giant salamanders from captivity has complicated this picture. They were widespread in central, south-western, and southern China, although their range is now highly fragmented. Their range spans the area from Qinghai east to Jiangsu and south to Sichuan, Guangxi, and Guangdong; notably in the basins of the Yangtze, Yellow, and Pearl Rivers. One clade is from the Pearl River basin (at least in Guangxi), two from the Yellow River basin, one from the Yangtze River basin (at least in Chongqing and Guizhou), and the last from the Qiantang River (at least in Anhui). Two additional clades were only known from captivity (their wild range is unknown) and no samples are available for the population in the Tibetan Plateau. A 2019 study identified that the Yangtze River clade comprises the "true" A. davidianus, the Pearl River clade comprises A. sligoi, and the Qiantang clade comprises the Huangshan Mountains species (described as A. cheni in 2023). A 2022 study identified one of the two clades known only from captivity as A. jiangxiensis, found in the wild only in Jiangxi Province.
Finds in Taiwan may be the result of introduction, though their exact taxonomic identity is unknown. Chinese giant salamanders have been introduced to the Kyoto Prefecture in Japan where they present a threat to the native Japanese giant salamander, as the two hybridize. A 2024 genetic study confirmed that in spite of the recent taxonomic changes within the genus, the Chinese Andrias species introduced to Japan and hybridizing with A. japonicus is the "true" A. davidianus (the Yangtze River clade, or lineage B), although at least one genetically pure individual of the captive-only lineage U1 was also detected in the wild.
The Chinese giant salamander is entirely aquatic and lives in rocky hill streams and lakes with clear water. It typically lives in dark, muddy, or rocky crevices along the banks. It is usually found in forested regions at altitudes of , with most records between . There is an isolated population at an altitude of in Qinghai (Tibetan Plateau), but its taxonomic position is uncertain and the site likely does not support giant salamanders anymore due to pollution.
The salamanders prefer to live in streams of small width (on average, across), quick flow, and little depth (on average, deep). Water temperature varies depending on season, with typical range at low elevation sites being from and at high elevation sites from . Although they prefer to have quick flow in the stream, the burrows in which they lay their eggs often have much slower flow. Furthermore, their habitat often possesses very rocky, irregular stream beds with a lot of gravel and small rocks as well as some vegetation. Chinese giant salamanders are also known from subterranean rivers. As populations in aboveground rivers and lakes are more vulnerable to poaching, there are some parts of China where only the subterranean populations remain.
In captivity
Farming
Very large numbers are being farmed in China, but most of the breeding stock are either wild-caught or first-generation captive-bred. This is partially explained by the fact that the industry is relatively new, but some farms have also struggled to produce second-generation captive-bred offspring. Registrations showed that 2.6 million Chinese giant salamanders were kept in farms in 2011 in Shaanxi alone, far surpassing the entire countrywide wild population estimated at less than 50,000 individuals. Shaanxi farms (mainly in the Qinling Mountain region) accounted for about 70% of the total output in China in 2012, but there are also many farms in Guizhou and several in other provinces. Among 43 south Shaanxi farms surveyed, 38 bred the species in 2010 and each produced an average of c. 10,300 larvae that year. Farming of Chinese giant salamanders, herbs, and mushrooms are the three most important economic activities in Shaanxi's Qinling Mountain region, and many thousands of families rely on the giant salamander farms for income. Giant salamander farming mainly supplies the food market, but whether this can reduce the pressure on the wild populations is doubtful. Release of captive-bred Chinese giant salamanders is supported by the government (8,000 were released in Shaanxi in 2011 alone), but represents a potential risk to the remaining wild population, as diseases such as Ranavirus are known from many farms. The vast majority of the farmed Chinese giant salamanders, almost 80% based on a study published in 2018, are of Yellow River origin (the so-called haplotype B), although those from other regions also occur. Farms have generally not considered this issue when releasing giant salamanders, and Yellow River animals now dominate in some regions outside their original range, further endangering the native types. Additionally, release of untreated wastewater from farms may spread diseases to wild Chinese giant salamanders.
In zoos and aquariums
As of early 2008, Species360 records show only five individuals held in US zoos (Zoo Atlanta, Cincinnati Zoo, and Saint Louis Zoological Park) and an additional four in European zoos (Dresden Zoo and Rotterdam Zoo), as well as one in the State Museum of Natural History Karlsruhe, where it is also the museum's mascot.
As of 2019, London Zoo holds four individuals (one of them on display) that were seized from an illegal importation of amphibians in 2016. A medium-sized individual, approximately long, was kept for several years at the Steinhart Aquarium in San Francisco, California, and is now on display again in the "Water Planet" section of the new California Academy of Sciences building. There are also two in residence at the Los Angeles Zoo. Additional individuals are likely kept in non-Species360 zoos and animal parks in its native China, such as Shanghai Zoo. Several of them are kept in the aquaria of Shanghai and Xi'an. The Osaka Aquarium Kaiyukan in Japan has both a Chinese and a Japanese giant salamander on display, as does the Saitama aquarium in Hanyū, Saitama. The Ueno Zoological Gardens also has a Chinese giant salamander on display.
Since May 2014, 33 Chinese giant salamanders, including three adults, have been held in Prague Zoo. The main attraction is the largest individual in Europe, which is long.
Decline in population
In the past, the Chinese giant salamander was fairly common and widespread in China. Since the 1950s, the population has declined rapidly due to habitat destruction and overhunting. It has been listed as Critically Endangered in the Chinese Red Book of Amphibians and Reptiles. Despite the Chinese government listing the salamander as a Class II Protected Species, 100 salamanders are hunted illegally every year in the Hupingshan National Nature Reserve alone. Since the 1980s, 14 nature reserves have been established in an effort to conserve the species. Despite this, the population continues to decline, with the salamanders becoming increasingly difficult to find. In a recent survey of the species in Qinghai Province, none were found, indicating that the population is either extremely small or locally extinct in the province. This is believed to be due to increased mining in the region.
In recent years, populations have also declined due to an epizootic Ranavirus infection. The disease causes severe hemorrhaging in both juvenile and adult salamanders. The virus was named the Chinese giant salamander iridovirus (GSIV).
Its natural range has suffered in the past few decades due to habitat loss and overharvesting. Consequently, many salamanders are now farmed in mesocosms across China. Furthermore, previously built concrete dams that destroyed the salamander's habitat are now fitted with stairs so that the animal can easily navigate the dam and make it back to its niche.
The Chinese giant salamander is listed as a critically endangered species. It has experienced a drastic population decline, estimated at more than 80% over the last three generations and attributable to human causes. Human consumption is the main threat to the Chinese giant salamander, which is considered a luxury food item and a source of traditional medicines in China.
Habitat destruction
According to a recent study, 90% of the Chinese giant salamander's habitat was destroyed by the year 2000, and there are many human-related causes of such massive destruction. Because the salamander dwells in free-flowing streams, industrialization is a large problem for many stream-dwelling species. The construction of dams greatly disturbs their habitat by causing these streams to dry up or stand still, making them uninhabitable by the salamanders. Siltation also contributes to the degradation of their habitats by soiling the water. Deforestation in areas near the streams can worsen soil erosion and create runoff into the streams as well, which greatly reduces the water quality. The reduced water quality makes it much more difficult for the salamanders to absorb oxygen through their skin and is often fatal.
Water pollution is also a major factor in the habitat destruction of the Chinese giant salamander; alongside over-hunting and failed conservation efforts, the immense decline in their population can be traced to the tainting of the water they live in. Mining activity in areas near their streams often causes runoff that sullies the water, and farming, with the pesticides and chemicals it introduces to the soil, has a strongly negative effect on the areas near the streams as well. The presence of macronutrients in the streams can also cause algal blooms, which cloud the water and force the temperature to rise. The salamanders reside primarily in very cold underwater cavities and have specific nesting requirements, meaning that they will only reproduce and care for their eggs in such areas, so changes in temperature are highly detrimental to their health and to their perpetuation as a species. These algal blooms also deplete the levels of oxygen in the water, and a reduced oxygen supply can easily kill off many members of the dwindling species.
Many efforts have been undertaken to create reserves and artificial habitats for the Chinese giant salamander so that it can reproduce without the threat of polluted water, but many of these reserves have had little overall impact due to massive overhunting of the species. However many individuals the reserves manage to save, poachers capture and kill as many more. Although habitat destruction certainly does not help the species, it is not the biggest obstacle the Chinese giant salamander faces in avoiding extinction.
Climate change
Like other amphibians, the Chinese giant salamander is ectothermic. Most Chinese giant salamanders stop feeding at water temperatures above and feeding ceases almost entirely at . Temperatures of are lethal to Chinese giant salamanders. As a consequence, the species is vulnerable to global warming.
Overhunting
One of the main reasons that the Chinese giant salamander, Andrias davidianus, has been placed on the critically endangered list by the International Union for Conservation of Nature is overhunting. 75% of native species in China are harvested for food. The salamander is also used for traditional medicinal purposes. In 1989, the Chinese government placed the salamander under legal protection (Category II under the Wild Animal Protection Law of China, due to its population decline, and Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora).
But salamander populations have continued to decline. The domestic demand for salamander meat and body parts greatly exceeds what can sustainably be harvested from the wild. Commercial captive breeding operations so far still rely on the regular introduction of new wild-caught breeding adults, because captive-bred animals have proven difficult to mate. In addition, salamander farms would need to increase their yield manyfold before the black-market price of poached salamander dropped significantly, meaning that stricter enforcement of anti-poaching laws remains essential for the Chinese giant salamander.
China's penalty for illegally hunting these creatures is very low, coming to only 50 yuan, or about US$6, a small fraction of the black-market price. Establishments such as restaurants can charge up to US$250–US$400 per kilogram.
A hunting tool known as a bow hook is one of the preferred methods used by hunters to catch the salamander. This hunting tool is made with a combination of bamboo and sharp hooks baited with frogs or smaller fish. This is used to capture the salamander and keep it alive. Some hunters use pesticides to kill the salamander. Farmers often poach wild salamanders to stock their breeding programs, while others are hunted as food.
In a 2018 study, the Zoological Society of London and the Kunming Institute of Zoology in China reported on their surveys for giant salamanders in 16 Chinese provinces over four years. The researchers had been unable to confirm survival of wild salamanders at any of the 97 sites they surveyed. The study also brought up worries that commercial farms and conservation programs were crossbreeding what they described as five distinct species of Chinese giant salamanders. All the wild populations studied were found "critically depleted or extirpated" by the study. A related study found that some of the five distinct genetic lineages were probably already extinct in the wild. However, the exhaustiveness of these surveys was questioned in a 2022 study by Chai et al., who noted that over a third of the surveys had been performed only in Guizhou Province, and another third of the surveys had been performed in provinces that were only selected by habitat suitability modeling and had no actual historic records of giant salamanders. Based on this, the extent of extirpation of Chinese Andrias remains uncertain, especially as a natural population of Andrias jiangxiensis was discovered during the Chai et al. study.
| Biology and health sciences | Salamanders and newts | Animals |
596383 | https://en.wikipedia.org/wiki/Slow%20cooker | Slow cooker | A slow cooker, also known as a crock-pot (after a trademark owned by Sunbeam Products but sometimes used generically in the English-speaking world), is a countertop electrical cooking appliance used to simmer at a lower temperature than other cooking methods, such as baking, boiling, and frying. This facilitates unattended cooking for many hours of dishes that would otherwise be boiled: pot roast, soups, stews and other dishes (including beverages, desserts and dips).
History
Slow cookers achieved popularity in the US during the 1970s, when many women began to work outside the home. They could start dinner cooking in the morning before going to work and finish preparing the meal in the evening when they came home.
The Naxon Utilities Corporation of Chicago, under the leadership of electrical engineer Irving Naxon (born Irving Nachumsohn), developed the Naxon Beanery All-Purpose Cooker for cooking a bean meal. Naxon was inspired by a story his mother told of how, back in her native Lithuanian town, his grandmother made a traditional Jewish stew called cholent, which took several hours to cook in an oven. A 1950 advertisement shows a slow cooker called the "Simmer Crock" made by the Industrial Radiant Heat Corp. of Gladstone, NJ.
The Rival Company from Sedalia, Missouri, bought Naxon in 1970, acquiring Naxon's 1940 patent for the bean simmer cooker. Rival asked inventor Alex MacMaster, from Boonville, Missouri, to develop Naxon's bean cooker into a large-scale production model that could cook an entire family meal rather than just beans. MacMaster also designed and produced the mass-production machines for Rival's Crock-Pot manufacturing line. The cooker was then reintroduced under the name "Crock-Pot" in 1971. In 1974, Rival introduced removable stoneware inserts, making the appliance easier to clean. The Crock-Pot brand now belongs to Newell Brands.
Other brands of this appliance include Cuisinart, GE, Hamilton Beach, KitchenAid, Magic Chef, West Bend Housewares, and the now defunct American Electric Corporation.
Design
A basic slow cooker consists of a lidded round or oval cooking pot made of glazed ceramic or porcelain, surrounded by a housing, usually metal, containing an electric heating element. The lid itself is often made of glass, and seated in a groove in the pot edge; condensed vapor collects in the groove and provides a low-pressure seal to the atmosphere. The contents of a crock pot are effectively at atmospheric pressure, despite the water vapor generated inside the pot. A slow cooker is quite different from a pressure cooker and presents no danger of an abrupt pressure release.
The "crock", or ceramic pot, itself acts as both a cooking container and a heat reservoir. Slow cookers come in capacities from to . Because the heating elements are generally located at the bottom and often also partway up the sides, most slow cookers have a minimum recommended liquid level to avoid uncontrolled heating. Some newer models have coated aluminum or steel "crocks" which, while not as efficient as ceramic at retaining heat, do allow for quicker heating and cooling as well as the ability to use the "crock" on the stove top to brown meat prior to cooking.
Many slow cookers have two or more heat settings (e.g., low, medium, high, and sometimes a "keep warm" setting); some have continuously variable power. In the past, most slow cookers had no temperature control and delivered constant heat to the contents. The temperature of the contents rises until it reaches boiling point, at which point the energy goes into gently boiling the liquid closest to the hot surface. At a lower setting, it may just simmer at a temperature below the boiling point. While many basic slow cookers still operate in this manner, newer models have computerized controls for precise temperature control, delayed cooking starts, and control via a computer or mobile device.
Operation
To use a slow cooker, the cook places raw food and a liquid, such as stock, water, or wine, in the slow cooker. Some recipes call for pre-heated liquid. The cook puts the lid on the slow cooker and turns it on. Some cookers automatically switch from cooking to warming (maintaining the temperature at ) after a fixed time or after the internal temperature of the food, as determined by a probe, reaches a specified value.
The heating element heats the contents to a steady temperature in the range. The contents are enclosed by the crock and the lid, and attain an essentially constant temperature. The vapor that is produced at this temperature condenses on the bottom of the lid and returns as liquid, into which some water-soluble vitamins are leached.
The liquid transfers heat from the pot walls to its contents, and also distributes flavors. The slow cooker's lid is essential to prevent the warm vapor from escaping, taking heat with it and cooling the contents.
Basic cookers, which have only high, medium, low, or keep warm settings, must be turned on and off manually. More advanced cookers have computerized timing devices that let a cook program the cooker to perform multiple operations (e.g., two hours high, followed by two hours low, followed by warm) and to delay the start of cooking.
Because food cooked in a slow cooker stays warm for a long time after it is switched off, people can use the slow cookers to take food elsewhere to eat without reheating. Some slow cookers have lids that seal to prevent their contents from spilling during transport.
Recipes
Recipes intended for other cooking methods must be modified for slow cookers. Quantities of liquids may need adjustment, as there is little evaporation, but there should be enough liquid to cover the food. Many published recipes for slow cookers are designed primarily for convenience, use few ingredients, and often call for prepared sauces or seasonings. The long, moist cooking is particularly suitable for tough and cheap cuts of meat, including pork shoulder, beef chuck, and brisket. For many slow-cooked dishes, these cuts give better results than more expensive ones. Slow cookers are also often used to cook while unattended, meaning the cook can fill the pot with its ingredients and come back several hours later to a ready meal.
Advantages
Cheaper cuts of meat with connective tissue and lean muscle fibers are suitable for stewing, and produce tastier stews than those using expensive cuts, as long slow cooking softens connective tissue without toughening the muscle. Slow cooking leaves gelatinized tissue in the meat, so that it may be advantageous to start with a richer liquid.
The low temperature of slow-cooking makes it almost impossible to burn even food that has been cooked too long. However, some meats and most vegetables become nearly tasteless or "raggy" if over-cooked.
Food can be set to slow-cook before leaving for the day so it is ready on return. Many homeowners with rooftop solar panels switch to slow cooking because it draws under 1 kW of power and can therefore be powered entirely by 1–2 kW panels during the day. Some models include timers or thermostats that bring food to a given temperature and then lower it. With a timerless cooker it is possible to use an external timer to stop cooking after a set time, or both to start and stop.
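The power arithmetic behind the solar claim is easy to check. The short sketch below uses hypothetical but typical figures (a 200 W "low" setting and an 8-hour cook), since the text itself gives only the under-1 kW bound:

# Hypothetical figures: a slow cooker drawing 200 W on its low setting for 8 hours.
power_kw = 0.2
hours = 8.0
print(f"energy used: {power_kw * hours:.1f} kWh")  # 1.6 kWh per meal

# Even at a third of its rated output, a 1.5 kW rooftop array (0.5 kW)
# more than covers the cooker's 0.2 kW draw during daylight.
print(1.5 / 3.0 >= power_kw)  # True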
Cooking the meal in a single pot reduces water waste resulting from cleaning multiple dishes, and the low cooking temperature and glazed pot make cleaning easier than conventional high-heat pots.
Disadvantages
Some vitamins and other trace nutrients are lost, particularly from vegetables, partially by enzyme action during cooking and partially due to heat degradation. When vegetables are cooked at higher temperatures these enzymes are rapidly denatured and have less time to act during cooking. Since slow cookers work at temperatures well below boiling point and do not rapidly denature enzymes, vegetables tend to lose trace nutrients. Blanched vegetables, having been exposed to very hot water, have already had these enzymes rendered largely ineffective, so a blanching or sauteing pre-cook stage leaves more vitamins intact. This is often a smaller nutrient loss than over-boiling and can be lessened to an extent by not removing the lid until the food is done.
Slow cookers do not provide sufficient heat to compensate for the loss of moisture and heat due to frequent removal of the lid, e.g., to add and remove food in perpetual stews (pot-au-feu, olla podrida). Added ingredients must be given time to cook before the food can be eaten.
Hazards
Scalding
Slow cookers are less dangerous than ovens or stove tops due to their lower operating temperatures and closed lids. However, they still contain a large amount of foods and liquids at temperatures close to boiling, and they can cause serious scalds if spilled.
Poisoning concerns
Slow cookers should not be used to cook dried kidney beans and other legume seeds. These foods contain the highly toxic lectin phytohemagglutinin, making as few as four raw beans toxic. This lectin is only deactivated by long soaking, then boiling in fresh water at for at least thirty minutes. Information published by the United States Food and Drug Administration states that slow cookers should not be used to cook bean-containing dishes. Commercially canned beans are fully cooked and are safe to use. Pressure cooking also deactivates the lectins.
| Technology | Household appliances | null |
596405 | https://en.wikipedia.org/wiki/Collider | Collider | A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. Compared to other particle accelerators in which the moving particles collide with a stationary matter target, colliders can achieve higher collision energies. Colliders may either be ring accelerators or linear accelerators.
Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for extremely short periods of time, and therefore may be hard or impossible to study in other ways.
Explanation
In particle physics, one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and guiding them to collide with other particles. For sufficiently high energy, a reaction occurs that transforms the particles into other particles. Detecting these products gives insight into the physics involved.
To do such experiments there are two possible setups:
Fixed target setup: A beam of particles (the projectiles) is accelerated with a particle accelerator, and as collision partner, one puts a stationary target into the path of the beam.
Collider: Two beams of particles are accelerated and the beams are directed against each other, so that the particles collide while flying in opposite directions.
The collider setup is harder to construct but has the great advantage that according to special relativity the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light.
In the case of a collider where the collision point is at rest in the laboratory frame (i.e. $\vec{p}_1 + \vec{p}_2 = 0$), the center of mass energy $E_{\mathrm{cm}}$ (the energy available for producing new particles in the collision) is simply $E_{\mathrm{cm}} = E_1 + E_2$, where $E_1$ and $E_2$ are the total energies of a particle from each beam.
For a fixed target experiment where particle 2 is at rest, $E_{\mathrm{cm}} = \sqrt{m_1^2 c^4 + m_2^2 c^4 + 2 E_1 m_2 c^2}$.
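To make the contrast concrete, here is a minimal Python sketch of both formulas in natural units (energies in GeV, c = 1), assuming proton beams with a rest energy of about 0.938 GeV; the 6.5 TeV beam energy matches the 13 TeV LHC figure quoted below:

import math

M_P = 0.938272  # proton rest energy, GeV

def e_cm_collider(e1, e2):
    # Head-on beams whose momenta cancel: all beam energy is available.
    return e1 + e2

def e_cm_fixed_target(e1, m1=M_P, m2=M_P):
    # Particle 2 at rest: most of the beam energy goes into motion
    # of the center of mass rather than into producing new particles.
    return math.sqrt(m1**2 + m2**2 + 2.0 * e1 * m2)

print(e_cm_collider(6500.0, 6500.0))   # 13000.0 GeV = 13 TeV
print(e_cm_fixed_target(6500.0))       # ~110 GeV, over 100x less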
History
The first serious proposal for a collider originated with a group at the Midwestern Universities Research Association (MURA). This group proposed building two tangent radial-sector FFAG accelerator rings. Tihiro Ohkawa, one of the authors of the first paper, went on to develop a radial-sector FFAG accelerator design that could accelerate two counterrotating particle beams within a single ring of magnets. The third FFAG prototype built by the MURA group was a 50 MeV electron machine built in 1961 to demonstrate the feasibility of this concept.
Gerard K. O'Neill proposed using a single accelerator to inject particles into a pair of tangent storage rings. As in the original MURA proposal, collisions would occur in the tangent section. The benefit of storage rings is that the storage ring can accumulate a high beam flux from an injection accelerator that achieves a much lower flux.
The first electron-positron colliders were built in the late 1950s and early 1960s, in Italy at the Istituto Nazionale di Fisica Nucleare in Frascati near Rome by the Austrian-Italian physicist Bruno Touschek, and in the US by the Stanford-Princeton team that included William C. Barber, Bernard Gittelman, Gerry O'Neill, and Burton Richter. Around the same time, the VEP-1 electron-electron collider was independently developed and built under the supervision of Gersh Budker at the Institute of Nuclear Physics in Novosibirsk, USSR. The first observations of particle reactions in the colliding beams were reported almost simultaneously by the three teams from mid-1964 to early 1965.
In 1966, work began on the Intersecting Storage Rings at CERN, and in 1971, this collider was operational. The ISR was a pair of storage rings that accumulated and collided protons injected by the CERN Proton Synchrotron. This was the first hadron collider, as all of the earlier efforts had worked with electrons or with electrons and positrons.
In 1968, construction began at Fermilab on what was to be the highest-energy proton accelerator complex. It was eventually upgraded to become the Tevatron collider, and in October 1985 the first proton-antiproton collisions were recorded at a center of mass energy of 1.6 TeV, making it the highest-energy collider in the world at the time. The energy later reached 1.96 TeV, and by the end of operation in 2011 the collider luminosity exceeded 430 times its original design goal.
Since 2009, the highest-energy collider in the world has been the Large Hadron Collider (LHC) at CERN. It currently operates at 13 TeV center of mass energy in proton-proton collisions. More than a dozen future particle collider projects of various types - circular and linear, colliding hadrons (proton-proton or ion-ion), leptons (electron-positron or muon-muon), or electrons and ions/protons - are currently under consideration for detailed exploration of Higgs/electroweak physics and discoveries at the post-LHC energy frontier.
Operating colliders
Source: information taken from the Particle Data Group website.
| Physical sciences | Devices | Physics |
596419 | https://en.wikipedia.org/wiki/Amphicyonidae | Amphicyonidae | Amphicyonidae is an extinct family of terrestrial carnivorans belonging to the suborder Caniformia. They first appeared in North America in the middle Eocene (around 45 mya), spread to Europe by the late Eocene (35 mya), and further spread to Asia and Africa by the early Miocene (23 mya). They had largely disappeared worldwide by the late Miocene (5 mya), with the latest recorded species at the end of the Miocene in Africa. They were among the first carnivorans to evolve large body size. Amphicyonids are colloquially referred to as "bear-dogs".
Taxonomy
The family was erected by Haeckel in 1866 (also attributed to Trouessart 1885). Their exact position has long been disputed. Early paleontologists usually defined them as members of Canidae (the dog family) or Ursidae (the bear family), but the modern consensus is that they form their own family. Some researchers have defined it as the sister clade to ursids, based on morphological analysis of the ear region. However, cladistic analysis and reclassification of several species of early carnivore as amphicyonids has strongly suggested that they may be basal caniforms, a lineage older than the origin of both bears and dogs.
Amphicyonids should not be confused with the similar looking (and similarly nicknamed) "dog-bears", a more derived group of caniforms that is sometimes classified as a family (Hemicyonidae), but is more often considered a primitive subfamily of ursids (Hemicyoninae). They should also not be confused with Amphicynodontidae (another family of extinct caniforms which were related to bears or pinnipeds) or Arctocyonidae (a family of "condylarths" which literally translates to "bear-dogs").
Description
Amphicyonids ranged in size from as small as and as large as and evolved from wolf-like to bear-like body forms.
Skull
Amphicyonids tended to have relatively large skulls, with the snout shorter than the rear portion of the cranium. In some large members of the family, such as Amphicyon, the back of the skull develops a sharp sagittal crest which defines attachment points for large jaw muscles.
Amphicyonids had a relatively rudimentary form of auditory bulla, a bony sheath which encases the middle ear cavity. The bulla is small, mostly formed by the crescent-shaped ectotympanic bone below the middle ear. The entotympanics only make a minor contribution whenever they are ossified, which only becomes commonplace in Miocene amphicyonids. In these regards, amphicyonids are similar to living bears, otters, walruses, eared seals, and the red panda. The bulla also helps to distinguish the evolutionary trajectory of amphicyonids: early bears such as Cephalogale have large bullae which are reduced through the course of their evolution, while dogs start out with large bullae which persist through their entire existence. Amphicyonids differ from both dogs and bears in that they start with a small bulla which gradually becomes more strongly developed later in their evolution.
Teeth
Like most carnivorans, amphicyonid teeth were adapted for carnivory, with large canines near the front and shearing carnassials at the back of the jaw. Amphicyonids were typically mesocarnivorous (majority meat-eating, like dogs) or hypercarnivorous (entirely meat-eating, like cats), and some were adapted for tough abrasive food. Only two small Miocene amphicyonines, Pseudarctos and Ictiocyon, show any evidence for a hypocarnivorous (majority plant-eating) diet.
At the start of their evolution, amphicyonids retained the typical placental dental formula of 3.1.4.3, but each subfamily follows its own trend in modifying its teeth. Daphoenines, for example, have dog-like teeth, with substantial premolars and reduced second and third molars. Temnocyonines and haplocyonines take this approach even further, with massive crushing premolars akin to hyenas. Amphicyonines follow the opposite path, reducing most premolars and greatly enlarging and strengthening the carnassials and second molar. Bears also have large molars, but their teeth are modified into wide rectangular forms for grinding plant material. Amphicyonids did not pursue the same adaptations; their upper molars always maintain a roughly triangular profile for shearing and crushing meat. Thaumastocyonines were the most specialized for hypercarnivory, emphasizing massive blade-like carnassials at the expense of the rest of their postcanine teeth.
Fossils of juvenile Agnotherium, Ischyrocyon, and Magericyon all show an unusual type of tooth eruption in which there is a vulnerable stage at about two or three years of age where the subadult animal has no functional molar or carnassial teeth, the only functional cheek teeth being several milk premolars. This period is suggested to be short and would have left the animal somewhat vulnerable.
Postcrania
Many amphicyonids had cat-like bodies, with a long tail and relatively short, strong limbs suitable for stalking and pouncing on their prey. Later and larger species tended to be plantigrade or semiplantigrade, walking with most or all of the surface of the foot against the ground like bears. This was the norm for amphicyonines, thaumastocyonines, and most daphoenines. It is entirely possible that the largest amphicyonids were capable of both bear-style hunting (chasing down and mauling their prey with teeth and claws) and cat-style hunting (a quick ambush where the prey is killed with a bite to the neck).
Many amphicyonid lineages instead adopted a digitigrade posture and locomotion (walking on their toes) and long legs specialized for running with a primarily front-to-back arc of movement. These cursorial wolf- or hyena-like forms included temnocyonines, haplocyonines, and some species of the large daphoenine Daphoenodon.
Evolution
It has long been uncertain where amphicyonids originated. It was thought that they may have crossed from Europe to North America during the Miocene epoch, but recent research suggests a possible North American origin from the miacids Miacis cognitus and M. australis (now renamed as the genera Gustafsonia and Angelarctocyon, respectively). As these are of North American origin, but appear to be early amphicyonids, it may be that the Amphicyonidae actually originates in North America.
Other New World amphicyonids include the oldest known amphicyonid, Daphoenus (37–16 Mya).
Amphicyonids began to decline in the late Miocene, and disappeared by the end of the epoch. The exact reasons for this are unclear. The most recent known amphicyonid remains are teeth known from the Dhok Pathan horizon, northern Pakistan, dating to 7.4-5.3 mya. The species is classically named Arctamphicyon lydekkeri, which may actually be synonymous with a species of Amphicyon.
Ecology
Amphicyonids are suggested to have ranged in ecology from omnivores to hypercarnivores, with some amphicyonids suggested to have engaged in bone-crushing like some modern hyenas. At least some amphicyonids are suggested to have been solitary hunters.
Classification
Family Amphicyonidae
| Biology and health sciences | Other carnivora | Animals |
596706 | https://en.wikipedia.org/wiki/Gas%20chromatography | Gas chromatography | Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture. In preparative chromatography, GC can be used to prepare pure compounds from a mixture.
Gas chromatography is also sometimes known as vapor-phase chromatography (VPC), or gas–liquid partition chromatography (GLPC). These alternative names, as well as their respective abbreviations, are frequently used in scientific literature.
Gas chromatography is the process of separating compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert gas or an unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside of a separation column. Today, most GC columns are fused silica capillaries with an inner diameter of and a length of . The GC column is located inside an oven where the temperature of the gas can be controlled and the effluent coming off the column is monitored by a suitable detector.
Operating principle
A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature controlled oven. As the chemicals exit the end of the column, they are detected and identified electronically.
History
Background
Chromatography dates to 1903 in the work of the Russian scientist, Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography.
Invention
The invention of gas chromatography is generally attributed to Anthony T. James and Archer J.P. Martin. Their gas chromatograph used partition chromatography as the separating principle, rather than adsorption chromatography. The popularity of gas chromatography quickly rose after the development of the flame ionization detector.
Martin and another one of their colleagues, Richard Synge, with whom he shared the 1952 Nobel Prize in Chemistry, had noted in an earlier paper that chromatography might also be used to separate gases. Synge pursued other work while Martin continued his work with James.
Gas adsorption chromatography precursors
German physical chemist Erika Cremer in 1947 together with Austrian graduate student Fritz Prior developed what could be considered the first gas chromatograph that consisted of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. They exhibited the chromatograph at ACHEMA in Frankfurt, but nobody was interested in it.
N.C. Turner with the Burrell Corporation introduced in 1943 a massive instrument that used a charcoal column and mercury vapors. Stig Claesson of Uppsala University published in 1946 his work on a charcoal column that also used mercury.
Gerhard Hesse, while a professor at the University of Marburg/Lahn decided to test the prevailing opinion among German chemists that molecules could not be separated in a moving gas stream. He set up a simple glass column filled with starch and successfully separated bromine and iodine using nitrogen as the carrier gas. He then built a system that flowed an inert gas through a glass condenser packed with silica gel and collected the eluted fractions.
Courtenay S. G. Phillips of Oxford University investigated separation in a charcoal column using a thermal conductivity detector. He consulted with Claesson and decided to use displacement as his separating principle. After learning about the results of James and Martin, he switched to partition chromatography.
Column technology
Early gas chromatography used packed columns made of tubing 1–5 m long and 1–5 mm in diameter, filled with particles. The resolution of packed columns was improved by the invention of the capillary column, in which the stationary phase is coated on the inner wall of the capillary.
Physical components
Autosamplers
The autosampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but is no longer common. Automatic insertion provides better reproducibility and time-optimization. Different kinds of autosamplers exist. Autosamplers can be classified in relation to sample capacity (auto-injectors vs. autosamplers, where auto-injectors can work a small number of samples), to robotic technologies (XYZ robot vs. rotating robot – the most common), or to analysis:
Liquid
Static head-space by syringe technology
Dynamic head-space by transfer-line technology
Solid phase microextraction (SPME)
Inlets
The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head.
Common inlet types are:
S/SL (split/splitless) injector – a sample is introduced into a heated small chamber via a syringe through a septum; the heat facilitates volatilization of the sample and sample matrix. The carrier gas then sweeps either the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent (a worked split-ratio sketch follows this list). Split injection is preferred when working with samples with high analyte concentrations (>0.1%), whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%). In splitless mode the split valve opens after a pre-set amount of time to purge heavier elements that would otherwise contaminate the system. This pre-set (splitless) time should be optimized: a shorter time (e.g., 0.2 min) ensures less tailing but a loss in response, while a longer time (e.g., 2 min) increases tailing but also signal.
On-column inlet – the sample is here introduced directly into the column in its entirety without heat, or at a temperature below the boiling point of the solvent. The low temperature condenses the sample into a narrow zone. The column and inlet can then be heated, releasing the sample into the gas phase. This ensures the lowest possible temperature for chromatography and keeps samples from decomposing above their boiling point.
PTV injector – Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 μL) in capillary GC. Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the programmed temperature vaporising injector; PTV. By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented.
Gas source inlet or gas switching valve – gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream.
P/T (purge-and-trap) system – An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port.
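To illustrate the split-mode arithmetic mentioned in the S/SL entry above, here is a minimal sketch. The flow values are hypothetical, chosen only to show that the fraction of sample reaching the column is the column flow divided by the total flow:

def on_column_fraction(column_flow, split_vent_flow):
    # Fraction of the vaporized sample that enters the column in split mode.
    return column_flow / (column_flow + split_vent_flow)

# Hypothetical flows: 1 mL/min through the column, 50 mL/min out the split
# vent (a "50:1" split). Of a 1 uL injection, only about 2% reaches the column.
frac = on_column_fraction(1.0, 50.0)
print(f"fraction on column: {frac:.3f}")            # ~0.020
print(f"effective injection: {1.0 * frac:.3f} uL")  # ~0.020 uL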
The choice of carrier gas (mobile phase) is important. Hydrogen has a range of flow rates that are comparable to helium in efficiency. However, helium may be more efficient and provide the best separation if flow rates are optimized. Helium is non-flammable and works with a greater number of detectors and older instruments. Therefore, helium is the most common carrier gas used. However, the price of helium has gone up considerably over recent years, causing an increasing number of chromatographers to switch to hydrogen gas. Historical use, rather than rational consideration, may contribute to the continued preferential use of helium.
Detectors
Commonly used detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). While TCDs are beneficial in that they are non-destructive, their relatively high detection limit for most analytes limits widespread use. FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCDs. FIDs cannot detect water or carbon dioxide, which makes them ideal for environmental organic analyte analysis. FID is two to three times more sensitive to analyte detection than TCD.
The TCD relies on the thermal conductivity of matter passing around a thin wire of tungsten-rhenium with a current traveling through it. In this setup, helium or nitrogen serves as the carrier gas because of its relatively high thermal conductivity, which keeps the filament cool and maintains uniform resistivity and electrical efficiency of the filament. When analyte molecules elute from the column, mixed with carrier gas, the thermal conductivity decreases while the filament temperature and resistivity increase, resulting in fluctuations in voltage and ultimately a detector response. Detector sensitivity is proportional to filament current, while it is inversely proportional to the immediate environmental temperature of the detector as well as the flow rate of the carrier gas.
In a flame ionization detector (FID), electrodes are placed adjacent to a flame fueled by hydrogen/air near the exit of the column, and when carbon-containing compounds exit the column they are pyrolyzed by the flame. This detector works only for organic/hydrocarbon-containing compounds due to the ability of the carbons to form cations and electrons upon pyrolysis, which generates a current between the electrodes. The increase in current is translated and appears as a peak in a chromatogram. FIDs have low detection limits (a few picograms per second) but they are unable to generate ions from carbonyl-containing carbons. FID-compatible carrier gases include helium, hydrogen, nitrogen, and argon.
In FID, the stream is sometimes modified before entering the detector. A methanizer converts carbon monoxide and carbon dioxide into methane so that they can be detected. A different technology is the polyarc, by Activated Research Inc, which converts all compounds to methane.
Alkali flame detector (AFD), or alkali flame ionization detector (AFID), has high sensitivity to nitrogen and phosphorus, similar to NPD. However, the alkaline metal ions are supplied with the hydrogen gas, rather than by a bead above the flame. For this reason AFD does not suffer the "fatigue" of the NPD, but provides constant sensitivity over a long period of time. In addition, when alkali ions are not added to the flame, AFD operates like a standard FID. A catalytic combustion detector (CCD) measures combustible hydrocarbons and hydrogen. A discharge ionization detector (DID) uses a high-voltage electric discharge to produce ions.
Flame photometric detector (FPD) uses a photomultiplier tube to detect spectral lines of the compounds as they are burned in a flame. Compounds eluting off the column are carried into a hydrogen-fueled flame which excites specific elements in the molecules, and the excited elements (P, S, halogens, some metals) emit light of specific characteristic wavelengths. The emitted light is filtered and detected by a photomultiplier tube. In particular, phosphorus emission is around 510–536 nm and sulfur emission is at 394 nm. With an atomic emission detector (AED), a sample eluting from a column enters a chamber which is energized by microwaves that induce a plasma. The plasma causes the analyte sample to decompose and certain elements to generate atomic emission spectra. The atomic emission spectra are diffracted by a diffraction grating and detected by a series of photomultiplier tubes or photodiodes.
The electron capture detector (ECD) uses a radioactive beta particle (electron) source to measure the degree of electron capture. ECDs are used for the detection of molecules containing electronegative or electron-withdrawing elements and functional groups, such as halogens, carbonyls, nitriles, nitro groups, and organometallics. In this type of detector, either nitrogen or 5% methane in argon is used as the mobile-phase carrier gas. The carrier gas passes between two electrodes placed at the end of the column; adjacent to the cathode (negative electrode) sits a radioactive foil such as 63Ni. The foil emits beta particles (electrons) which collide with and ionize the carrier gas, generating a standing current. When analyte molecules bearing electronegative elements or electron-withdrawing functional groups elute, they capture electrons, decreasing the current and producing a detector response.
Nitrogen–phosphorus detector (NPD), a form of thermionic detector where nitrogen and phosphorus alter the work function on a specially coated bead and a resulting current is measured.
Dry electrolytic conductivity detector (DELCD) uses an air phase and high temperature (v. Coulsen) to measure chlorinated compounds.
Mass spectrometer (MS), also called GC-MS; highly effective and sensitive, even for small quantities of sample. This detector can be used to identify the analytes in chromatograms by their mass spectra. Some GC-MS instruments are connected to an NMR spectrometer which acts as a backup detector; this combination is known as GC-MS-NMR. Some GC-MS-NMR instruments are in turn connected to an infrared spectrophotometer acting as a further backup detector, a combination known as GC-MS-NMR-IR. It must be stressed, however, that this is very rare, as most analyses can be concluded via GC-MS alone.
Vacuum ultraviolet (VUV) represents the most recent development in gas chromatography detectors. Most chemical species absorb and have unique gas phase absorption cross sections in the approximately 120–240 nm VUV wavelength range monitored. Where absorption cross sections are known for analytes, the VUV detector is capable of absolute determination (without calibration) of the number of molecules present in the flow cell in the absence of chemical interferences.
Olfactometric detector, also called GC-O, uses a human assessor to analyse the odour activity of compounds. With an odour port or a sniffing port, the quality of the odour, the intensity of the odour and the duration of the odour activity of a compound can be assessed.
Other detectors include the Hall electrolytic conductivity detector (ElCD), helium ionization detector (HID), infrared detector (IRD), photo-ionization detector (PID), pulsed discharge ionization detector (PDD), and thermionic ionization detector (TID).
Methods
The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required.
Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see above) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development.
Carrier gas selection and flow rates
Typical carrier gases include helium, nitrogen, argon, and hydrogen. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples the carrier is also selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection.
The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. The most common purity grades required by modern instruments for the majority of sensitivities are 5.0 grades, or 99.999% pure, meaning that there is a total of 10 ppm of impurities in the carrier gas that could affect the results. The highest purity grades in common use are 6.0 grades, but the need for detection at very low levels in some forensic and environmental applications has driven the need for carrier gases at 7.0 grade purity, and these are now commercially available. Trade names for typical purities include "Zero Grade", "Ultra-High Purity (UHP) Grade", "4.5 Grade" and "5.0 Grade".
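As a rough illustration of the grade arithmetic (the helper names and the conversion convention below are ours, not a standard API):

```python
def grade_to_purity_percent(grade: str) -> float:
    """'5.0' -> 99.999, '4.5' -> 99.995, '6.0' -> 99.9999 (percent).
    Convention: the digit before the point is the number of nines;
    a nonzero digit after the point is appended as the final digit."""
    nines, last = grade.split(".")
    digits = "9" * int(nines) + (last if last != "0" else "")
    return float(digits[:2] + "." + digits[2:])

def impurity_ppm(purity_percent: float) -> float:
    """Total impurities in parts per million (1% = 10,000 ppm)."""
    return (100.0 - purity_percent) * 1e4

for g in ["4.5", "5.0", "6.0"]:
    p = grade_to_purity_percent(g)
    print(f"grade {g}: {p}% pure, {impurity_ppm(p):.1f} ppm impurities")
# grade 5.0 reproduces the 10 ppm total impurity figure quoted above
```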
The carrier gas linear velocity affects the analysis in the same way that temperature does (see above). The higher the linear velocity, the faster the analysis, but the lower the separation between analytes. Selecting the linear velocity is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature. In practice, the linear velocity is set via the carrier gas flow rate, taking the inner diameter of the column into account.
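As an illustration of that last point, the sketch below converts a target average linear velocity into the volumetric flow a modern GC would be set to deliver. It ignores gas compressibility, and the 30 cm/s figure is only a commonly quoted ballpark for helium, not a recommendation:

```python
import math

def volumetric_flow_ml_min(linear_velocity_cm_s: float, column_id_mm: float) -> float:
    """Volumetric flow rate implied by a target average linear velocity:
    F = u * pi * (d/2)^2, treating the average velocity as uniform."""
    radius_cm = (column_id_mm / 10.0) / 2.0
    area_cm2 = math.pi * radius_cm ** 2
    flow_cm3_s = linear_velocity_cm_s * area_cm2
    return flow_cm3_s * 60.0  # cm^3/s -> mL/min

# e.g. a 0.25 mm i.d. capillary at 30 cm/s:
print(round(volumetric_flow_ml_min(30.0, 0.25), 2), "mL/min")  # ~0.88
```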
With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure". The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter or a bubble flow meter, a measurement that could be involved, time-consuming, and frustrating. It was not possible to vary the pressure setting during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids.
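One common textbook form of that relation for isothermal, laminar flow of a compressible gas through an open tube (a sketch; the symbol names are ours) expresses the outlet volumetric flow rate F_o in terms of the tube radius r, length L, gas viscosity η, and inlet and outlet pressures p_i and p_o:

```latex
F_o = \frac{\pi r^4}{16\,\eta L} \cdot \frac{p_i^2 - p_o^2}{p_o}
```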
Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs.
Stationary compound selection
The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a polarity similar to that of the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns, more options are available.
Inlet types and flow rates
The choice of inlet type and injection technique depends on whether the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized:
Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known.
If a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (the most common injection technique).
Gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system.
Adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the injector (SPME applications).
Sample size and injection technique
Sample injection
The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. In the injection system of a capillary gas chromatograph, the amount injected should not overload the column, and the width of the injected plug should be small compared to the spreading caused by the chromatographic process; failure to comply with this latter requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column.
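As a back-of-the-envelope check of that 1/10 rule (all numbers here are illustrative, not from the source):

```python
def max_injection_volume_ul(flow_ml_min: float, peak_width_s: float) -> float:
    """Rule-of-thumb cap on the injected (vapor) volume: about 1/10 of
    the volume the analyte band occupies as it leaves the column."""
    band_volume_ml = flow_ml_min * (peak_width_s / 60.0)
    return band_volume_ml * 1000.0 / 10.0  # mL -> uL, then the 1/10 rule

# A 5 s wide peak at 1 mL/min occupies ~83 uL of carrier, so the injected
# vapor plug should stay below roughly 8 uL:
print(round(max_injection_volume_ul(1.0, 5.0), 1))  # ~8.3
```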
Some general requirements which a good injection technique should fulfill are that it should be possible to obtain the column's optimum separation efficiency, it should allow accurate and reproducible injections of small amounts of representative samples, it should induce no change in sample composition, it should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability, and it should be applicable for trace analysis as well as for undiluted samples.
However, there are a number of problems inherent in the use of syringes for injection. Even the best syringes claim an accuracy of only 3%, and in unskilled hands the errors are much larger. The needle may cut small pieces of rubber from the septum as it injects sample through it. These can block the needle and prevent the syringe from filling the next time it is used, and it may not be obvious that this has happened. A fraction of the sample may get trapped in the rubber, to be released during subsequent injections, which can give rise to ghost peaks in the chromatogram. There may be selective loss of the more volatile components of the sample by evaporation from the tip of the needle.
Column selection
The choice of column depends on the sample and the analytes to be measured. The main chemical attribute considered when choosing a column is the polarity of the mixture, but functional groups can also play a large part in column selection. The polarity of the sample must closely match the polarity of the column's stationary phase to increase resolution and separation while reducing run time. The separation and run time also depend on the film thickness (of the stationary phase), the column diameter and the column length.
Column temperature and temperature program
The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.)
The rate at which a sample passes through the column increases with column temperature: the higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less the analytes are separated.
In general, the column temperature is selected to compromise between the length of the analysis and the level of separation.
A method which holds the column at the same temperature for the entire analysis is called "isothermal". Most methods, however, increase the column temperature during the analysis; the initial temperature, the rate of temperature increase (the temperature "ramp"), and the final temperature together constitute the temperature program.
A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column.
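A single-ramp program can be written down as a simple piecewise function of run time; the default values below are purely illustrative, not recommended settings for any particular method:

```python
def oven_temperature(t_min: float, t0_c: float = 40.0, hold_min: float = 2.0,
                     ramp_c_min: float = 10.0, tf_c: float = 300.0) -> float:
    """Oven temperature for a simple single-ramp program: hold at t0_c,
    ramp at ramp_c_min degrees per minute, then hold at tf_c."""
    if t_min <= hold_min:
        return t0_c
    temp = t0_c + ramp_c_min * (t_min - hold_min)
    return min(temp, tf_c)

for t in (0, 2, 10, 28, 40):
    print(t, "min ->", oven_temperature(t), "degC")
# 0 and 2 min: 40 degC (initial hold); 10 min: 120 degC (on the ramp);
# 28 min onward: 300 degC (final hold)
```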
Data reduction and analysis
Qualitative analysis
Generally, chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a series of peaks representing the analytes present in a sample as they elute from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. However, in most modern applications, the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.
Quantitative analysis
The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a retention time distinct from that of the analyte).
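A minimal sketch of internal-standard quantitation, assuming a single-point calibration; all areas and concentrations below are made-up numbers and the function names are ours:

```python
def relative_response_factor(area_analyte: float, conc_analyte: float,
                             area_istd: float, conc_istd: float) -> float:
    """RRF from a calibration run with known concentrations."""
    return (area_analyte / conc_analyte) / (area_istd / conc_istd)

def quantify(area_analyte: float, area_istd: float,
             conc_istd: float, rrf: float) -> float:
    """Concentration of analyte in an unknown, via the internal standard."""
    return (area_analyte / area_istd) * conc_istd / rrf

# Calibration: 10 mg/L analyte (area 5200) with 20 mg/L ISTD (area 9800)
rrf = relative_response_factor(5200, 10.0, 9800, 20.0)
# Unknown: analyte area 3100, ISTD area 9650 at the same 20 mg/L
print(round(quantify(3100, 9650, 20.0, rrf), 2), "mg/L")  # ~6.05
```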
In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra.
Applications
In general, substances that vaporize below 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance known as a reference standard.
Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process.
Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring chemicals in soil, air or water, such as soil gases. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples.
In practical courses at colleges, students sometimes become acquainted with the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificially injuring their leaves. These GCs analyse hydrocarbons (C2–C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with a FID. A complication with light gas analyses that include H2 is that He, which is the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass), has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone bridge type arrangement that shows when a component has been eluted). For this reason, dual TCD instruments with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analysing gas-phase chemistry reactions such as F-T synthesis, so that a single carrier gas can be used rather than two separate ones. The sensitivity is reduced, but this is a trade-off for simplicity in the gas supply.
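To make the He/H2 problem concrete, the same toy mixing model sketched earlier shows why a nitrogen carrier gives a usable hydrogen signal while helium does not. The conductivity values are approximate 300 K literature figures and the calculation is illustrative only:

```python
# Fractional TCD response for 1% H2 in two carriers (linear-mixing toy model;
# thermal conductivities are approximate 300 K values in W/(m*K)):
k = {"He": 0.151, "H2": 0.181, "N2": 0.026}
for carrier in ("He", "N2"):
    k_mix = 0.01 * k["H2"] + 0.99 * k[carrier]
    print(carrier, round((k[carrier] - k_mix) / k[carrier], 4))
# He: -0.002  (nearly invisible, and inverted)
# N2: -0.0596 (large, unambiguous signal)
```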
Gas chromatography is used extensively in forensic science. Disciplines as diverse as solid drug dose (pre-consumption form) identification and quantification, arson investigation, paint chip analysis, and toxicology cases, employ GC to identify and quantify various biological specimens and crime-scene evidence.
| Physical sciences | Chromatography | Chemistry |
596745 | https://en.wikipedia.org/wiki/Lipophilicity | Lipophilicity | Lipophilicity (from Greek λίπος "fat" and φίλος "friendly") is the ability of a chemical compound to dissolve in fats, oils, lipids, and non-polar solvents such as hexane or toluene. Such compounds are called lipophilic (translated as "fat-loving" or "fat-liking"). Such non-polar solvents are themselves lipophilic, and the adage "like dissolves like" generally holds true. Thus lipophilic substances tend to dissolve in other lipophilic substances, whereas hydrophilic ("water-loving") substances tend to dissolve in water and other hydrophilic substances.
Lipophilicity, hydrophobicity, and non-polarity may describe the same tendency towards participation in the London dispersion force, as the terms are often used interchangeably. However, the terms "lipophilic" and "hydrophobic" are not synonymous, as can be seen with silicones and fluorocarbons, which are hydrophobic but not lipophilic.
Surfactants
Hydrocarbon-based surfactants are compounds that are amphiphilic (or amphipathic), having a hydrophilic, water-interactive "end", referred to as their "head group", and a lipophilic "end", usually a long chain hydrocarbon fragment, referred to as their "tail". They congregate at low energy surfaces, including the air-water interface (lowering surface tension) and the surfaces of the water-immiscible droplets found in oil/water emulsions (lowering interfacial tension). At these surfaces they naturally orient themselves with their head groups in water and their tails either sticking up and largely out of the water (as at the air-water interface) or dissolved in the water-immiscible phase that the water is in contact with (e.g. the emulsified oil droplet). In both these configurations the head groups strongly interact with water while the tails avoid all contact with water. Surfactant molecules also aggregate in water as micelles with their head groups sticking out and their tails bunched together. Micelles draw oily substances into their hydrophobic cores, explaining the basic action of soaps and detergents used for personal cleanliness and for laundering clothes. Micelles are also biologically important for the transport of fatty substances at the surface of the small intestine in the first step that leads to the absorption of the components of fats (largely fatty acids and 2-monoglycerides).
Cell membranes are bilayer structures principally formed from phospholipids, molecules which have a highly water-interactive, ionic phosphate head group attached to two long alkyl tails.
By contrast, fluorosurfactants are not amphiphilic or detergents because fluorocarbons are not lipophilic.
Oxybenzone, a common cosmetic ingredient often used in sunscreens, penetrates the skin particularly well because it is not very lipophilic. Anywhere from 0.4% to 8.7% of oxybenzone can be absorbed after one topical sunscreen application, as measured in urine excretions.
| Physical sciences | Concepts_2 | Chemistry |
596833 | https://en.wikipedia.org/wiki/Tully%E2%80%93Fisher%20relation | Tully–Fisher relation | In astronomy, the Tully–Fisher relation (TFR) is a widely verified empirical relationship between the mass or intrinsic luminosity of a spiral galaxy and its asymptotic rotation velocity or emission line width. Since the observed brightness of a galaxy is distance-dependent, the relationship can be used to estimate distances to galaxies from measurements of their rotational velocity.
History
The connection between rotational velocity measured spectroscopically and distance was first used in 1922 by Ernst Öpik to estimate the distance to the Andromeda Galaxy. In the 1970s, Balkowski, C., et al. measured 13 galaxies but focused on using the data to distinguish galaxy shapes rather than extract distances.
The relationship was first published in 1977 by astronomers R. Brent Tully and J. Richard Fisher. The luminosity is calculated by multiplying the galaxy's apparent brightness by 4πd², where d is its distance from Earth, and the spectral-line width is measured using long-slit spectroscopy.
A series of collaborative catalogs of galaxy peculiar velocity values called Cosmicflows uses Tully–Fisher analysis; the Cosmicflows-4 catalog has reached 10,000 galaxies. Many values of the Hubble constant have been derived from Tully–Fisher analysis, starting with the first paper and continuing through 2024.
Subtypes
Several different forms of the TFR exist, depending on which precise measures of mass, luminosity or rotation velocity one takes it to relate. Tully and Fisher used optical luminosity, but subsequent work showed the relation to be tighter when defined using microwave to infrared (K band) radiation (a good proxy for stellar mass), and even tighter when luminosity is replaced by the galaxy's total stellar mass. The relation in terms of stellar mass is dubbed the "stellar mass Tully Fisher relation" (STFR), and its scatter only shows correlations with the galaxy's kinematic morphology, such that more dispersion-supported systems scatter below the relation. The tightest correlation is recovered when considering the total baryonic mass (the sum of its mass in stars and gas). This latter form of the relation is known as the baryonic Tully–Fisher relation (BTFR), and states that baryonic mass is proportional to velocity to the power of roughly 3.5–4.
The TFR can be used to estimate the distance to spiral galaxies by allowing the luminosity of a galaxy to be derived from its directly measurable line width. The distance can then be found by comparing the luminosity to the apparent brightness. Thus the TFR constitutes a rung of the cosmic distance ladder, where it is calibrated using more direct distance measurement techniques and used in turn to calibrate methods extending to larger distance.
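A minimal sketch of that distance-ladder step, assuming a simple linear-in-log-space calibration; the slope and zero point below are placeholders, not published values, and in practice must be calibrated against galaxies with independently known distances:

```python
import math

def tfr_distance_mpc(apparent_mag: float, line_width_kms: float,
                     slope: float = -9.0, zero_point: float = 2.5) -> float:
    """Distance from the Tully-Fisher relation.

    Absolute magnitude is modeled as M = slope * log10(W) + zero_point;
    both coefficients here are illustrative placeholders."""
    absolute_mag = slope * math.log10(line_width_kms) + zero_point
    distance_modulus = apparent_mag - absolute_mag
    return 10 ** ((distance_modulus + 5.0) / 5.0) / 1e6  # pc -> Mpc

# A galaxy with apparent magnitude 12.5 and a 400 km/s line width:
print(round(tfr_distance_mpc(12.5, 400.0), 1), "Mpc")  # ~48 Mpc
```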
In the dark matter paradigm, a galaxy's rotation velocity (and hence line width) is primarily determined by the mass of the dark matter halo in which it lives, making the TFR a manifestation of the connection between visible and dark matter mass. In Modified Newtonian dynamics (MOND), the BTFR (with power-law index exactly 4) is a direct consequence of the gravitational force law effective at low acceleration.
The analogues of the TFR for non-rotationally-supported galaxies, such as ellipticals, are known as the Faber–Jackson relation and the fundamental plane.
| Physical sciences | Galaxy classification | Astronomy |
597244 | https://en.wikipedia.org/wiki/Carbon%20star | Carbon star | A carbon star (C-type star) is typically an asymptotic giant branch star, a luminous red giant, whose atmosphere contains more carbon than oxygen. The two elements combine in the upper layers of the star, forming carbon monoxide, which consumes most of the oxygen in the atmosphere, leaving carbon atoms free to form other carbon compounds, giving the star a "sooty" atmosphere and a strikingly ruby red appearance. There are also some dwarf and supergiant carbon stars, with the more common giant stars sometimes being called classical carbon stars to distinguish them.
In most stars (such as the Sun), the atmosphere is richer in oxygen than carbon. Ordinary stars not exhibiting the characteristics of carbon stars but cool enough to form carbon monoxide are therefore called oxygen-rich stars.
Carbon stars have quite distinctive spectral characteristics, and they were first recognized by their spectra by Angelo Secchi in the 1860s, a pioneering time in astronomical spectroscopy.
Spectra
By definition carbon stars have dominant spectral Swan bands from the molecule C2. Many other carbon compounds may be present at high levels, such as CH, CN (cyanogen), C3 and SiC2. Carbon is formed in the core and circulated into its upper layers, dramatically changing the layers' composition. In addition to carbon, S-process elements such as barium, technetium, and zirconium are formed in the shell flashes and are "dredged up" to the surface.
When astronomers developed the spectral classification of the carbon stars, they had considerable difficulty when trying to correlate the spectra to the stars' effective temperatures. The trouble was with all the atmospheric carbon hiding the absorption lines normally used as temperature indicators for the stars.
Carbon stars also show a rich spectrum of molecular lines at millimeter wavelengths and submillimeter wavelengths. In the carbon star CW Leonis more than 50 different circumstellar molecules have been detected. This star is often used to search for new circumstellar molecules.
Secchi
Carbon stars were discovered as early as the 1860s, when spectral classification pioneer Angelo Secchi erected the Secchi class IV for the carbon stars; in the late 1890s these were reclassified as N class stars.
Harvard
Using this new Harvard classification, the N class was later supplemented by an R class for less deeply red stars sharing the characteristic carbon bands of the spectrum. Later correlation of this R to N scheme with conventional spectra showed that the R–N sequence runs approximately in parallel with spectral types of roughly G7 to M10 with regard to star temperature.
Morgan–Keenan C system
The later N classes correspond less well to the counterpart M types, because the Harvard classification was only partially based on temperature and also on carbon abundance; it soon became clear that this kind of carbon star classification was incomplete. Instead, a new dual-number star class C was erected to deal with both temperature and carbon abundance. Such a spectrum, as measured for Y Canum Venaticorum, was determined to be C54, where 5 refers to temperature-dependent features and 4 to the strength of the C2 Swan bands in the spectrum. (C54 is very often alternatively written C5,4.) This Morgan–Keenan C system classification replaced the older R–N classifications from 1960 to 1993.
The Revised Morgan–Keenan system
The two-dimensional Morgan–Keenan C classification failed to fulfill the creators' expectations:
it failed to correlate to temperature measurements based on infrared,
originally being two-dimensional, it was soon enhanced by suffixes (CH, CN, j and other features), making it impractical for en-masse analyses of the carbon star populations of other galaxies,
and it gradually became clear that the old R and N stars were actually two distinct types of carbon stars, having real astrophysical significance.
A new revised Morgan–Keenan classification was published in 1993 by Philip Keenan, defining the classes: C-N, C-R and C-H. Later the classes C-J and C-Hd were added. This constitutes the established classification system used today.
Astrophysical mechanisms
Carbon stars can be explained by more than one astrophysical mechanism. Classical carbon stars are distinguished from non-classical ones on the grounds of mass, with classical carbon stars being the more massive.
In the classical carbon stars, those belonging to the modern spectral types C-R and C-N, the abundance of carbon is thought to be a product of helium fusion, specifically the triple-alpha process within a star, which giants reach near the end of their lives in the asymptotic giant branch (AGB). These fusion products have been brought to the stellar surface by episodes of convection (the so-called third dredge-up) after the carbon and other products were made. Normally this kind of AGB carbon star fuses hydrogen in a hydrogen burning shell, but in episodes separated by 104–105 years, the star transforms to burning helium in a shell, while the hydrogen fusion temporarily ceases. In this phase, the star's luminosity rises, and material from the interior of the star (notably carbon) moves up. Since the luminosity rises, the star expands so that the helium fusion ceases, and the hydrogen shell burning restarts. During these shell helium flashes, the mass loss from the star is significant, and after many shell helium flashes, an AGB star is transformed into a hot white dwarf and its atmosphere becomes material for a planetary nebula.
The non-classical kinds of carbon stars, belonging to the types C-J and C-H, are believed to be binary stars, where one star is observed to be a giant star (or occasionally a red dwarf) and the other a white dwarf. The star presently observed to be a giant star accreted carbon-rich material when it was still a main-sequence star from its companion (that is, the star that is now the white dwarf) when the latter was still a classical carbon star. That phase of stellar evolution is relatively brief, and most such stars ultimately end up as white dwarfs. These systems are now being observed a comparatively long time after the mass transfer event, so the extra carbon observed in the present red giant was not produced within that star. This scenario is also accepted as the origin of the barium stars, which are also characterized as having strong spectral features of carbon molecules and of barium (an s-process element). Sometimes the stars whose excess carbon came from this mass transfer are called "extrinsic" carbon stars to distinguish them from the "intrinsic" AGB stars which produce the carbon internally. Many of these extrinsic carbon stars are not luminous or cool enough to have made their own carbon, which was a puzzle until their binary nature was discovered.
The enigmatic hydrogen-deficient carbon stars (HdC), belonging to the spectral class C-Hd, seem to have some relation to the R Coronae Borealis variables (RCB), but are not variable themselves and lack certain infrared radiation typical of RCB stars. Only five HdC stars are known, and none is known to be binary, so their relation to the non-classical carbon stars is not known.
Other less convincing theories, such as CNO cycle unbalancing and core helium flash have also been proposed as mechanisms for carbon enrichment in the atmospheres of smaller carbon stars.
Other characteristics
Most classical carbon stars are variable stars of the long period variable types.
Observing carbon stars
Due to the insensitivity of night vision to red light and the slow adaptation of the eye's rods to the light of the stars, astronomers making magnitude estimates of red variable stars, especially carbon stars, have to know how to deal with the Purkinje effect in order not to underestimate the magnitude of the observed star.
Generation of interstellar dust
Owing to its low surface gravity, as much as half (or more) of the total mass of a carbon star may be lost by way of powerful stellar winds. The star's remnants, carbon-rich "dust" similar to graphite, therefore become part of the interstellar dust. This dust is believed to be a significant factor in providing the raw materials for the creation of subsequent generations of stars and their planetary systems. The material surrounding a carbon star may blanket it to the extent that the dust absorbs all visible light.
Silicon carbide outflow from carbon stars was accreted in the early solar nebula and survived in the matrices of relatively unaltered chondritic meteorites. This allows for direct isotopic analysis of the circumstellar environment of 1-3 M☉ carbon stars. Stellar outflow from carbon stars is the source of the majority of presolar silicon carbide found in meteorites.
Other classifications
Other types of carbon stars include:
CCS – Cool Carbon Star
CEMP – Carbon-Enhanced Metal-Poor
CEMP-no – Carbon-Enhanced Metal-Poor star with no enhancement of elements produced by the r-process or s-process nucleosynthesis
CEMP-r – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by r-process nucleosynthesis
CEMP-s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by s-process nucleosynthesis
CEMP-r/s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by both r-process and s-process nucleosynthesis
CGCS – Cool Galactic Carbon Star
Use as standard candles
Classical carbon stars are very luminous, especially in the near-infrared, so they can be detected in nearby galaxies. Because of the strong absorption features in their spectra, carbon stars are redder in the near-infrared than oxygen-rich stars are, and they can be identified by their photometric colors. While individual carbon stars do not all have the same luminosity, a large sample of carbon stars will have a luminosity probability density function (PDF) with nearly the same median value, in similar galaxies. So the median value of that function can be used as a standard candle for the determination of the distance to a galaxy. The shape of the PDF may vary depending upon the average metallicity of the AGB stars within a galaxy, so it is important to calibrate this distance indicator using several nearby galaxies for which the distances are known through other means.
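A minimal sketch of the method, assuming the median absolute magnitude has already been calibrated on nearby galaxies; the magnitudes and the calibration value below are placeholders of roughly the right order, not published numbers:

```python
import statistics

def distance_modulus_from_carbon_stars(apparent_mags, median_absolute_mag=-6.3):
    """Distance modulus from the median apparent magnitude of a galaxy's
    carbon-star sample; the calibrated median absolute magnitude is an
    illustrative placeholder here."""
    return statistics.median(apparent_mags) - median_absolute_mag

# Hypothetical near-infrared magnitudes of carbon stars in a nearby galaxy:
mags = [19.6, 19.9, 20.1, 20.3, 20.4, 20.7, 21.0]
mu = distance_modulus_from_carbon_stars(mags)
print(round(mu, 1), "mag ->", round(10 ** ((mu + 5) / 5) / 1e6, 2), "Mpc")
# median m = 20.3 gives mu = 26.6, i.e. a distance of about 2.1 Mpc
```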
| Physical sciences | Stellar astronomy | Astronomy |
598423 | https://en.wikipedia.org/wiki/Female%20reproductive%20system | Female reproductive system | The human female reproductive system is made up of the internal and external sex organs that function in the reproduction of new offspring. The reproductive system is immature at birth and develops at puberty to be able to release matured ova from the ovaries, facilitate their fertilization, and create a protective environment for the developing fetus during pregnancy. The female reproductive tract is made of several connected internal sex organs—the vagina, uterus, and fallopian tubes—and is prone to infections. The vagina allows for sexual intercourse, and is connected to the uterus at the cervix. The uterus (or womb) accommodates the embryo by developing the uterine lining.
The uterus also produces secretions which help the transit of sperm to the fallopian tubes, where sperm fertilize the ova. During the menstrual cycle, the ovaries release an ovum, which transits through the fallopian tube into the uterus. If an egg cell meets with sperm on its way to the uterus, a single sperm cell can enter and merge with it, creating a zygote. If no fertilization occurs, menstruation is the process by which the uterine lining is shed as blood, mucus, and tissue.
Fertilization usually occurs in the fallopian tubes and marks the beginning of embryogenesis. The zygote will then divide over enough generations of cells to form a blastocyst, which implants itself in the wall of the uterus. This begins the period of gestation and the embryo will continue to develop until full-term. When the fetus has developed enough to survive outside the uterus, the cervix dilates, and contractions of the uterus propel it through the birth canal (the vagina), where it becomes a newborn. The breasts are not part of the reproductive system, but mammary glands were essential to nourishing infants until the modern advent of infant formula.
Later in life, a woman goes through menopause and menstruation halts. The ovaries stop releasing eggs and the uterus stops preparing for pregnancy.
The external sex organs are also known as the genitals, and these are the organs of the vulva, including the labia, clitoris, and vestibule. The corresponding equivalent among males is the male reproductive system.
External genitalia
Vulva
The vulva consists of all of the external parts and tissues and includes the following:
Clitoris: an organ located at the top of the vulva. It consists of the body and its pea-shaped glans that is protected by the clitoral hood. The corpora cavernosa are tissues of the clitoris that aid in erection by filling with blood during sexual arousal.
Labia: two types of vertical folds of skin called the labia majora (thick and large outer folds that protect other parts of the vulva) and the labia minora (thin and small inner folds that protect the vestibule from dryness, infections and irritation).
Mons pubis: a mass of fatty tissue where the pubic hair grows.
Vulval vestibule: an almond-shaped area between the labia minora that contains the openings.
Urinary meatus: the opening of the urethra for urine to pass through.
Vaginal opening: entrance to the vagina.
Hymen: connective tissue that covers the vaginal opening.
Vestibular gland openings: two pairs of openings in the vulval vestibule for the Bartholin's and Skene's glands.
Internal genitalia
Vagina
The vagina is a fibromuscular (made up of fibrous and muscular tissue) canal leading from the outside of the body to the cervix of the uterus. It is also referred to as the birth canal in the context of pregnancy. The vagina accommodates a penis during sexual intercourse. Semen containing spermatozoa is ejaculated from the penis at orgasm, into the vagina potentially enabling fertilization of the egg cell (ovum) to take place.
Cervix
The cervix is the neck of the uterus, the lower, narrow portion where it joins with the upper part of the vagina. It is cylindrical or conical in shape and protrudes through the upper anterior vaginal wall. Approximately half its length is visible; the remainder lies above the vagina, beyond view. The vagina has a thick outer layer and is the opening through which the fetus emerges during delivery.
Uterus
The uterus or womb is the major female reproductive organ. The uterus provides mechanical protection, nutritional support, and waste removal for the developing embryo (weeks 1 to 8) and fetus (from week 9 until the delivery). In addition, contractions in the muscular wall of the uterus are important in pushing out the fetus at the time of birth.
The uterus contains three suspensory ligaments that help stabilize its position and limit its range of movement. The uterosacral ligaments keep the body from moving inferiorly and anteriorly. The round ligaments restrict posterior movement of the uterus. The cardinal ligaments also prevent the inferior movement of the uterus.
The uterus is a pear-shaped muscular organ. Its major function is to accept a fertilized ovum, which becomes implanted into the endometrium, and derives nourishment from blood vessels, which develop exclusively for this purpose. The fertilized ovum becomes an embryo, develops into a fetus and gestates until childbirth. If the egg does not embed in the wall of the uterus, the female begins menstruation.
Fallopian tubes
The fallopian tubes are two tubes leading from the ovaries into the uterus. On maturity of an ovum, the follicle and the ovary's wall rupture, allowing the ovum to escape and enter the fallopian tube. There it travels toward the uterus, pushed along by movements of cilia on the inner lining of the tubes. This trip takes hours or days. If the ovum is fertilized while in the fallopian tube, then it normally implants in the endometrium when it reaches the uterus, which signals the beginning of pregnancy.
Ovaries
The ovaries are small, paired gonads located near the lateral walls of the pelvic cavity. These organs are responsible for the production of the egg cells (ova) and the secretion of hormones. The process by which the egg cell (ovum) is released is called ovulation. Ovulation is periodic and impacts the length of a menstrual cycle.
After ovulation, the egg cell travels through the fallopian tube toward the uterus. If fertilization is going to occur, it often happens in the fallopian tube; the fertilized egg can then implant on the uterus's lining. During fertilization the egg cell plays a role; it releases certain molecules that are essential to guiding the sperm and allows the surface of the egg to attach to the sperm's surface. The egg can then absorb the sperm and fertilization can begin.
Vestibular glands
The vestibular glands, also known as the female accessory glands, are the Bartholin's glands, which produce a mucous fluid for vaginal lubrication, and the Skene's glands for the ejaculation of fluid as well as for lubricating the meatus.
Function
The female reproductive system functions to produce offspring.
In the absence of fertilization, the ovum will eventually traverse the entire reproductive tract from the fallopian tube until exiting the vagina through menstruation.
The reproductive tract can be used for various transluminal procedures such as fertiloscopy, intrauterine insemination, and transluminal sterilization.
Oocytes residing in the primordial follicle of the ovary are in a non-growing prophase arrested state, but are capable of highly efficient homologous recombinational repair of DNA damages including double-strand breaks. This capability allows genome integrity to be maintained and offspring health to be protected.
Development
Chromosome characteristics determine the genetic sex of a fetus at conception. This is specifically based on the 23rd pair of chromosomes that is inherited. Since the mother's egg contains an X chromosome and the father's sperm contains either an X or Y chromosome, it is the male who determines the fetus' sex. If the fetus inherits the X chromosome from the father, the fetus will be a female. In this case, testosterone is not made and the Wolffian duct will degrade; thus, the Müllerian duct will develop into female sex organs, while the clitoris forms from the genital tubercle. On the other hand, if the fetus inherits the Y chromosome from the father, the fetus will be a male. The presence of testosterone will stimulate the Wolffian duct, which will bring about the development of the male sex organs, and the Müllerian duct will degrade.
Clinical significance
Vaginitis
Vaginitis is inflammation of the vagina and largely caused by an infection. It is the most common gynaecological condition presented. It is difficult to determine any one organism most responsible for vaginitis because it varies from range of age, sexual activity, and method of microbial identification. Vaginitis is not necessarily caused by a sexually transmitted infection as there are many infectious agents that make use of the close proximity to mucous membranes and secretions. Vaginitis is usually diagnosed based on the presence of vaginal discharge, which can have a certain color, odor, or quality.
Bacterial vaginosis
This is a vaginal infection in women. It differs from vaginitis in that there is no inflammation. Bacterial vaginosis is polymicrobial, consisting of many bacterial species. The diagnosis of bacterial vaginosis is made if three of the following four criteria are present: (1) homogeneous, thin discharge, (2) a vaginal pH greater than 4.5, (3) epithelial cells in the vagina with bacteria attached to them (clue cells), or (4) a fishy odor. It has been associated with an increased risk of other genital tract infections such as endometritis.
Yeast infection
This is a common cause of vaginal irritation and according to the Centers for Disease Control and Prevention at least 75% of adult women have experienced one at least once in their lifetime. Yeast infections are caused by an overgrowth of fungus in the vagina known as Candida. Yeast infections are usually caused by an imbalance of the pH in the vagina, which is usually acidic. Other factors such as pregnancy, diabetes, weakened immune systems, tight fitting clothing, or douching can also be a cause. Symptoms of yeast infections include itching, burning, irritation, and a white cottage-cheese-like discharge from the vagina. Women have also reported that they experience painful intercourse and urination as well. Taking a sample of the vaginal secretions and placing them under a microscope for evidence of yeast can diagnose a yeast infection. Treatment varies from creams that can be applied in or around the vaginal area to oral tablets that stop the growth of fungus.
Genital mutilation
There are many practices of mutilating female genitalia in different cultures. The two most common types of genital mutilation practiced are clitoridectomy, the circumcision of the clitoris, and the excision of the clitoral prepuce. They can all involve a range of adverse health consequences such as bleeding, irreparable tissue damage, and sepsis, which can sometimes prove fatal.
Genital surgery
Genitoplasty refers to surgery that is carried out to repair damaged sex organs particularly following cancer and its treatment.
There are also elective surgical procedures, which change the appearance of the external genitals.
Birth control
There are many types of birth control available to females. Birth control can be hormonal or physical in nature. Oral contraception can assist with management of various medical conditions, such as menorrhagia. However, oral contraceptives can have a variety of side effects, including depression.
Reproductive rights
The International Federation of Gynaecology and Obstetrics was founded in 1954 to promote the well-being of women particularly in raising the standards of gynaecological practice and care. As of 2010, there were 124 countries involved.
Reproductive rights are legal rights related to reproduction and reproductive health. Women have the right to control matters involving their sexuality including their sexual and reproductive health. Violation of these rights include forced pregnancy, forced sterilization, forced abortion and genital mutilation. Female genital mutilation is the complete or partial removal of a female's external genitals.
History
It is claimed in the Hippocratic writings that both males and females contribute their seed to conception; otherwise, children would not resemble either or both of their parents. Four hundred years later, Galen identified the source of 'female semen' as the ovaries in female reproductive organs.
| Biology and health sciences | Reproductive system | null |