id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
2593831 | https://en.wikipedia.org/wiki/Skull%20fracture | Skull fracture | A skull fracture is a break in one or more of the eight bones that form the cranial portion of the skull, usually occurring as a result of blunt force trauma. If the force of the impact is excessive, the bone may fracture at or near the site of the impact and cause damage to the underlying structures within the skull such as the membranes, blood vessels, and brain.
While an uncomplicated skull fracture can occur without associated physical or neurological damage and is in itself usually not clinically significant, a fracture in healthy bone indicates that a substantial amount of force has been applied and increases the possibility of associated injury. Any significant blow to the head results in a concussion, with or without loss of consciousness.
A fracture in conjunction with an overlying laceration that tears the epidermis and the meninges, or runs through the paranasal sinuses and the middle ear structures, bringing the outside environment into contact with the cranial cavity, is called a compound fracture. Compound fractures can either be clean or contaminated.
There are four major types of skull fractures: linear, depressed, diastatic, and basilar. Linear fractures are the most common, and usually require no intervention for the fracture itself. Depressed fractures are usually comminuted, with broken portions of bone displaced inward—and may require surgical intervention to repair underlying tissue damage. Diastatic fractures widen the sutures of the skull and usually affect children under three. Basilar fractures are in the bones at the base of the skull.
Types
Linear fracture
Linear skull fractures are breaks in the bone that traverse the full thickness of the skull from the outer to the inner table. They are usually fairly straight with no bone displacement. The common cause of injury is blunt force trauma in which the impact energy is transferred over a wide area of the skull.
Linear skull fractures are usually of little clinical significance unless they run parallel and in close proximity to a suture, or traverse a suture, a venous sinus groove, or a vascular channel. The resulting complications may include suture diastasis, venous sinus thrombosis, and epidural hematoma. In young children, although rare, the possibility exists of developing a growing skull fracture, especially if the fracture occurs in the parietal bone.
Depressed fracture
A depressed skull fracture is a type of fracture usually resulting from blunt force trauma, such as being struck with a hammer or rock, or being kicked in the head. These types of fractures—which occur in 11% of severe head injuries—are comminuted fractures in which broken bones displace inward. Depressed skull fractures present a high risk of increased pressure on the brain, or of a hemorrhage to the brain that crushes the delicate tissue.
Compound depressed skull fractures occur when there is a laceration over the fracture, putting the internal cranial cavity in contact with the outside environment and increasing the risk of contamination and infection. In complex depressed fractures, the dura mater is torn. Depressed skull fractures may require surgery, performed by making burr holes in the adjacent normal skull, to lift the bones off the brain if they are pressing on it.
Diastatic fracture
Diastatic fractures occur when the fracture line traverses one or more sutures of the skull, causing a widening of the suture. While this type of fracture is usually seen in infants and young children, whose sutures are not yet fused, it can also occur in adults. When a diastatic fracture occurs in adults it usually affects the lambdoidal suture, as this suture does not fully fuse in adults until about the age of 60. Most adult diastatic fractures are caused by severe head injuries. In such trauma, the diastatic fracture occurs with the collapse of the surrounding skull bones, crushing the delicate underlying tissue much as a depressed skull fracture does.
Diastatic fractures can occur with different types of fractures and it is also possible for diastasis of the cranial sutures to occur without a concomitant fracture. Sutural diastasis may also occur in various congenital disorders such as cleidocranial dysplasia and osteogenesis imperfecta.
Basilar fracture
Basilar skull fractures are linear fractures that occur in the floor of the cranial vault (skull base), which require more force to cause than other areas of the neurocranium. Thus they are rare, occurring as the only fracture in only 4% of severe head injury patients.
Basilar fractures have characteristic signs: blood in the sinuses; cerebrospinal fluid rhinorrhea (CSF leaking from the nose) or from the ears (cerebrospinal fluid otorrhea); periorbital ecchymosis often called 'raccoon eyes' (bruising of the orbits of the eyes that result from blood collecting there as it leaks from the fracture site); and retroauricular ecchymosis known as "Battle's sign" (bruising over the mastoid process).
Growing fracture
A growing skull fracture (GSF), also known as a craniocerebral erosion or leptomeningeal cyst because a cystic mass filled with cerebrospinal fluid usually develops, is a rare complication of head injury, usually associated with linear skull fractures of the parietal bone in children under 3. It has been reported in older children, in atypical regions of the skull such as the basioccipital region of the skull base, and in association with other types of skull fractures. It is characterized by a diastatic enlargement of the fracture.
Various factors are associated with the development of a GSF. The primary causative factor is a tear in the dura mater. The skull fracture enlarges due, in part, to the rapid physiologic growth of the brain that occurs in young children, and to cerebrospinal fluid (CSF) pulsations in the underlying leptomeningeal cystic mass.
Cranial burst fracture
A cranial burst skull fracture, usually occurring with severe injuries in infants less than 1 year of age, is a closed, diastatic skull fracture with cerebral extrusion beyond the outer table of the skull under the intact scalp.
Acute scalp swelling is associated with this type of fracture. In equivocal cases without immediate scalp swelling, the diagnosis may be made via magnetic resonance imaging, thus ensuring more prompt treatment and avoiding the development of a "growing skull fracture".
Compound fracture
A fracture in conjunction with an overlying laceration that tears the epidermis and the meninges—or runs through the paranasal sinuses and the middle ear structures, putting the outside environment in contact with the cranial cavity—is a compound fracture.
Compound fractures may either be clean or contaminated. Intracranial air (pneumocephalus) may occur in compound skull fractures.
The most serious complication of compound skull fractures is infection. Increased risk factors for infection include visible contamination, meningeal tear, loose bone fragments and presenting for treatment more than eight hours after initial injury.
Compound elevated fracture
A compound elevated skull fracture is a rare type of skull fracture where the fractured bone is elevated above the intact outer table of the skull. This type of skull fracture is always compound in nature. It can be caused during an assault with a weapon where the initial blow penetrates the skull and the underlying meninges and, on withdrawal, the weapon lifts the fractured portion of the skull outward. It can also be caused by the skull rotating while being struck in a case of blunt force trauma, the skull rotating while striking an object as in a fall, or it may occur during transfer of a patient after an initial compound head injury.
Anatomy
The human skull is anatomically divided into two parts: the neurocranium, formed by eight cranial bones that house and protect the brain, and the facial skeleton (viscerocranium), composed of fourteen bones, not including the three ossicles of the inner ear. The term skull fracture typically means fractures of the neurocranium; fractures of the facial portion of the skull are facial fractures, or, if the jaw is fractured, a mandibular fracture.
The eight cranial bones, separated by sutures, are: one frontal bone, two parietal bones, two temporal bones, one occipital bone, one sphenoid bone, and one ethmoid bone.
The bones of the skull are in three layers: the hard compact layer of the external table (lamina externa), the diploë (a spongy layer of red bone marrow in the middle), and the compact layer of the inner table (lamina interna).
Skull thickness is variable, depending on location. Thus the traumatic impact required to cause a fracture depends on the impact site. The skull is thick at the glabella, the external occipital protuberance, the mastoid processes, and the external angular process of the frontal bone. Areas of the skull that are covered with muscle have no underlying diploë formation between the internal and external lamina, which results in thin bone more susceptible to fractures.
Skull fractures occur more easily at the thin squamous temporal and parietal bones, the sphenoid sinus, the foramen magnum (the opening at the base of the skull that the spinal cord passes through), the petrous temporal ridge, and the inner portions of the sphenoid wings at the base of the skull. The middle cranial fossa, a depression at the base of the cranial cavity, forms the thinnest part of the skull and is thus the weakest part. This area of the cranial floor is weakened further by the presence of multiple foramina; as a result, this section is at higher risk of basilar skull fractures. Other areas more susceptible to fracture are the cribriform plate, the roof of the orbits in the anterior cranial fossa, and the areas between the mastoid and dural sinuses in the posterior cranial fossa.
Prognosis
Children with a simple skull fracture without other concerns are at low risk of a bad outcome and rarely require aggressive treatment.
The presence of a concussion or skull fracture in people after trauma, without intracranial hemorrhage or focal neurologic deficits, was associated with long-term cognitive impairment and emotional lability at nearly double the rate of patients without either complication.
Those with a skull fracture were shown to have "neuropsychological dysfunction, even in the absence of intracranial pathology or more severe disturbance of consciousness on the GCS".
| Biology and health sciences | Types | Health |
15698614 | https://en.wikipedia.org/wiki/Stable%20roommates%20problem | Stable roommates problem | In mathematics, economics and computer science, particularly in the fields of combinatorics, game theory and algorithms, the stable-roommate problem (SRP) is the problem of finding a stable matching for an even-sized set. A matching is a separation of the set into disjoint pairs ("roommates"). The matching is stable if there are no two elements which are not roommates and which both prefer each other to their roommate under the matching. This is distinct from the stable-marriage problem in that the stable-roommates problem allows matches between any two elements, not just between classes of "men" and "women".
It is commonly stated as:
In a given instance of the stable-roommates problem (SRP), each of 2n participants ranks the others in strict order of preference. A matching is a set of n disjoint pairs of participants. A matching M in an instance of SRP is stable if there are no two participants x and y, each of whom prefers the other to their partner in M. Such a pair is said to block M, or to be a blocking pair with respect to M.
Solution
Unlike the stable marriage problem, a stable matching may fail to exist for certain sets of participants and their preferences. For a minimal example of a stable pairing not existing, consider 4 people A, B, C, and D, whose rankings are:
A:(B,C,D), B:(C,A,D), C:(A,B,D), D:(A,B,C)
In this ranking, each of A, B, and C is the most preferable person for someone. In any solution, one of A, B, or C must be paired with D and the other two with each other (for example AD and BC), yet for anyone who is partnered with D, another member will have rated them highest, and D's partner will in turn prefer this other member over D. In this example, AC is a more favorable pairing than AD, but the necessary remaining pairing of BD then raises the same issue, illustrating the absence of a stable matching for these participants and their preferences.
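This exhaustive argument is small enough to check mechanically. The following Python snippet is a brute-force sketch written purely for illustration (its names are not taken from any library); it enumerates the three perfect matchings of this instance and reports a blocking pair for each:

```python
# Preference lists for the four-person instance above, most preferred first.
prefs = {
    'A': ['B', 'C', 'D'],
    'B': ['C', 'A', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['A', 'B', 'C'],
}

def blocking_pair(partner):
    """Return two people who prefer each other to their partners, if any exist."""
    for x in prefs:
        for y in prefs[x]:
            if y == partner[x]:
                break                      # x prefers partner[x] to everyone after y
            if prefs[y].index(x) < prefs[y].index(partner[y]):
                return (x, y)              # mutual preference: the pair blocks
    return None

# The three perfect matchings of {A, B, C, D}:
for pairs in ([('A', 'B'), ('C', 'D')],
              [('A', 'C'), ('B', 'D')],
              [('A', 'D'), ('B', 'C')]):
    partner = {x: y for a, b in pairs for x, y in ((a, b), (b, a))}
    print(pairs, 'is blocked by', blocking_pair(partner))
# Each matching is blocked by some pair, so no stable matching exists.
```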
Algorithm
An efficient algorithm, due to Robert W. Irving, is the following. The algorithm will determine, for any instance of the problem, whether a stable matching exists, and if so, will find such a matching. Irving's algorithm has O(n²) complexity, provided suitable data structures are used to implement the necessary manipulation of the preference lists and identification of rotations.
The algorithm consists of two phases. In Phase 1, participants propose to each other, in a manner similar to that of the Gale-Shapley algorithm for the stable marriage problem. Each participant orders the other members by preference, resulting in a preference list—an ordered set of the other participants. Participants then propose to each person on their list, in order, continuing to the next person if and when their current proposal is rejected. A participant will reject a proposal if they already hold a proposal from someone they prefer. A participant will also reject a previously-accepted proposal if they later receive a proposal that they prefer. In this case, the rejected participant will then propose to the next person on their list, continuing until a proposal is again accepted. If any participant is eventually rejected by all other participants, this indicates that no stable matching is possible. Otherwise, Phase 1 will end with each person holding a proposal from one of the others.
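The proposal round can be sketched in Python as follows. This is a simplified, illustrative reading of Phase 1 (the names are not from any library), assuming preference lists are given as a dict mapping each participant to a list ordered from most to least preferred; it performs only the proposals, leaving the list reductions to the description that follows:

```python
def phase1(prefs):
    """Proposal round of Phase 1 (illustrative sketch, not a library function).

    Returns a dict mapping each participant to the proposer whose offer
    they hold at the end, or None if someone is rejected by everyone
    (in which case no stable matching exists).
    """
    # rank[q][p]: position of p in q's list, for constant-time comparisons.
    rank = {q: {p: i for i, p in enumerate(lst)} for q, lst in prefs.items()}
    next_choice = {p: 0 for p in prefs}   # index of the next person p proposes to
    holds = {}                            # holds[q] = p  <=>  q holds p's proposal
    free = list(prefs)                    # participants whose proposal is unaccepted
    while free:
        p = free.pop()
        if next_choice[p] >= len(prefs[p]):
            return None                   # p was rejected by all others
        q = prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = holds.get(q)
        if current is None or rank[q][p] < rank[q][current]:
            holds[q] = p                  # q accepts the preferred proposal
            if current is not None:
                free.append(current)      # the displaced proposer tries again
        else:
            free.append(p)                # q rejects p; p moves down their list
    return holds
```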
Consider two participants, q and p. If q holds a proposal from p, then we remove from q's list all participants x after p, and symmetrically, for each removed participant x, we remove q from x's list, so that q is first in p's list and p is last in q's, since q and any such x cannot be partners in any stable matching. The resulting reduced set of preference lists together is called the Phase 1 table. In this table, if any reduced list is empty, then there is no stable matching. Otherwise, the Phase 1 table is a stable table. A stable table, by definition, is the set of preference lists from the original table after members have been removed from one or more of the lists, and the following three conditions are satisfied (where reduced list means a list in the stable table):
(i) p is first on q's reduced list if and only if q is last on p's
(ii) p is not on q's reduced list if and only if q is not on p's, which holds if and only if q prefers the last person on their reduced list to p, or p prefers the last person on their reduced list to q
(iii) no reduced list is empty
Stable tables have several important properties, which are used to justify the remainder of the procedure:
Any stable table must be a subtable of the Phase 1 table, where subtable is a table where the preference lists of the subtable are those of the supertable with some individuals removed from each other's lists.
In any stable table, if every reduced list contains exactly one individual, then pairing each individual with the single person on their list gives a stable matching.
If the stable roommates problem instance has a stable matching, then there is a stable matching contained in any one of the stable tables.
Any stable subtable of a stable table, and in particular any stable subtable that specifies a stable matching as in 2, can be obtained by a sequence of rotation eliminations on the stable table.
These rotation eliminations comprise Phase 2 of Irving's algorithm.
By 2, if each reduced list of the Phase 1 table contains exactly one individual, then this gives a matching.
Otherwise, the algorithm enters Phase 2. A rotation in a stable table T is defined as a sequence (x0, y0), (x1, y1), ..., (xk−1, yk−1) such that the xi are distinct, yi is first on xi's reduced list (equivalently, xi is last on yi's reduced list), and yi+1 is second on xi's reduced list, for i = 0, ..., k−1, where the indices are taken modulo k. It follows that in any stable table with a reduced list containing at least two individuals, such a rotation always exists. To find one, start at a participant p0 whose reduced list contains at least two individuals, and define recursively qi+1 to be the second on pi's list and pi+1 to be the last on qi+1's list, until the sequence repeats some pj, at which point a rotation is found: it is the sequence of pairs starting at the first occurrence of (pj, qj) and ending at the pair before the last occurrence. The sequence of pi up until pj is called the tail of the rotation. The fact that this search occurs in a stable table guarantees that each pi has at least two individuals on their list.
To eliminate the rotation, yi rejects xi so that xi proposes to yi+1, for each i. To restore the stable table properties (i) and (ii), for each i, all successors of xi-1 are removed from yi's list, and yi is removed from their lists. If a reduced list becomes empty during these removals, then there is no stable matching. Otherwise, the new table is again a stable table, and either already specifies a matching since each list contains exactly one individual or there remains another rotation to find and eliminate, so the step is repeated.
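Continuing the sketch from Phase 1, the rotation search and elimination can be written as follows. This version deliberately mutates plain Python lists for clarity rather than using the O(n²) bookkeeping described later, and the helper names are illustrative:

```python
def find_rotation(table):
    """Find a rotation in a stable table (dict: participant -> reduced list).

    Assumes some reduced list still has two or more entries; the stable-table
    properties keep the traversal below well defined.
    """
    p = next(x for x in table if len(table[x]) > 1)
    seen, seq = {}, []
    while p not in seen:
        seen[p] = len(seq)
        seq.append(p)
        q = table[p][1]          # second on p's reduced list
        p = table[q][-1]         # last on q's reduced list
    # The rotation starts at the first occurrence of the repeated participant.
    return [(x, table[x][0]) for x in seq[seen[p]:]]

def eliminate(table, rotation):
    """Eliminate a rotation in place; return False if some list empties."""
    k = len(rotation)
    for i in range(k):
        x_prev = rotation[(i - 1) % k][0]
        y = rotation[i][1]
        cut = table[y].index(x_prev) + 1
        for z in table[y][cut:]:     # y rejects every successor of x_{i-1} ...
            table[z].remove(y)       # ... and is removed from their lists
            if not table[z]:
                return False         # an empty list: no stable matching exists
        del table[y][cut:]
    return True
```

Run on the Phase 1 table of the worked example below (with participants inserted in numerical order), these helpers find the same rotations r1, r2, and r3 and reproduce the intermediate tables shown there.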
Phase 2 of the algorithm can now be summarized as follows:
T = Phase 1 table;
while (true) {
    identify a rotation r in T;
    eliminate r from T;
    if some list in T becomes empty,
        return null; (no stable matching can exist)
    else if (each reduced list in T has size 1)
        return the matching M = {{x, y} | x and y are on each other's lists in T}; (this is a stable matching)
}
To achieve an O(n²) running time, a ranking matrix is precomputed whose entry at row i and column j is the position of the jth individual in the ith individual's preference list; building it takes O(n²) time. With the ranking matrix, checking whether an individual prefers one person to another can be done in constant time by comparing their ranks in the matrix. Furthermore, instead of explicitly removing elements from the preference lists, the indices of the first, second and last entries on each individual's reduced list are maintained. The first individual that is unmatched, i.e. has at least two individuals in their reduced list, is also maintained. Then, in Phase 2, the sequence of pi "traversed" to find a rotation is stored in a list, and an array is used to mark individuals as having been visited, as in a standard depth-first search graph traversal. After the elimination of a rotation, we continue to store only its tail, if any, in the list and as visited in the array, and start the search for the next rotation at the last individual on the tail, or at the next unmatched individual if there is no tail. This reduces repeated traversal of the tail, since it is largely unaffected by the elimination of the rotation.
Example
The following are the preference lists for a Stable Roommates instance involving 6 participants: 1, 2, 3, 4, 5, 6.
1 : 3 4 2 6 5
2 : 6 5 4 1 3
3 : 2 4 5 1 6
4 : 5 2 3 6 1
5 : 3 1 2 4 6
6 : 5 1 3 4 2
A possible execution of Phase 1 consists of the following sequence of proposals and rejections, where → represents proposes to and × represents rejects.
1 → 3
2 → 6
3 → 2
4 → 5
5 → 3; 3 × 1
1 → 4
6 → 5; 5 × 6
6 → 1
So Phase 1 ends with the following reduced preference lists (for example, 5 is crossed out of 1's list because 1 holds a proposal from 6, whom 1 prefers to 5):
1 : 4 2 6
2 : 6 5 4 1 3
3 : 2 4 5
4 : 5 2 3 6 1
5 : 3 2 4
6 : 1 4 2
In Phase 2, the rotation r1 = (1,4), (3,2) is first identified. This is because 4 and 2 are first on the reduced lists of 1 and 3 respectively, while 2 is second on 1's reduced list and 4 is second on 3's reduced list. Eliminating r1 gives:
1 : 2 6
2 : 6 5 4 1
3 : 4 5
4 : 5 2 3
5 : 3 2 4
6 : 1 2
Next, the rotation r2 = (1,2), (2,6), (4,5) is identified, and its elimination yields:
1 : 6
2 : 5 4
3 : 4 5
4 : 2 3
5 : 3 2
6 : 1
Hence 1 and 6 are matched. Finally, the rotation r3 = (2,5), (3,4) is identified, and its elimination gives:
1 : 6
2 : 4
3 : 5
4 : 2
5 : 3
6 : 1
Hence the matching {1, 6}, {2, 4}, {3, 5} is stable.
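The result can be double-checked against the original preference lists. The short ad hoc snippet below (illustrative only) searches for blocking pairs in this matching:

```python
prefs = {
    1: [3, 4, 2, 6, 5],
    2: [6, 5, 4, 1, 3],
    3: [2, 4, 5, 1, 6],
    4: [5, 2, 3, 6, 1],
    5: [3, 1, 2, 4, 6],
    6: [5, 1, 3, 4, 2],
}
partner = {1: 6, 6: 1, 2: 4, 4: 2, 3: 5, 5: 3}

# A pair (x, y) blocks the matching if each ranks the other above their partner.
blocking = [(x, y) for x in prefs for y in prefs[x]
            if prefs[x].index(y) < prefs[x].index(partner[x])
            and prefs[y].index(x) < prefs[y].index(partner[y])]
print(blocking)   # [] -- no blocking pairs, so the matching is stable
```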
Implementation in software packages
Python: An implementation of Irving's algorithm is available as part of the matching library.
Java: A constraint programming model to find all stable matchings in the roommates problem with incomplete lists is available under the CRAPL licence.
R: The same constraint programming model is also available as part of the R matchingMarkets package.
API: The MatchingTools API provides a free application programming interface for the algorithm.
Web Application: The "Dyad Finder" website provides a free, web-based implementation of the algorithm, including source code for the website and solver written in JavaScript.
Matlab: The algorithm is implemented in the assignStableRoommates function that is part of the United States Naval Research Laboratory's free Tracker Component Library.
| Mathematics | Order theory | null |
20819040 | https://en.wikipedia.org/wiki/Hashtag | Hashtag | A hashtag is a metadata tag operator that is prefaced by the hash symbol, #. On social media, hashtags are used on microblogging and photo-sharing services–especially Twitter and Tumblr–as a form of user-generated tagging that enables cross-referencing of content by topic or theme. For example, a search within Instagram for the hashtag #bluesky returns all posts that have been tagged with that term. After the initial hash symbol, a hashtag may include letters, numerals or other punctuation.
The use of hashtags was first proposed by American blogger and product consultant Chris Messina in a 2007 tweet. Messina made no attempt to patent the use because he felt that "they were born of the internet, and owned by no one". Hashtags became entrenched in the culture of Twitter and soon emerged across Instagram, Facebook, and YouTube. In June 2014, hashtag was added to the Oxford English Dictionary as "a word or phrase with the symbol # in front of it, used on social media websites and apps so that you can search for all messages with the same subject".
Origin and acceptance
The number sign or hash symbol, #, has long been used in information technology to highlight specific pieces of text. In 1970, the number sign was used to denote immediate address mode in the assembly language of the PDP-11 when placed next to a symbol or a number, and around 1973, '#' was introduced in the C programming language to indicate special keywords that the C preprocessor had to process first. The pound sign was adopted for use within IRC (Internet Relay Chat) networks around 1988 to label groups and topics. Channels or topics that are available across an entire IRC network are prefixed with a hash symbol # (as opposed to those local to a server, which use an ampersand '&').
The use of the pound sign in IRC inspired Chris Messina to propose a similar system on Twitter to tag topics of interest on the microblogging network. He proposed the use of hashtags in an August 2007 tweet: "how do you feel about using # (pound) for groups. As in #barcamp [msg]?"
According to Messina, he suggested use of the hashtag to make it easy for lay users without specialized knowledge of search protocols to find specific relevant content. Therefore, the hashtag "was created organically by Twitter users as a way to categorize messages".
The first published use of the term "hash tag" was in a blog post "Hash Tags = Twitter Groupings" by Stowe Boyd, on August 26, 2007, according to lexicographer Ben Zimmer, chair of the American Dialect Society's New Words Committee.
Messina's suggestion to use the hashtag was not immediately adopted by Twitter, but the convention gained popular acceptance when hashtags were used in tweets relating to the 2007 San Diego forest fires in Southern California. The hashtag gained international acceptance during the 2009–2010 Iranian election protests; Twitter users used both English- and Persian-language hashtags in communications during the events.
Hashtags have since played critical roles in recent social movements such as #jesuischarlie, #BLM, and #MeToo.
Beginning July 2, 2009, Twitter began to hyperlink all hashtags in tweets to Twitter search results for the hashtagged word (and for the standard spelling of commonly misspelled words). In 2010, Twitter introduced "Trending Topics" on the Twitter front page, displaying hashtags that are rapidly becoming popular, and the significance of trending hashtags has become so great that the company makes significant efforts to foil attempts to spam the trending list. During the 2010 World Cup, Twitter explicitly encouraged the use of hashtags with the temporary deployment of "hashflags", which replaced hashtags of three-letter country codes with their respective national flags.
Other platforms such as YouTube and Gawker Media followed in officially supporting hashtags, and real-time search aggregators such as Google Real-Time Search began supporting hashtags.
Format
A hashtag must begin with a hash (#) character followed by other characters, and is terminated by a space or the end of the line. Some platforms may require the # to be preceded with a space. Most or all platforms that support hashtags permit the inclusion of letters (without diacritics), numerals, and underscores. Other characters may be supported on a platform-by-platform basis. Some characters, such as "&", are generally not supported as they may already serve other search functions. Hashtags are not case sensitive (a search for "#hashtag" will match "#HashTag" as well), but the use of embedded capitals (i.e., CamelCase) increases legibility and improves accessibility.
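As a rough illustration of these rules, a much-simplified extraction pattern can be written in Python; real platforms layer additional rules (Unicode letter categories, length limits, position constraints) on top of something like this:

```python
import re

# Deliberately simplified: '#' followed by ASCII letters, digits, or
# underscores; any other character (including a space) ends the tag.
HASHTAG = re.compile(r'#(\w+)', re.ASCII)

text = "Loving the #BlueSky today #nofilter #2024_goals"
tags = HASHTAG.findall(text)
print(tags)                                      # ['BlueSky', 'nofilter', '2024_goals']

# Hashtags are matched case-insensitively, so normalize before comparing.
print('bluesky' in {t.lower() for t in tags})    # True
```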
Languages that do not use word dividers handle hashtags differently. In China, microblogs Sina Weibo and Tencent Weibo use a double-hashtag-delimited #HashName# format, since the lack of spacing between Chinese characters necessitates a closing tag. Twitter uses a different syntax for Chinese characters and orthographies with similar spacing conventions: the hashtag contains unspaced characters, separated from preceding and following text by spaces (e.g., '我 #爱 你' instead of '我#爱你') or by zero-width non-joiner characters before and after the hashtagged element, to retain a linguistically natural appearance (displaying as unspaced '我#爱你', but with invisible non-joiners delimiting the hashtag).
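A correspondingly simplified sketch of the double-delimited Weibo-style format, where a closing hash is needed because no space will terminate the tag:

```python
import re

# Illustrative only: capture text between paired '#' delimiters.
WEIBO_TAG = re.compile(r'#([^#\s]+)#')
print(WEIBO_TAG.findall('今天天气真好#蓝天#晒太阳'))   # ['蓝天']
```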
Etiquette and regulation
Some communities may limit, officially or unofficially, the number of hashtags permitted on a single post.
Misuse of hashtags can lead to account suspensions. Twitter warns that adding hashtags to unrelated tweets, or repeated use of the same hashtag without adding to a conversation, can cause an account to be filtered from search results or suspended.
Individual platforms may deactivate certain hashtags either for being too generic to be useful, such as #photography on Instagram, or due to their use to facilitate illegal activities.
Alternate formats
In 2009, StockTwits began using ticker symbols preceded by the dollar sign (e.g., $XRX). In July 2012, Twitter began supporting the tag convention and dubbed it the "cashtag". The convention has extended to national currencies, and Cash App has implemented the cashtag to mark usernames.
Function
Hashtags are particularly useful in unmoderated forums that lack a formal ontological organization. Hashtags help users find content of similar interest. Hashtags are neither registered nor controlled by any one user or group of users. They do not have set definitions, meaning that a single hashtag can be used for any number of purposes, and that the accepted meaning of a hashtag can change with time.
Hashtags intended for discussion of a particular event tend to use an obscure wording to avoid being caught up with generic conversations on similar subjects, such as a cake festival using #cakefestival rather than simply #cake. However, this can also make it difficult for topics to become "trending topics" because people often use different spelling or words to refer to the same topic. For topics to trend, there must be a consensus, whether silent or stated, that the hashtag refers to that specific topic.
Hashtags may be used informally to express context around a given message, with no intent to categorize the message for later searching, sharing, or other reasons. Hashtags may thus serve as a reflexive meta-commentary.
This can help express contextual cues or offer more depth to the information or message that appears with the hashtag, as in "My arms are getting darker by the minute. #toomuchfaketan". Hashtags can also be used to express personal feelings and emotions, as in "It's Monday!! #excited #sarcasm", in which the appended tags directly indicate the emotions of the speaker.
Verbal use of the word hashtag sometimes occurs in informal conversations, and may be humorous, such as "I'm hashtag confused!" By August 2012, use of a hand gesture, sometimes called the "finger hashtag", in which the index and middle fingers of both hands are extended and arranged perpendicularly to form the hash, was documented.
Co-optation by other industries
Companies, businesses, and advocacy organizations have taken advantage of hashtag-based discussions for promotion of their products, services or campaigns.
In the early 2010s, some television broadcasters began to employ hashtags related to programs in digital on-screen graphics, to encourage viewers to participate in a backchannel of discussion via social media prior to, during, or after the program. Television commercials have sometimes contained hashtags for similar purposes.
The increased usage of hashtags as brand promotion devices has been compared to the promotion of branded "keywords" by AOL in the late 1990s and early 2000s, as such keywords were also promoted at the end of television commercials and series episodes.
Organized real-world events have used hashtags and ad hoc lists for discussion and promotion among participants. Hashtags are used as beacons by event participants to find each other, both on Twitter and, in many cases, during actual physical events.
Since the 2012–13 season, the NBA has allowed fans to vote players in as All-Star Game starters on Twitter and Facebook using #NBAVOTE.
Hashtag-centered biomedical Twitter campaigns have been shown to increase the reach, promotion, and visibility of healthcare-related open innovation platforms.
Non-commercial use
Political protests and campaigns in the early 2010s, such as #OccupyWallStreet and #LibyaFeb17, have been organized around hashtags or have made extensive usage of hashtags for the promotion of discussion. Hashtags are frequently employed to show either support for or opposition to political figures. For example, the hashtag #MakeAmericaGreatAgain signifies support for Donald Trump, whereas #DisinfectantDonnie expresses ridicule of him. Hashtags have also been used to promote official events; the Finnish Ministry of Foreign Affairs officially titled the 2018 Russia–United States summit the "#HELSINKI2018 Meeting".
Hashtags have been used to gather customer criticism of large companies. In January 2012, McDonald's created the #McDStories hashtag so that customers could share positive experiences about the restaurant chain, but the marketing effort was cancelled after two hours when critical tweets outnumbered praising ones.
In 2017, the #MeToo hashtag went viral in response to sexual harassment accusations against Harvey Weinstein. The use of this hashtag can be considered part of hashtag activism, spreading awareness across eighty-five different countries with more than seventeen million tweets using the hashtag #MeToo. The hashtag was not only used to spread awareness of the accusations against Harvey Weinstein, but also allowed women to share their own experiences of sexual violence. It gave rise to multiple related hashtags that encouraged more women to share their stories, resulting in further spread of the phenomenon of hashtag activism. In this case especially, hashtags allowed better and easier access to content related to the social media movement.
Sentiment analysis
The use of hashtags also reveals what feelings or sentiment an author attaches to a statement. This can range from the obvious, where a hashtag directly describes the state of mind, to the less obvious. For example, words in hashtags are the strongest predictor of whether or not a statement is sarcastic—a difficult AI problem.
Professional development and education
Hashtags play an important role for employees and students in professional fields and education. In industry, individuals' engagement with hashtags can provide opportunities for them to develop and gain professional knowledge in their fields.
In education, research on language teachers who engaged with the #MFLtwitterati hashtag demonstrates the use of hashtags for creating community and sharing teaching resources. The majority of participants reported a positive impact on their teaching strategies, inspired by the many ideas shared by other individuals using the hashtag.
Emerging research in communication and learning demonstrates how hashtag practices influence the teaching and development of students. An analysis of eight studies examined the use of hashtags in K–12 classrooms and found that hashtags assisted students in voicing their opinions, and helped them understand self-organisation and the concept of space beyond place. Related research demonstrated how high school students' engagement with hashtag communication practices allowed them to develop storytelling skills and cultural awareness.
For young people at risk of poverty and social exclusion during the COVID-19 pandemic, Instagram hashtags were shown in a 2022 article to foster scientific education and promote remote learning.
In popular culture
During the April 2011 Canadian party leader debate, Jack Layton, then-leader of the New Democratic Party, referred to Conservative Prime Minister Stephen Harper's crime policies as "hashtag fail" (presumably #fail).
In 2010 Kanye West used the term "hashtag rap" to describe a style of rapping that, according to Rizoh of the Houston Press, uses "a metaphor, a pause, and a one-word punch line, often placed at the end of a rhyme". Rappers Nicki Minaj, Big Sean, Drake, and Lil Wayne are credited with the popularization of hashtag rap, while the style has been criticized by Ludacris, The Lonely Island, and various music writers.
On September 13, 2013, a hashtag, #TwitterIPO, appeared in the headline of a New York Times front-page article regarding Twitter's initial public offering.
In 2014, Birds Eye foods released "Mashtags", a mashed potato product with pieces shaped either like "#" or "@".
In 2019, the British Ornithological Union included a hash character in the design of its new Janet Kear Union Medal, to represent "science communication and social media".
Linguistic analysis
Linguists argue that hashtagging is a morphological process and that hashtags function as words.
The popularity of a hashtag is influenced less by its conciseness and clarity, and more by the presence of preexisting popular hashtags with similar syntactic formats. This suggests that, similar to word formation, users may see the syntax of an existing viral hashtag as a blueprint for creating new ones. For instance, the viral hashtag #JeSuisCharlie gave rise to other popular indicative mood hashtags like #JeVoteMacron and #JeChoisisMarine.
| Technology | Internet | null |
20825705 | https://en.wikipedia.org/wiki/New%20Zealand%20parrot | New Zealand parrot | The New Zealand parrot family, Strigopidae, consists of at least three genera of parrots – Nestor, Strigops, the fossil Nelepsittacus, and probably the fossil Heracles. The genus Nestor consists of the kea, kākā, Norfolk kākā and Chatham kākā, while the genus Strigops contains the iconic kākāpō. All extant species are endemic to New Zealand. The species of the genus Nelepsittacus were endemic to the main islands, while the two extinct species of the genus Nestor were found on nearby oceanic islands such as Chatham Island of New Zealand, and Norfolk Island and adjacent Phillip Island.
The Norfolk kākā and the Chatham kākā have become extinct in recent times, while the species of the genus Nelepsittacus have been extinct for 16 million years. All extant species, the kākāpō, kea, and the two subspecies of the kākā, are threatened. Human activity caused the two extinctions and the decline of the other three species. Settlers introduced invasive species, such as pigs, cats, stoats, weasels, rats and possums, which eat the eggs of ground-nesting birds, and additional declines have been caused by hunting for food, killing as agricultural pests, habitat loss, and introduced wasps.
The family diverged from the other parrots around 82 million years ago when New Zealand broke off from Gondwana, while the ancestors of the genera Nestor and Strigops diverged from each other between 60 and 80 million years ago.
Systematics
No consensus existed regarding the taxonomy of Psittaciformes until recently. The placement of the Strigopoidea species has been variable in the past. The family belongs to its own superfamily, Strigopoidea, one of three superfamilies in the order Psittaciformes; the other two are Cacatuoidea (cockatoos) and Psittacoidea (true parrots). While some taxonomists include all three genera (Nestor, Nelepsittacus, and Strigops) in the family Strigopidae, others place Nestor and Nelepsittacus in the Nestoridae and retain only Strigops in the Strigopidae. Traditionally, the species of the superfamily Strigopoidea were placed in the superfamily Psittacoidea, but several studies confirmed the unique placement of this group at the base of the parrot tree.
Phylogeography
An unproven hypothesis for the phylogeography of this group has been proposed, providing an example of various speciation mechanisms at work. In this scenario, ancestors of this group became isolated from the remaining parrots when New Zealand broke away from Gondwana about 82 million years ago, resulting in a physical separation of the two groups. This mechanism is called allopatric speciation. Over time, ancestors of the two surviving genera, Nestor and Strigops, adapted to different ecological niches. This led to reproductive isolation, an example of ecological speciation. In the Pliocene, supposedly around five million years ago, the formation of the Southern Alps / Kā Tiritiri o te Moana diversified the landscape and provided new opportunities for speciation within the genus Nestor. Around three million years ago, two lineages may have adapted to high altitude and low altitude, respectively. The high-altitude lineage gave rise to the modern kea, while the low-altitude lineage gave rise to the various kākā species. Island species diverge rapidly from mainland species once a few vagrants arrive at a suitable island. Both the Norfolk kākā and the Chatham kākā are the result of migration of a limited number of individuals to islands and subsequent adaptation to the habitat of those islands. The lack of DNA material for the Chatham kākā makes it difficult to establish precisely when those speciation events occurred. Finally, in recent times, the kākā populations at the North Island and South Island became isolated from each other due to the rise in sea levels when the continental glaciers melted at the end of the Pleistocene.
Until modern times, New Zealand and the surrounding islands were not inhabited by four-legged mammals, an environment that enabled some birds to make nests on the ground and others to be flightless without fear of predation.
The parakeet species belonging to the genus Cyanoramphus (kākāriki) belong to the true parrot family Psittacidae and are closely related to the endemic genus Eunymphicus from New Caledonia. They may have reached New Zealand between 450,000 and 625,000 years ago from mainland Australia by way of New Caledonia, but this is disputed.
Species
Very little is known about the Chatham kākā. The genus Nelepsittacus consists of three described and one undescribed species recovered from early Miocene deposits in Otago. The genus Heracles consists of a giant species also described from the early Miocene of Otago.
Common names
All common names for species in this family are the same as the traditional Māori names. The Māori word kākā derives from the ancient Proto-Polynesian word meaning parrot. Kākāpō is a logical extension of that name, as pō means night, resulting in kākā of the night or night parrot, reflecting the species' nocturnal behaviour. (In modern orthography of the Māori language, the long versions of the vowels a and o are written with macrons, i.e., ā and ō; a long ā in Māori is pronounced like the a in English "father".) The etymology of kea in Māori is less clear; it might be onomatopoeic of its kee-aah call.
Ecology
The isolated location of New Zealand has made it difficult for mammals to reach the island. This is reflected in the absence of land mammals other than bats. The main predators were birds: harriers, falcons, owls, and the massive, extinct Haast's eagle. Many of the adaptations found in the avifauna reflect the unique context in which they evolved. This unique balance was disrupted with the arrival of the Polynesians, who introduced the Polynesian rat and the kurī (Polynesian dog) to the island. Later, Europeans introduced many more species, including large herbivores and mammalian predators.
The three extant species of this family occupy rather different ecological niches, a result of the phylogeographical dynamics of this family. The kākāpō is a flightless, nocturnal species, well camouflaged to avoid the large diurnal birds of prey on the island, while the local owls are too small to prey on the kākāpō at night. The kākāpō is the only flightless bird in the world to use a lek-breeding system. Usually, they breed only every 3–5 years when certain podocarp trees like rimu (Dacrydium cupressinum) mast abundantly.
The kea is well adapted to life at high altitudes, and they are regularly observed in the snow at ski resorts. As trees are absent in the alpine zone, they breed in hollows in the ground instead of in tree hollows like most parrot species.
Relationship with humans
Importance to the Māori
The parrots were important to the Māori in various ways. They hunted them for food, kept them as pets, and used their feathers in weaving such items as their kahu huruhuru (feather cloak). Feathers were also used to decorate the head of the taiaha, a Māori weapon, but were removed prior to battle. The skins of the kākāpō with the feathers attached were used to make cloaks (kākahu) and dress capes (kahu kākāpō), especially for the wives and daughters of chiefs. Māori like to refer to the kākā in the tauparapara, the incantation to begin their mihi (tribute), because their voice (reo) is continuous.
Status
Of the five species, the Norfolk kākā and Chatham kākā became extinct in recent history. The last known Norfolk kākā died in captivity in London sometime after 1851, and only between seven and 20 skins survive. The Chatham kākā became extinct between 1500 and 1650, in pre-European times after Polynesians arrived at the island, and is only known from subfossil bones. Of the surviving species, the kākāpō is critically endangered, with very few living individuals remaining. The mainland kākā is listed as endangered, alongside the kea.
Threats
The fauna of New Zealand evolved in the total absence of humans and other mammals. Only a few bat species and sea mammals were present prior to colonisation by humans, and the only predators were birds of prey that hunt by sight. These circumstances shaped the evolution of New Zealand's parrots, for example the flightlessness of the kākāpō and the ground nesting of the kea. Polynesians arrived at Aotearoa between 800 and 1300 AD, and introduced the kurī (dog) and kiore (Polynesian rat) to the islands. This was disastrous for the native fauna, because mammalian predators can locate prey by scent, and the native fauna had no defence against them.
The kākāpō was hunted for its meat, skin, and plumage. When the first European settlers arrived, the kākāpō was already declining, but still widespread. The large-scale clearance of forests and bush destroyed its habitat while introduced predators such as rats, cats, and stoats found the flightless, ground-nesting birds easy prey.
The New Zealand kākā needs large tracts of forest to thrive, and the continued fragmentation of forests due to agriculture and logging has a devastating effect on this species. Another threat comes from competition with introduced species for food, for example with possums for the endemic mistletoe and rata and with wasps for shimmering honeydew, an excretion of scale insects. Females, young, and eggs are particularly vulnerable in the tree hollows in which they nest.
The kea nests in holes in the ground, again making it vulnerable to introduced predators. Another major threat, resulting from development of the alpine zone, is their opportunistic reliance on human food sources as their natural food sources dwindle.
Conservation
Recovery programs for the kākāpō and the kākā have been established, while the kea is also closely monitored. The living kākāpō are all in a breeding and conservation program. Each one has been individually named.
| Biology and health sciences | Psittaciformes | Animals |
782890 | https://en.wikipedia.org/wiki/Semisynthesis | Semisynthesis | Semisynthesis, or partial chemical synthesis, is a type of chemical synthesis that uses chemical compounds isolated from natural sources (such as microbial cell cultures or plant material) as the starting materials to produce novel compounds with distinct chemical and medicinal properties. The novel compounds generally have a high molecular weight or a complex molecular structure, more so than those produced by total synthesis from simple starting materials. Semisynthesis is a means of preparing many medicines more cheaply than by total synthesis since fewer chemical steps are necessary.
Drugs derived from natural sources are commonly produced either by isolation from their natural source or, as described here, through semisynthesis of an isolated agent. From the perspective of chemical synthesis, living organisms act as highly efficient chemical factories, capable of producing structurally complex compounds through biosynthesis. In contrast, engineered chemical synthesis, although powerful, tends to be simpler and less chemically diverse than the complex biosynthetic pathways essential to life.
Biological vs engineered pathways
Due to these differences, certain transformations, such as acetylation, are easier to carry out using engineered chemical methods. However, biological pathways are often able to generate complex groups and structures with minimal economic input, making certain biosynthetic processes far more efficient than total synthesis for producing complex molecules. This efficiency drives the preference for natural sources in the preparation of certain compounds, especially when synthesizing them from simpler molecules would be cost-prohibitive.
Applications
Plants, animals, fungi, and bacteria are all valuable sources of complex precursor molecules, with bioreactors representing an intersection of biological and engineered synthesis. In drug discovery, semisynthesis is employed to retain the medicinal properties of a natural compound while modifying other molecular characteristics—such as adverse effects or oral bioavailability—in just a few chemical steps. Semisynthesis contrasts with total synthesis, which constructs the target molecule entirely from inexpensive, low-molecular-weight precursors, often petrochemicals or minerals. While there is no strict boundary between total synthesis and semisynthesis, they differ primarily in the degree of engineered synthesis employed. Complex or fragile functional groups are often more cost-effective to extract directly from an organism than to prepare from simpler precursors, making semisynthesis the preferred approach for complex natural products.
Notable examples in drug development
Practical applications of semisynthesis include the groundbreaking isolation of the antibiotic chlortetracycline and the subsequent semisynthesis of antibiotics such as tetracycline, doxycycline, and tigecycline. Other notable examples include the early commercial production of the anti-cancer agent paclitaxel from 10-deacetylbaccatin, isolated from Taxus baccata (European yew), the semisynthesis of LSD from ergotamine (derived from fungal cultures of ergot), and the preparation of the antimalarial drug artemether from the naturally occurring compound artemisinin. As synthetic chemistry advances, transformations that were previously too costly or difficult to achieve become more feasible, influencing the economic viability of semisynthetic routes.
| Physical sciences | Synthetic strategies | Chemistry |
783563 | https://en.wikipedia.org/wiki/Total%20synthesis | Total synthesis | Total synthesis, a specialized area within organic chemistry, focuses on constructing complex organic compounds, especially those found in nature, using laboratory methods. It often involves synthesizing natural products from basic, commercially available starting materials. Total synthesis targets can also be organometallic or inorganic. While total synthesis aims for complete construction from simple starting materials, modifying or partially synthesizing these compounds is known as semisynthesis.
Natural product synthesis serves as a critical tool across various scientific fields. In organic chemistry, it tests new synthetic methods, validating and advancing innovative approaches. In medicinal chemistry, natural product synthesis is essential for creating bioactive compounds, driving progress in drug discovery and therapeutic development. Similarly, in chemical biology, it provides research tools for studying biological systems and processes. Additionally, synthesis aids natural product research by helping confirm and elucidate the structures of newly isolated compounds.
The field of natural product synthesis has progressed remarkably since the early 19th century, with improvements in synthetic techniques, analytical methods, and an evolving understanding of chemical reactivity. Today, modern synthetic approaches often combine traditional organic methods, biocatalysis, and chemoenzymatic strategies to achieve efficient and complex syntheses, broadening the scope and applicability of synthetic processes.
Key components of natural product synthesis include retrosynthetic analysis, which involves planning synthetic routes by working backward from the target molecule to design the most effective construction pathway. Stereochemical control is crucial to ensure the correct three-dimensional arrangement of atoms, critical for the molecule's functionality. Reaction optimization enhances yield, selectivity, and efficiency, making synthetic steps more practical. Finally, scale-up considerations allow researchers to adapt lab-scale syntheses for larger production, expanding the accessibility of synthesized products. This evolving field continues to fuel advancements in drug development, materials science, and our understanding of the diversity in natural compounds.
Scope and definitions
There are numerous classes of natural products to which total synthesis is applied. These include (but are not limited to): terpenes, alkaloids, polyketides, and polyethers. Total synthesis targets are sometimes referred to by their organismal origin, such as plant, marine, and fungal. The term total synthesis is less frequently but still accurately applied to the synthesis of natural polypeptides and polynucleotides. The peptide hormones oxytocin and vasopressin were isolated and their total syntheses first reported in 1954. It is not uncommon for natural product targets to feature multiple structural components of several natural product classes.
Aims
Although untrue from an historical perspective (see the history of the steroid cortisone), total synthesis in the modern age has largely been an academic endeavor (in terms of manpower applied to problems). Industrial chemical needs often differ from academic focuses. Typically, commercial entities may pick up particular avenues of total synthesis efforts and expend considerable resources on particular natural product targets, especially if semi-synthesis can be applied to complex, natural product-derived drugs. Even so, for decades there has been a continuing discussion regarding the value of total synthesis as an academic enterprise. While there are some outliers, the general opinion is that total synthesis has changed in recent decades, will continue to change, and will remain an integral part of chemical research. Within these changes, there has been increasing focus on improving the practicality and marketability of total synthesis methods. The Phil S. Baran group at Scripps, a notable pioneer of practical synthesis, has endeavored to create scalable and high-efficiency syntheses that would have more immediate uses outside of academia.
History
Friedrich Wöhler discovered that an organic substance, urea, could be produced from inorganic starting materials in 1828. That was an important conceptual milestone in chemistry by being the first example of a synthesis of a substance that had been known only as a byproduct of living processes. Wöhler obtained urea by treating silver cyanate with ammonium chloride, a simple, one-step synthesis:
AgNCO + NH4Cl → (NH2)2CO + AgCl
Camphor was a scarce and expensive natural product with a worldwide demand. Haller and Blanc synthesized it from camphoric acid; however, camphoric acid itself had an unknown structure. When Finnish chemist Gustav Komppa synthesized camphoric acid from diethyl oxalate and 3,3-dimethylpentanoic acid in 1904, the structure of the precursors allowed contemporary chemists to infer the complicated ring structure of camphor. Shortly thereafter, William Perkin published another synthesis of camphor. The work on the total chemical synthesis of camphor allowed Komppa to begin industrial production of the compound, in Tainionkoski, Finland, in 1907.
The American chemist Robert Burns Woodward was a pre-eminent figure in developing total syntheses of complex organic molecules, some of his targets being cholesterol, cortisone, strychnine, lysergic acid, reserpine, chlorophyll, colchicine, vitamin B12, and prostaglandin F-2a.
Vincent du Vigneaud was awarded the 1955 Nobel Prize in Chemistry for the total synthesis of the natural polypeptides oxytocin and vasopressin, reported in 1954, with the citation "for his work on biochemically important sulphur compounds, especially for the first synthesis of a polypeptide hormone".
Another gifted chemist is Elias James Corey, who won the Nobel Prize in Chemistry in 1990 for lifetime achievement in total synthesis and for the development of retrosynthetic analysis.
List of notable total syntheses
Quinine total synthesis: First synthesized by Robert Burns Woodward and William von Eggers Doering in 1944, this achievement was significant due to quinine's importance as an antimalarial drug.
Strychnine total synthesis: First synthesized by Robert Burns Woodward in 1954, this synthesis was a landmark achievement due to the molecule's structural complexity.
Morphine: First synthesized by Marshall D. Gates in 1952, with subsequent more efficient syntheses developed by other chemists, including Tohru Fukuyama in 2017.
Cholesterol total synthesis: Synthesized by Robert Burns Woodward in 1951, this was a significant achievement in steroid synthesis.
Cortisone: Another notable steroid synthesis by Robert Burns Woodward in 1951.
Lysergic acid: Synthesized by Robert Burns Woodward in 1954, this was an important precursor to LSD.
Reserpine: Completed by Robert Burns Woodward in 1956, this synthesis was notable for its complexity and the molecule's importance as an antihypertensive drug.
Chlorophyll: Synthesized by Robert Burns Woodward in 1960, this achievement was significant due to chlorophyll's crucial role in photosynthesis.
Colchicine: Another notable synthesis by Robert Burns Woodward, completed in 1963.
Prostaglandin F-2a: Synthesized by E.J. Corey in 1969, this was an important achievement in the synthesis of prostaglandins.
Vitamin B12: Completed by Robert Burns Woodward, Albert Eschenmoser, and their teams in 1972, this synthesis is considered one of the most complex ever achieved, involving over 100 steps.
Paclitaxel (Taxol): First synthesized by Robert A. Holton and, almost simultaneously, by K. C. Nicolaou, both in 1994; this anticancer drug's synthesis was a major breakthrough in medicinal chemistry.
Brefeldin A: Synthesized by S. Raghavan in 2017, this complex macrolide has potential as an anticancer agent.
Ryanodine: Synthesized by Sarah E. Reisman in 2017, this complex diterpenoid has important biological activity.
| Physical sciences | Synthetic strategies | Chemistry |
784628 | https://en.wikipedia.org/wiki/Eyelid | Eyelid | An eyelid is a thin fold of skin that covers and protects an eye. The levator palpebrae superioris muscle retracts the eyelid, exposing the cornea to the outside, giving vision. This can occur either voluntarily or involuntarily. "Palpebral" (and "blepharal") means relating to the eyelids. The eyelid's key function is to regularly spread tears and other secretions on the eye surface to keep it moist, since the cornea must be continuously moist; the eyelids also keep the eyes from drying out during sleep. Moreover, the blink reflex protects the eye from foreign bodies. A set of specialized hairs known as lashes grow from the upper and lower eyelid margins to further protect the eye from dust and debris.
The appearance of the human upper eyelid often varies between different populations. An epicanthic fold covering the inner corner of the eye is found in the majority of East Asian and Southeast Asian populations, and also, in varying degrees, among other populations. Separately, but also similarly varying between populations, the crease of the remainder of the eyelid may form either a "single eyelid", a "double eyelid", or an intermediate form.
Eyelids can be found in other animals, some of which may have a third eyelid, or nictitating membrane. A vestige of this in humans survives as the plica semilunaris.
Structure
Layers
The eyelid is made up of several layers; from superficial to deep, these are: skin, subcutaneous tissue, orbicularis oculi, orbital septum and tarsal plates, and palpebral conjunctiva. The meibomian glands lie within the eyelid and secrete the lipid part of the tear film.
Skin
The skin is similar to that of areas elsewhere, but is relatively thin and has more pigment cells. In diseased persons, these may wander and cause a discoloration of the lids. It contains sweat glands and hairs, the latter becoming eyelashes as the border of the eyelid is met. The skin of the eyelid contains the greatest concentration of sebaceous glands found anywhere in the body.
Nerve supply
In humans, the sensory nerve supply to the upper eyelids is from the infratrochlear, supratrochlear, supraorbital and the lacrimal nerves from the ophthalmic branch (V1) of the trigeminal nerve (CN V). The skin of the lower eyelid is supplied by branches of the infratrochlear at the medial angle. The rest is supplied by branches of the infraorbital nerve of the maxillary branch (V2) of the trigeminal nerve.
Blood supply
In humans, the eyelids are supplied with blood by two arches on each upper and lower lid. The arches are formed by anastomoses of the lateral palpebral arteries and medial palpebral arteries, branching off from the lacrimal artery and ophthalmic artery, respectively.
Eyelashes
The eyelashes (or simply lashes) are hairs that grow on the edges of the upper and lower eyelids. The lashes are short hairs (upper lashes are typically just 7 to 8 mm in length), though they can be exceptionally long (occasionally up to 15 mm in length) and prominent in some individuals with trichomegaly. The lashes protect the eye from dust and debris by catching them via rapid blinking when the blink reflex is triggered by the debris touching the lashes. Long lashes also play a significant part in facial attractiveness.
Function
The eyelids close or blink voluntarily and involuntarily to protect the eye from foreign bodies, and keep the surface of the cornea moist. The upper and lower human eyelids feature a set of eyelashes which grow in up to 6 rows along each eyelid margin, and serve to heighten the protection of the eye from dust and foreign debris, as well as from perspiration.
Clinical significance
Any condition that affects the eyelid is called eyelid disorder. The most common eyelid disorders, their causes, symptoms and treatments are the following:
Hordeolum (stye) is an infection of the sebaceous glands of Zeis, usually caused by Staphylococcus aureus bacteria, similar to the more common condition acne vulgaris. It has an acute onset of symptoms and appears as a red bump underneath the eyelid. The main symptoms of styes include pain, redness of the eyelid and sometimes swollen eyelids. Styes usually disappear within a week without treatment. Otherwise, antibiotics may be prescribed, and home remedies such as warm water compresses may be used to promote faster healing. Styes are normally harmless and do not cause long-lasting damage.
Chalazion (plural: chalazia) is caused by the obstruction of the oil glands and can occur in both upper and lower eyelids. Chalazia may be mistaken for styes due to the similar symptoms. This condition is however less painful and it tends to be chronic. Chalazia heal within a few months if treatment is administered and otherwise they can resorb within two years. Chalazia that do not respond to topical medication are usually treated with surgery as a last resort.
Blepharitis is the irritation of the lid margin, where eyelashes join the eyelid. This is a common condition that causes inflammation of the eyelids and is quite difficult to manage because it tends to recur. It is mainly caused by staphylococcus infection and scalp dandruff. Blepharitis symptoms include burning sensation, the feeling that there is something in the eye, excessive tearing, blurred vision, redness of the eye, light sensitivity, red and swollen eyelids, dry eye, and sometimes crusting of the eyelashes on awakening. Treatment normally consists of maintaining good hygiene of the eye and holding warm compresses on the affected eyelid to remove the crusts. Gently scrubbing the eyelid with the warm compress is recommended, as it eases the healing process. In more serious cases, antibiotics may be prescribed.
Demodex mites are a genus of tiny mites that live as commensals in and around the hair follicles of numerous mammals including humans, cats and dogs. Human demodex mites typically live in the follicles of the eyebrows and eyelashes. While normally harmless, human demodex mites can sometimes cause irritation of the skin (demodicosis) in persons with weakened immune systems.
Entropion usually results from aging, but sometimes can be due to a congenital defect, a spastic eyelid muscle, or a scar on the inside of the lid that could be from surgery, injury, or disease. It is an asymptomatic condition that can, rarely, lead to trichiasis, which requires surgery. It mostly affects the lower lid, and is characterized by the turning inward of the lid, toward the globe.
Ectropion is another aging-related eyelid condition that may lead to chronic eye irritation and scarring. It may also be the result of allergies and its main symptoms are pain, excessive tearing and hardening of the eyelid conjunctiva.
Laxity is also another aging-related eyelid condition that can lead to dryness and irritation. Surgery may be necessary to repair the eyelid to its natural position. In certain instances, excessive lower lid laxity creates the Fornix of Reiss – a pocket between the lower eyelid and globe – which is the ideal location to administer topical ophthalmic medications.
Eyelid edema is a condition in which the eyelids are swollen and tissues contain excess fluid. It may affect eye function when it increases the intraocular pressure. Eyelid edema is caused by allergy, trichiasis or infections. The main symptoms are swollen red eyelids, pain, and itching. Chronic eyelid edema can lead to blepharochalasis.
Eyelid tumors, which may be benign or malignant, may also occur. Basal cell carcinomas are the most frequently encountered kind of cancer affecting the eyelid, making up 85% to 95% of all malignant eyelid tumors. Benign tumors are usually localized and removed before becoming a cancerous threat and before they become large enough to impair vision; malignant tumors, on the other hand, tend to spread to surrounding areas and tissues.
Blepharospasm (eyelid twitching) is an involuntary spasm of the eyelid muscle. The most common factors that make the muscle in the eyelid twitch are fatigue, stress, and caffeine. Eyelid twitching is not considered a harmful condition and therefore there is no treatment available. Patients are however advised to get more sleep and drink less caffeine.
Eyelid dermatitis is inflammation of the eyelid skin. It is mostly a result of allergies or contact dermatitis of the eyelid. Symptoms include dry and flaky skin on the eyelids and swollen eyelids. The affected eyelid may itch. Treatment consists of proper eye hygiene and avoiding the allergens that trigger the condition. In rare cases, topical creams may be used, but only under a doctor's supervision.
Ptosis (drooping eyelid) is when the upper eyelid droops or sags due to weakness or paralysis of the levator muscle (responsible for raising the eyelid), or due to damage to nerves controlling the muscle. It can be a manifestation of the normal aging process, a congenital condition, or due to an injury or disease. Risk factors related to ptosis include diabetes, stroke, Horner syndrome, Bell's Palsy (compression/damage to Facial nerve), myasthenia gravis, brain tumor or other cancers that can affect nerve or muscle function.
Ablepharia (ablepharon) is the congenital absence of or reduction in the size of the eyelids.
Surgery
Eyelid surgeries are called blepharoplasties and are performed either for medical reasons or to alter one's facial appearance.
Most cosmetic eyelid surgeries aim to enhance the look of the face and boost self-confidence by restoring a youthful eyelid appearance. They are intended to remove fat and excess skin that may be found on the eyelids after a certain age.
Eyelid surgeries are also performed to improve peripheral vision or to treat chalazion, eyelid tumors, ptosis, ectropion, trichiasis, and other eyelid-related conditions.
Eyelid surgeries are overall safe procedures but they carry certain risks since the area on which the operation is performed is so close to the eye.
Anatomical variation
An anatomical variation in humans occurs in the creases and folds of the upper eyelid.
An epicanthic fold, the skin fold of the upper eyelid covering the inner corner (medial canthus) of the eye, may be present based on various factors, including ancestry, age, and certain medical conditions. In some populations the trait is almost universal: among East Asians and Southeast Asians, a majority of adults (up to 90% in some estimates) have this feature.
The upper eyelid crease is a common variation between people of White and East Asian ethnicities. Westerners commonly perceive the East Asian upper eyelid as a "single eyelid". However, East Asian eyelids are divided into three types – single, low, and double – based on the presence or position of the lid crease. Jeong Sang-ki et al. of Chonnam University, Kwangju, Korea, in a study using both Asian and White cadavers as well as four healthy young Korean men, said that "Asian eyelids" have more fat in them than those of White people. Single/double eyelids are polygenic traits. A 2011 study also states that Saudis of "pure Arab" descent generally have higher upper lid crease and upper lid skin fold heights, compared to other ethnic groups.
In some individuals, an eyelid with excessive skin may push the eyelashes downwards and into the eye, obstructing vision in the case of long and thick lashes, and potentially causing corneal abrasion.
Prevalence
Society and culture
Cosmetic surgery
Blepharoplasty is a cosmetic surgical procedure performed to correct deformities and improve or modify the appearance of the eyelids. With 1.43 million people undergoing the procedure in 2014, blepharoplasty is the second most popular cosmetic procedure in the world (Botulinum toxin injection is first), and the most frequently performed cosmetic surgical procedure in the world.
East Asian blepharoplasty, or "double eyelid surgery", has been reported to be the most common aesthetic procedure in Taiwan and South Korea. Though the procedure is also used to reinforce muscle and tendon tissues surrounding the eye, the operative goal of East Asian blepharoplasty is to remove the adipose and linear tissues underneath and surrounding the eyelids in order to crease the upper eyelid. A procedure to remove the epicanthal fold (i.e. an epicanthoplasty) is often performed in conjunction with an East Asian blepharoplasty.
The use of double sided tape or eyelid glue to create the illusion of creased, or "double" eyelids has become a prominent practice in China and other Asian countries. There is a social pressure for women to have this surgery, and also to use the alternative (taping) practices. Blepharoplasty has become a common surgical operation that is actively encouraged, whilst other kinds of plastic surgery are actively discouraged in Chinese culture.
Death
After death, it is common in many cultures to pull the eyelids of the deceased down to close the eyes. This is a typical part of the last offices.
| Biology and health sciences | Visual system | Biology |
786056 | https://en.wikipedia.org/wiki/Cargo%20ship | Cargo ship | A cargo ship or freighter is a merchant ship that carries cargo, goods, and materials from one port to another. Thousands of cargo carriers ply the world's seas and oceans each year, handling the bulk of international trade. Cargo ships are usually specially designed for the task, often being equipped with cranes and other mechanisms to load and unload, and come in all sizes. Today, they are almost always built of welded steel, and with some exceptions generally have a life expectancy of 25 to 30 years before being scrapped.
Definitions
The words cargo and freight have become interchangeable in casual usage. Technically, "cargo" refers to the goods carried aboard the ship for hire, while "freight" refers to the act of carrying such cargo, but the terms have been used interchangeably for centuries.
Generally, the modern ocean shipping business is divided into two classes:
Liner business: typically (but not exclusively) container vessels (wherein "general cargo" is carried in 20- or 40-foot containers), operating as "common carriers", calling at a regularly published schedule of ports. A common carrier refers to a regulated service where any member of the public may book cargo for shipment, according to long-established and internationally agreed rules.
Tramp-tanker business: generally this is private business arranged between the shipper and receiver and facilitated by the vessel owners or operators, who offer their vessels for hire to carry bulk (dry or liquid) or break bulk (cargoes with individually handled pieces) to any suitable port(s) in the world, according to a specifically drawn contract, called a charter party.
Larger cargo ships are generally operated by shipping lines: companies that specialize in the handling of cargo in general. Smaller vessels, such as coasters, are often owned by their operators.
Types
Cargo ships/freighters can be divided into eight groups, according to the type of cargo they carry. These groups are:
Feeder ship
General cargo vessels
Container ships
Tankers
Dry bulk carriers
Multi-purpose vessels
Reefer ships
Roll-on/roll-off vessels.
Rough synopses of cargo ship types
General cargo vessels carry packaged items like chemicals, foods, furniture, machinery, motor- and military vehicles, footwear, garments, etc.
Container ships (sometimes spelled containerships) are cargo ships that carry all of their load in truck-size intermodal containers, in a technique called containerization. They are a common means of commercial intermodal freight transport and now carry most seagoing non-bulk cargo. Container ship capacity is measured in twenty-foot equivalent units (TEU); a minimal capacity calculation is sketched after this list.
Tankers carry petroleum products or other liquid cargo.
Dry bulk carriers carry coal, grain, ore and other similar products in loose form.
Multi-purpose vessels, as the name suggests, carry different classes of cargo – e.g. liquid and general cargo – at the same time.
Reefer (refrigerated) ships are specifically designed and used for shipping perishable commodities that require temperature control, mostly fruits, meat, fish, vegetables, dairy products and other foodstuffs.
Roll-on/roll-off (RORO or ro-ro) ships are designed to carry wheeled cargo, such as cars, trucks, semi-trailer trucks, trailers, and railroad cars, that are driven on and off the ship on their own wheels.
Timber (lumber) carriers transport lumber, logs and related wood products.
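As a small illustration of the TEU convention mentioned above (not part of the article itself): a 20-foot container counts as one TEU and a 40-foot container as two, so a ship's capacity for a mixed load can be tallied in a couple of lines of Python. The box counts below are made-up example numbers.

```python
# Minimal sketch of TEU (twenty-foot equivalent unit) accounting.
# Convention: one 20-foot container = 1 TEU, one 40-foot container = 2 TEU.
def teu_capacity(boxes_20ft: int, boxes_40ft: int) -> int:
    """Total capacity in TEU for a given mix of container sizes."""
    return boxes_20ft + 2 * boxes_40ft

# Illustrative, made-up load: 1,000 twenty-foot and 4,500 forty-foot boxes.
print(teu_capacity(1_000, 4_500))  # -> 10000 TEU
```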
Specialized cargo ship types
Specialized types of cargo vessels include container ships and bulk carriers (technically tankers of all sizes are cargo ships, although they are routinely thought of as a separate category). Cargo ships fall into two further categories that reflect the services they offer to industry: liner and tramp services. Those on a fixed published schedule and fixed tariff rates are cargo liners. Tramp ships do not have fixed schedules; users charter them to haul loads. Generally, the smaller shipping companies and private individuals operate tramp ships. Cargo liners run on fixed schedules published by the shipping companies. Each trip a liner takes is called a voyage. Liners mostly carry general cargo, though some may also carry passengers. A cargo liner that carries 12 or more passengers is called a combination or passenger-cargo liner.
Size categories
Cargo ships are categorized partly by cargo or shipping capacity (tonnage), partly by weight (deadweight tonnage, DWT), and partly by dimensions. Maximum dimensions such as length and width (beam) limit the canal locks a ship can fit in, water depth (draft) is a limitation for canals, shallow straits or harbors, and height is a limitation for passing under bridges. A minimal classification sketch follows the list below. Common categories include:
Dry cargo
Small Handysize, carriers of 20,000–28,000 tons deadweight (DWT)
Seawaymax, the largest vessel that can traverse the St Lawrence Seaway. These are vessels less than 225.6 m in length and 23.8 m wide, with a draft of less than 8.08 m and a height above the waterline of no more than 35.5 m.
Handysize, carriers of 28,000–40,000 tons DWT
Handymax, carriers of 40,000–50,000 tons DWT
Panamax, the largest size that can traverse the original locks of the Panama Canal: a length of about 289.56 m (950 ft), a width of 32.31 m (106 ft), and a draft of 12.04 m (39.5 ft), as well as a height limit of 57.91 m (190 ft). Average deadweight is between 65,000 and 80,000 tons, with cargo intake limited to about 52,500 tons.
Neopanamax, for the upgraded Panama locks: 366 m length, 49 m beam, 15.2 m draft.
Capesize, vessels larger than Suezmax and Neopanamax, which must round Cape Agulhas or Cape Horn to travel between oceans; dimensions: about 170,000 DWT, 290 m long, 45 m beam (wide), 18 m draught (underwater depth).
Chinamax, carriers of 380,000–400,000 DWT, up to 24 m draft, 65 m beam and 360 m length; these dimensions are limited by port infrastructure in China
Baltimax, limited by the Great Belt. The limit is a draft of 15.4 metres and an air draft of 65 metres (limited by the clearance of the east bridge of the Great Belt Fixed Link). The length can be around 240 m and the width around 42 m. This gives a weight of around 100,000 metric tons.
Wet cargo
Aframax, oil tankers between 75,000 and 115,000 DWT. This is the largest size defined by the average freight rate assessment (AFRA) scheme.
Q-Max, liquefied natural gas carriers for Qatar exports. A ship of Q-Max size is 345 m long, 53.8 m wide and 34.7 m high, with a shallow draft of approximately 12 m.
Suezmax, typically ships of about 160,000 DWT; the maximum dimensions that can traverse the Suez Canal are a beam of 77.5 m and a draft of 20.1 m, as well as a height limit of 68 m.
VLCC (Very Large Crude Carrier), supertankers between 150,000 and 320,000 DWT.
Malaccamax, ships with a draft of less than 25 m that can traverse the Strait of Malacca, typically around 300,000 DWT.
ULCC (Ultra Large Crude Carrier), enormous supertankers between 320,000 and 550,000 DWT.
The TI-class supertanker is an Ultra Large Crude Carrier, with a draft that is deeper than Suezmax, Malaccamax and Neopanamax. This causes Atlantic/Pacific routes to be very long, such as the long voyages south of Cape of Good Hope or south of Cape Horn to transit between Atlantic and Pacific oceans.
Lake freighters built for the Great Lakes in North America differ in design from sea water–going ships because of the difference in wave size and frequency in the lakes. A number of these ships are larger than Seawaymax and cannot leave the lakes and pass to the Atlantic Ocean, since they do not fit the locks on the Saint Lawrence Seaway.
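To make the dry-cargo categories above concrete, here is a minimal Python sketch keyed on deadweight tonnage alone. The function name and cutoffs are illustrative assumptions based on the approximate figures in this section; real classification also depends on length, beam, draft, and air draft.

```python
# Rough dry-cargo size classification by deadweight tonnage (DWT) only.
# Boundaries follow the approximate figures in this section, not exact rules.
def dry_cargo_category(dwt: int) -> str:
    """Return an approximate dry-cargo size category for a given DWT."""
    if dwt < 28_000:
        return "Small Handysize"
    if dwt < 40_000:
        return "Handysize"
    if dwt < 50_000:
        return "Handymax"
    if dwt < 80_000:
        return "Panamax"
    return "Capesize or larger"

print(dry_cargo_category(45_000))  # -> "Handymax"
```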
History
The earliest records of waterborne activity mention the carriage of items for trade; the evidence of history and archaeology shows the practice to have been widespread by the beginning of the 1st millennium BC. As early as the 14th and 15th centuries BC, small Mediterranean cargo ships like the 50-foot-long (15–16 metre) Uluburun ship were carrying 20 tons of exotic cargo: 11 tons of raw copper, plus jars, glass, ivory, gold, spices, and treasures from Canaan, Greece, Egypt, and Africa. The desire to operate trade routes over longer distances, and throughout more seasons of the year, motivated improvements in ship design during the Middle Ages.
Before the middle of the 19th century, the incidence of piracy resulted in most cargo ships being armed, sometimes quite heavily, as in the case of the Manila galleons and East Indiamen. They were also sometimes escorted by warships.
Piracy
Piracy is still quite common in some waters, particularly in the Malacca Straits, a narrow channel between Indonesia and Singapore / Malaysia, and cargo ships are still commonly targeted. In 2004, the governments of those three nations agreed to provide better protection for the ships passing through the Straits. The waters off Somalia and Nigeria are also prone to piracy, while smaller vessels are also in danger along parts of the South American coasts, Southeast Asian coasts, and near the Caribbean Sea.
Vessel prefixes
A category designation appears before the vessel's name. A few examples of prefixes for naval ships are "USS" (United States Ship), "HMS" (Her/His Majesty's Ship), "HMCS" (Her/His Majesty's Canadian Ship) and "HTMS" (His Thai Majesty's Ship), while a few examples of prefixes for merchant ships are "RMS" (Royal Mail Ship, usually a passenger liner), "MV" (Motor Vessel, powered by diesel), "MT" (Motor Tanker, a powered vessel carrying liquids only), "FV" (Fishing Vessel) and "SS" (Screw Steamer, driven by propellers or screws, often understood to stand for Steamship). "TS", sometimes found in first position before a merchant ship's prefix, denotes that it is a .
Famous cargo ships
Famous cargo ships include the 2,710 Liberty ships of World War II, partly based on a British design. Liberty ship sections were prefabricated in locations across the United States and then assembled by shipbuilders in an average of six weeks, with the record being just over four days. These ships allowed the Allies in World War II to replace sunken cargo vessels at a rate greater than the Kriegsmarine's U-boats could sink them, and contributed significantly to the war effort, the delivery of supplies, and eventual victory over the Axis powers. Liberty ships were followed by the faster Victory ships. Canada built Park ships and Fort ships to meet the demand for Allied shipping. The United Kingdom built Empire ships and used US Ocean ships. After the war many of the ships were sold to private companies. The Ever Given, a container ship, was lodged in the Suez Canal from March 23 to 29, 2021, halting maritime trade through the canal. The MV Dali collided with the Francis Scott Key Bridge in Baltimore, Maryland, United States, on 26 March 2024, causing a catastrophic structural failure of the bridge that resulted in at least six deaths.
Pollution
Due to its low cost, most large cargo vessels are powered by bunker fuel, also known as heavy fuel oil, which contains higher sulphur levels than diesel. This level of pollution has been increasing: bunker fuel consumption was 278 million tonnes per year in 2001 and was projected to reach 500 million tonnes per year by 2020. International standards to dramatically reduce sulphur content in marine fuels and nitrogen oxide emissions have been put in place. Among the solutions offered are switching the fuel intake to cleaner diesel or marine gas oil while in restricted waters, and cold ironing the ship while it is in port. The process of removing sulphur from the fuel, however, lowers the viscosity and lubricity of the marine gas oil, which could damage the engine's fuel pump; the fuel's viscosity can be raised by cooling it down. If the various requirements are enforced, the International Maritime Organization's marine fuel requirement will mean a 90% reduction in sulphur oxide emissions, while the European Union is planning stricter controls on emissions.
Environmental impact
Cargo ships have been reported to have a possible negative impact on the population of whale sharks. Smithsonian Magazine reported in 2022 that whale sharks, the largest species of fish, have been disappearing mysteriously over the past 75 years, with research pointing to cargo ships and large vessels as the likely culprits. A study involving over 75 researchers highlighted the danger posed to whale sharks by shipping activities in various regions, including Ecuador, Mexico, Malaysia, the Philippines, Oman, Seychelles, and Taiwan.
| Technology | Maritime transport | null |
786064 | https://en.wikipedia.org/wiki/Prevailing%20winds | Prevailing winds | In meteorology, prevailing wind in a region of the Earth's surface is a surface wind that blows predominantly from a particular direction. The dominant winds are the trends in direction of wind with the highest speed over a particular point on the Earth's surface at any given time. A region's prevailing and dominant winds are the result of global patterns of movement in the Earth's atmosphere. In general, winds are predominantly easterly at low latitudes globally. In the mid-latitudes, westerly winds are dominant, and their strength is largely determined by the polar cyclone. In areas where winds tend to be light, the sea breeze-land breeze cycle (powered by differential solar heating and night cooling of sea and land) is the most important cause of the prevailing wind. In areas which have variable terrain, mountain and valley breezes dominate the wind pattern. Highly elevated surfaces can induce a thermal low, which then augments the environmental wind flow. Wind direction at any given time is influenced by synoptic-scale and mesoscale weather like pressure systems and fronts. Local wind direction can also be influenced by microscale features like buildings.
Wind roses are tools used to display the history of wind direction and intensity. Knowledge of the prevailing wind allows the development of prevention strategies for wind erosion of agricultural land, such as across the Great Plains. Sand dunes can orient themselves perpendicular to the prevailing wind direction in coastal and desert locations. Insects drift along with the prevailing wind, but the flight of birds is less dependent on it. Prevailing winds in mountain locations can lead to significant rainfall gradients, ranging from wet across windward-facing slopes to desert-like conditions along their lee slopes.
Wind rose
A wind rose is a graphic tool used by meteorologists to give a succinct view of how wind speed and direction are typically distributed at a particular location. Presented in a polar coordinate grid, the wind rose shows the frequency of winds blowing from particular directions. The length of each spoke around the circle is related to the proportion of the time that the wind blows from each direction. Each concentric circle represents a different proportion, increasing outwards from zero at the center. A wind rose plot may contain additional information, in that each spoke is broken down into color-coded bands that show wind speed ranges. Wind roses typically show 8 or 16 cardinal directions, such as north (N), NNE, NE, etc., although they may be subdivided into as many as 32 directions.
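As an illustration of how such a plot is built, here is a minimal Python/matplotlib sketch of an eight-direction wind rose. The frequencies are made-up example data, not observations from any station.

```python
# Minimal wind-rose sketch using matplotlib's polar projection.
# The frequencies below are made-up illustrative data, not observations.
import numpy as np
import matplotlib.pyplot as plt

directions = np.deg2rad(np.arange(0, 360, 45))      # 8 compass directions
frequency = np.array([12, 8, 5, 4, 6, 10, 30, 25])  # % of time wind blows FROM each

ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")  # put north at the top of the plot
ax.set_theta_direction(-1)       # compass bearings increase clockwise
ax.bar(directions, frequency, width=np.deg2rad(40), edgecolor="black")
ax.set_xticks(directions)
ax.set_xticklabels(["N", "NE", "E", "SE", "S", "SW", "W", "NW"])
ax.set_title("Illustrative wind rose (frequency by direction, %)")
plt.show()
```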
Climatology
Trades and their impact
The trade winds (also called trades) are the prevailing pattern of easterly surface winds found in the tropics near the Earth's equator, equatorward of the subtropical ridge. These winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. The trade winds act as the steering flow for tropical cyclones that form over world's oceans, guiding their path westward. Trade winds also steer African dust westward across the Atlantic Ocean into the Caribbean Sea, as well as portions of southeast North America.
Westerlies and their impact
The westerlies or the prevailing westerlies are the prevailing winds in the middle latitudes (i.e. between 35 and 65 degrees latitude), which blow in areas poleward of the high pressure area known as the subtropical ridge in the horse latitudes. These prevailing winds blow from the west to the east, and steer extra-tropical cyclones in this general direction. The winds are predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere. They are strongest in the winter when the pressure is lower over the poles, such as when the polar cyclone is strongest, and weakest during the summer when the polar cyclone is weakest and when pressures are higher over the poles.
Together with the trade winds, the westerlies enabled a round-trip trade route for sailing ships crossing the Atlantic and Pacific oceans, as the westerlies lead to the development of strong ocean currents in both hemispheres. The westerlies can be particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, which slows the winds down. The strongest westerly winds in the middle latitudes are called the Roaring Forties, between 40 and 50 degrees south latitude, within the Southern Hemisphere. The westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.
The westerlies explain why coastal Western North America tends to be wet, especially from Northern Washington to Alaska, during the winter. Differential heating from the Sun between the land which is quite cool and the ocean which is relatively warm causes areas of low pressure to develop over land. This results in moisture-rich air flowing east from the Pacific Ocean, causing frequent rainstorms and wind on the coast. This moisture continues to flow eastward until orographic lift caused by the Coast Ranges, and the Cascade, Sierra Nevada, Columbia, and Rocky Mountains causes a rain shadow effect which limits further penetration of these systems and associated rainfall eastward. This trend reverses in the summer when strong heating of the land causes high pressure and tends to block moisture-rich air from the Pacific from reaching land. This explains why most of coastal Western North America in the highest latitude experiences dry summers, despite vast rainfall in the winter.
Polar easterlies
The polar easterlies (also known as Polar Hadley cells) are the dry, cold prevailing winds that blow from the high-pressure areas of the polar highs at the North and South Poles towards the low-pressure areas within the westerlies at high latitudes. Like trade winds and unlike the westerlies, these prevailing winds blow from the east to the west, and are often weak and irregular. Due to the low sun angle, cold air builds up and subsides at the pole creating surface high-pressure areas, forcing an outflow of air toward the equator; that outflow is deflected westward by the Coriolis effect.
Local considerations
Sea and land breezes
In areas where the wind flow is light, sea breezes and land breezes are important factors in a location's prevailing winds. The sea is warmed by the sun to a greater depth than the land due to its greater specific heat. The sea therefore has a greater capacity for absorbing heat than the land, so the surface of the sea warms up more slowly than the land's surface. As the temperature of the surface of the land rises, the land heats the air above it. The warm air is less dense and so it rises. This rising air over the land lowers the sea level pressure by about 0.2%. The cooler air above the sea, now with higher sea level pressure, flows towards the land into the lower pressure, creating a cooler breeze near the coast.
The strength of the sea breeze is directly proportional to the temperature difference between the land mass and the sea. If a sufficiently strong offshore wind exists, the sea breeze is not likely to develop. At night, the land cools off more quickly than the ocean due to differences in their specific heat values, which forces the daytime sea breeze to dissipate. If the temperature onshore cools below the temperature offshore, the pressure over the water will be lower than that of the land, establishing a land breeze, as long as an onshore wind is not strong enough to oppose it.
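The roughly 0.2% daytime pressure drop quoted above can be reproduced with the hypsometric relation. The Python sketch below assumes a 1 km mixed layer with equal pressure at its top, and illustrative temperatures of 15 °C over the sea and 21 °C over the land; all numbers are assumptions for illustration, not measurements.

```python
# Why heated land develops lower surface pressure than the cooler sea:
# assume equal pressure at the top of a 1 km layer, then apply the hypsometric
# relation p_surface = p_top * exp(g*z / (R*T)) to each air column.
import math

g, R = 9.81, 287.0           # gravity (m/s^2); gas constant, dry air (J/(kg*K))
z, p_top = 1000.0, 90_000.0  # layer depth (m); assumed pressure at its top (Pa)

def surface_pressure(t_mean_kelvin: float) -> float:
    """Surface pressure beneath a column of mean temperature t_mean_kelvin."""
    return p_top * math.exp(g * z / (R * t_mean_kelvin))

p_sea = surface_pressure(288.0)   # cooler column over the sea (15 °C)
p_land = surface_pressure(294.0)  # heated column over the land (21 °C)
print(f"sea {p_sea:.0f} Pa, land {p_land:.0f} Pa, "
      f"drop {100 * (p_sea - p_land) / p_sea:.2f}%")  # about 0.2%
```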
Circulation in elevated regions
Over elevated surfaces, heating of the ground exceeds the heating of the surrounding air at the same altitude above sea level, creating an associated thermal low over the terrain and enhancing any lows which would have otherwise existed, and changing the wind circulation of the region. In areas where there is rugged topography that significantly interrupts the environmental wind flow, the wind can change direction and accelerate parallel to the wind obstruction. This barrier jet can increase the low level wind by 45%. In mountainous areas, local distortion of the airflow is more severe. Jagged terrain combines to produce unpredictable flow patterns and turbulence, such as rotors. Strong updrafts, downdrafts and eddies develop as the air flows over hills and down valleys. Wind direction changes due to the contour of the land. If there is a pass in the mountain range, winds will rush through the pass with considerable speed due to the Bernoulli principle that describes an inverse relationship between speed and pressure. The airflow can remain turbulent and erratic for some distance downwind into the flatter countryside. These conditions are dangerous to ascending and descending airplanes.
Daytime heating and nighttime cooling of the hilly slopes lead to day to night variations in the airflow, similar to the relationship between sea breeze and land breeze. At night, the sides of the hills cool through radiation of the heat. The air along the hills becomes cooler and denser, blowing down into the valley, drawn by gravity. This is known as a mountain breeze. If the slopes are covered with ice and snow, the mountain breeze will blow during the day, carrying the cold dense air into the warmer, barren valleys. The slopes of hills not covered by snow will be warmed during the day. The air that comes in contact with the warmed slopes becomes warmer and less dense and flows uphill. This is known as an anabatic wind or valley breeze.
Effect on precipitation
Orographic precipitation occurs on the windward side of mountains. It is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see foehn wind) on the descending and generally warming, leeward side where a rain shadow is observed.
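A worked illustration of this windward/leeward asymmetry, using textbook lapse rates as assumed values (saturated ascent near 6 K/km, dry descent at 9.8 K/km) and an idealized parcel that is saturated from the base of its climb:

```python
# Idealized foehn calculation: a saturated parcel rises over a 3 km ridge at a
# moist-adiabatic rate (~6 K/km, assumed), loses its moisture, then descends
# at the dry adiabatic rate (9.8 K/km), arriving warmer than it started.
RIDGE_KM = 3.0
T_WINDWARD_BASE = 20.0  # °C at sea level on the windward side (assumed)
MOIST_LAPSE = 6.0       # K/km during saturated ascent (approximate)
DRY_LAPSE = 9.8         # K/km during unsaturated descent

t_summit = T_WINDWARD_BASE - MOIST_LAPSE * RIDGE_KM  # 2.0 °C at the crest
t_lee_base = t_summit + DRY_LAPSE * RIDGE_KM         # 31.4 °C in the lee
print(f"windward base {T_WINDWARD_BASE} °C -> summit {t_summit} °C "
      f"-> lee base {t_lee_base:.1f} °C")
```

The roughly 11 °C of leeward warming in this toy example is what produces the warm, dry foehn conditions described above.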
In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts.
Effect on nature
Insects are swept along by the prevailing winds, while birds follow their own course. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. In the Great Plains, wind erosion of agricultural land is a significant problem, and is mainly driven by the prevailing wind. Because of this, wind barrier strips have been developed to minimize this type of erosion. The strips can be in the form of soil ridges, crop strips, crops rows, or trees which act as wind breaks. They are oriented perpendicular to the wind in order to be most effective. In regions with minimal vegetation, such as coastal and desert areas, transverse sand dunes orient themselves perpendicular to the prevailing wind direction, while longitudinal dunes orient themselves parallel to the prevailing winds.
| Physical sciences | Winds | Earth science |
786501 | https://en.wikipedia.org/wiki/Westerlies | Westerlies | The westerlies, anti-trades, or prevailing westerlies, are prevailing winds from the west toward the east in the middle latitudes between 30 and 60 degrees latitude. They originate from the high-pressure areas in the horse latitudes (about 30 degrees) and trend towards the poles and steer extratropical cyclones in this general manner. Tropical cyclones which cross the subtropical ridge axis into the westerlies recurve due to the increased westerly flow. The winds are predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere.
The westerlies are strongest in the winter hemisphere and at times when the pressure is lower over the poles, while they are weakest in the summer hemisphere and when pressures are higher over the poles. The westerlies are particularly strong, especially in the Southern Hemisphere (where they are also called the "brave west winds" as they strike Chile, Argentina, Tasmania and New Zealand), in areas where land is absent, because land amplifies the flow pattern, making the current more north–south oriented and slowing the westerlies. The strongest westerly winds in the middle latitudes occur in the Roaring Forties, between 40 and 50 degrees south latitude. The westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the Southern Hemisphere because of its vast oceanic expanse.
Behaviour
If the Earth were tidally locked to the Sun, solar heating would cause winds across the mid-latitudes to blow in a poleward direction, away from the subtropical ridge. However, the Coriolis effect caused by the rotation of Earth tends to deflect poleward-moving winds to the east (to the right in the Northern Hemisphere, to the left in the Southern Hemisphere). This is why winds across the Northern Hemisphere tend to blow from the southwest, while in the Southern Hemisphere they tend to blow from the northwest. When pressures are lower over the poles, the strength of the westerlies increases, which has the effect of warming the mid-latitudes. This occurs when the Arctic oscillation is positive, and during winter, when low pressure near the poles is stronger than it would be during the summer. When the oscillation is negative and pressures are higher over the poles, the flow is more meridional, blowing from the direction of the pole towards the Equator, which brings cold air into the mid-latitudes.
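The strength of this deflection is commonly quantified by the Coriolis parameter f = 2Ω sin(φ), a standard textbook formula not stated in the article itself. A minimal Python sketch shows it vanishing at the equator, growing toward the poles, and changing sign between hemispheres:

```python
# Coriolis parameter f = 2 * Omega * sin(latitude): zero at the equator,
# strongest at the poles; the sign flips between hemispheres.
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg: float) -> float:
    """Coriolis parameter (1/s) at the given latitude in degrees."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (0, 15, 45, 65, -45):
    print(f"{lat:>4}°: f = {coriolis_parameter(lat):+.2e} s^-1")
```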
Throughout the year, the westerlies vary in strength with the polar cyclone. As the cyclone reaches its maximum intensity in winter, the westerlies increase in strength. As the cyclone reaches its weakest intensity in summer, the westerlies weaken. An example of the impact of the westerlies is when dust plumes originating in the Gobi Desert combine with pollutants and spread large distances downwind, or eastward, into North America. The westerlies can be particularly strong, especially in the Southern Hemisphere, where there is less land in the middle latitudes to cause the progression of west-to-east winds to slow down. In the Southern Hemisphere, because of the stormy and cloudy conditions, it is usual to refer to the westerlies as the roaring forties, furious fifties, or shrieking sixties according to the varying degrees of latitude.
Impact on ocean currents
Due to persistent winds from west to east on the poleward sides of the subtropical ridges located in the Atlantic and Pacific oceans, ocean currents are driven in a similar manner in both hemispheres. The currents in the Northern Hemisphere are weaker than those in the Southern Hemisphere due to the differences in strength between the westerlies of each hemisphere. The process of western intensification causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary of an ocean. These western ocean currents transport warm, tropical water polewards toward the polar regions. Ships crossing both oceans have taken advantage of the ocean currents for centuries.
The Antarctic Circumpolar Current (ACC), or the West Wind Drift, is an ocean current that flows from west to east around Antarctica. The ACC is the dominant circulation feature of the Southern Ocean and, at approximately 125 Sverdrups, the largest ocean current. In the northern hemisphere, the Gulf Stream, part of the North Atlantic Subtropical Gyre, has led to the development of strong cyclones of all types at the base of the Westerlies, both within the atmosphere and within the ocean. The Kuroshio (Japanese for "Black Tide") is a strong western boundary current in the western north Pacific Ocean, similar to the Gulf Stream, which has also contributed to the depth of ocean storms in that region.
Extratropical cyclones
An extratropical cyclone is a synoptic scale low-pressure weather system that has neither tropical nor polar characteristics, being connected with fronts and horizontal gradients in temperature and dew point otherwise known as "baroclinic zones".
The descriptor "extratropical" refers to the fact that this type of cyclone generally occurs outside of the tropics, in the middle latitudes of the planet, where the Westerlies steer the system generally from west to east. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or "post-tropical cyclones" where extratropical transition has occurred, and are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena which along with anticyclones, drive the weather over much of the Earth.
Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient, they can sometimes become barotropic late in their life cycle when the temperature distribution around the cyclone becomes fairly uniform along the radius from the center of low pressure. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over warm waters and develops central convection, which warms its core and causes temperature and dewpoint gradients near their centers to fade.
Interaction with tropical cyclones
When a tropical cyclone crosses the subtropical ridge axis, normally through a break in the high-pressure area caused by a system traversing the Westerlies, its general track around the high-pressure area is deflected significantly by winds moving towards the general low-pressure area to its north. When the cyclone track becomes strongly poleward with an eastward component, the cyclone has begun recurvature, entering the Westerlies. A typhoon moving through the Pacific Ocean towards Asia, for example, will recurve offshore of Japan to the north, and then to the northeast, if the typhoon encounters southwesterly winds (blowing northeastward) around a low-pressure system passing over China or Siberia. Many tropical cyclones are eventually forced toward the northeast by extratropical cyclones in this manner, which move from west to east to the north of the subtropical ridge. An example of a tropical cyclone in recurvature was Typhoon Ioke in 2006, which took a similar trajectory.
| Physical sciences | Winds | Earth science |
788091 | https://en.wikipedia.org/wiki/Psychological%20trauma | Psychological trauma | Psychological trauma (also known as mental trauma, psychiatric trauma, emotional damage, or psychotrauma) is an emotional response caused by severe distressing events, such as bodily injury, sexual violence, or other threats to the life of the subject or their loved ones; indirect exposure, such as from watching television news, may be extremely distressing and can produce an involuntary and possibly overwhelming physiological stress response, but does not always produce trauma per se. Examples of distressing events include violence, rape, or a terrorist attack.
Short-term reactions such as psychological shock and psychological denial are typically followed. Long-term reactions and effects include flashbacks, panic attacks, insomnia, nightmare disorder, difficulties with interpersonal relationships, post-traumatic stress disorder (PTSD), and brief psychotic disorder. Physical symptoms including migraines, hyperventilation, hyperhidrosis, and nausea are often associated with or made worse by trauma.
People react to similar events differently. Most people who experience a potentially traumatic event do not become psychologically traumatized, though they may be distressed and experience suffering. Some will develop PTSD after exposure to a traumatic event, or series of events. This discrepancy in risk rate can be attributed to protective factors some individuals have, that enable them to cope with difficult events, including temperamental and environmental factors, such as resilience and willingness to seek help.
Psychotraumatology is the study of psychological trauma.
Signs and symptoms
People who experience trauma often have problems and difficulties afterwards. The severity of these symptoms depends on the person, the types of trauma involved, and the support and treatment they receive from others. The range of reactions to trauma can be wide and varied, and differ in severity from person to person.
After a traumatic experience, a person may re-experience the trauma mentally and physically. For example, the sound of a motorcycle engine may cause intrusive thoughts or a sense of re-experiencing a traumatic experience that involved a similar sound e.g. gunfire. Sometimes a benign stimulus (e.g. noise from a motorcycle) may get connected in the mind with the traumatic experience. This process is called traumatic coupling. In this process, the benign stimulus becomes a trauma reminder, also called a trauma trigger. These can produce uncomfortable and even painful feelings. Re-experiencing can damage people's sense of safety, self, self-efficacy, as well as their ability to regulate emotions and navigate relationships. They may turn to psychoactive drugs, including alcohol, to try to escape or dampen the feelings. These triggers cause flashbacks, which are dissociative experiences where the person feels as though the events are recurring. Flashbacks can range from distraction to complete dissociation or loss of awareness of the current context. Re-experiencing of symptoms is a sign that the body and mind are actively struggling to cope with the traumatic experience.
Triggers and cues act as reminders of the trauma and can cause anxiety and other associated emotions. Often the person can be completely unaware of what these triggers are. In many cases, this may lead a person with a traumatic disorder to engage in disruptive behaviors or self-destructive coping mechanisms, often without being fully aware of the nature or causes of their own actions. Panic attacks are an example of a psychosomatic response to such emotional triggers.
Consequently, intense feelings of anger may frequently surface, sometimes in inappropriate or unexpected situations, as danger may always seem to be present due to re-experiencing past events. Upsetting memories such as images, thoughts, or flashbacks may haunt the person, and nightmares may be frequent. Insomnia may occur as lurking fears and insecurity keep the person vigilant and on the lookout for danger, both day and night. Disordered personal finances and debt are common among trauma-affected people. Trauma not only causes changes in one's daily functioning, but can also lead to morphological changes. Such epigenetic changes can be passed on to the next generation, making genetics one of the components of psychological trauma. However, some people are born with, or later develop, protective factors such as genetics that help lower their risk of psychological trauma.
Traumatic events are sometimes constantly experienced as if they were happening in the present, preventing the subject from gaining perspective on the experience. This can produce a pattern of prolonged periods of acute arousal punctuated by periods of physical and mental exhaustion. In time, emotional exhaustion may set in, leading to distraction, and clear thinking may be difficult or impossible. Emotional detachment, as well as dissociation (depersonalization or derealization), can frequently occur. Dissociating from the painful emotion includes numbing all emotion, and the person may seem emotionally flat, preoccupied, distant, or cold. Exposure to and re-experiencing of trauma can cause neurophysiological changes such as slowed myelination, abnormalities in synaptic pruning, shrinking of the hippocampus, and cognitive and affective impairment. This is significant in brain-scan studies assessing higher-order function in children and youth who were in vulnerable environments.
Some traumatized people may feel permanently damaged when trauma symptoms do not go away and they do not believe their situation will improve. This can lead to feelings of despair, transient paranoid ideation, loss of self-esteem, profound emptiness, suicidality, and frequently, depression. If important aspects of the person's self and world understanding have been violated, the person may call their own identity into question. Often despite their best efforts, traumatized parents may have difficulty assisting their child with emotion regulation, attribution of meaning, and containment of post-traumatic fear in the wake of the child's traumatization, leading to adverse consequences for the child. In such instances, seeking counselling in appropriate mental health services is in the best interests of both the child and the parent(s).
Those who experience trauma often find it hard to speak about. The event in question might recur to them in a dream or another medium, but it is rare for them to speak of it.
Causes
Situational trauma
Trauma can be caused by human-made, technological and natural disasters, including war, abuse, violence, vehicle collisions, or medical emergencies.
An individual's response to psychological trauma can be varied based on the type of trauma, as well as socio-demographic and background factors.
There are several behavioral responses commonly used towards stressors including the proactive, reactive, and passive responses. Proactive responses include attempts to address and correct a stressor before it has a noticeable effect on lifestyle. Reactive responses occur after the stress and possible trauma has occurred and is aimed more at correcting or minimizing the damage of a stressful event. A passive response is often characterized by an emotional numbness or ignorance of a stressor.
There is also a distinction between trauma induced by recent situations and long-term trauma which may have been buried in the unconscious from past situations such as child abuse. Trauma is sometimes overcome through healing; in some cases this can be achieved by recreating or revisiting the origin of the trauma under more psychologically safe circumstances, such as with a therapist. More recently, awareness of the consequences of climate change is seen as a source of trauma as individuals contemplate future events as well as experience climate change related disasters. Emotional experiences within these contexts are increasing, and collective processing and engagement with these emotions can lead to increased resilience and post-traumatic growth, as well as a greater sense of belongingness. These outcomes are protective against the devastating impacts of psychological trauma.
Stress disorders
All psychological traumas originate from stress, a physiological response to an unpleasant stimulus. Long-term stress increases the risk of poor mental health and mental disorders, which can be attributed to secretion of glucocorticoids for a long period of time. Such prolonged exposure causes many physiological dysfunctions such as suppression of the immune system and increased blood pressure. Not only does it affect the body physiologically, but a morphological change in the hippocampus also takes place. Studies have shown that extreme stress early in life can disrupt normal development of the hippocampus and impact its functions in adulthood, and they show a correlation between the size of the hippocampus and one's susceptibility to stress disorders. In times of war, psychological trauma has been known as shell shock or combat stress reaction. Psychological trauma may cause an acute stress reaction which may lead to post-traumatic stress disorder (PTSD). PTSD emerged as the label for this condition after the Vietnam War, in which many veterans returned to their respective countries demoralized and sometimes addicted to psychoactive substances.
The symptoms of PTSD must persist for at least one month for a diagnosis to be made. The main symptoms of PTSD fall into four categories: trauma (i.e. intense fear), reliving (i.e. flashbacks), avoidance behavior (i.e. emotional numbing), and hypervigilance (i.e. continuous scanning of the environment for danger). Research shows that about 60% of the US population report having experienced at least one traumatic symptom in their lives, but only a small proportion actually develop PTSD. There is a correlation between the risk of PTSD and whether or not the act was inflicted deliberately by the offender. Psychological trauma is treated with therapy and, if indicated, psychotropic medications.
The term continuous posttraumatic stress disorder (CTSD) was introduced into the trauma literature by Gill Straker (1987). It was originally used by South African clinicians to describe the effects of exposure to frequent, high levels of violence usually associated with civil conflict and political repression. The term is also applicable to the effects of exposure to contexts in which gang violence and crime are endemic as well as to the effects of ongoing exposure to life threats in high-risk occupations such as police, fire, and emergency services.
As one of the processes of treatment, confrontation with their sources of trauma plays a crucial role. While debriefing people immediately after a critical incident has not been shown to reduce incidence of PTSD, coming alongside people experiencing trauma in a supportive way has become standard practice.
The impact of PTSD on children is to a degree unknown, but education on coping mechanisms has been shown to improve the lives of children who have undergone a traumatic event.
Moral injury
Moral injury is distress such as guilt or shame following a moral transgression. There are many other definitions, some based on different models of causality. Moral injury is associated with post-traumatic stress disorder but is distinguished from it: moral injury is associated with guilt and shame, while PTSD is correlated with fear and anxiety.
Vicarious trauma
Normally, hearing about or seeing a recording of an event, even if distressing, does not cause trauma; however, an exception is made to the diagnostic criteria for work-related exposures. Vicarious trauma affects workers who witness their clients' trauma. It is more likely to occur in situations where trauma-related work is the norm rather than the exception. Listening with empathy to the clients generates feeling, and seeing oneself in clients' trauma may compound the risk for developing trauma symptoms. Trauma may also result if workers witness situations that happen in the course of their work (e.g. violence in the workplace or reviewing violent video tapes). Risk increases with exposure and with the absence of help-seeking protective factors and pre-preparation of preventive strategies. Individuals who have a personal history of trauma are also at increased risk for developing vicarious trauma. Vicarious trauma can lead workers to develop more negative views of themselves, others, and the world as a whole, which can compromise their quality of life and ability to work effectively.
Theoretical models
Shattered assumptions theory
Janoff-Bulman theorizes that people generally hold three fundamental assumptions about the world that are built and confirmed over years of experience: the world is benevolent, the world is meaningful, and I am worthy. According to shattered assumptions theory, some extreme events "shatter" an individual's worldview by severely challenging and breaking these assumptions about the world and oneself. Once one has experienced such trauma, it is necessary to create new assumptions or modify the old ones in order to recover. Therefore, the negative effects of the trauma are simply related to our worldviews, and if we repair these views, we will recover from the trauma.
In psychodynamics
Psychodynamic viewpoints are controversial, but have been shown to have utility therapeutically.
French neurologist Jean-Martin Charcot argued in the 1890s that psychological trauma was the origin of all instances of the mental illness known as hysteria. Charcot's "traumatic hysteria" often manifested as paralysis that followed a physical trauma, typically years later after what Charcot described as a period of "incubation". Sigmund Freud, Charcot's student and the father of psychoanalysis, examined the concept of psychological trauma throughout his career. Jean Laplanche has given a general description of Freud's understanding of trauma, which varied significantly over the course of Freud's career: "An event in the subject's life, defined by its intensity, by the subject's incapacity to respond adequately to it and by the upheaval and long-lasting effects that it brings about in the psychical organization".
The French psychoanalyst Jacques Lacan claimed that what he called "The Real" had a traumatic quality external to symbolization. As an object of anxiety, Lacan maintained that The Real is "the essential object which isn't an object any longer, but this something faced with which all words cease and all categories fail, the object of anxiety par excellence".
Fred Alford, citing the work of object relations theorist Donald Winnicott, uses the concept of the inner other, an internal representation of the social world with which one converses internally and which is generated through interactions with others. He posits that the inner other is damaged by trauma but can be repaired by conversations with others, such as therapists. He relates the concept of the inner other to the work of Albert Camus, viewing the inner other as that which removes the absurd. Alford notes how trauma damages trust in social relations due to fear of exploitation, and argues that culture and social relations can help people recover from trauma.
Diana Fosha, a pioneer of the modern psychodynamic perspective, also argues that social relations can help people recover from trauma, but specifically refers to attachment theory and the attachment dynamic of the therapeutic relationship. Fosha argues that the sense of emotional safety and co-regulation that occurs in a psychodynamically oriented therapeutic relationship acts as the secure attachment necessary to allow a client to safely and effectively experience and process their trauma.
Diagnosis
As "trauma" adopted a more widely defined scope, traumatology as a field developed a more interdisciplinary approach. This is in part due to the field's diverse professional representation including: psychologists, medical professionals, and lawyers. As a result, findings in this field are adapted for various applications, from individual psychiatric treatments to sociological large-scale trauma management. While the field has adopted a number of diverse methodological approaches, many pose their own limitations in practical application.
The experience and outcomes of psychological trauma can be assessed in a number of ways. Within the context of a clinical interview, the risk of imminent danger to the self or others is important to address but is not the focus of assessment. In most cases, it will not be necessary to contact emergency services (e.g., medical, psychiatric, law enforcement) to ensure the individual's safety; members of the individual's social support network are much more critical.
Understanding and accepting the psychological state of an individual is paramount. There are many misconceptions about what it means for a traumatized individual to be in psychological crisis. These are times when an individual is in an inordinate amount of pain and incapable of self-comfort. If treated humanely and respectfully, the individual is less likely to resort to self-harm. In these situations it is best to provide a supportive, caring environment and to communicate to the individual that no matter the circumstance, they will be taken seriously rather than treated as delusional. It is vital for the assessor to understand that what is going on in the traumatized person's head is valid and real. If deemed appropriate, the assessing clinician may proceed by inquiring about both the traumatic event and the outcomes experienced (e.g., post-traumatic symptoms, dissociation, substance abuse, somatic symptoms, psychotic reactions). Such inquiry occurs within the context of established rapport and is completed in an empathic, sensitive, and supportive manner. The clinician may also inquire about possible relational disturbance, such as alertness to interpersonal danger, abandonment issues, and the need for self-protection via interpersonal control. Through discussion of interpersonal relationships, the clinician is better able to assess the individual's ability to enter and sustain a clinical relationship.
During assessment, individuals may exhibit activation responses in which reminders of the traumatic event trigger sudden feelings (e.g., distress, anxiety, anger), memories, or thoughts relating to the event. Because individuals may not yet be capable of managing this distress, it is necessary to determine how the event can be discussed in such a way that will not "retraumatize" the individual. It is also important to take note of such responses, as these responses may aid the clinician in determining the intensity and severity of possible post-traumatic stress as well as the ease with which responses are triggered. Further, it is important to note the presence of possible avoidance responses. Avoidance responses may involve the absence of expected activation or emotional reactivity as well as the use of avoidance mechanisms (e.g., substance use, effortful avoidance of cues associated with the event, dissociation).
In addition to monitoring activation and avoidance responses, clinicians carefully observe the individual's strengths or difficulties with affect regulation (i.e., affect tolerance and affect modulation). Such difficulties may be evidenced by mood swings, brief yet intense depressive episodes, or self-mutilation. The information gathered through observation of affect regulation will guide the clinician's decisions regarding the individual's readiness to partake in various therapeutic activities.
Though assessment of psychological trauma may be conducted in an unstructured manner, assessment may also involve the use of a structured interview. Such interviews might include the Clinician-Administered PTSD Scale, Acute Stress Disorder Interview, Structured Interview for Disorders of Extreme Stress, Structured Clinical Interview for DSM-IV Dissociative Disorders – Revised, and Brief Interview for Posttraumatic Disorders.
Lastly, assessment of psychological trauma might include the use of self-administered psychological tests. Individual scores on such tests are compared to normative data in order to determine how the individual's level of functioning compares to others in a sample representative of the general population. Psychological testing might include the use of generic tests (e.g., MMPI-2, MCMI-III, SCL-90-R) to assess non-trauma-specific symptoms as well as difficulties related to personality. In addition, psychological testing might include the use of trauma-specific tests to assess post-traumatic outcomes. Such tests might include the Posttraumatic Stress Diagnostic Scale, Davidson Trauma Scale, Detailed Assessment of Posttraumatic Stress, Trauma Symptom Inventory, Trauma Symptom Checklist for Children, Traumatic Life Events Questionnaire, and Trauma-related Guilt Inventory.
Children are assessed through activities and the therapeutic relationship. Such activities include play genograms, sand worlds, coloring feelings, self and kinetic family drawings, symbol work, dramatic puppet play, storytelling, Briere's TSCC, and others.
Definition
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) defines trauma as the symptoms that occur following exposure to an event (i.e., a traumatic event) that involves actual or threatened death, serious injury, or sexual violence. This exposure could come in the form of experiencing or witnessing the event, or learning that an extremely violent or accidental event was experienced by a loved one. Trauma symptoms may come in the form of intrusive memories, dreams, or flashbacks; avoidance of reminders of the traumatic event; negative thoughts and feelings; or increased alertness or reactivity. Memories associated with trauma are typically explicit, coherent, and difficult to forget. Due to the complexity of the interaction between traumatic event occurrence and trauma symptomatology, a person's distress response to aversive details of a traumatic event may involve intense fear or helplessness, but varies according to the context. In children, trauma symptoms can manifest in the form of disorganized or agitated behaviors.
Trauma can be caused by a wide variety of events, but there are a few common aspects. There is frequently a violation of the person's core assumptions about the world and their human rights, putting the person in a state of extreme confusion and insecurity. This is seen when institutions depended upon for survival violate, humiliate, betray, or cause major losses or separations, instead of evoking aspects like positive self-worth, safe boundaries, and personal freedom.
Psychologically traumatic experiences often involve physical trauma that threatens one's survival and sense of security. Typical causes and dangers of psychological trauma include harassment; embarrassment; abandonment; abusive relationships; rejection; co-dependence; physical assault; sexual abuse; partner battery; employment discrimination; police brutality; judicial corruption and misconduct; bullying; paternalism; domestic violence; indoctrination; having an alcoholic parent; the threat or the witnessing of violence (particularly in childhood); life-threatening medical conditions; and medication-induced trauma. Catastrophic natural disasters such as earthquakes and volcanic eruptions, large-scale transportation accidents, house or domestic fires, motor collisions, mass interpersonal violence like war, terrorist attacks or other mass victimization like sex trafficking, and being taken as a hostage or being kidnapped can also cause psychological trauma. Long-term exposure to situations such as extreme poverty, or to other forms of abuse such as verbal abuse, can exist independently of physical trauma but still generate psychological trauma.
Some theories suggest childhood trauma can increase one's risk for mental disorders including post-traumatic stress disorder (PTSD), depression, and substance abuse. Childhood adversity is associated with neuroticism during adulthood.
Parts of the brain in a growing child develop in a sequential and hierarchical order, from least complex to most complex. The brain's neurons change in response to constant external signals and stimulation, receiving and storing new information. This allows the brain to continually respond to its surroundings and promote survival. The five traditional senses (sight, hearing, taste, smell, and touch) contribute to the developing brain structure and its function.
Infants and children begin to create internal representations of their external environment, and in particular key attachment relationships, shortly after birth. Violent and victimizing attachment figures impact infants' and young children's internal representations. The more frequently a specific pattern of brain neurons is activated, the more permanent the internal representation associated with the pattern becomes. This causes sensitization in the brain towards the specific neural network. Because of this sensitization, the neural pattern can be activated by progressively weaker external stimuli.
Child abuse tends to have the most complications and long-term effects of all forms of trauma, because it occurs during the most sensitive and critical stages of psychological development. It can lead to violent behavior, possibly as extreme as serial murder. For example, Hickey's Trauma-Control Model suggests that "childhood trauma for serial murderers may serve as a triggering mechanism resulting in an individual's inability to cope with the stress of certain events."
Often, psychological aspects of trauma are overlooked even by health professionals: "If clinicians fail to look through a trauma lens and to conceptualize client problems as related possibly to current or past trauma, they may fail to see that trauma victims, young and old, organize much of their lives around repetitive patterns of reliving and warding off traumatic memories, reminders, and affects." Biopsychosocial models offer a broader view of health problems than biomedical models.
Effects
Evidence suggests that a majority of people who experience severe trauma in adulthood will experience enduring personality change. Personality changes include guilt, distrust, impulsiveness, aggression, avoidance, obsessive behaviour, emotional numbness, loss of interest, hopelessness and altered self-perception.
Treatment
A number of psychotherapy approaches have been designed with the treatment of trauma in mind, including EMDR, progressive counting, somatic experiencing, biofeedback, Internal Family Systems Therapy, sensorimotor psychotherapy, and Emotional Freedom Technique (EFT). Trauma-informed care provides a framework for any person in any discipline or context to promote healing, or at least avoid re-traumatization. A 2018 systematic review provided moderate evidence that Eye Movement Desensitization and Reprocessing (EMDR) is effective in reducing PTSD and depression symptoms and increases the likelihood of patients no longer meeting the criteria for PTSD.
There is a large body of empirical support for the use of cognitive behavioral therapy for the treatment of trauma-related symptoms, including post-traumatic stress disorder. Institute of Medicine guidelines identify cognitive behavioral therapies as the most effective treatments for PTSD. Two of these cognitive behavioral therapies, prolonged exposure and cognitive processing therapy, are being disseminated nationally by the Department of Veterans Affairs for the treatment of PTSD. A 2010 Cochrane review found that trauma-focused cognitive behavioral therapy was effective for individuals with acute traumatic stress symptoms when compared to waiting list and supportive counseling. Seeking Safety is another type of cognitive behavioral therapy that focuses on learning safe coping skills for co-occurring PTSD and substance use problems. While some sources highlight Seeking Safety as effective with strong research support, others have suggested that it did not lead to improvements beyond usual treatment. A review from 2014 showed that a combination of treatments involving dialectical behavior therapy (DBT), often used for borderline personality disorder, and exposure therapy is highly effective in treating psychological trauma. If, however, psychological trauma has caused dissociative disorders, complex PTSD, or brief psychotic disorder, the trauma model approach (also known as phase-oriented treatment of structural dissociation) has been shown to work better than the simple cognitive approach. Studies funded by pharmaceutical companies have also shown that medications such as the newer antidepressants are effective when used in combination with other psychological approaches. At present, the selective serotonin reuptake inhibitor (SSRI) antidepressants sertraline (Zoloft) and paroxetine (Paxil) are the only medications approved by the Food and Drug Administration (FDA) in the United States to treat PTSD. Other options for pharmacotherapy include serotonin-norepinephrine reuptake inhibitor (SNRI) antidepressants and antipsychotic medications, though none have been FDA approved.
Trauma therapy allows the processing of trauma-related memories and supports growth towards more adaptive psychological functioning. It helps to develop positive coping instead of negative coping and allows the individual to integrate upsetting, distressing material (thoughts, feelings, and memories) and to resolve it internally. It also aids in the growth of personal skills like resilience, ego regulation, and empathy.
Processes involved in trauma therapy are:
Psychoeducation: Information dissemination and education about vulnerabilities and adoptable coping mechanisms.
Emotional regulation: Identifying, discriminating, countering, and grounding thoughts and emotions, moving them from internal construction to an external representation.
Cognitive processing: Transforming negative perceptions and beliefs about self, others, and environment into positive ones through cognitive reconsideration or reframing.
Trauma processing: Systematic desensitization, response activation and counter-conditioning, titrated extinction of emotional response, deconstructing disparity (emotional vs. reality state), and resolution of traumatic material (in theory, to a state in which triggers no longer produce harmful distress and the individual is able to express relief).
Emotional processing: Reconstructing perceptions, beliefs, and erroneous expectations, habituating new life contexts for automatically activated trauma-related fears, and providing crisis cards with coded emotions and appropriate cognitions. (This stage is only initiated in the pre-termination phase, based on the clinical assessment and judgement of the mental health professional.)
Experiential processing: Visualization of achieved relief state and relaxation methods.
A number of complementary approaches to trauma treatment have been implicated as well, including yoga and meditation. There has been recent interest in developing trauma-sensitive yoga practices, but the actual efficacy of yoga in reducing the effects of trauma needs more exploration.
In health and social care settings, a trauma-informed approach means that care is underpinned by understandings of trauma and its far-reaching implications. Trauma is widespread. For example, 26% of participants in the Adverse Childhood Experiences (ACEs) study were survivors of one ACE and 12.5% were survivors of four or more ACEs. A trauma-informed approach acknowledges the high rates of trauma and means that care providers treat every person as if they might be a survivor of trauma. Measurement of the effectiveness of a universal trauma-informed approach is in its early stages and is largely based in theory and epidemiology.
Trauma-informed teaching practice is an educative approach for migrant children from war-torn countries, who have typically experienced complex trauma; the number of such children entering Canadian schools has led some school jurisdictions to consider new classroom approaches to assist these pupils. Along with complex trauma, these students often have experienced interrupted schooling due to the migration process, and as a consequence may have limited literacy skills in their first language. One study of a Canadian secondary school classroom, as told through the journal entries of a student teacher, showed how Blaustein and Kinniburgh's ARC (attachment, regulation and competency) framework was used to support newly arrived refugee students from war zones. Tweedie et al. (2017) describe how key components of the ARC framework, such as establishing consistency in classroom routines, assisting students to identify and self-regulate emotional responses, and enabling student personal goal achievement, are practically applied in one classroom where students have experienced complex trauma. The authors encourage teachers and schools to avoid viewing such pupils through a deficit lens, and suggest ways schools can structure teaching and learning environments that take into account the extreme stresses these students have encountered.
Society and culture
Some people, and many self-help books, use the word trauma broadly, to refer to any unpleasant experience, even if the affected person has a psychologically healthy response to the experience. This imprecise language may promote the medicalization of normal human behaviors (e.g., grief after a death) and make discussions of psychological trauma more complex, but it might also encourage people to respond with compassion to the distress and suffering of others.
| Biology and health sciences | Health and fitness: General | Health |
788093 | https://en.wikipedia.org/wiki/Major%20trauma | Major trauma | Major trauma is any injury that has the potential to cause prolonged disability or death. There are many causes of major trauma, blunt and penetrating, including falls, motor vehicle collisions, stabbing wounds, and gunshot wounds. Depending on the severity of injury, quick management and transportation to an appropriate medical facility (called a trauma center) may be necessary to prevent loss of life or limb. The initial assessment is critical, and involves a physical evaluation and also may include the use of imaging tools to determine the types of injuries accurately and to formulate a course of treatment.
In 2002, unintentional and intentional injuries were the fifth and seventh leading causes of deaths worldwide, accounting for 6.23% and 2.84% of all deaths. For research purposes the definition often is based on an Injury Severity Score (ISS) of greater than 15.
Classification
Injuries generally are classified by either severity, the location of damage, or a combination of both. Trauma also may be classified by demographic group, such as age or gender. It also may be classified by the type of force applied to the body, such as blunt trauma or penetrating trauma. For research purposes injury may be classified using the Barell matrix, which is based on ICD-9-CM. The purpose of the matrix is the international standardization of the classification of trauma. Major trauma sometimes is classified by body area: 40% of injuries are polytrauma, 30% head injuries, 20% chest trauma, 10% abdominal trauma, and 2% extremity trauma.
Various scales exist to provide a quantifiable metric to measure the severity of injuries. The value may be used for triaging a patient or for statistical analysis. Injury scales measure damage to anatomical parts, physiological values (blood pressure etc.), comorbidities, or a combination of those. The Abbreviated Injury Scale and the Glasgow Coma Scale are used commonly to quantify injuries for the purpose of triaging and allow a system to monitor or "trend" a patient's condition in a clinical setting. The data also may be used in epidemiological investigations and for research purposes.
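To make these scores concrete, the Injury Severity Score mentioned earlier is derived arithmetically from Abbreviated Injury Scale (AIS) grades: the body is divided into six regions, each region takes the AIS grade of its worst injury, and the three highest regional grades are squared and summed. The following Python sketch shows this commonly described calculation; the region labels and example grades are hypothetical, and this is illustrative only, not a clinical tool.

# Illustrative sketch (not a clinical tool): computing an Injury Severity
# Score (ISS) from Abbreviated Injury Scale (AIS) grades. AIS grades run
# from 1 (minor) to 6 (currently untreatable); the ISS is the sum of the
# squares of the three highest regional grades, giving a range of 1-75.
# By convention, any AIS grade of 6 sets the ISS to 75.

def injury_severity_score(region_scores: dict) -> int:
    """region_scores maps each of the six ISS body regions to the
    highest AIS grade recorded there (0 if uninjured)."""
    if any(grade == 6 for grade in region_scores.values()):
        return 75  # conventional cap for a currently untreatable injury
    top_three = sorted(region_scores.values(), reverse=True)[:3]
    return sum(grade ** 2 for grade in top_three)

# Hypothetical patient: chest AIS 4, head AIS 3, extremity AIS 2
example = {"head_neck": 3, "face": 0, "chest": 4,
           "abdomen": 0, "extremities": 2, "external": 1}
print(injury_severity_score(example))  # 16 + 9 + 4 = 29

On this convention, the research definition quoted earlier (ISS greater than 15) is met, for example, by any patient with at least one AIS 4 injury, since 4 squared alone is 16.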
Approximately 2% of those who have experienced significant trauma have a spinal cord injury.
Causes
Injuries may be caused by any combination of external forces that act physically against the body. The leading causes of traumatic death are blunt trauma, motor vehicle collisions, and falls, followed by penetrating trauma such as stab wounds or impaled objects. Subsets of blunt trauma are both the first and second leading causes of traumatic death.
For statistical purposes, injuries are classified as either intentional such as suicide, or unintentional, such as a motor vehicle collision. Intentional injury is a common cause of traumas. Penetrating trauma is caused when a foreign body such as a bullet or a knife enters the body tissue, creating an open wound. In the United States, most deaths caused by penetrating trauma occur in urban areas and 80% of these deaths are caused by firearms. Blast injury is a complex cause of trauma because it commonly includes both blunt and penetrating trauma, and also may be accompanied by a burn injury. Trauma also may be associated with a particular activity, such as an occupational or sports injury.
Pathophysiology
The body responds to traumatic injury both systemically and at the injury site. This response attempts to protect vital organs such as the liver, to allow further cell duplication and to heal the damage. The healing time of an injury depends on various factors including sex, age, and the severity of injury.
The symptoms of injury may manifest in many different ways, including:
Altered mental status
Fever
Increased heart rate
Generalized edema
Increased cardiac output
Increased rate of metabolism
Various organ systems respond to injury to restore homeostasis by maintaining perfusion to the heart and brain. Inflammation after injury occurs to protect against further damage and starts the healing process. Prolonged inflammation may cause multiple organ dysfunction syndrome or systemic inflammatory response syndrome. Immediately after injury, the body increases production of glucose through gluconeogenesis and its consumption of fat via lipolysis. Next, the body tries to replenish its energy stores of glucose and protein via anabolism. In this state the body will temporarily increase its maximum expenditure for the purpose of healing injured cells.
Diagnosis
The initial assessment is critical in determining the extent of injuries and what will be needed to manage an injury, and for treating immediate life threats.
Physical examination
Generally, the physical examination is performed in a systematic way that first checks for any immediate life threats (primary survey) and then proceeds to a more in-depth examination (secondary survey). The secondary examination may occur during transportation or upon arrival at the hospital, and consists of a systematic assessment of the abdominal, pelvic, and thoracic areas, a complete inspection of the body surface to find all injuries, and a neurological examination. Injuries that manifest themselves later may be missed during the initial assessment, such as when a patient is brought into a hospital's emergency department.
Imaging
Persons with major trauma commonly have chest and pelvic x-rays taken, and, depending on the mechanism of injury and presentation, a focused assessment with sonography for trauma (FAST) exam to check for internal bleeding. For those with relatively stable blood pressure, heart rate, and sufficient oxygenation, CT scans are useful. Full-body CT scans, known as pan-scans, improve the survival rate of those who have suffered major trauma. These scans use intravenous injection of a radiocontrast agent, but not oral administration. There are concerns that intravenous contrast administration in trauma situations, without confirming adequate renal function, may cause damage to the kidneys, but this does not appear to be significant.
In the U.S., CT or MRI scans are performed on 15% of those with trauma in emergency departments. Where blood pressure is low or the heart rate is increased, likely from bleeding in the abdomen, immediate surgery bypassing a CT scan is recommended. Modern 64-slice CT scans are able to rule out, with a high degree of accuracy, significant injuries to the neck following blunt trauma.
Surgical techniques
Surgical techniques, using a tube or catheter to drain fluid from the peritoneum, chest, or the pericardium around the heart, often are used in cases of severe blunt trauma to the chest or abdomen, especially when a person is experiencing early signs of shock. In those with low blood pressure, likely because of bleeding in the abdominal cavity, cutting through the abdominal wall surgically is indicated.
Prevention
By identifying risk factors present within a community and creating solutions to decrease the incidence of injury, trauma referral systems may help to enhance the overall health of a population. Injury prevention strategies are commonly used to prevent injuries in children, who are a high-risk population. Injury prevention strategies generally involve educating the general public about specific risk factors and developing strategies to avoid or reduce injuries. Legislation intended to prevent injury typically involves seatbelts, child car seats, helmets, alcohol control, and increased enforcement. Other controllable factors, such as the use of drugs including alcohol or cocaine, increase the risk of trauma by increasing the likelihood of traffic collisions, violence, and abuse occurring. Prescription drugs such as benzodiazepines may increase the risk of trauma in elderly people.
The care of acutely injured people in a public health system requires the involvement of bystanders, community members, health care professionals, and health care systems. It encompasses pre-hospital trauma assessment and care by emergency medical services personnel, emergency department assessment, treatment, stabilization, and in-hospital care among all age groups. An established trauma system network is also an important component of community disaster preparedness, facilitating the care of people who have been involved in disasters that cause large numbers of casualties, such as earthquakes.
Management
Pre-hospital
The pre-hospital use of stabilization techniques improves the chances of a person surviving the journey to the nearest trauma-equipped hospital. Emergency medical services determine which people need treatment at a trauma center and provide primary stabilization by checking and treating airway, breathing, and circulation, as well as assessing for disability and exposing the person to check for other injuries.
Spinal motion restriction by securing the neck with a cervical collar and placing the person on a long spine board was of high importance in the pre-hospital setting, but due to a lack of evidence to support its use, the practice is losing favor. Instead, it is recommended that more selective criteria, such as age and neurological deficits, be met to indicate the need for these adjuncts. This may be accomplished with other medical transport devices, such as a Kendrick extrication device, before moving the person. It is important to quickly control severe bleeding with direct pressure to the wound and to consider the use of hemostatic agents or tourniquets if the bleeding continues. Conditions such as impending airway obstruction, an enlarging neck hematoma, or unconsciousness require intubation. It is unclear, however, whether this is best performed before reaching the hospital or in the hospital.
Rapid transportation of severely injured patients improves the outcome in trauma. Helicopter EMS transport reduces mortality compared to ground-based transport in adult trauma patients. Before arrival at the hospital, the availability of advanced life support does not greatly improve the outcome for major trauma when compared to the administration of basic life support. Evidence is inconclusive in determining support for pre-hospital intravenous fluid resuscitation while some evidence has found it may be harmful. Hospitals with designated trauma centers have improved outcomes when compared to hospitals without them, and outcomes may improve when persons who have experienced trauma are transferred directly to a trauma center.
Improvements in pre-hospital care have led to "unexpected survivors", patients who survive trauma when they would previously have been expected to die. However, these patients may struggle to rehabilitate.
In-hospital
Management of those with trauma often requires the help of many healthcare specialists including physicians, nurses, respiratory therapists, and social workers. Cooperation allows many actions to be completed at once. Generally, the first step of managing trauma is to perform a primary survey that evaluates a person's airway, breathing, circulation, and neurologic status. These steps may happen simultaneously or depend on the most pressing concern such as a tension pneumothorax or major arterial bleed. The primary survey generally includes assessment of the cervical spine, though clearing it is often not possible until after imaging, or the person has improved. After immediate life threats are controlled, a person is either moved into an operating room for immediate surgical correction of the injuries, or a secondary survey is performed that is a more detailed head-to-toe assessment of the person.
Indications for intubation include airway obstruction, inability to protect the airway, and respiratory failure. Examples of these indications include penetrating neck trauma, expanding neck hematoma, and being unconscious. In general, the method of intubation used is rapid sequence intubation followed by ventilation, though intubating in shock due to bleeding can lead to arrest and should be done after some resuscitation whenever possible. Trauma resuscitation includes control of active bleeding. When a person is first brought in, vital signs are checked, an ECG is performed, and, if needed, vascular access is obtained. Other tests should be performed to get a baseline measurement of their current blood chemistry, such as an arterial blood gas or thromboelastography. In those with cardiac arrest due to trauma, chest compressions are considered futile but are still recommended. Correcting the underlying cause, such as a pneumothorax or pericardial tamponade, if present, may help.
A FAST exam may help assess for internal bleeding. In certain traumas, such as maxillofacial trauma, it may be beneficial to have a highly trained health care provider available to maintain airway, breathing, and circulation.
Intravenous fluids
Traditionally, high-volume intravenous fluids were given to people who had poor perfusion due to trauma. This is still appropriate in cases with isolated extremity trauma, thermal trauma, or head injuries. In general, however, giving lots of fluids appears to increase the risk of death. Current evidence supports limiting the use of fluids for penetrating thorax and abdominal injuries, allowing mild hypotension to persist. Targets include a mean arterial pressure of 60 mmHg, a systolic blood pressure of 70–90 mmHg, or the re-establishment of peripheral pulses and adequate ability to think. Hypertonic saline has been studied and found to differ little from normal saline.
As no intravenous fluids used for initial resuscitation have been shown to be superior, warmed Lactated Ringer's solution continues to be the solution of choice. If blood products are needed, a greater use of fresh frozen plasma and platelets instead of only packed red blood cells has been found to improve survival and lower overall blood product use; a ratio of 1:1:1 is recommended. The success of platelets has been attributed to the fact that they may prevent coagulopathy from developing. Cell salvage and autotransfusion also may be used.
Blood substitutes such as hemoglobin-based oxygen carriers are in development; however, as of 2013 there are none available for commercial use in North America or Europe. These products are only available for general use in South Africa and Russia.
Medications
Tranexamic acid decreases death in people who are having ongoing bleeding due to trauma, as well as in those with mild to moderate traumatic brain injury and evidence of intracranial bleeding on CT scan. It only appears to be beneficial, however, if administered within the first three hours after trauma. For severe bleeding, for example from bleeding disorders, recombinant factor VIIa, a protein that assists blood clotting, may be appropriate. While it decreases blood use, it does not appear to decrease the mortality rate. In those without previous factor VII deficiency, its use is not recommended outside of trial situations.
Other medications may be used in conjunction with other procedures to stabilize a person who has sustained a significant injury. While positive inotropic medications such as norepinephrine sometimes are used in hemorrhagic shock as a result of trauma, there is a lack of evidence for their use. Therefore, as of 2012 they have not been recommended. Allowing a low blood pressure may be preferred in some situations.
Surgery
The decision whether to perform surgery is determined by the extent of the damage and the anatomical location of the injury. Bleeding must be controlled before definitive repair may occur. Damage control surgery is used to manage severe trauma in which there is a cycle of metabolic acidosis, hypothermia, and hypotension that may lead to death, if not corrected. The main principle of the procedure involves performing the fewest procedures to save life and limb; less critical procedures are left until the victim is more stable. Approximately 15% of all people with trauma have abdominal injuries, and approximately 25% of these require exploratory surgery. The majority of preventable deaths from trauma result from unrecognised intra-abdominal bleeding.
Prognosis
Trauma deaths occur in immediate, early, or late stages. Immediate deaths usually are due to apnea, severe brain or high spinal cord injury, or rupture of the heart or of large blood vessels. Early deaths occur within minutes to hours and often are due to hemorrhages in the outer meningeal layer of the brain, torn arteries, blood around the lungs, air around the lungs, ruptured spleen, liver laceration, or pelvic fracture. Immediate access to care may be crucial to prevent death in persons experiencing major trauma. Late deaths occur days or weeks after the injury and often are related to infection. Prognosis is better in countries with a dedicated trauma system where injured persons are provided quick and effective access to proper treatment facilities.
Long-term prognosis frequently is complicated by pain; more than half of trauma patients have moderate to severe pain one year after injury. Many also experience a reduced quality of life years after an injury, with 20% of victims sustaining some form of disability.
Physical trauma may lead to development of post-traumatic stress disorder (PTSD). One study has found no correlation between the severity of trauma and the development of PTSD.
Epidemiology
Trauma is the sixth leading cause of death worldwide, resulting in five million or 10% of all deaths annually. It is the fifth leading cause of significant disability. About half of trauma deaths occur in people aged between 15 and 45 years, and trauma is the leading cause of death in this age group. Injury affects more males: 68% of injuries occur in males, and death from trauma is twice as common in males as in females; this is believed to be because males are much more willing to engage in risk-taking activities. Teenagers and young adults are more likely to need hospitalization from injuries than other age groups. While elderly persons are less likely to be injured, they are more likely to die from injuries sustained, due to various physiological differences that make it more difficult for the body to compensate for the injuries. The primary causes of traumatic death are central nervous system injuries and substantial blood loss. Various classification scales exist for use with trauma to determine the severity of injuries, which are used to determine the resources needed and for statistical collection.
History
The human remains discovered at the site of Nataruk in Turkana, Kenya, are claimed to show major trauma—both blunt and penetrating—caused by violent trauma to the head, neck, ribs, knees, and hands, which has been interpreted by some researchers as establishing the existence of warfare between two groups of hunter-gatherers 10,000 years ago. The evidence for blunt-force trauma at Nataruk has been challenged, however, and the interpretation that the site represents an early example of warfare has been questioned.
Society and culture
Economics
The financial cost of trauma includes both the amount of money spent on treatment and the loss of potential economic gain through absence from work. The average financial cost for the treatment of traumatic injury in the United States is approximately per person, making it costlier than the treatment of cancer and cardiovascular diseases. One reason for the high cost of the treatment for trauma is the increased possibility of complications, which leads to the need for more interventions. Maintaining a trauma center is costly because they are open continuously and maintain a state of readiness to receive patients, even if there are none. In addition to the direct costs of the treatment, there also is a burden on the economy due to lost wages and productivity, which in 2009, accounted for approximately in the United States.
Low- and middle-income countries
Citizens of low- and middle-income countries (LMICs) often have higher mortality rates from injury. These countries account for 89% of all deaths from injury worldwide. Many of these countries do not have access to sufficient surgical care, and many do not have a trauma system in place. In addition, most LMICs do not have a pre-hospital care system that treats injured persons initially and transports them to hospital quickly, resulting in most casualty patients being transported by private vehicles. Also, their hospitals lack the appropriate equipment, organizational resources, or trained staff. By 2020, the number of trauma-related deaths was expected to decline in high-income countries and to increase in low- to middle-income countries.
Special populations
Children
Due to anatomical and physiological differences, injuries in children need to be approached differently from those in adults. Accidents are the leading cause of death in children between 1 and 14 years old. In the United States, approximately sixteen million children go to an emergency department due to some form of injury every year, with boys being more frequently injured than girls by a ratio of 2:1. The world's five most common unintentional injuries in children as of 2008 are road crashes, drowning, burns, falls, and poisoning.
Weight estimation is an important part of managing trauma in children because the accurate dosing of medicine may be critical for resuscitative efforts. A number of methods to estimate weight, including the Broselow tape, Leffler formula, and Theron formula exist.
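As a sketch of how such age-based estimates work, the commonly cited form of the Leffler formula is shown below in Python. The coefficients are as widely published, but this is illustrative only; the example ages are hypothetical, and a validated length-based tool such as the Broselow tape is preferred in practice.

# Illustrative sketch (not for clinical use): age-based pediatric weight
# estimation using the commonly cited form of the Leffler formula.

def leffler_weight_kg(age_years: float) -> float:
    """Estimated body weight in kilograms from age in years."""
    if age_years < 1:
        # Infants: half the age in months, plus 4.
        return 0.5 * (age_years * 12) + 4
    if age_years <= 10:
        # Children 1-10 years: twice the age in years, plus 10.
        return 2 * age_years + 10
    raise ValueError("formula is usually described for ages up to 10 years")

# Hypothetical examples
print(leffler_weight_kg(0.5))  # 6-month-old: 0.5 * 6 + 4 = 7.0 kg
print(leffler_weight_kg(4))    # 4-year-old: 2 * 4 + 10 = 18 kg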
Pregnancy
Trauma occurs in approximately 5% of all pregnancies, and is the leading cause of maternal death. Additionally, pregnant women may experience placental abruption, pre-term labor, and uterine rupture. There are diagnostic issues during pregnancy; ionizing radiation has been shown to cause birth defects, although the doses used for typical exams generally are considered safe. Due to normal physiological changes that occur during pregnancy, shock may be more difficult to diagnose. Where the woman is more than 23 weeks pregnant, it is recommended that the fetus be monitored for at least four hours by cardiotocography.
A number of treatments beyond typical trauma care may be needed when the patient is pregnant. Because the weight of the uterus on the inferior vena cava may decrease blood return to the heart, it may be very beneficial to lay a woman in late pregnancy on her left side. Also recommended are Rho(D) immune globulin in those who are Rh negative, corticosteroids in those who are at 24 to 34 weeks and may need delivery, and a caesarean section in the event of cardiac arrest.
Research
Most research on trauma occurs during war and military conflicts, as militaries increase trauma research spending in order to prevent combat-related deaths. Some research is being conducted on patients who were admitted to an intensive care unit or trauma center and received a trauma diagnosis that caused a negative change in their health-related quality of life, with a potential to create anxiety and symptoms of depression. New preserved blood products also are being researched for use in pre-hospital care; it is impractical to use the currently available blood products in a timely fashion in remote, rural settings or in theaters of war.
| Biology and health sciences | Injury | null |
788664 | https://en.wikipedia.org/wiki/Kenorland | Kenorland | Kenorland is a hypothetical Neoarchean supercontinent. If it existed, it would have been one of the earliest known supercontinents on Earth. It is thought to have formed during the Neoarchaean Era c. 2.72 billion years ago (2.72 Ga) by the accretion of Neoarchaean cratons and the formation of new continental crust. It comprised what later became Laurentia (the core of today's North America and Greenland), Baltica (today's Scandinavia and Baltic), Western Australia and Kalaharia.
Swarms of volcanic dikes and their paleomagnetic orientation as well as the existence of similar stratigraphic sequences permit this reconstruction. The core of Kenorland, the Baltic/Fennoscandian Shield, traces its origins back to over 3.1 Ga. The Yilgarn Craton (present-day Western Australia) contains zircon crystals in its crust that date back to 4.4 Ga.
Kenorland was named after the Kenoran orogeny (also called the Algoman orogeny), which in turn was named after the town of Kenora, Ontario.
Formation
Kenorland was formed around 2.72 billion years ago (2.72 Ga) as a result of a series of accretion events and the formation of new continental crust.
The accretion events are recorded in the greenstone belts of the Yilgarn Craton as metamorphosed basalt belts and granitic domes accreted around the high-grade metamorphic core of the Western Gneiss Terrane, which includes elements up to 3.2 Ga in age and some older portions, for example the Narryer Gneiss Terrane.
Breakup or disassembly
Paleomagnetic studies show Kenorland was in generally low latitudes until tectonic magma-plume rifting began to occur between 2.48 Ga and 2.45 Ga. At 2.45 Ga the Baltic Shield was over the equator and was joined to Laurentia (the Canadian Shield) and both the Kola and Karelia cratons. The protracted breakup of Kenorland during the Late Neoarchaean and early Paleoproterozoic Era 2.48 to 2.10 Gya, during the Siderian and Rhyacian periods, is manifested by mafic dikes and sedimentary rift-basins and rift-margins on many continents. On early Earth, this type of bimodal deep mantle plume rifting was common in Archaean and Neoarchaean crust and continent formation.
The geological time period surrounding the breakup of Kenorland is thought by many geologists to be the beginning of the transition point from the deep-mantle-plume method of continent formation in the Hadean to Early Archean (before the final formation of the Earth's inner core) to the subsequent two-layer core-mantle plate tectonics convection theory. However, the findings of an earlier continent, Ur, and a supercontinent of around 3.1 Gya, Vaalbara, indicate this transition period may have occurred much earlier.
The Kola and Karelia cratons began to drift apart around 2.45 Gya, and by 2.4 Gya the Kola craton was at about 30 degrees south latitude and the Karelia craton was at about 15 degrees south latitude. Paleomagnetic evidence shows that at 2.45 Gya the Yilgarn craton (now the bulk of Western Australia) was not connected to Fennoscandia-Laurentia and was at about 5 degrees south latitude.
This implies that at 2.515 Gya an ocean existed between the Kola and Karelia cratons, and that by 2.45 Gya there was no longer a supercontinent. Also, based on the rift-margin spatial arrangements of Laurentia, there is speculation that at some time during the breakup the Slave and Superior cratons were not part of a single supercontinent but may instead have been two separate Neoarchaean landmasses (supercratons) at opposite ends of a very large Kenorland. This is based on the expectation that drifting constituent pieces should move together coherently toward the amalgamation of the subsequent continent. The Slave and Superior cratons now constitute the northwest and southeast portions of the Canadian Shield, respectively.
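The latitude figures in this section rest on paleomagnetic measurements. Under the standard geocentric axial dipole assumption, the magnetic inclination I recorded in a rock relates to the paleolatitude by tan(I) = 2 tan(latitude), so shallow inclinations imply low latitudes. A minimal Python sketch of that relation follows; the inclination values are hypothetical, chosen only to illustrate the conversion.

# Sketch: recovering paleolatitude from magnetic inclination under the
# standard geocentric axial dipole (GAD) assumption, tan(I) = 2 * tan(lat).
import math

def paleolatitude_deg(inclination_deg: float) -> float:
    """Paleolatitude (degrees) implied by a measured inclination (degrees)."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2))

# Hypothetical inclinations: the shallower the inclination, the lower the
# implied latitude, the kind of reasoning behind low-latitude Kenorland.
for inclination in (10.0, 28.0, 49.0):
    print(f"I = {inclination:4.1f} deg -> latitude = {paleolatitude_deg(inclination):4.1f} deg")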
The breakup of Kenorland was contemporaneous with the Huronian glaciation, which persisted for up to 60 million years. The banded iron formations (BIF) show their greatest extent in this period, indicating a massive increase in oxygen build-up from an estimated 0.1% of the atmosphere to 1%. The rise in oxygen levels caused the virtual disappearance of the greenhouse gas methane (oxidized into carbon dioxide and water).
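For reference, the methane oxidation described here follows the balanced reaction $\mathrm{CH_4 + 2\,O_2 \rightarrow CO_2 + 2\,H_2O}$, which replaces a far more potent greenhouse gas with carbon dioxide and water.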
The breakup of Kenorland, occurring at the same time, increased continental rainfall generally, thus increasing erosion and further reducing the other greenhouse gas, carbon dioxide. With the reduction in greenhouse gases, and with solar output at less than 85% of its current power, this led to a runaway Snowball Earth scenario in which average temperatures planet-wide plummeted to below freezing. Despite the anoxia indicated by the BIF, photosynthesis continued, stabilizing climates at new levels during the second part of the Proterozoic Era.
| Physical sciences | Paleogeography | Earth science |
788698 | https://en.wikipedia.org/wiki/Hachik%C5%8D | Hachikō | Hachikō was a Japanese Akita dog remembered for his remarkable loyalty to his owner, Hidesaburō Ueno, for whom he continued to wait for over nine years following Ueno's death.
Hachikō was born on November 10, 1923, at a farm near the city of Ōdate, Akita Prefecture. In 1924, Hidesaburō Ueno, a professor at the Tokyo Imperial University, brought him to live in Shibuya, Tokyo, as his pet. Hachikō would meet Ueno at Shibuya Station every day after his commute home. This continued until May 21, 1925, when Ueno died of a cerebral hemorrhage while at work. From then until his death on March 8, 1935, Hachikō would return to Shibuya Station every day to await Ueno's return.
During his lifetime, the dog was held up in Japanese culture as an example of loyalty and fidelity. Since his death, he continues to be remembered worldwide in popular culture, with statues, movies, and books. In Japanese, Hachikō's name carries the suffix -kō, which originated as a title once used for ancient Chinese dukes; in this context, it was an affectionate addition to his name, Hachi.
Life
Hachikō, a white Akita, was born on November 10, 1923, at a farm located in Ōdate, Akita Prefecture, Japan. In 1924, Hidesaburō Ueno, a professor in the agriculture department at the Tokyo Imperial University, took Hachikō as a pet and brought him to live in Shibuya, Tokyo. Ueno would commute daily to work, and Hachikō would leave the house to greet him at the end of each day at the nearby Shibuya Station. The pair continued the daily routine until May 21, 1925, when Ueno did not return. The professor had suffered a cerebral hemorrhage while he was giving a lecture to his class, and he died without ever returning to the train station at which Hachikō waited.
Each day, for the next 9 years, 9 months and 15 days, Hachikō awaited Ueno's return, appearing precisely when the train was due at the station.
Hachikō attracted the attention of other commuters. Many of the people who frequented the Shibuya train station had seen Hachikō and Professor Ueno together each day. Initial reactions from the people, especially from those working at the station, were not necessarily friendly. However, after the first newspaper article about him appeared on October 4, 1932, people started to bring Hachikō treats and food to nourish him during his wait.
Publication
One of Ueno's students, Hirokichi Saito, who developed expertise on the Akita breed, saw the dog at the station and followed him to the home of Ueno's former gardener, Kozaburo Kobayashi, where he learned the history of Hachikō's life. Shortly after the meeting, the former student published a documented census of Akitas in Japan. His research found only 30 purebred Akitas remaining, including Hachikō from Shibuya Station.
He returned frequently to visit Hachikō, and over the years he published several articles about the dog's remarkable loyalty. In 1932, one of these articles, published in a Tokyo daily newspaper, placed the dog in the national spotlight.
Hachikō became a national sensation. His faithfulness to his master's memory impressed the people of Japan as a spirit of family loyalty to which all should aspire. Teachers and parents used Hachikō's vigil as an example for children to follow. Teru Ando rendered a sculpture of the dog, and throughout the country a new awareness of the Akita breed grew.
Eventually, Hachikō's faithfulness became a national symbol of loyalty, particularly loyalty to the person and institution of the Emperor.
Death
Hachikō died on March 8, 1935, at the age of 11. He was found on a street in Shibuya. In March 2011, scientists finally settled the cause of Hachikō's death: the dog had both terminal cancer and a filaria infection. There were also four skewers in Hachikō's stomach, but the skewers had not damaged his stomach or caused his death.
Legacy
After his death, Hachikō's remains were cremated and his ashes were buried in Aoyama Cemetery, Minato, Tokyo, where they rest beside those of his beloved master, Professor Ueno. Hachikō's pelt was preserved after his death, and his taxidermy mount is on permanent display at the National Science Museum of Japan in Ueno, Tokyo.
Bronze statues
In April 1934, a bronze statue in his likeness, sculpted by Teru Ando, was erected at Shibuya Station. The statue was recycled for the war effort during World War II. In 1948, Takeshi Ando (son of the original artist) made a second statue. The new statue, which was erected in August 1948, still stands and is a popular meeting spot. The station entrance near this statue is named "Hachikō-guchi", meaning "The Hachikō Entrance/Exit", and is one of Shibuya Station's five exits.
A similar statue stands in Hachikō's hometown, in front of Ōdate Station; it was built in 1932. In 2004, a new statue of Hachikō was erected in front of the Akita Dog Museum in Ōdate.
After the release of the American movie Hachi: A Dog's Tale (2009), which was filmed in Woonsocket, Rhode Island, the Japanese Consulate in the United States helped the Blackstone Valley Tourism Council and the city of Woonsocket to unveil an identical statue of Hachikō at the Woonsocket Depot Square, which was the location of the "Bedridge" train station featured in the movie.
On March 9, 2015, the Faculty of Agriculture of the University of Tokyo, Ueno's alma mater and workplace where he commuted every workday during his time with Hachikō, unveiled a bronze statue depicting Ueno returning to meet Hachikō to commemorate the 80th anniversary of Hachikō's death. The statue was sculpted by Tsutomu Ueda from Nagoya and depicts an excited Hachikō jumping up to greet his master at the end of a workday. Ueno is dressed in a hat, suit, and trench coat, with his briefcase placed on the ground. Hachikō wears a studded harness as seen in his last photos.
Annual ceremony
Each year on March 8, Hachikō's devotion is honored with a solemn ceremony of remembrance at Shibuya Station. Hundreds of dog lovers often turn out to honor his memory and loyalty.
Hachikō's bark
In 1994, Nippon Cultural Broadcasting in Japan was able to lift a recording of Hachikō barking from an old 78 RPM record that had been broken into several pieces. The pieces were melded together using a laser. A huge advertising campaign ensued and on Saturday, May 28, 1994, 59 years after his death, millions of radio listeners tuned in to hear Hachikō's bark.
Shibuya ward minibus
In 2003, a minibus (officially called a "community bus") nicknamed the "Hachikō-bus" began running routes in Shibuya ward. There are four different routes, and riders can hear its theme song on board.
Images
In July 2012, rare photos from Hachikō's life were shown at the Shibuya Folk and Literary Shirane Memorial Museum in Shibuya ward as part of an exhibition of newly stored materials.
In November 2015, a previously undiscovered photograph of Hachikō was published for the first time. The image, which was captured in 1934 by a Tokyo bank employee, shows the dog relaxing by himself in front of Shibuya Station.
Yaeko Sakano
Yaeko Sakano, more often referred to as Yaeko Ueno, was the unmarried partner of Hidesaburō Ueno for about 10 years until his death in 1925. Hachikō was reported to have shown great happiness and affection towards her whenever she came to visit him. Yaeko died on April 30, 1961, at the age of 76 and was buried at a temple in Taitō, far from Ueno's grave, despite her requests to her family to be buried with her late partner.
In 2013, Yaeko's record indicating that she had wanted to be buried with Ueno was found by Sho Shiozawa, a professor at the University of Tokyo. Shiozawa was also the president of the Japanese Society of Irrigation, Drainage and Rural Engineering, which manages Ueno's grave at Aoyama Cemetery.
On November 10, 2013, which also marked the 90th anniversary of Hachikō's birth, Sho Shiozawa and Keita Matsui, a curator of the Shibuya Folk and Literary Shirane Memorial Museum, resolved that Yaeko should be buried together with Ueno and Hachikō.
The process began with the willing consent of the Ueno and Sakano families and successful negotiations with the management of the Aoyama Cemetery. However, due to regulations and bureaucracy, the process took about two years. Shiozawa also went on to be one of the organizers of the bronze statue of Hachikō and Ueno that was unveiled on the grounds of the University of Tokyo on March 9, 2015, to commemorate the 80th anniversary of Hachikō's death.
89th Birthday
On November 10, 2012, Google commemorated what would have been Hachikō's 89th birthday by uploading a Google Doodle that depicts the famous dog waiting by the Shibuya Station railway and holding Ueno's hat in his mouth.
100th Birthday
On November 10, 2023, the Japanese people commemorated what would have been Hachikō's 100th birthday. Events included visits to Shibuya Station, songs, and dances. A holographic display of Hachikō was installed at the Akita Dog Visitor Center in Odate, Akita Prefecture, greeting guests who came by to celebrate his birth.
Reunion of Hachikō's family
On May 19, 2016, during a ceremony at the Aoyama Cemetery with both the Ueno and Sakano families present, some of the ashes of Yaeko Sakano were buried with Ueno and Hachikō, and her name and the date of her death were inscribed on the side of his tombstone, thus completing the reunion of Hachikō's family.
"By putting the names of both on their grave, we can show future generations the fact that Hachikō had two keepers," Shiozawa said. "To Hachikō the professor was his father, and Yaeko was his mother," Matsui added.
In popular culture
Hachikō plays an important part in the 1967 children's book Taka-chan and I: A Dog's Journey to Japan.
Hachikō was the subject of Hachikō Monogatari, a 1987 film directed by Seijirō Kōyama, which told the story of his life from his birth up until his death and his imagined spiritual reunion with his master. Considered a blockbuster success, the film was the last big hit for the Japanese film studio Shochiku Kinema Kenkyū-jo.
"Jurassic Bark" (2002), episode 7 of season 4 of the animated series Futurama has an extended homage to Hachikō, with Fry discovering the fossilized remains of his dog, Seymour. After Fry was frozen, Seymour is shown to have waited for Fry to return for 12 years outside Panucci's Pizza, where Fry worked, never disobeying his master's last command to wait for him.
Hachikō is also the subject of a 2004 children's book entitled Hachikō: The True Story of a Loyal Dog, written by Pamela S. Turner and illustrated by Yan Nascimbene. Another children's book, a short novel for readers of all ages called Hachiko Waits, written by Lesléa Newman and illustrated by Machiyo Kodaira, was published by Henry Holt & Co. in 2004. Another illustrated book about the faithful dog is Hachikō: The Dog that Waited, by Catalan author Lluís Prats and Polish illustrator Zuzanna Celej, published in 2022.
In the Japanese manga One Piece, there is a similar story with a dog named Shushu.
In the video game The World Ends with You (2007), the Hachikō statue is featured, its legend referenced on several occasions. The location of the statue plays an important role in the narrative of the game. The statue is featured again in the sequel, NEO: The World Ends With You (2021).
Hachi: A Dog's Tale, released in August 2009, is an American movie starring Richard Gere, directed by Lasse Hallström, about Hachikō and his relationship with an American professor and his family. It follows the same basic story with some differences; for example, Hachikō was a gift to Professor Ueno, while the dog's arrival is handled entirely differently in the American version. The movie was filmed in Woonsocket, Rhode Island, primarily in and around the Woonsocket Depot Square area, and also featured Joan Allen and Jason Alexander. The role of Hachi was played by three Akitas: Leyla, Chico, and Forrest. Mark Harden describes how he and his team trained the three dogs in the book Animal Stars: Behind the Scenes with Your Favorite Animal Actors. After the movie was completed, Harden adopted Chico.
The 2015 Telugu film Tommy was based on the story of Hachikō.
| Biology and health sciences | Individual animals | Animals |
23809352 | https://en.wikipedia.org/wiki/Carbon-fiber%20reinforced%20polymer | Carbon-fiber reinforced polymer | Carbon fiber-reinforced polymers (American English), carbon-fibre-reinforced polymers (Commonwealth English), carbon-fiber-reinforced plastics, carbon-fiber reinforced-thermoplastic (CFRP, CRP, CFRTP), also known as carbon fiber, carbon composite, or just carbon, are extremely strong and light fiber-reinforced plastics that contain carbon fibers. CFRPs can be expensive to produce, but are commonly used wherever high strength-to-weight ratio and stiffness (rigidity) are required, such as aerospace, superstructures of ships, automotive, civil engineering, sports equipment, and an increasing number of consumer and technical applications.
The binding polymer is often a thermoset resin such as epoxy, but other thermoset or thermoplastic polymers, such as polyester, vinyl ester, or nylon, are sometimes used. The properties of the final CFRP product can be affected by the type of additives introduced to the binding matrix (resin). The most common additive is silica, but other additives such as rubber and carbon nanotubes can be used.
Carbon fiber is sometimes referred to as graphite-reinforced polymer or graphite fiber-reinforced polymer (GFRP is less common, as it clashes with glass-(fiber)-reinforced polymer).
Properties
CFRPs are composite materials. In this case the composite consists of two parts: a matrix and a reinforcement. In CFRP the reinforcement is carbon fiber, which provides the strength. The matrix is usually a thermosetting plastic, such as polyester resin, which binds the reinforcements together. Because CFRPs consist of two distinct elements, the material properties depend on these two elements.
Reinforcement gives CFRPs their strength and rigidity, measured by stress and elastic modulus respectively. Unlike isotropic materials like steel and aluminum, CFRPs have directional strength properties. The properties of a CFRP depend on the layouts of the carbon fiber and the proportion of the carbon fibers relative to the polymer. The two different equations governing the net elastic modulus of composite materials using the properties of the carbon fibers and the polymer matrix can also be applied to carbon fiber reinforced plastics. The equation

$$E_c = V_m E_m + V_f E_f$$

is valid for composite materials with the fibers oriented in the direction of the applied load, where $E_c$ is the total composite modulus, $V_m$ and $V_f$ are the volume fractions of the matrix and fiber respectively in the composite, and $E_m$ and $E_f$ are the elastic moduli of the matrix and fibers respectively. The other extreme case, the elastic modulus of the composite with the fibers oriented transverse to the applied load, can be found using the equation:

$$E_c = \left(\frac{V_m}{E_m} + \frac{V_f}{E_f}\right)^{-1}$$
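To make the two bounds concrete, the following minimal sketch evaluates both equations for an assumed epoxy matrix and an assumed PAN-based carbon fiber at a 60% fiber volume fraction; the moduli and the volume fraction are typical textbook values chosen for illustration, not figures taken from this article.

```python
# Rule-of-mixtures bounds for the elastic modulus of a fiber composite.
# The material values below are illustrative textbook figures (assumptions),
# not measurements of any specific CFRP product.

def longitudinal_modulus(E_m, E_f, V_f):
    """Voigt (isostrain) bound: fibers aligned with the applied load."""
    V_m = 1.0 - V_f
    return V_m * E_m + V_f * E_f

def transverse_modulus(E_m, E_f, V_f):
    """Reuss (isostress) bound: fibers transverse to the applied load."""
    V_m = 1.0 - V_f
    return 1.0 / (V_m / E_m + V_f / E_f)

E_matrix = 3.0    # GPa, typical epoxy
E_fiber = 230.0   # GPa, typical PAN-based carbon fiber
V_fiber = 0.6     # 60% fiber volume fraction

print(f"Longitudinal: {longitudinal_modulus(E_matrix, E_fiber, V_fiber):.1f} GPa")
print(f"Transverse:   {transverse_modulus(E_matrix, E_fiber, V_fiber):.1f} GPa")
# Longitudinal ≈ 139.2 GPa; transverse ≈ 7.4 GPa.
```

The two results differ by roughly a factor of twenty, illustrating the strong directional dependence of properties noted above.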
The fracture toughness of carbon fiber reinforced plastics is governed by the following mechanisms: 1) debonding between the carbon fiber and polymer matrix, 2) fiber pull-out, and 3) delamination between the CFRP sheets. Typical epoxy-based CFRPs exhibit virtually no plasticity, with less than 0.5% strain to failure. Although CFRPs with epoxy have high strength and elastic modulus, their brittle fracture behavior presents unique challenges to engineers in failure detection, since failure occurs catastrophically. As such, recent efforts to toughen CFRPs include modifying the existing epoxy material and finding alternative polymer matrices. One such material with high promise is PEEK, which exhibits an order of magnitude greater toughness with similar elastic modulus and tensile strength. However, PEEK is much more difficult to process and more expensive.
Despite their high initial strength-to-weight ratios, a design limitation of CFRPs is their lack of a definable fatigue limit. This means, theoretically, that stress cycle failure cannot be ruled out. While steel and many other structural metals and alloys do have estimable fatigue or endurance limits, the complex failure modes of composites mean that the fatigue failure properties of CFRPs are difficult to predict and design against; however, emerging research has shed light on the effects of low-velocity impacts on composites. Low-velocity impacts can make carbon fibre polymers susceptible to damage. As a result, when using CFRPs for critical cyclic-loading applications, engineers may need to design in considerable strength safety margins to provide suitable component reliability over its service life.
Environmental effects such as temperature and humidity can have profound effects on polymer-based composites, including most CFRPs. While CFRPs demonstrate excellent corrosion resistance, the effect of moisture at wide ranges of temperatures can lead to degradation of the mechanical properties of CFRPs, particularly at the matrix-fiber interface. While the carbon fibers themselves are not affected by the moisture diffusing into the material, the moisture plasticizes the polymer matrix. This leads to significant changes in properties that are dominantly influenced by the matrix in CFRPs, such as compressive, interlaminar shear, and impact properties. The epoxy matrix used for engine fan blades is designed to be impervious to jet fuel, lubrication, and rain water, and external paint on the composite parts is applied to minimize damage from ultraviolet light.
Carbon fibers can cause galvanic corrosion when CFRP parts are attached to aluminum or mild steel, but not to stainless steel or titanium.
Carbon fiber-reinforced plastics are very hard to machine and cause significant tool wear. Tool wear in CFRP machining depends on the fiber orientation and the machining conditions of the cutting process. To reduce tool wear, various types of coated tools are used in machining CFRP and CFRP-metal stacks.
Manufacturing
The primary element of CFRPs is a carbon filament; this is produced from a precursor polymer such as polyacrylonitrile (PAN), rayon, or petroleum pitch. For synthetic polymers such as PAN or rayon, the precursor is first spun into filament yarns, using chemical and mechanical processes to initially align the polymer chains in a way to enhance the final physical properties of the completed carbon fiber. Precursor compositions and mechanical processes used during spinning filament yarns may vary among manufacturers. After drawing or spinning, the polymer filament yarns are then heated to drive off non-carbon atoms (carbonization), producing the final carbon fiber. The carbon fibers filament yarns may be further treated to improve handling qualities, then wound onto bobbins. From these fibers, a unidirectional sheet is created. These sheets are layered onto each other in a quasi-isotropic layup, e.g. 0°, +60°, or −60° relative to each other.
From the elementary fiber, a bidirectional woven sheet can be created, i.e. a twill with a 2/2 weave. The process by which most CFRPs are made varies, depending on the piece being created, the finish (outside gloss) required, and how many of the piece will be produced. In addition, the choice of matrix can have a profound effect on the properties of the finished composite.
Many CFRP parts are created with a single layer of carbon fabric that is backed with fiberglass. A tool called a chopper gun is used to quickly create these composite parts. Once a thin shell is created out of carbon fiber, the chopper gun cuts rolls of fiberglass into short lengths and sprays resin at the same time, so that the fiberglass and resin are mixed on the spot. The resin is either external mix, wherein the hardener and resin are sprayed separately, or internally mixed, which requires cleaning after every use.
Manufacturing methods may include the following:
Molding
One method of producing CFRP parts is by layering sheets of carbon fiber cloth into a mold in the shape of the final product. The alignment and weave of the cloth fibers is chosen to optimize the strength and stiffness properties of the resulting material. The mold is then filled with epoxy and is heated or air-cured. The resulting part is very corrosion-resistant, stiff, and strong for its weight. Parts used in less critical areas are manufactured by draping cloth over a mold, with epoxy either pre-impregnated into the fibers (also known as pre-preg) or "painted" over it. High-performance parts using single molds are often vacuum-bagged and/or autoclave-cured, because even small air bubbles in the material will reduce strength. An alternative to the autoclave method is to use internal pressure via inflatable air bladders or EPS foam inside the non-cured laid-up carbon fiber.
Vacuum bagging
For simple pieces of which relatively few copies are needed (one or two per day), a vacuum bag can be used. A fiberglass, carbon fiber, or aluminum mold is polished, waxed, and given a release agent before the fabric and resin are applied; the vacuum is then pulled and the assembly set aside to allow the piece to cure (harden). There are three ways to apply the resin to the fabric in a vacuum mold.
The first method is manual and called a wet layup, where the two-part resin is mixed and applied before being laid in the mold and placed in the bag. The second is done by infusion, where the dry fabric and mold are placed inside the bag while the vacuum pulls the resin through a small tube into the bag, then through a perforated tube or similar channel to spread the resin evenly throughout the fabric. Wire loom works well as a perforated tube inside the bag. Both of these methods of applying resin require hand work to spread the resin evenly for a glossy finish with very small pin-holes.
A third method of constructing composite materials is known as a dry layup. Here, the carbon fiber material is already impregnated with resin (pre-preg) and is applied to the mold in a similar fashion to adhesive film. The assembly is then placed in a vacuum to cure. The dry layup method has the least amount of resin waste and can achieve lighter constructions than wet layup. Also, because larger amounts of resin are more difficult to bleed out with wet layup methods, pre-preg parts generally have fewer pinholes. Pinhole elimination with minimal resin amounts generally requires the use of autoclave pressures to purge the residual gases out.
Compression molding
A quicker method uses a compression mold, also commonly known as carbon fiber forging. This is a two-piece (male and female) or multi-piece mold, usually made of aluminum or steel and, more recently, 3D-printed plastic. The mold components are pressed together with the fabric and resin loaded into the inner cavity that ultimately becomes the desired component. The benefit is the speed of the entire process. Some car manufacturers, such as BMW, have claimed to be able to cycle a new part every 80 seconds. However, this technique has a very high initial cost, since the molds require CNC machining of very high precision.
Filament winding
For difficult or convoluted shapes, a filament winder can be used to make CFRP parts by winding filaments around a mandrel or a core.
Applications
Applications for CFRPs include the following:
Aerospace engineering
The Airbus A350 XWB is built of 53% CFRP, including wing spars and fuselage components, overtaking the Boeing 787 Dreamliner (50%) as the aircraft with the highest proportion of CFRP by weight. This was one of the first commercial aircraft to have wing spars made from composites. The Airbus A380 was one of the first commercial airliners to have a central wing-box made of CFRP; it is the first to have a smoothly contoured wing cross-section instead of the wings being partitioned span-wise into sections. This flowing, continuous cross-section optimises aerodynamic efficiency. Moreover, the trailing edge, along with the rear bulkhead, empennage, and un-pressurised fuselage, are made of CFRP. However, many delays have pushed order delivery dates back because of problems with the manufacture of these parts. Many aircraft that use CFRPs have experienced delays with delivery dates due to the relatively new processes used to make CFRP components, whereas metallic structures have been studied and used on airframes for decades, and the processes are relatively well understood. A recurrent problem is the monitoring of structural ageing, for which new methods are constantly investigated, due to the unusual multi-material and anisotropic nature of CFRPs.
In 1968 a Hyfil carbon-fiber fan assembly was in service on the Rolls-Royce Conways of the Vickers VC10s operated by BOAC.
Specialist aircraft designers and manufacturers Scaled Composites have made extensive use of CFRPs throughout their design range, including the first private crewed spacecraft, SpaceShipOne. CFRPs are widely used in micro air vehicles (MAVs) because of their high strength-to-weight ratio.
Automotive engineering
CFRPs are extensively used in high-end automobile racing. The high cost of carbon fiber is mitigated by the material's unsurpassed strength-to-weight ratio, and low weight is essential for high-performance automobile racing. Race-car manufacturers have also developed methods to give carbon fiber pieces strength in a certain direction, making it strong in a load-bearing direction, but weak in directions where little or no load would be placed on the member. Conversely, manufacturers developed omnidirectional carbon fiber weaves that apply strength in all directions. This type of carbon fiber assembly is most widely used in the "safety cell" monocoque chassis assembly of high-performance race-cars. The first carbon fiber monocoque chassis was introduced in Formula One by McLaren in the 1981 season. It was designed by John Barnard and was widely copied in the following seasons by other F1 teams due to the extra rigidity provided to the chassis of the cars.
Many supercars over the past few decades have incorporated CFRPs extensively in their manufacture, using it for their monocoque chassis as well as other components. As far back as 1971, the Citroën SM offered optional lightweight carbon fiber wheels.
Use of the material has been more readily adopted by low-volume manufacturers who used it primarily for creating body-panels for some of their high-end cars due to its increased strength and decreased weight compared with the glass-reinforced polymer they used for the majority of their products.
Civil engineering
CFRPs have become a notable material in structural engineering applications. Studied in an academic context as to their potential benefits in construction, CFRPs have also proved themselves cost-effective in a number of field applications strengthening concrete, masonry, steel, cast iron, and timber structures. Their use in industry can be either for retrofitting to strengthen an existing structure or as an alternative reinforcing (or prestressing) material instead of steel from the outset of a project.
Retrofitting has become the increasingly dominant use of the material in civil engineering, and applications include increasing the load capacity of old structures (such as bridges, beams, ceilings, columns and walls) that were designed to tolerate far lower service loads than they are experiencing today, seismic retrofitting, and repair of damaged structures. Retrofitting is popular in many instances as the cost of replacing the deficient structure can greatly exceed the cost of strengthening using CFRP.
Applied to reinforced concrete structures for flexure, the use of CFRPs typically has a large impact on strength (doubling or more the strength of the section is not uncommon), but only moderately increases stiffness (as little as 10%). This is because the material used in such applications is typically very strong (e.g., 3 GPa ultimate tensile strength, more than 10 times mild steel) but not particularly stiff (150 to 250 GPa elastic modulus, a little less than steel, is typical). As a consequence, only small cross-sectional areas of the material are used. Small areas of very high strength but moderate stiffness material will significantly increase strength, but not stiffness.
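A back-of-envelope sketch makes the strength-versus-stiffness point concrete. Using the representative values quoted above as assumptions (mild steel at roughly 300 MPa strength and 200 GPa modulus, CFRP at 3 GPa strength and a mid-range 200 GPa modulus), sizing a CFRP strip to double the tensile capacity provided by existing steel reinforcement adds only about 10% axial stiffness:

```python
# Illustrative calculation only; all material values are assumptions based on
# the typical figures quoted in the text, not data for a specific product.

sigma_steel = 300.0   # MPa, strength of mild steel (assumed)
sigma_cfrp = 3000.0   # MPa, CFRP ultimate tensile strength (~10x mild steel)
E_steel = 200.0       # GPa, modulus of steel
E_cfrp = 200.0        # GPa, mid-range of the 150-250 GPa quoted above

A_steel = 1000.0      # mm^2 of existing steel reinforcement (arbitrary)

# CFRP area needed to match the steel's tensile capacity, i.e. to double
# the section's strength:
A_cfrp = A_steel * sigma_steel / sigma_cfrp   # = 100 mm^2, a tenth of the steel

stiffness_gain = (A_cfrp * E_cfrp) / (A_steel * E_steel)
print(f"Added CFRP area: {A_cfrp:.0f} mm^2 ({A_cfrp / A_steel:.0%} of the steel)")
print(f"Axial stiffness increase: {stiffness_gain:.0%}")   # ~10%
```

Because the strip is sized by strength, its cross-sectional area, and hence its contribution to stiffness, stays small, matching the roughly 10% figure above.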
CFRPs can also be used to enhance shear strength of reinforced concrete by wrapping fabrics or fibers around the section to be strengthened. Wrapping around sections (such as bridge or building columns) can also enhance the ductility of the section, greatly increasing the resistance to collapse under dynamic loading. Such 'seismic retrofit' is the major application in earthquake-prone areas, since it is much more economic than alternative methods.
If a column is circular (or nearly so) an increase in axial capacity is also achieved by wrapping. In this application, the confinement of the CFRP wrap enhances the compressive strength of the concrete. However, although large increases are achieved in the ultimate collapse load, the concrete will crack at only slightly enhanced load, meaning that this application is only occasionally used. Specialist ultra-high modulus CFRP (with tensile modulus of 420 GPa or more) is one of the few practical methods of strengthening cast iron beams. In typical use, it is bonded to the tensile flange of the section, both increasing the stiffness of the section and lowering the neutral axis, thus greatly reducing the maximum tensile stress in the cast iron.
In the United States, prestressed concrete cylinder pipes (PCCP) account for a vast majority of water transmission mains. Due to their large diameters, failures of PCCP are usually catastrophic and affect large populations. Approximately of PCCP were installed between 1940 and 2006. Corrosion in the form of hydrogen embrittlement has been blamed for the gradual deterioration of the prestressing wires in many PCCP lines. Over the past decade, CFRPs have been used to internally line PCCP, resulting in a fully structural strengthening system. Inside a PCCP line, the CFRP liner acts as a barrier that controls the level of strain experienced by the steel cylinder in the host pipe. The composite liner enables the steel cylinder to perform within its elastic range, to ensure the pipeline's long-term performance is maintained. CFRP liner designs are based on strain compatibility between the liner and host pipe.
CFRPs are more costly than their commonly used counterparts in the construction industry, glass fiber-reinforced polymers (GFRPs) and aramid fiber-reinforced polymers (AFRPs), though CFRPs are, in general, regarded as having superior properties. Much research continues to be done on using CFRPs both for retrofitting and as an alternative to steel as a reinforcing or prestressing material. Cost remains an issue and long-term durability questions still remain. Some are concerned about the brittle nature of CFRPs, in contrast to the ductility of steel. Though design codes have been drawn up by institutions such as the American Concrete Institute, there remains some hesitation among the engineering community about implementing these alternative materials. In part, this is due to a lack of standardization and the proprietary nature of the fiber and resin combinations on the market.
Carbon-fiber microelectrodes
Carbon fibers are used for the fabrication of carbon-fiber microelectrodes. In this application, typically a single carbon fiber with a diameter of 5–7 μm is sealed in a glass capillary. At the tip, the capillary is either sealed with epoxy and polished to make a carbon-fiber disk microelectrode, or the fiber is cut to a length of 75–150 μm to make a carbon-fiber cylinder electrode. Carbon-fiber microelectrodes are used in amperometry or fast-scan cyclic voltammetry for the detection of biochemical signalling.
Sports goods
CFRPs are now widely used in sports equipment such as in squash, tennis, and badminton racquets, sport kite spars, high-quality arrow shafts, hockey sticks, fishing rods, surfboards, high end swim fins, and rowing shells. Amputee athletes such as Jonnie Peacock use carbon fiber blades for running. It is used as a shank plate in some basketball sneakers to keep the foot stable, usually running the length of the shoe just above the sole and left exposed in some areas, usually in the arch.
Controversially, in 2006, cricket bats with a thin carbon-fiber layer on the back were introduced and used in competitive matches by high-profile players including Ricky Ponting and Michael Hussey. The carbon fiber was claimed to merely increase the durability of the bats, but it was banned from all first-class matches by the ICC in 2007.
A CFRP bicycle frame weighs less than one of steel, aluminum, or titanium having the same strength. The type and orientation of the carbon-fiber weave can be designed to maximize stiffness in required directions. Frames can be tuned to address different riding styles: sprint events require stiffer frames while endurance events may require more flexible frames for rider comfort over longer periods. The variety of shapes it can be built into has further increased stiffness and also allowed aerodynamic tube sections. CFRP forks including suspension fork crowns and steerers, handlebars, seatposts, and crank arms are becoming more common on medium as well as higher-priced bicycles. CFRP rims remain expensive but their stability compared to aluminium reduces the need to re-true a wheel and the reduced mass reduces the moment of inertia of the wheel. CFRP spokes are rare and most carbon wheelsets retain traditional stainless steel spokes. CFRPs also appear increasingly in other components such as derailleur parts, brake and shifter levers and bodies, cassette sprocket carriers, suspension linkages, disc brake rotors, pedals, shoe soles, and saddle rails. Although strong and light, impact, over-torquing, or improper installation of CFRP components has resulted in cracking and failures, which may be difficult or impossible to repair.
Other applications
The fire resistance of polymers and thermo-set composites is significantly improved if a thin layer of carbon fibers is moulded near the surface because a dense, compact layer of carbon fibers efficiently reflects heat.
CFRPs are being used in an increasing number of high-end products that require stiffness and low weight, these include:
Musical instruments, including violin bows; guitar picks, necks (carbon fiber rods), and pick-guards; drum shells; bagpipe chanters; piano actions; and entire musical instruments such as carbon fiber cellos, violas, and violins, acoustic guitars and ukuleles; also audio components such as turntables and loudspeakers.
Firearms use it to replace certain metal, wood, and fiberglass components but many of the internal parts are still limited to metal alloys as current reinforced plastics are unsuitable.
High-performance drone bodies and other radio-controlled vehicle and aircraft components such as helicopter rotor blades.
Lightweight poles such as: tripod legs, tent poles, fishing rods, billiards cues, walking sticks, and high-reach poles such as for window cleaning.
Dentistry, carbon fiber posts are used in restoring root canal treated teeth.
Railed train bogies for passenger service. This reduces the weight by up to 50% compared to metal bogies, which contributes to energy savings.
Laptop shells and other high performance cases.
Carbon woven fabrics.
Archery: carbon fiber arrows and bolts, stock (for crossbows) and riser (for vertical bows), and rail.
As a filament for the 3D fused deposition modeling printing process, carbon fiber-reinforced plastic (polyamide-carbon filament) is used for the production of sturdy but lightweight tools and parts due to its high strength and tear length.
District heating pipe rehabilitation, using CIPP method.
Disposal and recycling
CFRPs have a long service lifetime when protected from the sun. When it is time to decommission CFRPs, they cannot be melted down in air like many metals. When free of vinyl (PVC or polyvinyl chloride) and other halogenated polymers, CFRPs can be thermally decomposed via thermal depolymerization in an oxygen-free environment. This can be accomplished in a refinery in a one-step process. Capture and reuse of the carbon and monomers is then possible. CFRPs can also be milled or shredded at low temperature to reclaim the carbon fiber; however, this process shortens the fibers dramatically. Just as with downcycled paper, the shortened fibers cause the recycled material to be weaker than the original material. There are still many industrial applications that do not need the strength of full-length carbon fiber reinforcement. For example, chopped reclaimed carbon fiber can be used in consumer electronics, such as laptops. It provides excellent reinforcement of the polymers used even if it lacks the strength-to-weight ratio of an aerospace component.
Carbon nanotube reinforced polymer (CNRP)
In 2009, Zyvex Technologies introduced carbon nanotube-reinforced epoxy and carbon pre-pregs. Carbon nanotube reinforced polymer (CNRP) is several times stronger and tougher than typical CFRPs and is used in the Lockheed Martin F-35 Lightning II as a structural material for aircraft. CNRP still uses carbon fiber as the primary reinforcement, but the binding matrix is a carbon nanotube-filled epoxy.
| Technology | Materials | null |
23809410 | https://en.wikipedia.org/wiki/Scooter%20%28motorcycle%29 | Scooter (motorcycle) | A scooter (motor scooter) is a motorcycle with an underbone or step-through frame, a seat, a transmission that shifts without the operator having to operate a clutch lever, a platform for their feet, and with a method of operation that emphasizes comfort and fuel economy. Elements of scooter design were present in some of the earliest motorcycles, and motor scooters have been made since at least 1914. More recently, scooters have evolved to include scooters exceeding 250cc classified as Maxi-scooters.
The global popularity of motor scooters dates from the post-World War II introductions of the Vespa and Lambretta models in Italy. These scooters were intended to provide economical personal transportation (engines from ). The original layout is still widely used in this application. Maxi-scooters, with larger engines from have been developed for Western markets.
Scooters are popular for personal transportation partly due to being more affordable, easier to operate, and more convenient to park and store than a car. Licensing requirements for scooters are easier and cheaper than for cars in most parts of the world, and insurance is usually cheaper. The term motor scooter is sometimes used to avoid confusion with kick scooter, but can then be confused with motorized scooter or e-scooter, a kick-scooter with an electric motor.
Description
The Shorter Oxford English Dictionary defines a motor scooter as a motorcycle similar to a kick scooter with a seat, a floorboard, and small or low wheels. The US Department of Transportation defines a scooter as a motorcycle that has a platform for the operator's feet or has integrated footrests and has a step-through architecture.
The classic scooter design features a step-through frame and a flat floorboard for the rider's feet. This design is possible because most scooter engines and drive systems are attached to the rear axle or under the seat. Unlike a conventional motorcycle, in which the engine is mounted on the frame, most modern scooters allow the engine to swing with the rear wheel, while most vintage scooters and some newer retro models have an axle-mounted engine. Modern scooters starting from the late-1980s generally use a continuously variable transmission (CVT), while older ones use a manual transmission with the gearshift and clutch control built into the left handlebar.
Scooters usually feature bodywork, including a front leg shield and body that conceals all or most of the mechanicals. There is often some integral storage space, either under the seat, built into the front leg shield, or both. Scooters have varying engine displacements and configurations ranging from single-cylinder to twin-cylinder models.
Traditionally, scooter wheels are smaller than conventional motorcycle wheels and are made of pressed steel or cast aluminum alloy, bolt on easily, and often are interchangeable between front and rear. Some scooters carry a spare wheel. Many recent scooters use conventional front forks with the front axle fastened at both ends.
Regulatory classification
Some jurisdictions do not differentiate between scooters and motorcycles, though others classify smaller-engine scooters (typically maximum) as moped-class vehicles rather than motorcycles. Moped-class scooters are often subject to less stringent regulation: in many jurisdictions, 50 cc scooters can be driven with a normal car driver's license (or, as in Denmark, by adults aged 18 or over without any license at all, other than holding valid liability insurance), and they may pay less road tax and be subject to less stringent roadworthiness testing.
United States
For all legal purposes in the United States of America, the National Highway Traffic Safety Administration (NHTSA) recommends using the term motorcycle for all of these vehicles. However, while NHTSA excludes the term motor scooter from legal definition, it proceeds, in the same document, to give detailed instructions on how to import a small motor scooter.
California
The US state of California has a regulatory system for 2- and 3-wheeled vehicles. It classifies vehicles with fewer than four wheels into the following categories:
Motorcycle: a motorcycle is any 2- or 3-wheeled gas operated vehicle weighing under 1,500 lbs. with an engine displacement greater than or equal to 150ccs. Operation requires an M1 class license, and such vehicles must be registered with the state and carry mandatory insurance as well as bear a motorcycle license plate. Motorcycles may travel on any public roadway, including freeways, and may carry a single passenger in addition to the driver. Helmets are mandatory.
Motor-driven cycle: a motor-driven cycle is a 2-wheeled gas-operated vehicle with an engine displacement of 149ccs or less that does not qualify as a moped (see below) and is capable of traveling greater than 30 mph. It has the same licensing, registration, insurance, license plating, and helmet requirements as a motorcycle, though it may not travel on freeways. Such vehicles are commonly referred to as "scooters".
Moped: a moped (or "motorized bicycle") is a 2- or 3-wheeled device with an automatic transmission capable of traveling no more than 30 mph, with either a gas engine displacement of less than 50ccs (i.e., 49ccs or less) with built-in pedals like a bicycle for human operation, OR, if powered only by electricity, it must not produce more than four gross brake horsepower (bicycle pedals are optional for electric mopeds). There are no registration or insurance requirements for the device, but the operator themself must have an M1 or M2 class license and must personally carry the minimum state automobile insurance and the moped itself must bear a special moped license plate. A single passenger is permitted if the vehicle is fitted with a specific seat and footrests for same.
Motorized tricycle/quadricycle: a motorized tricycle or quadricycle is a 3- or 4-wheeled vehicle propelled by a gas motor not capable of traveling greater than 30 mph and with a gross brake horsepower of 2 or less.
Motorized scooter: a motorized scooter is a 2-wheeled vehicle not capable of traveling greater than 15 mph with a floorboard designed to be stood upon while operating. They do not require a license plate or insurance, and may not be driven on a roadway with a posted speed limit greater than 25 mph. A valid class C driver license is required, as is a bicycle helmet. Passengers are prohibited. They may be operated on a bikepath or bikeway but not on a sidewalk. If a given roadway has a bicycle lane, the motorized scooter must travel within it, and can only make a left-hand turn by dismounting and crossing an intersection as a pedestrian.
Electric bicycle: California recognizes three classes of electric bicycles. A class 1 electric bicycle is a bicycle with pedals whose electric motor only assists the rider when using the pedals and stops assisting when the bicycle reaches 20 mph; a class 2 electric bicycle is a bicycle with pedals whose motor can drive the bicycle entirely on its own, but will not assist the rider above 20 mph; a class 3 electric bicycle is identical to a class 1 electric bicycle, but is capable of traveling up to 28 mph before the motor stops assisting the rider AND is equipped with a speedometer. No electric bicycle requires insurance, a license, or any form of registration or license plate, as it is not considered a "motor vehicle" by the state.
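As an informal summary of the categories above, a rough decision function might look like the following sketch. It is illustrative only: it simplifies several statutory edge cases (horsepower caps, seat and footrest requirements, electric bicycles, tricycles), and the names and thresholds are taken from the list above rather than from the statute itself.

```python
# Rough, simplified sketch of the California 2-wheeler categories described
# above. Statutory details are omitted; treat this as an illustration only.

def classify_two_wheeler(displacement_cc=0, top_speed_mph=0,
                         electric=False, has_pedals=False,
                         standing_floorboard=False):
    # Motorized scooter: stood upon while operating, capped at 15 mph.
    if standing_floorboard and top_speed_mph <= 15:
        return "Motorized scooter"
    # Moped: no more than 30 mph, and either gas under 50cc with pedals,
    # or electric (the 4 hp cap is omitted here).
    if top_speed_mph <= 30 and ((displacement_cc < 50 and has_pedals) or electric):
        return "Moped (motorized bicycle)"
    # Motorcycle: engine displacement of 150cc or more.
    if displacement_cc >= 150:
        return "Motorcycle"
    # Motor-driven cycle: 149cc or less and not a moped.
    if displacement_cc > 0:
        return "Motor-driven cycle"
    return "Outside this simplified sketch"

print(classify_two_wheeler(displacement_cc=125, top_speed_mph=55))
# -> Motor-driven cycle (commonly referred to as a "scooter", per the text)
```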
Emissions
The emissions of mopeds and scooters have been the subject of multiple studies. Studies have found that two-stroke 50 cc mopeds, with and without catalytic converters, emit ten to thirty times more hydrocarbons and particulate emissions than the outdated Euro 3 automobile standards. In the same study, four-stroke mopeds, with and without catalytic converters, emitted three to eight times more hydrocarbons and particulate emissions than the Euro 3 automobile standards. Approximate parity with automobiles was achieved for NOx emissions in these studies. Emissions performance was tested on a g/km basis and was unaffected by fuel economy. The United States Environmental Protection Agency has allowed motorcycles, scooters, and mopeds with engine displacements less than 280 cc to emit ten times the NOx and six times the CO of the median Tier II bin 5 automobile regulations. An additional air quality challenge can also arise from the use of moped and scooter transportation over automobiles, as a higher density of two-wheeled vehicles can be supported by existing transportation infrastructure.
In Genoa, 2-stroke engine scooters made before 1999 have been banned since 2019.
In some cities, such as Shanghai, petrol scooters/mopeds are banned and only LPG or electric scooters are allowed to be used in the city due to air pollution.
History
Predecessors
Scooter-like traits began to develop in motorcycle designs around the 1900s. In 1894, Hildebrand & Wolfmüller in Munich, Germany produced the first motorcycle that was available for purchase. Their motorcycle had a step-through frame, with its fuel tank mounted on the down tube, its parallel two-cylinder engine mounted low on the frame, and its cylinders mounted in line with the frame. It was water-cooled and had a radiator built into the top of the rear fender. It became the first mass-produced and publicly sold powered two-wheel vehicle, and among the first powered mainly by its engine rather than foot pedals. Maximum speed was . The rear wheel was driven directly by rods from the pistons in a manner similar to the drive wheels of steam locomotives. Only a few hundred such bikes were built, and the high price and technical difficulties made the venture a financial failure for both Wolfmüller and his financial backer, Hildebrand.
In France, the Auto-Fauteuil was introduced in 1902. This was basically a step-through motorcycle with an armchair instead of a traditional saddle. Production continued until 1922.
First generation (1915–1930)
The Motoped entered production in 1915 and is believed to be the first motor scooter. It was followed that year by the Autoped, whose engine was engaged by pushing the handlebar column forward and whose brake was engaged by pulling the column back. Autopeds were made in Long Island, New York from 1915 to 1921, and were also made under license by Krupp in Germany from 1919 to 1922, following World War I.
The number of scooter manufacturers and designs increased after World War I. The British ABC Motors Skootamota, the Kenilworth, and the Reynolds Runabout debuted in 1919, with the Gloucestershire Aircraft Company following with its Unibus in 1920. The Skootamota was noted for being practical, popular, and economical, the Kenilworth for its electric lights, and the Reynolds Runabout for its advanced specifications, including front suspension, a two-speed gearbox, leg shields, and a seat sprung with leaf springs and coil springs. The Unibus also had a two-speed gearbox, but it is more notable for its full bodywork, similar to that which would appear on second- and third-generation scooters.
The reputation of first-generation scooters was damaged by a glut of unstable machines with flexible frames, and more substantial examples like the Reynolds Runabout and the Unibus were too expensive to be competitive. The first generation had ended by the mid-1920s.
Second generation (1936–1968)
E. Foster Salsbury and Austin Elmore developed the Salsbury Motor Glide, a scooter with a seat above an enclosed drivetrain, and began production in 1936 in California; the Salsbury company later became a division of Northrop Aircraft. In 1938, Salsbury introduced a more powerful scooter with a continuously variable transmission (CVT). This was the first use of a CVT on a scooter. It was such a success that Salsbury attempted to license the design to several European manufacturers, including Piaggio. The Motor Glide set the standard for all later models, inspiring the production of motor scooters by Powell, Moto-scoot, Cushman, Rock-Ola, and others.
The Cushman Company produced motor scooters from 1936 to 1965. Cushman was an engine manufacturer that started making scooters after Salsbury found their offer to supply engines to be unacceptable. Cushman and Salsbury competed against each other, with both companies advertising the economy of their scooters. Cushman claimed an efficiency of at . Cushman introduced a centrifugal clutch to their scooters in 1940. The Cushman Auto Glide Model 53 was designed to be dropped by parachute with Army Airborne troops, and was eventually called the "Cushman Airborne". Cushman scooters were also used around military bases for messenger service.
Salsbury continued manufacturing scooters until 1948, while Cushman continued until 1965.
Small numbers of the Harley-Davidson Topper scooter were produced from 1960 to 1965 using the engine from their line of light motorcycles based on the DKW RT 125. It had a fiberglass body, a continuously variable transmission, and a pull-cord starting mechanism.
Early postwar Japan
After World War II, wartime aircraft manufacturers were forbidden from making aircraft, and had to find other products to make in order to stay in business. Fuji Sangyo, a part of the former Nakajima Aircraft Company, began production of the Fuji Rabbit S-1 scooter in June 1946. Inspired by Powell scooters used by American servicemen, the S1 was designed to use surplus military parts, including the tailwheel of a Nakajima bomber, re-purposed as the front wheel of the S1. Later that year, Mitsubishi introduced the C10, the first of its line of Silver Pigeon scooters. This was inspired by a Salsbury Motor Glide that had been brought to Japan by a Japanese man who had lived in the United States.
Production of the Mitsubishi Silver Pigeon and the Fuji Rabbit continued through several series until the 1960s. Some series of the Fuji Rabbit were developed to a high level of technological content; the S-601 Rabbit Superflow had an automatic transmission with a torque converter, an electric starter, and pneumatic suspension. Mitsubishi ended scooter production with the C140 Silver Pigeon, while Fuji continued production of the Rabbit until the last of the S-211 series was built in June 1968.
Third generation (1946–1964) and beyond
Italy - Vespa and Lambretta
In post-World War II Italy the Piaggio Vespa became the standard for scooters, and has remained so for over 60 years. Patented in April 1946, it was designed by the aeronautical engineer Corradino D'Ascanio, using aircraft design principles and materials. D'Ascanio's scooter had various new design concepts, including a stress-bearing structure. The gear shift lever was moved to the handlebars for easier riding. The engine was placed near the rear wheel, eliminating the belt drive. The typical fork support was replaced by an arm similar to an aircraft undercarriage for easier tire-changing. The body design protected the driver from wind and road dirt. The smaller wheels and shorter wheelbase provided improved maneuverability through narrow streets and congested traffic. The name originated when Piaggio's president, upon seeing the prototype, remarked "Sembra una vespa" ("It looks like a wasp").
Months after the Vespa, in 1947, Innocenti introduced the Lambretta, beginning a rivalry with Vespa. The scooter was designed by Innocenti, his General Director Giuseppe Lauro and engineer Pierluigi Torre. The Lambretta was named after Lambrate, the Milanese neighborhood where the factory stood. It debuted in 1947 at the Paris Motor Show. The Lambretta 'A' went on sale on December 23, 1947, and sold 9,000 units in one year. It was efficient, at a time when fuel was severely rationed. It had a top speed of from a fan-cooled engine of . The first Lambretta designs had shaft drive and no rear suspension, later designs used various drive and suspension systems until Lambretta settled on a swingarm-mounted engine with chain drive.
Other Italian firms, such as Italjet and Iso, also manufactured scooters in the 1950s and 1960s.
Germany
Germany's aviation industry was also dismantled after World War II. Heinkel stayed in business by making bicycles and mopeds, while Messerschmitt made sewing machines and automobile parts. Messerschmitt took over the German license to manufacture Vespa scooters from Hoffmann in 1954 and built Vespas under license from 1954 to 1964. Heinkel designed and built its own scooters. The Heinkel Tourist was a large and relatively heavy touring scooter produced in the 1960s. It provided good weather protection with a full fairing, and the front wheel turned under a fixed nose extension. It had effective streamlining, perhaps thanks to its aircraft ancestry. Although it had only a four-stroke motor, it could sustain speeds of . Heinkel scooters were known for their reliability.
Glas, a manufacturer of agricultural machinery, made the Goggo scooter from 1951 to 1955. Glas discontinued scooter production to concentrate on its Goggomobil microcar.
Several manufacturers in the German motorcycle industry made scooters. NSU made Lambrettas under license from 1950 to 1955, during which time they developed their Prima scooter. Production of the Prima began when NSU's license to build Lambrettas ran out. Zündapp made the popular Bella scooter in the 1950s and 1960s. It was in production for about ten years, in three engine sizes, and could perform all day at a steady speed of . Extremely reliable and very well made, many of these scooters still exist today. Maico built the large Maicoletta scooter in the 1950s. It had a single-cylinder piston-port two-stroke engine, with four foot-operated gears and centrifugal fan cooling, and came in a choice of three engine sizes. The tubular frame was built on motorcycle principles, with long-travel telescopic forks and wheels. The Maicoletta had a top speed of , which was comparable with most motorcycles of the time. Other German scooters made by motorcycle manufacturers included the DKW Hobby, the Dürkopp Diana, and the TWN Contessa.
United Kingdom
In the United Kingdom, Douglas manufactured the Vespa under license from 1951 to 1961 and assembled them from 1961 to 1965. BSA and Triumph made several models of scooter including the BSA Dandy 70, the Triumph Tina, and the Triumph Tigress. The Tigress was made from 1959 to 1964 and was sold with a 175 cc 2-stroke single engine or a 250 cc 4-stroke twin; both versions used a foot-operated four-speed gearbox. The 250 twin had a top speed of . The BSA Sunbeam was a badge engineered version of the Tigress.
The early 2000s saw the small-scale production of the Scomadi scooter, a retro-styled scooter designed and initially manufactured in the UK. Scomadis were styled after classic Lambrettas. A number of models of different capacities were produced. Production was later moved to Thailand.
Eastern Bloc
In Eastern Bloc countries, scooters also became popular in the second half of the 1950s, but their production was a result of planned economy rather than market competition. The Soviet Union started in 1957 by producing reverse-engineered copies of the 150 cc Vespa and the 200 cc Glas Goggo, as the Vyatka and the Tula T-200 respectively. These and their developments were manufactured in large numbers into the 1980s. In East Germany, IWL manufactured several 125 cc and 150 cc scooters of its own design (most notably the SR 59 Berlin) from 1955 to 1964, when the authorities decided to switch the production to trucks. Small 50 cc Simson scooters were also produced, and remained in manufacture into the 1990s. The only Polish scooter, the 150 cc to 175 cc WFM Osa, was produced from 1959 until 1965. In Czechoslovakia, the distinctive 175 cc Čezeta scooter was produced at the turn of the 1950s and 1960s, after which only small 50 cc Jawa scooter-style mopeds remained.
India
Scooters are responsible for about 70 percent of India's gasoline consumption and the cost of a 100-kilometer ride is approximately 100 rupees ($1.30). Electric scooters are just one percent of all scooters, but this number is expected to increase to 74 percent of all scooters sold in India by 2040. The cost of operating an electric scooter is a sixth of the cost of a gasoline version.
API were the first scooter manufacturers in India, with a Lambretta model in the 1950s. Bajaj Auto manufactured its line of scooters from 1972 until the line was discontinued in 2009; it included the Chetak, Legend, Super, and Priya. The Chetak and Legend were based on the Italian Vespa Sprint.
Another Vespa partner in India was LML Motors. Beginning as a joint venture with Piaggio in 1983, LML, in addition to being a large parts supplier for Piaggio, produced the P-Series scooters for the Indian market. In 1999, after a protracted dispute with Piaggio, LML bought back Piaggio's stake in the company and the partnership ceased. LML continues to produce (and also export) the P-Series variant, known as the Stella in the U.S. market and by other names in different markets.
East Asia
Since the 1980s Japan, and latterly China and Taiwan, have become world leaders in the mass production of plastic-bodied scooters, most often with "twist-and-go" type transmissions (where gear selection and clutch operation are fully automatic). A popular early model was the Honda Spree/Nifty Fifty. Advertising campaigns in the USA featured popular stars like Michael Jackson (Suzuki), and Grace Jones and Lou Reed (Honda), and sales of Japanese scooters peaked there in the 1980s. Both 2-stroke and 4-stroke plastic-bodied scooters have been mass-produced in East Asia, with engine and transmission designs being either local designs or license-built versions of European engines (e.g. Minarelli or Morini). A popular 4-stroke engine in Chinese production is the GY6 engine, but electric motor scooters are steadily increasing their share of the Chinese home market.
Australia
Unlike other countries, Australia had no major motorcycle companies or scooter manufacturers during the original heyday of scooters in the 1950s and 1960s. Scooters were traditionally imported from Italy, and then, in the 1970s and 1980s, from Japan and Asia. Australian scooters have only appeared in the last 20 years or so, with many of them relating to the recent advent and viability of the electric motor.
Australian scooter companies design, market, and manage the company from Australia, but manufacturing is largely done in Asia, with some assembly in Australia. The oldest scooter company in Australia is Vmoto, a Perth-based company that started off importing and distributing scooters but then began to manufacture its own electric scooters. Sydney-based Hunted Scooters produces smaller numbers of niche petrol scooters, based on the customised Honda Ruckus scooters of Japan.
More recently, Sydney-based Fonz Moto has produced electric scooters and electric motorbikes, assembled in Australia using overseas- and Australian-sourced components.
Developments
Trends around the world have seen new developments of the classic scooter, some with larger engines and tires. High-end scooter models now include comprehensive technological features, including cast aluminium frames, engines with integral counterbalancing, and cross-linked brake systems. Some of these scooters have comfort features such as an alarm, start button, radio, windshield, heated hand grips and full instrumentation (including clock or outside temperature gauge).
Three-wheeled scooter
During World War II, Cushman made the Model 39, a three-wheeled utility scooter with a large storage bin between the front wheels. They sold 606 to the US military during the war.
The Piaggio MP3 and Yamaha Tricity are modern tilting three-wheeled scooters. Unlike most motorcycle trikes, they are reverse trikes, with two front wheels which steer, and a single driven rear wheel. The front suspension allows both front wheels to tilt independently, so that all three wheels remain in contact with the ground as it leans when cornering.
Maxi-scooter
A maxi-scooter or touring scooter is a large scooter, with engines ranging in size from , using larger frames and longer wheelbases than normal scooters. Typically, the dash is fixed and is not mounted on the handlebars.
The trend toward maxi-scooters began in 1986 when Honda introduced the CN250 Helix / Fusion / Spazio. Many years later, Suzuki launched the Burgman 400 and 650 models. Honda, Aprilia/Gilera, Yamaha, Kymco, and others have also introduced scooters with larger engine displacements. Honda's PS250 (also known as Big Ruckus) features a motorcycle-like exoskeleton instead of bodywork.
A new direction in maxi-scooters has the engine fixed to the frame. This arrangement improves handling by allowing bigger wheels and less unsprung weight, also tending to move the centre of gravity forwards. The trend toward larger, more powerful scooters with fully automatic transmissions converges with an emerging trend in motorcycle design that foreshadows automatic transmission motorcycles with on-board storage. Examples include the Aprilia Mana 850 automatic-transmission motorcycle and the Honda NC700D Integra, which is a scooter built on a motorcycle platform.
Enclosed scooter
Some scooters, including the BMW C1 and the Honda Gyro Canopy, have a windscreen and a roof. The Piaggio MP3 offered a tall windscreen with roof as an option.
Four-stroke engines and fuel-injection
With increasingly strict environmental laws, including United States emission standards and European emission standards, more scooters are using four-stroke engines again.
Electric scooter
Scooters may be driven by an electric motor drawing on a rechargeable battery. Petroleum hybrid-electric scooters are available. Electric scooters are rising in popularity because of higher gasoline prices, and battery technology is gradually improving, making this form of transportation more practical; battery size is constrained by what the frame will fit, limiting range.
Underbone
An underbone is a motorcycle built on a chassis consisting mostly of a single large diameter tube. An underbone differs from a conventional motorcycle mainly by not having a structural member connecting the head stock to the structure under the front of the seat and by not having a fuel tank or similarly styled appendage in the space between the rider's knees. Underbones are commonly referred to as "step-throughs" and appeal to both genders in much the same way as scooters.
Underbones are often mistaken for scooters and are sometimes marketed as such. However, an underbone does not have a footboard, and is therefore not a scooter.
The engine of an underbone is usually fixed to the chassis under the downtube, while a scooter usually has its engine mounted on its swingarm. As a result, underbone engines are usually further forwards than those of scooters. A typical underbone therefore has a more central centre of gravity than a typical scooter. Furthermore, having an engine mounted on the swingarm gives a typical scooter more unsprung mass than a typical underbone. These factors give a typical underbone better handling than a typical scooter.
The engine of an underbone typically drives the rear wheel by a chain of the kind used on a conventional motorcycle. This final drive is often concealed by a chain enclosure to keep the chain clean and reduce wear. The final drive of a scooter with a swingarm-mounted engine runs in a sealed oil bath and is shorter.
An underbone is usually fitted with near full-size motorcycle wheels, which are often spoked. Scooter wheels are usually small, and made from pressed steel. In both cases, more recent examples often have cast alloy wheels. The bigger wheels of an underbone allow more ventilation and better cooling for the brakes than the smaller wheels of a scooter.
While the engine and suspension layouts described here for scooters and underbones are typical, they are not rigid definitions. There have been scooters with fixed engines and chain drive, and there have been underbones with swingarm-mounted engines. A twenty-first century example of variance from the typical scooter layout is the Suzuki Choinori, which had both its engine and its rear axle rigidly bolted to its frame.
Popularity
Motor scooters are very popular in Asia, particularly in countries such as India, Indonesia, the Philippines, Thailand, Vietnam, China, Japan and Taiwan, where there is local manufacturing. They are also popular in the West, mainly in Europe (particularly Italy and the Mediterranean), but not in the US. Parking, storage, and traffic issues in crowded cities, along with the easy driving position, make them a popular form of urban transportation. In many nations, scooter (and other small motorcycle) sales exceed those of automobiles, and a motor scooter is often the family transport.
In Taiwan, road infrastructure has been built specifically with two-wheelers in mind, with separate lanes and intersection turn boxes. In Thailand, scooters are used for street-to-door taxi services, as well as for navigating through heavy traffic. The extensive range of cycle tracks in the Netherlands extends into parts of Belgium and Germany and is open to all small powered two-wheelers. Motor scooters are popular because of their size, fuel-efficiency, weight, and typically more storage room than a motorcycle offers. In many localities, certain road motor scooters are considered by law to be in the same class as mopeds or small motorcycles and therefore have fewer restrictions than do larger motorcycles.
According to the Motorcycle Industry Council, sales of motor scooters in the United States have more than doubled since 2000. The motorcycle industry as a whole has seen 13 years of consecutive growth. According to council figures, 42,000 scooters were sold in 2000. By 2004, that number had increased to 97,000. Scooter sales in 2008 in the United States were up 41% on 2007, and represented 9% of all powered two-wheeler sales. However, there was a decrease in US scooter sales in 2009 of 59% against 2008, compared with a 41% fall for all powered two-wheelers, while the scooter's contribution to total US powered two-wheeler sales in 2009 fell to 6%. After a two-year slump, scooter sales in the US rebounded in the first quarter of 2011.
In popular culture
A common reference for the glamorous image of scooters is Roman Holiday, a 1953 romantic comedy in which Gregory Peck carries Audrey Hepburn around Rome on a Vespa.
In the 1960s mod subculture, some members of this British youth cult used motorscooters for transportation, usually Vespas or Lambrettas. Scooters had provided inexpensive transportation for decades before the development of the mod subculture, but the mods stood out in the way that they treated the vehicle as a fashion accessory, expressed through clubs such as the Ace of Herts. Italian scooters were preferred for their clean-lined, curving shapes and gleaming chrome. For young mods, Italian scooters were the "embodiment of continental style and a way to escape the working-class row houses of their upbringing". They customized their scooters by painting them in "two-tone and candyflake and overaccessorized [them] with luggage racks, crash bars, and scores of mirrors and fog lights", and they often put their names on the small windscreen. Engine side panels and front bumpers were taken to local electroplating workshops and plated with highly reflective chrome.
Scooters were also a practical and accessible form of transportation for 1960s teens. In the early 1960s, public transport stopped relatively early in the night, and so having scooters allowed mods to stay out all night at dance clubs. To keep their expensive suits clean and keep warm while riding, mods often wore long army parkas. For teens with low-end jobs, scooters were cheaper than cars, and they could be bought on a payment plan through newly available hire purchase plans. After a law was passed requiring at least one mirror be attached to every motorcycle, mods were known to add four, ten, or as many as 30 mirrors to their scooters. The cover of The Who's album Quadrophenia, which includes themes related to mods and rockers, depicts a young man on a Vespa GS with four mirrors attached. The album spawned a 1979 motion picture of the same name.
Scooterboy magazines include the British monthly magazine Scootering and the American quarterly magazine Scoot!.
| Technology | Motorized road transport | null |
3536933 | https://en.wikipedia.org/wiki/Sodium%20dichromate | Sodium dichromate | Sodium dichromate is the inorganic compound with the formula Na2Cr2O7. However, the salt is usually handled as its dihydrate Na2Cr2O7·2H2O. Virtually all chromium ore is processed via conversion to sodium dichromate and virtually all compounds and materials based on chromium are prepared from this salt. In terms of reactivity and appearance, sodium dichromate and potassium dichromate are very similar. The sodium salt is, however, around twenty times more soluble in water than the potassium salt (49 g/L at 0 °C) and its equivalent weight is also lower, which is often desirable.
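As a rough check on the equivalent-weight comparison, the molar masses can be computed from standard atomic masses. The Python sketch below assumes the usual six-electron reduction of the dichromate ion in redox applications:

# Equivalent weight = molar mass / electrons transferred (6 e− per dichromate ion).
ATOMIC_MASS = {"Na": 22.99, "K": 39.10, "Cr": 52.00, "O": 16.00}

def molar_mass(formula_counts):
    return sum(ATOMIC_MASS[element] * n for element, n in formula_counts.items())

na_salt = molar_mass({"Na": 2, "Cr": 2, "O": 7})   # ~262.0 g/mol
k_salt = molar_mass({"K": 2, "Cr": 2, "O": 7})     # ~294.2 g/mol

print(na_salt / 6)   # ~43.7 g per equivalent for the sodium salt
print(k_salt / 6)    # ~49.0 g per equivalent for the potassium salt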
Preparation
Sodium dichromate is generated on a large scale from ores containing chromium(III) oxides. The ore is fused with a base, typically sodium carbonate, at around 1000 °C in the presence of air (source of oxygen):
2 Cr2O3 + 4 Na2CO3 + 3 O2 → 4 Na2CrO4 + 4 CO2
This step solubilizes the chromium and allows it to be extracted into hot water. At this stage, other components of the ore such as aluminium and iron compounds, are poorly soluble. Acidification of the resulting aqueous extract with sulfuric acid or carbon dioxide affords the dichromate:
2 Na2CrO4 + 2 CO2 + H2O → Na2Cr2O7 + 2 NaHCO3
2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O
The dichromate is isolated as the dihydrate by crystallization. In this way, many millions of kilograms of sodium dichromate are produced annually.
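Because these two steps together convert one mole of chromium(III) oxide into one mole of sodium dichromate, the theoretical yield per kilogram of Cr2O3 follows directly from the molar masses. A minimal Python sketch, assuming standard atomic masses and idealized 100% conversion:

# Net conversion: 2 Cr2O3 -> 4 Na2CrO4 -> 2 Na2Cr2O7 (1 mol Cr2O3 per mol salt).
M_CR2O3 = 2 * 52.00 + 3 * 16.00                      # 152.0 g/mol
M_NA2CR2O7 = 2 * 22.99 + 2 * 52.00 + 7 * 16.00       # ~262.0 g/mol
M_DIHYDRATE = M_NA2CR2O7 + 2 * (2 * 1.008 + 16.00)   # ~298.0 g/mol

per_kg_cr2o3 = 1.0  # kg of chromium(III) oxide
print(per_kg_cr2o3 * M_NA2CR2O7 / M_CR2O3)   # ~1.72 kg anhydrous Na2Cr2O7
print(per_kg_cr2o3 * M_DIHYDRATE / M_CR2O3)  # ~1.96 kg of the dihydrate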
Since chromium(VI) is toxic, especially as the dust, such factories are subject to stringent regulations. For example, effluent from these refineries is treated with reducing agents to return any chromium(VI) to chromium(III), which is less threatening to the environment. A variety of hydrates of this salt are known, including the decahydrate below 19.5 °C (CAS# ) as well as hexa-, tetra-, and dihydrates. Above 62 °C, these salts lose water spontaneously to give the anhydrous material.
The dihydrate is crystallised at around 30 to 35 °C.
Reactions
Dichromate and chromate salts are oxidizing agents. For the tanning of leather, sodium dichromate is first reduced with sulfur dioxide.
In the area of organic synthesis, this compound oxidizes benzylic and allylic C-H bonds to carbonyl derivatives. For example, 2,4,6-trinitrotoluene is oxidized to the corresponding carboxylic acid. Similarly, 2,3-dimethylnaphthalene is oxidized by Na2Cr2O7 to 2,3-naphthalenedicarboxylic acid.
Secondary alcohols are oxidized to the corresponding ketone, e.g. menthol to menthone; dihydrocholesterol to cholestanone:
3 R2CHOH + Cr2O72− + 2 H+ → 3 R2C=O + Cr2O3 + 4 H2O
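The coefficients in this equation follow from combining standard half-reactions: the six-electron reduction of dichromate (written here to the oxide, to match the overall equation above) balances three two-electron alcohol oxidations:

Cr2O72− + 8 H+ + 6 e− → Cr2O3 + 4 H2O
3 R2CHOH → 3 R2C=O + 6 H+ + 6 e−

Adding the two half-reactions and cancelling six H+ from each side reproduces the overall equation.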
Relative to the potassium salt, the main advantage of sodium dichromate is its greater solubility in water and polar solvents like acetic acid.
For hexavalent chrome plating, chromate is converted to the so-called chromic acid (essentially chromium trioxide) by sulfuric acid.
Sodium dichromate can also be used for the conversion of fluorene to fluorenone.
Safety
Like all hexavalent chromium compounds, sodium dichromate is carcinogenic. The compound is also corrosive, and exposure may produce severe eye damage or blindness. Exposure is further associated with impaired fertility, heritable genetic damage and harm to unborn children.
| Physical sciences | Metallic oxyanions | Chemistry |
3537945 | https://en.wikipedia.org/wiki/Boudinage | Boudinage | Boudinage is a geological term for structures formed by extension, where a rigid tabular body, such as hornfels, is stretched and deformed amidst less competent surroundings. The competent bed begins to break up, forming sausage-shaped boudins. Boudinage is common and can occur at any scale, from microscopic to lithospheric, and can be found in all terranes. In lithospheric-scale tectonics, boudinage of strong layers can signify large-scale creep transfer of rock matter. The study of boudinage can also help provide insight into the forces involved in tectonic deformation of rocks and their strength.
Boudinage can develop in two ways: planar fracturing into rectangular fragments or by necking or tapering into elongate depressions and swells. Boudins are typical features of sheared veins and shear zones where, due to stretching along the shear foliation and shortening perpendicular to this, rigid bodies break up. This causes the resulting boudin to take a characteristic sausage or barrel shape. They can also form rectangular structures. Ductile deformation conditions also encourage boudinage rather than imbricate fracturing. Boudins can become separated by fractures or vein material; such zones of separation are known as boudin necks.
In three dimensions, the boudinage may take the form of ribbon-like boudins or chocolate-tablet boudins, depending on the axis and isotropy of extension. They range in thickness from about 1 cm to about 20 m.
Types
There are three different types of boudinage: no-slip, S-slip, and A-slip. No-slip boudinage occurs when there is no slip, resulting in a symmetrical structure. S-slip boudinage occurs when the boudin moves in opposition to the shear movement, whereas A-slip boudinage occurs when it moves with the direction of the shear. These types can be further classified into five groups in relation to their general shape: drawn, torn, domino, gash and shearband boudins. In general, drawn and torn shapes form where there is no-slip boudinage, domino and gash boudins by A-slip, and shearband boudins by S-slip boudinage.
Etymology
Lohest (1909) coined the term boudinage, which is derived from the French word "boudin", meaning blood sausage. Boudins were first observed and described by Belgian geologists in the Collignon quarry near Bastogne in the Ardennes (Belgium).
| Physical sciences | Structural geology | Earth science |
3538929 | https://en.wikipedia.org/wiki/Fibrobacterota | Fibrobacterota | Fibrobacterota is a small bacterial phylum which includes many of the major rumen bacteria, allowing for the degradation of plant-based cellulose in ruminant animals. Members of this phylum were previously categorized in other phyla. The genus Fibrobacter (the only genus of Fibrobacterota) was removed from the genus Bacteroides in 1988.
Phylogeny and comparative genomic studies
Although Fibrobacterota is currently recognized as a distinct phylum, phylogenetic studies based on RpoC and gyrase B protein sequences indicate that Fibrobacter succinogenes is closely related to species from the phyla Bacteroidetes and Chlorobi. The species from these three phyla also branch in the same position based upon conserved signature indels in a number of important proteins. Lastly and most importantly, comparative genomic studies have identified two conserved signature indels (a 5–7 amino acid insert in the RpoC protein and a 13–16 amino acid insertion in serine hydroxymethyltransferase) and one signature protein (PG00081) that are uniquely shared by all of the species from these three phyla. All of these results provide compelling evidence that the species from these three phyla shared a common ancestor exclusive of all other bacteria, and it has been proposed that they should all be recognized as part of a single “FCB” superphylum.
Phylogeny
Phylogeny of Fibrobacterota.
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Class Chitinispirillia Sorokin et al. 2016
Order Chitinispirillales Sorokin et al. 2016
Family Chitinispirillaceae Sorokin et al. 2016
Genus Chitinispirillum Sorokin et al. 2016
Species C. alkaliphilum Sorokin et al. 2016
Class Chitinivibrionia Sorokin et al. 2014
Order Chitinivibrionales Sorokin et al. 2014
Family Chitinivibrionaceae Sorokin et al. 2014
Genus Chitinivibrio Sorokin et al. 2014
Species C. alkaliphilus Sorokin et al. 2014
Class Fibrobacteria Spain et al. 2012
Order Fibrobacterales Spain et al. 2012 ["Fibromonadales" Abdul Rahman et al. 2016]
Family Fibrobacteraceae Spain et al. 2012 ["Fibromonadaceae" Abdul Rahman et al. 2016]
Genus Fibrobacter Montgomery et al. 1988
Species F. intestinalis Montgomery et al. 1988
Species F. succinogenes (Hungate 1950) Montgomery et al. 1988
Subspecies F. s. elongatus Montgomery et al. 1988
Subspecies F. s. succinogenes (Hungate 1950) Montgomery et al. 1988
Genus "Candidatus Fibromonas" Abdul Rahman et al. 2016
Species "Ca. F. termitidis" Abdul Rahman et al. 2016
Genus "Hallerella" Wylensek et al. 2020
Species "H. porci" Wylensek et al. 2021
Species "H. succinigenes" Wylensek et al. 2020
Distribution
The phylum Fibrobacterota is considered to be closely related to the CFB group [Cytophaga-Flavobacterium-Bacteroidota]. It contains the genus Fibrobacter, which has strains present in the guts of many mammals, including cattle and pigs. The two described species in this genus, Fibrobacter succinogenes and Fibrobacter intestinalis, are important members of fibrolytic communities in mammalian guts and have received a lot of attention in recent decades due to the long-standing interest in microbes capable of degrading plant fiber.
Molecular evidence based on the amplification of 16S rRNA genes from various environments suggests that the phylum is much more widespread than previously thought. Most of the clones from mammalian environments group along with the known isolates in what has been called subphylum 1. Members of subphylum 2, however, have so far been found only in the guts of termites and in some litter-feeding cockroaches. The predominance of subphylum 2 in cellulolytic fibre-associated bacterial communities in the hindguts of wood-feeding Nasutitermes corniger suggests that they play an important role in the breakdown of plant material in higher termites.
| Biology and health sciences | Gram-negative bacteria | Plants |
3539477 | https://en.wikipedia.org/wiki/Heterodontosaurus | Heterodontosaurus | Heterodontosaurus is a genus of heterodontosaurid dinosaur that lived during the Early Jurassic, 200–190 million years ago. Its only known member species, Heterodontosaurus tucki, was named in 1962 based on a skull discovered in South Africa. The genus name means "different toothed lizard", in reference to its unusual, heterodont dentition; the specific name honours G. C. Tuck, who supported the discoverers. Further specimens have since been found, including an almost complete skeleton in 1966.
Though it was a small dinosaur, Heterodontosaurus was one of the largest members of its family, reaching between and possibly in length, and weighing between . The skull was elongated, narrow, and triangular when viewed from the side. The front of the jaws was covered in a horny beak. It had three types of teeth; in the upper jaw, small, incisor-like teeth were followed by long, canine-like tusks. A gap divided the tusks from the chisel-like cheek-teeth. The body was short with a long tail. The five-fingered forelimbs were long and relatively robust, whereas the hind-limbs were long, slender, and had four toes.
Heterodontosaurus is the eponymous and best-known member of the family Heterodontosauridae. This family is considered a basal (or "primitive") group within the order of ornithischian dinosaurs, while their closest affinities within the group are debated. In spite of the large tusks, Heterodontosaurus is thought to have been herbivorous, or at least omnivorous. Though it was formerly thought to have been capable of quadrupedal locomotion, it is now thought to have been bipedal. Tooth replacement was sporadic and not continuous, unlike its relatives. At least four other heterodontosaurid genera are known from the same geological formations as Heterodontosaurus.
History of discovery
The holotype specimen of Heterodontosaurus tucki (SAM-PK-K337) was discovered during the British–South African expedition to South Africa and Basutoland (former name of Lesotho) in 1961–1962. Today, it is housed in the Iziko South African Museum. It was excavated on a mountain at an altitude of about , at a locality called Tyinindini, in the district of Transkei (sometimes referred to as Herschel) in the Cape Province of South Africa. The specimen consists of a crushed but nearly complete skull; associated postcranial remains mentioned in the original description could not be located in 2011. The animal was scientifically described and named in 1962 by palaeontologists Alfred Walter Crompton and Alan J. Charig. The genus name refers to the different-shaped teeth, and the specific name honors George C. Tuck, a director of Austin Motor Company, who supported the expedition. The specimen was not fully prepared by the time of publication, so only the front parts of the skull and lower jaw were described, and the authors conceded that their description was preliminary, serving mainly to name the animal. It was considered an important discovery, as few early ornithischian dinosaurs were known at the time. The preparation of the specimen, i.e. the freeing of the bones from the rock matrix, was very time consuming, since they were covered in a thin, very hard, ferruginous layer containing haematite. This could only be removed by a diamond saw, which damaged the specimen.
In 1966, a second specimen of Heterodontosaurus (SAM-PK-K1332) was discovered at the Voyizane locality, in the Elliot Formation of the Stormberg Group of rock formations, above sea level, on Krommespruit Mountain. This specimen included both the skull and skeleton, preserved in articulation (i.e. the bones being preserved in their natural position in relation to each other), with little displacement and distortion of the bones. The postcranial skeleton was briefly described by palaeontologists Albert Santa Luca, Crompton and Charig in 1976. Its forelimb bones had previously been discussed and figured in an article by the palaeontologists Peter Galton and Robert T. Bakker in 1974, as the specimen was considered significant in establishing that Dinosauria was a monophyletic natural group, whereas most scientists at the time, including the scientists who described Heterodontosaurus, thought that the two main orders Saurischia and Ornithischia were not directly related. The skeleton was fully described in 1980. SAM-PK-K1332 is the most complete heterodontosaurid skeleton described to date. Though a more detailed description of the skull of Heterodontosaurus was long promised, it remained unpublished upon the death of Charig in 1997. It was not until 2011 that the skull was fully described by the palaeontologist David B. Norman and colleagues.
Other specimens referred to Heterodontosaurus include the front part of a juvenile skull (SAM-PK-K10487), a fragmentary maxilla (SAM-PK-K1326), a left maxilla with teeth and adjacent bones (SAM-PK-K1334), all of which were collected at the Voyizane locality during expeditions in 1966–1967, although the first was only identified as belonging to this genus in 2008. A partial snout (NM QR 1788) found in 1975 on Tushielaw Farm south of Voyizane was thought to belong to Massospondylus until 2011, when it was reclassified as Heterodontosaurus. The palaeontologist Robert Broom discovered a partial skull, possibly in the Clarens Formation of South Africa, which was sold to the American Museum of Natural History in 1913, as part of a collection that consisted almost entirely of synapsid fossils. This specimen (AMNH 24000) was first identified as belonging to a sub-adult Heterodontosaurus by Sereno, who reported it in a 2012 monograph about the Heterodontosauridae, the first comprehensive review article about the family. This review also classified a partial postcranial skeleton (SAM-PK-K1328) from Voyizane as Heterodontosaurus. However, in 2014, Galton suggested it might belong to the related genus Pegomastax instead, which was named by Sereno based on a partial skull from the same locality. In 2005, a new Heterodontosaurus specimen (AM 4766) was found in a streambed near Grahamstown in the Eastern Cape Province; it was very complete, but the rocks around it were too hard to fully remove. The specimen was therefore scanned at the European Synchrotron Radiation Facility in 2016, to help reveal the skeleton, and aid in research of its anatomy and lifestyle, some of which was published in 2021.
In 1970, palaeontologist Richard A. Thulborn suggested that Heterodontosaurus was a junior synonym of the genus Lycorhinus, which was named in 1924 with the species L. angustidens, also from a specimen discovered in South Africa. He reclassified the type species as a member of the older genus, as the new combination Lycorhinus tucki, which he considered distinct due to slight differences in its teeth and its stratigraphy. He reiterated this claim in 1974, in the description of a third Lycorhinus species, Lycorhinus consors, after criticism of the synonymy by Galton in 1973. In 1974, Charig and Crompton agreed that Heterodontosaurus and Lycorhinus belonged in the same family, Heterodontosauridae, but disagreed that they were similar enough to be considered congeneric. They also pointed out that the fragmentary nature and poor preservation of the Lycorhinus angustidens holotype specimen made it impossible to fully compare it properly to H. tucki. In spite of the controversy, neither party had examined the L. angustidens holotype first hand, but after doing so, palaeontologist James A. Hopson also defended generic separation of Heterodontosaurus in 1975, and moved L. consors to its own genus, Abrictosaurus.
Description
Heterodontosaurus was a small dinosaur. The most complete skeleton, SAM-PK-K1332, belonged to an animal measuring about in length. Its weight was variously estimated at , , and in separate studies. The closure of vertebral sutures on the skeleton indicates that the specimen was an adult, and probably fully grown. A second specimen, consisting of an incomplete skull, indicates that Heterodontosaurus could have grown substantially larger – up to a length of and with a body mass of nearly . The reason for the size difference between the two specimens is unclear, and might reflect variability within a single species, sexual dimorphism, or the presence of two separate species. The size of this dinosaur has been compared to that of a turkey. Heterodontosaurus was amongst the largest known members of the family Heterodontosauridae. The family contains some of the smallest known ornithischian dinosaurs – the North American Fruitadens, for example, reached a length of only .
Following the description of the related Tianyulong in 2009, which was preserved with hundreds of long, filamentous integuments (sometimes compared to bristles) from neck to tail, Heterodontosaurus has also been depicted with such structures, for example in publications by the palaeontologists Gregory S. Paul and Paul Sereno. Sereno has stated that a heterodontosaur may have looked like a "nimble two-legged porcupine" in life. The restoration published by Sereno also featured a hypothetical display structure located on the snout, above the nasal fossa (depression).
Skull and dentition
The skull of Heterodontosaurus was small but robustly built. The two most complete skulls measured (holotype specimen SAM-PK-K337) and (specimen SAM-PK-K1332) in length. The skull was elongated, narrow, and triangular when viewed from the side, with the highest point being the sagittal crest, from where the skull sloped down towards the snout tip. The back of the skull ended in a hook-like shape, which was offset to the quadrate bone. The orbit (eye opening) was large and circular, and a large spur-like bone, the palpebral, protruded backwards into the upper part of the opening. Below the eye socket, the jugal bone gave rise to a sideways projecting boss, or horn-like structure. The jugal bone also formed a "blade" that created a slot together with a flange on the pterygoid bone, for guiding the motion of the lower jaw. Ventrally, the antorbital fossa was bounded by a prominent bony ridge, to which the animal's fleshy cheek would have been attached. It has also been suggested that heterodontosaurs and other basal (or "primitive") orhithischians had lip-like structures like lizards do (based on similarities in their jaws), rather than bridging skin between the upper and lower jaws (such as cheeks). The proportionally large lower temporal fenestra was egg-shaped and tilted back, and located behind the eye opening. The elliptical upper temporal fenestra was visible only looking at the top of the skull. The left and right upper temporal fenestrae were separated by the sagittal crest, which would have provided lateral attachment surfaces for the jaw musculature in the living animal.
The lower jaw tapered towards the front, and the dentary bone (the main part of the lower jaw) was robust. The front of the jaws was covered by a toothless keratinous beak (or rhamphotheca). The upper beak covered the front of the premaxilla bone and the lower beak covered the predentary, which are, respectively, the foremost bones of the upper and lower jaw in ornithischians. This is evidenced by the rough surfaces on these structures. The palate was narrow, and tapered towards the front. The external nostril openings were small, and the upper border of each opening does not seem to have been completely bridged by bone. If not due to breakage, the gap may have been formed by connective tissue instead of bone. The antorbital fossa, a large depression between the eye and nostril openings, contained two smaller openings. A depression above the snout has been termed the "nasal fossa" or "sulcus". A similar fossa is also seen in Tianyulong, Agilisaurus, and Eoraptor, but its function is unknown.
An unusual feature of the skull was the different-shaped teeth (heterodonty) for which the genus is named, which is otherwise mainly known from mammals. Most dinosaurs (and indeed most reptiles) have a single type of tooth in their jaws, but Heterodontosaurus had three. The beaked tip of the snout was toothless, whereas the hind part of the premaxilla in the upper jaw had three teeth on each side. The first two upper teeth were small and cone-shaped (comparable to incisors), while the third on each side was much enlarged, forming prominent, canine-like tusks. These first teeth were probably partially encased by the upper beak. The first two teeth in the lower jaw also formed canines, but were much bigger than the upper equivalents.
The canines had fine serrations along the back edge, but only the lower ones were serrated at the front. Eleven tall and chisel-like cheek-teeth lined each side of the posterior parts of the upper jaw, which were separated from the canines by a large diastema (gap). The cheek-teeth increased gradually in size, with the middle teeth being largest, and decreased in size after this point. These teeth had a heavy coat of enamel on the inwards side, and were adapted for wear (hypsodonty), and they had long roots, firmly embedded in their sockets. The tusks in the lower jaw fit into an indentation within the diastema of the upper jaw. The cheek-teeth in the lower jaw generally matched those in the upper jaw, though the enamel surface of these was on the outwards side. The upper and lower tooth rows were inset, which created a "cheek-recess" also seen in other ornithischians. Despite the different types of teeth, their histology and enamel microstructure was not complex. But while the enamel thinned out towards the outer surface of the teeth, a thick band of wear-resistant dentine arose concurrently with the thinning enamel, and formed the cutting crest of the occlusal surface, a role typically filled by enamel.
Postcranial skeleton
The neck consisted of nine cervical vertebrae, which would have formed an S-shaped curve, as indicated by the shape of the vertebral bodies in the side view of the skeleton. The vertebral bodies of the anterior cervical vertebrae are shaped like a parallelogram, those of the middle are rectangular and those of the posterior show a trapezoid shape. The trunk was short, consisting of 12 dorsal and 6 fused sacral vertebrae. The tail was long compared to the body; although incompletely known, it probably consisted of 34 to 37 caudal vertebrae. The dorsal spine was stiffened by ossified tendons, beginning with the fourth dorsal vertebra. This feature is present in many other ornithischian dinosaurs and probably countered stress caused by bending forces acting on the spine during bipedal locomotion. In contrast to many other ornithischians, the tail of Heterodontosaurus lacked ossified tendons, and was therefore probably flexible.
The shoulder blade was capped by an additional element, the suprascapula, which is, among dinosaurs, otherwise only known from Parksosaurus. In the chest region, Heterodontosaurus possessed a well-developed pair of sternal plates that resembled those of theropods, but was different from the much simpler sternal plates of other ornithischians. The sternal plates were connected to the rib cage by elements known as sternal ribs. In contrast to other ornithischians, this connection was moveable, allowing the body to expand during breathing. Heterodontosaurus is the only known ornithischian that possessed gastralia (bony elements within the skin between the sternal plates and the pubis of the pelvis). The gastralia were arranged in two lengthwise rows, each containing around nine elements. The pelvis was long and narrow, with a pubis that resembled those possessed by more advanced ornithischians.
The forelimbs were robustly built and proportionally long, measuring 70% of the length of the hind limbs. The radius (a forearm bone) measured 70% of the length of the humerus (the upper arm bone). The hand was large, approaching the humerus in length, and possessed five fingers equipped for grasping. The second finger was the longest, followed by the third and the first finger (the thumb). The first three fingers ended in large and strong claws. The fourth and fifth fingers were strongly reduced, and possibly vestigial. The phalangeal formula, which states the number of finger bones in each finger starting from the first, was 2-3-4-3-2.
The hindlimbs were long, slender, and ended in four toes, the first of which (the hallux) did not contact the ground. Uniquely for ornithischians, several bones of the leg and foot were fused: the tibia and fibula were fused with upper tarsal bones (astragalus and calcaneus), forming a tibiotarsus, while the lower tarsal bones were fused with the metatarsal bones, forming a tarsometatarsus. This constellation can also be found in modern birds, where it has evolved independently. The tibiotarsus was about 30% longer than the femur. The ungual bones of the toes were claw-like, and not hoof-like as in more advanced ornithischians.
Classification
When it was described in 1962, Heterodontosaurus was classified as a primitive member of Ornithischia, one of the two main orders of Dinosauria (the other being Saurischia). The authors found it most similar to the poorly known genera Geranosaurus and Lycorhinus, the second of which had been considered a therapsid stem-mammal until then due to its dentition. They noted some similarities with ornithopods, and provisionally placed the new genus in that group. The palaeontologists Alfred Romer and Oskar Kuhn independently named the family Heterodontosauridae in 1966 as a family of ornithischian dinosaurs including Heterodontosaurus and Lycorhinus. Thulborn instead considered these animals as hypsilophodontids, and not a distinct family. Bakker and Galton recognised Heterodontosaurus as important to the evolution of ornithischian dinosaurs, as its hand pattern was shared with primitive saurischians, and therefore was primitive or basal to both groups. This was disputed by some scientists who believed the two groups had instead evolved independently from "thecodontian" archosaur ancestors, and that their similarities were due to convergent evolution. Some authors also suggested a relationship, such as descendant/ancestor, between heterodontosaurids and fabrosaurids, both being primitive ornithischians, as well as to primitive ceratopsians, such as Psittacosaurus, though the nature of these relations was debated.
By the 1980s, most researchers considered the heterodontosaurids as a distinct family of primitive ornithischian dinosaurs, but with an uncertain position with respect to other groups within the order. By the early 21st century, the prevailing theories were that the family was the sister group of either the Marginocephalia (which includes pachycephalosaurids and ceratopsians), or the Cerapoda (the former group plus ornithopods), or as one of the most basal radiations of ornithischians, before the split of the Genasauria (which includes the derived ornithischians). Heterodontosauridae was defined as a clade by Sereno in 1998 and 2005, and the group shares skull features such as three or fewer teeth in each premaxilla, caniniform teeth followed by a diastema, and a jugal horn below the eye. In 2006, palaeontologist Xu Xing and colleagues named the clade Heterodontosauriformes, which included Heterodontosauridae and Marginocephalia, since some features earlier only known from heterodontosaurs were also seen in the basal ceratopsian genus Yinlong.
Many genera have been referred to Heterodontosauridae since the family was erected, yet Heterodontosaurus remains the most completely known genus, and has functioned as the primary reference point for the group in the palaeontological literature. The interrelationships within Heterodontosauridae follow the analysis by Sereno (2012).
Heterodontosaurids persisted from the Late Triassic until the Early Cretaceous period, and existed for at least 100 million years. They are known from Africa, Eurasia, and the Americas, but the majority have been found in southern Africa. Heterodontosaurids appear to have split into two main lineages by the Early Jurassic; one with low-crowned teeth, and one with high-crowned teeth (including Heterodontosaurus). The members of these groups are divided biogeographically, with the low-crowned group having been discovered in areas that were once part of Laurasia (northern landmass), and the high-crowned group in areas that were part of Gondwana (southern landmass). In 2012, Sereno labelled members of the latter grouping a distinct subfamily, Heterodontosaurinae. Heterodontosaurus appears to be the most derived heterodontosaurine, due to details in its teeth, such as very thin enamel, arranged in an asymmetrical pattern. The unique tooth and jaw features of heterodontosaurines appear to be specialisations for effectively processing plant material, and their level of sophistication is comparable to that of later ornithischians.
In 2017, similarities between the skeletons of Heterodontosaurus and the early theropod Eoraptor were used by palaeontologist Matthew G. Baron and colleagues to suggest that ornithischians should be grouped with theropods in a group called Ornithoscelida. Traditionally, theropods have been grouped with sauropodomorphs in the group Saurischia. In 2020, palaeontologist Paul-Emile Dieudonné and colleagues suggested that members of Heterodontosauridae were basal marginocephalians not forming their own natural group, instead progressively leading to Pachycephalosauria, and were therefore basal members of that group. This hypothesis would reduce the ghost lineage of pachycephalosaurs and pull the origins of ornithopods back to the Early Jurassic. The subfamily Heterodontosaurinae was considered a valid clade within Pachycephalosauria, containing Heterodontosaurus, Abrictosaurus, and Lycorhinus.
Palaeobiology
Diet and tusk function
Heterodontosaurus is commonly regarded as a herbivorous dinosaur. In 1974, Thulborn proposed that the tusks of the dinosaur played no important role in feeding; rather, that they would have been used in combat with conspecifics, for display, as a visual threat, or for active defence. Similar functions are seen in the enlarged tusks of modern muntjacs and chevrotains, but the curved tusks of warthogs (used for digging) are dissimilar.
Several more recent studies have raised the possibility that the dinosaur was omnivorous and used its tusks for prey killing during an occasional hunt. In 2000, Paul Barrett suggested that the shape of the premaxillary teeth and the fine serration of the tusks are reminiscent of carnivorous animals, hinting at facultative carnivory. In contrast, the muntjac lacks serration on its tusks. In 2008, Butler and colleagues argued that the enlarged tusks formed early in the development of the individual, and therefore could not constitute sexual dimorphism. Combat with conspecifics thus is an unlikely function, as enlarged tusks would be expected only in males if they were a tool for combat. Instead, feeding or defence functions are more likely. It has also been suggested that Heterodontosaurus could have used its jugal bosses to deliver blows during combat, and that the palpebral bone could have protected the eyes against such attacks. In 2011, Norman and colleagues drew attention to the arms and hands, which are relatively long and equipped with large, recurved claws. These features, in combination with the long hindlimbs that allowed for fast running, would have made the animal capable of seizing small prey. As an omnivore, Heterodontosaurus would have had a significant selection advantage during the dry season when vegetation was scarce.
In 2012, Sereno pointed out several skull and dentition features that suggest a purely or at least preponderantly herbivorous diet. These include the horny beak and the specialised cheek teeth (suitable for cutting off vegetation), as well as fleshy cheeks which would have helped keep food within the mouth during mastication. The jaw muscles were enlarged, and the jaw joint was set below the level of the teeth. This deep position of the jaw joint would have allowed an evenly spread bite along the tooth row, in contrast to the scissor-like bite seen in carnivorous dinosaurs. Finally, the size and position of the tusks are very different in separate members of the Heterodontosauridae; a specific function in feeding thus appears unlikely. Sereno surmised that heterodontosaurids were comparable to today's peccaries, which possess similar tusks and feed on a variety of plant material such as roots, tubers, fruits, seeds and grass. Butler and colleagues suggested that the feeding apparatus of Heterodontosaurus was specialised to process tough plant material, and that late-surviving members of the family (Fruitadens, Tianyulong and Echinodon) probably showed a more generalised diet including both plants and invertebrates. Heterodontosaurus was characterised by a strong bite at small gape angles, but the later members were adapted to a more rapid bite and wider gapes. A 2016 study of ornithischian jaw mechanics found that the relative bite force of Heterodontosaurus was comparable to that of the more derived Scelidosaurus. The study suggested that the tusks could have played a role in feeding by grazing against the lower beak while cropping vegetation.
Tooth replacement and aestivation
Much controversy has surrounded the question of whether or not, and to what degree, Heterodontosaurus showed the continuous tooth replacement that is typical for other dinosaurs and reptiles. In 1974 and 1978, Thulborn found that the skulls known at that time lacked any indications of continuous tooth replacement: The cheek teeth of the known skulls are worn uniformly, indicating that they formed simultaneously. Newly erupted teeth are absent. Further evidence was derived from the wear facets of the teeth, which were formed by tooth-to-tooth contact of the lower with the upper dentition. The wear facets were merged into one another, forming a continuous surface along the complete tooth row. This surface indicates that food processing was achieved by back and forth movements of the jaws, not by simple vertical movements which was the case in related dinosaurs such as Fabrosaurus. Back and forth movements are only possible if the teeth are worn uniformly, again strengthening the case for the lack of a continuous tooth replacement. Simultaneously, Thulborn stressed that a regular tooth replacement was essential for these animals, as the supposed diet consisting of tough plant material would have led to quick abrasion of the teeth. These observations led Thulborn to conclude that Heterodontosaurus must have replaced its entire set of teeth at once on a regular basis. Such a complete replacement could only have been possible within phases of aestivation, when the animal did not feed. Aestivation also complies with the supposed habitat of the animals, which would have been desert-like, including hot dry seasons when food was scarce.
A comprehensive analysis conducted in 1980 by Hopson questioned Thulborn's ideas. Hopson showed that the wear facet patterns on the teeth in fact indicate vertical and lateral rather than back and forth jaw movements. Furthermore, Hopson demonstrated variability in the degree of tooth wear, indicating continuous tooth replacement. He did acknowledge that X-ray images of the most complete specimen showed that this individual indeed lacked unerupted replacement teeth. According to Hopson, this indicated that only juveniles continuously replaced their teeth, and that this process ceased when reaching adulthood. Thulborn's aestivation hypothesis was rejected by Hopson due to lack of evidence.
In 2006, Butler and colleagues conducted computed tomography scans of the juvenile skull SAM-PK-K10487. To the surprise of these researchers, replacement teeth yet to erupt were present even in this early ontogenetic stage. Despite these findings, the authors argued that tooth replacement must have occurred since the juvenile displayed the same tooth morphology as adult individuals – this morphology would have changed if the tooth simply grew continuously. In conclusion, Butler and colleagues suggested that tooth replacement in Heterodontosaurus must have been more sporadic than in related dinosaurs. Unerupted replacement teeth in Heterodontosaurus were not discovered until 2011, when Norman and colleagues described the upper jaw of specimen SAM-PK-K1334. Another juvenile skull (AMNH 24000) described by Sereno in 2012 also yielded unerupted replacement teeth. As shown by these discoveries, tooth replacement in Heterodontosaurus was episodic and not continuous as in other heterodontosaurids. The unerupted teeth are triangular in lateral view, which is the typical tooth morphology in basal ornithischians. The characteristic chisel-like shape of the fully erupted teeth therefore resulted from tooth-to-tooth contact between the dentition of the upper and lower jaws.
Locomotion, metabolism and breathing
Although most researchers now consider Heterodontosaurus a bipedal runner, some earlier studies proposed a partial or fully quadrupedal locomotion. In 1980, Santa Luca described several features of the forelimb that are also present in recent quadrupedal animals and imply a strong arm musculature: These include a large olecranon (a bony eminence forming the uppermost part of the ulna), enlarging the lever arm of the forearm. The medial epicondyle of the humerus was enlarged, providing attachment sites for strong flexor muscles of the forearm. Furthermore, projections on the claws might have increased the forward thrust of the hand during walking. According to Santa Luca, Heterodontosaurus was quadrupedal when moving slowly but was able to switch to a much faster, bipedal run. The palaeontologists Teresa Maryańska and Halszka Osmólska supported Santa Luca's hypothesis in 1985; furthermore, they noted that the dorsal spine was strongly flexed downwards in the most completely known specimen. In 1987, Gregory S. Paul suggested that Heterodontosaurus might have been obligatorily quadrupedal, and that these animals would have galloped for fast locomotion. David Weishampel and Lawrence Witmer in 1990 as well as Norman and colleagues in 2004 argued in favour of exclusively bipedal locomotion, based on the morphology of the claws and shoulder girdle. The anatomical evidence suggested by Santa Luca was identified as adaptations for foraging; the robust and strong arms might have been used for digging up roots and breaking open insect nests.
Most studies consider dinosaurs as endothermic (warm-blooded) animals, with an elevated metabolism comparable to that of today's mammals and birds. In a 2009 study, Herman Pontzer and colleagues calculated the aerobic endurance of various dinosaurs. Even at moderate running speeds, Heterodontosaurus would have exceeded the maximum aerobic capabilities possible for an ectotherm (cold-blooded) animal, indicating endothermy in this genus.
Dinosaurs likely possessed an air sac system as found in modern birds, which ventilated an immobile lung. Air flow was generated by contraction of the chest, which was allowed by mobile sternal ribs and the presence of gastralia. Extensions of the air sacs also invaded bones, forming excavations and chambers, a condition known as postcranial skeletal pneumaticity. Ornithischians, with the exception of Heterodontosaurus, lacked mobile sternal ribs and gastralia, and all ornithischians (including Heterodontosaurus) lacked postcranial skeletal pneumaticity. Instead, ornithischians had a prominent anterior extension of the pubis, the anterior pubic process (APP), which was absent in other dinosaurs. Based on synchrotron data of a well-preserved Heterodontosaurus specimen (AM 4766), Viktor Radermacher and colleagues, in 2021, argued that the breathing system of ornithischians drastically differed from that of other dinosaurs, and that Heterodontosaurus represents an intermediate stage. According to these authors, ornithischians lost the ability to contract the chest for breathing, and instead relied on a muscle that ventilated the lung directly, which they termed the puberoperitoneal muscle. The APP of the pelvis would have provided the attachment site for this muscle. Heterodontosaurus had an incipient APP, and its gastralia were reduced compared to non-ornithischian dinosaurs, suggesting that the pelvis was already involved in breathing while chest contraction became less important.
Growth and proposed sexual dimorphism
The ontogeny, or the development of the individual from juvenile to adult, is poorly known for Heterodontosaurus, as juvenile specimens are scarce. As shown by the juvenile skull SAM-PK-K10487, the eye sockets became proportionally smaller as the animal grew, and the snout became longer and contained additional teeth. Similar changes have been reported for several other dinosaurs. The morphology of the teeth, however, did not change with age, indicating that the diet of juveniles was the same as that of adults. The length of the juvenile skull was suggested to be . Assuming body proportions similar to those of adult individuals, the body length of this juvenile would have been . In fact, the individual probably would have been smaller, since juvenile animals in general show proportionally larger heads.
In 1974, Thulborn suggested that the large tusks of heterodontosaurids represented a secondary sex characteristic. According to this theory, only adult male individuals would have possessed fully developed tusks; the holotype specimen of the related Abrictosaurus, which lacked tusks altogether, would have represented a female. This hypothesis was questioned by palaeontologist Richard Butler and colleagues in 2006, who argued that the juvenile skull SAM-PK-K10487 possessed tusks despite its early developmental state. At this state, secondary sex characteristics are not expected. Furthermore, tusks are present in almost all known Heterodontosaurus skulls; the presence of sexual dimorphism however would suggest a 50:50 ratio between individuals bearing tusks and those lacking tusks. The only exception is the holotype specimen of Abrictosaurus; the lack of tusks in this individual is interpreted as a specialisation of this particular genus.
Palaeoenvironment
Heterodontosaurus is known from fossils found in formations of the Karoo Supergroup, including the Upper Elliot Formation and the Clarens Formation, which date to the Hettangian and Sinemurian ages of the Lower Jurassic, around 200–190 million years ago. Originally, Heterodontosaurus was thought to be from the Upper Triassic period. The Upper Elliot Formation consists of red/purple mudstone and red/white sandstone, whereas the slightly younger Clarens Formation consists of white/cream-coloured sandstone. The Clarens Formation is less rich in fossils than the Upper Elliot Formation; its sediments also often form cliffs, restricting accessibility for fossil hunters. The Upper Elliot Formation is characterised by animals that appear to be more lightly built than those of the Lower Elliot Formation, which may have been an adaptation to the drier climate at this time in southern Africa. Both formations are famous for their abundant vertebrate fossils, including temnospondyl amphibians, turtles, lepidosaurs, aetosaurs, crocodylomorphs, and non-mammal cynodonts.
Other dinosaurs from these formations include the genasaur Lesothosaurus, the basal sauropodomorph Massospondylus, and the theropod Megapnosaurus. The Upper Elliot Formation shows the largest known heterodontosaurid diversity of any rock unit; besides Heterodontosaurus, it contained Lycorhinus, Abrictosaurus, and Pegomastax. Yet another member of the family, Geranosaurus, is known from the Clarens Formation. The high heterodontosaurid diversity has led researchers to conclude that different species might have fed on separate food sources in order to avoid competition (niche partitioning). With its highly specialised dentition, Heterodontosaurus might have been specialised for tough plant material, while the less specialised Abrictosaurus might have predominantly consumed softer vegetation. The position of the individual heterodontosaurid specimens within the rock succession is poorly known, making it difficult to determine how many of these species really were coeval, and which species existed at separate times.
| Biology and health sciences | Ornitischians | Animals |
3542375 | https://en.wikipedia.org/wiki/Borei-class%20submarine | Borei-class submarine | The Borei class, alternate transliteration Borey, Russian designation Project 955 Borei and Project 955A Borei-A (, NATO reporting name Dolgorukiy), are a series of nuclear-powered ballistic missile submarines being constructed by Sevmash for the Russian Navy. The class has been replacing the steadily retiring Russian Navy Delta III and Delta IV classes and the fully retired (as of February 2023) , all three classes being Soviet-era submarines.
Despite being a replacement for many types of SSBNs, Borei-class submarines are much smaller than those of the Typhoon class in both displacement and crew ( tons submerged as opposed to tons, and 107 personnel as opposed to 160 for the Typhoons). In terms of class, they are more accurately a follow-on for the Delta IV-class SSBNs.
History
The first design work on the project started in the mid-1980s and the construction of the first vessel started in 1996. Previously, a short-lived, smaller parallel design had appeared in the 1980s under the designation Project 935 Borei II. A new submarine-launched ballistic missile (SLBM) called the R-39UTTH Bark was developed in parallel. However, the work on this missile was abandoned and a new missile, the RSM-56 Bulava, was designed. The submarine needed to be redesigned to accommodate the new missile, and the design name was changed to Project 955. The vessels were developed by Rubin Design Bureau and are being built by Russia's Northern shipyard Sevmash in Severodvinsk.
It was reported in 2013 that the arrival of the Borei class will enable the Russian Navy to resume strategic patrols in southern latitudes that had not seen a Russian missile submarine for 20 years.
Launch and trials
The launch of the first submarine of the class, (Юрий Долгорукий), was scheduled for 2002 but was delayed because of budget constraints. The vessel was eventually rolled out of its construction hall on 15 April 2007 in a ceremony attended by many senior military and industrial personnel. Yuriy Dolgorukiy was the first Russian strategic missile submarine to be launched in seventeen years since the end of the Cold War. The planned contingent of eight strategic submarines was expected to be commissioned within the next decade, with five Project 955 planned for purchase through 2015.
Yuriy Dolgorukiy was not put into the water until February 2008. On 21 November 2008 the reactor on Yuriy Dolgorukiy was activated and on 19 June 2009, the submarine began its sea trials in the White Sea. By July 2009, it had yet to be armed with Bulava missiles and was therefore not fully operational, although it had been ready for sea trials on 24 October 2008.
On 28 September 2010 Yuriy Dolgorukiy completed company sea trials. By late October the Russian Pacific Fleet was fully prepared to host Russia's new Borei-class strategic nuclear-powered submarines. It is expected that four subs will be deployed in the Northern Fleet and four subs in the Pacific Fleet. On 9 November 2010 Yuriy Dolgorukiy passed all sea trials directed to new equipment and systems.
Initially, the plan was to conduct the first torpedo launches during the ongoing state trials in December 2010 and then in the same month conduct the first launch of the main weapon system, RSM-56 Bulava SLBM. The plan was then postponed to mid-summer 2011 due to ice conditions in the White Sea.
On 2 December 2010 the second Borei-class submarine, Alexander Nevskiy, was moved to a floating dock in Sevmash shipyard. There the final preparations took place before the submarine was launched. The submarine was launched on 6 December 2010 and began sea trials on 24 October 2011.
On 28 June 2011 a Bulava missile was launched for the first time from Yuriy Dolgorukiy. The test was announced as a success. After long delays, the lead vessel, Yuriy Dolgorukiy, finally joined the Russian Navy on 10 January 2013. The official ceremony raising the Russian Navy colors on the submarine was led by Russian Defence Minister Sergei Shoigu. It was actively deployed in 2014 after a series of exercises.
Design
The Borei class features a compact, integrated, hydrodynamically efficient hull for reduced broadband noise, and the first ever use of pump-jet propulsion on a Russian nuclear submarine. Russian news service TASS claimed the noise level is five times lower than that of the third-generation nuclear-powered Akula-class submarines and two times lower than that of the U.S. Virginia-class submarines. The acoustic signature of the Borei is significantly stealthier than that of previous generations of Russian SSBNs, but it has been reported that their hydraulic pumps become noisier after a relatively short period of operation, reducing the stealth capabilities of the submarine.
The Borei submarines are approximately long, in diameter, and have a maximum submerged speed of at least . They are equipped with a floating rescue chamber designed to hold the whole crew. Smaller than the Typhoon class, the Boreis were initially reported to carry 12 missiles but are able to carry four more due to the decrease in mass of the 36-ton Bulava SLBM (a modified version of the Topol-M ICBM) over the originally proposed R-39UTTH Bark. Cost was estimated in 2010 at some ₽23 billion (US$734 million, equivalent to US$863 million in 2020 terms). In comparison, the cost of an SSBN was around US$2 billion per boat (1997 prices, equivalent to over US$3 billion in 2020 terms).
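The 2010-to-2020 restatement of the cost quoted above is a simple price-index rescaling. A minimal Python sketch of the calculation, using a hypothetical cumulative inflation factor rather than an official index value:

# Restating a 2010 price in 2020 terms: multiply by the cumulative price-index ratio.
cost_2010_usd_millions = 734
inflation_factor_2010_to_2020 = 1.18   # hypothetical round figure, for illustration
cost_2020_usd_millions = cost_2010_usd_millions * inflation_factor_2010_to_2020
print(cost_2020_usd_millions)   # ~866, close to the quoted US$863 million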
Each Borei is constructed from some 1.3 million components and mechanisms. Its construction requires 17,000 tons of metal, about 50% more than the Eiffel Tower. The total length of piping is 109 km, and the wiring runs to 600 km. Ten thousand rubber plates cover the hull of the boat.
Versions
Project 955A (Borei-A)
Units of Project 955A feature improved communication and detection systems, a reduced acoustic signature, and major structural changes such as all-moving rudders, vertical endplates on the hydroplanes for higher maneuverability, and a different sail geometry. In addition, they are equipped with hydraulic jets and improved screws that allow them to sail at nearly 30 knots submerged with minimal noise. Although first reported to carry 20 Bulava SLBMs, the 955A is armed with 16 SLBMs, each carrying 6 to 10 nuclear warheads, just like the Project 955 submarines.
The contract for five modified 955A submarines was delayed several times by a price dispute between the Russian Defence Ministry and the United Shipbuilding Corporation. It was formally signed on 28 May 2012.
The first 955A submarine, Knyaz Vladimir, was laid down on 30 July 2012, during a ceremony attended by the Russian President Vladimir Putin. Two additional project 955A submarines were laid down in 2014, one in late 2015, and one in late 2016.
On 17 November 2017, the fourth Borei-class submarine and the first of the improved Project 955A, Knyaz Vladimir, was launched.
On 25 October 2022, the first photos of Generalissimus Suvorov, the sixth vessel in the class, were published, showing her performing sea trials. On 7 November, all trials were finished and she was being prepared for commissioning.
According to a Sevmash official, Vitaliy Bukovskiy, all Borei-A submarines are to be equipped with aspen-wood banyas (steam baths) able to accommodate three to four people.
Project 955B (Borei-B)
The Project 955B was expected to feature a new water-jet propulsion system, an upgraded hull, and new noise-reduction technology. The concept design was to be initiated by the Rubin Design Bureau in 2018, and four Project 955B boats were proposed, with the first unit to be delivered to the Russian Navy in 2026. However, the project was reportedly excluded from Russia's State Armament Programme for 2018–2027 on cost-efficiency grounds; instead, six more Borei-A submarines were to be built after 2023. According to a 2018 report, the programme includes construction of two more Borei-A submarines by 2028, with construction at Sevmash starting in 2024 and deliveries to the Russian Navy in 2026 and 2027 respectively.
Borei-K
A proposed version armed with cruise missiles instead of SLBMs, similar to the American Ohio-class nuclear-powered cruise missile submarines (SSGNs), is under consideration by the Russian Defence Ministry.
Planned successor
At the Army-2022 expo, the Rubin Design Bureau revealed a new ballistic missile submarine design, intended to replace the Borei class. The Arcturus class will have an angled hull design intended to make the submarine harder to detect. The submarine will also contain 12 missile silos and will be able to carry the Surrogat-V AUV, an anti-submarine warfare drone. It will have a displacement about 20% lower than current ballistic missile submarines, a planned crew of around 100 people, and a length of 134 meters.
On 21 June 2023, the Rubin Design Bureau announced that the Arcturus class would begin replacing the Borei class from 2037 onwards.
Units
| Technology | Naval warfare | null |
23658144 | https://en.wikipedia.org/wiki/Recrystallization%20%28geology%29 | Recrystallization (geology) | In geology, solid-state recrystallization is a metamorphic process that occurs under high temperatures and pressures where atoms of minerals are reorganized by diffusion and/or dislocation glide. During this process, the physical structure of the minerals is altered while the composition remains unchanged. This is in contrast to metasomatism, which is the chemical alteration of a rock by hydrothermal and other fluids.
Solid-state recrystallization can be illustrated by observing how snow recrystallizes to ice. When snow is subjected to varying temperatures and pressures, individual snowflakes undergo a physical transformation but their composition remains the same. Limestone is a sedimentary rock that undergoes metamorphic recrystallization to form marble, and clays can recrystallize to muscovite mica.
| Physical sciences | Geochemistry | Earth science |
8210537 | https://en.wikipedia.org/wiki/Extratropical%20cyclone | Extratropical cyclone | Extratropical cyclones, sometimes called mid-latitude cyclones or wave cyclones, are low-pressure areas which, along with the anticyclones of high-pressure areas, drive the weather over much of the Earth. Extratropical cyclones are capable of producing anything from cloudiness and mild showers to severe hail, thunderstorms, blizzards, and tornadoes. These types of cyclones are defined as large scale (synoptic) low pressure weather systems that occur in the middle latitudes of the Earth. In contrast with tropical cyclones, extratropical cyclones produce rapid changes in temperature and dew point along broad lines, called weather fronts, about the center of the cyclone.
Terminology
The term "cyclone" applies to numerous types of low pressure areas, one of which is the extratropical cyclone. The descriptor extratropical signifies that this type of cyclone generally occurs outside the tropics and in the middle latitudes of Earth between 30° and 60° latitude. They are termed mid-latitude cyclones if they form within those latitudes, or post-tropical cyclones if a tropical cyclone has intruded into the mid latitudes. Weather forecasters and the general public often describe them simply as "depressions" or "lows". Terms like frontal cyclone, frontal depression, frontal low, extratropical low, non-tropical low and hybrid low are often used as well.
Extratropical cyclones are classified mainly as baroclinic, because they form along zones of temperature and dewpoint gradient known as frontal zones. They can become barotropic late in their life cycle, when the distribution of heat around the cyclone becomes fairly uniform with its radius.
Formation
Extratropical cyclones form anywhere within the extratropical regions of the Earth (usually between 30° and 60° latitude from the equator), either through cyclogenesis or extratropical transition. In a climatology study with two different cyclone algorithms, a total of 49,745–72,931 extratropical cyclones in the Northern Hemisphere and 71,289–74,229 extratropical cyclones in the Southern Hemisphere were detected between 1979 and 2018 based on reanalysis data. A study of extratropical cyclones in the Southern Hemisphere shows that between the 30th and 70th parallels, there are an average of 37 cyclones in existence during any 6-hour period. A separate study in the Northern Hemisphere suggests that approximately 234 significant extratropical cyclones form each winter.
Cyclogenesis
Extratropical cyclones form along linear bands of temperature/dew point gradient with significant vertical wind shear, and are thus classified as baroclinic cyclones. Initially, cyclogenesis, or low pressure formation, occurs along frontal zones near a favorable quadrant of a maximum in the upper level jetstream known as a jet streak. The favorable quadrants are usually at the right rear and left front quadrants, where divergence ensues. The divergence causes air to rush out from the top of the air column. As mass in the column is reduced, atmospheric pressure at surface level (the weight of the air column) is reduced. The lowered pressure strengthens the cyclone (a low pressure system). The lowered pressure acts to draw in air, creating convergence in the low-level wind field. Low-level convergence and upper-level divergence imply upward motion within the column, making cyclones cloudy. As the cyclone strengthens, the cold front sweeps towards the equator and moves around the back of the cyclone. Meanwhile, its associated warm front progresses more slowly, as the cooler air ahead of the system is denser, and therefore more difficult to dislodge. Later, the cyclones occlude as the poleward portion of the cold front overtakes a section of the warm front, forcing a tongue, or trowal, of warm air aloft. Eventually, the cyclone will become barotropically cold and begin to weaken.
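The mass-removal argument above can be written compactly. As a minimal sketch, assuming hydrostatic balance and neglecting mass flux through the surface, the surface pressure p_s is the weight of the overlying air column, so its tendency follows the column-integrated mass divergence:

$$
p_s = g\int_0^{\infty}\rho\,dz,
\qquad
\frac{\partial p_s}{\partial t} = -\,g\int_0^{\infty}\nabla_h\!\cdot(\rho\,\mathbf{v}_h)\,dz
$$

When upper-level divergence exports more mass than low-level convergence imports, the integral is positive and the surface pressure falls, deepening the low.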
Atmospheric pressure can fall very rapidly when there are strong upper-level forces on the system. When pressures fall more than about 1 millibar (hPa) per hour, the process is called explosive cyclogenesis, and the cyclone can be described as a bomb. These bombs rapidly drop to very low central pressures under favorable conditions, such as proximity to a natural temperature gradient like the Gulf Stream, or a preferred quadrant of an upper-level jet streak, where upper-level divergence is best. The stronger the upper-level divergence over the cyclone, the deeper the cyclone can become. Hurricane-force extratropical cyclones are most likely to form in the northern Atlantic and northern Pacific oceans in December and January. On 14 and 15 December 1986, an extratropical cyclone near Iceland deepened to a central pressure equivalent to that of a category 5 hurricane. In the Arctic, the average central pressure of cyclones is lower in winter than in summer.
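The deepening-rate threshold for a "bomb" is commonly normalized by latitude following Sanders and Gyakum (1980): 24 hPa in 24 hours, scaled by sin(latitude)/sin(60°). The sketch below implements that textbook criterion; the function name and example values are illustrative, not from the source:

```python
import math

def bergerons(dp_hpa: float, hours: float, lat_deg: float) -> float:
    """Deepening rate in bergerons, after Sanders & Gyakum (1980):
    1 bergeron = 24 hPa per 24 h at 60 deg latitude, scaled by sin(lat)/sin(60)."""
    rate_per_24h = (dp_hpa / hours) * 24.0
    threshold = 24.0 * math.sin(math.radians(lat_deg)) / math.sin(math.radians(60.0))
    return rate_per_24h / threshold

# Example: a low at 50 deg N deepening 30 hPa in 24 hours
b = bergerons(dp_hpa=30.0, hours=24.0, lat_deg=50.0)
print(f"{b:.2f} bergerons -> {'bomb' if b >= 1.0 else 'not a bomb'}")  # 1.41 bergerons -> bomb
```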
Extratropical transition
Tropical cyclones often transform into extratropical cyclones at the end of their tropical existence, usually between 30° and 40° latitude, where there is sufficient forcing from upper-level troughs or shortwaves riding the Westerlies for the process of extratropical transition to begin. During this process, a cyclone in extratropical transition (known across the eastern North Pacific and North Atlantic oceans as the post-tropical stage), will invariably form or connect with nearby fronts and/or troughs consistent with a baroclinic system. Due to this, the size of the system will usually appear to increase, while the core weakens. However, after transition is complete, the storm may re-strengthen due to baroclinic energy, depending on the environmental conditions surrounding the system. The cyclone will also distort in shape, becoming less symmetric with time.
During extratropical transition, the cyclone begins to tilt back into the colder airmass with height, and the cyclone's primary energy source converts from the release of latent heat from condensation (from thunderstorms near the center) to baroclinic processes. The low pressure system eventually loses its warm core and becomes a cold-core system.
The peak time of subtropical cyclogenesis (the midpoint of this transition) in the North Atlantic is in September and October, when the difference between the temperature of the air aloft and the sea surface temperature is greatest, leading to the greatest potential for instability. On rare occasions, an extratropical cyclone can transform back into a tropical cyclone if it reaches an area of ocean with warmer waters and an environment with less vertical wind shear; an example is the 1991 Perfect Storm. The process known as "tropical transition" involves the usually slow development of an extratropical cold-core vortex into a tropical cyclone.
The Joint Typhoon Warning Center uses the extratropical transition (XT) technique to subjectively estimate the intensity of tropical cyclones becoming extratropical based on visible and infrared satellite imagery. Loss of central convection in transitioning tropical cyclones can cause the Dvorak technique to fail; the loss of convection results in unrealistically low estimates using the Dvorak technique. The system combines aspects of the Dvorak technique, used for estimating tropical cyclone intensity, and the Hebert-Poteat technique, used for estimating subtropical cyclone intensity. The technique is applied when a tropical cyclone interacts with a frontal boundary or loses its central convection while maintaining its forward speed or accelerating. The XT scale corresponds to the Dvorak scale and is applied in the same way, except that "XT" is used instead of "T" to indicate that the system is undergoing extratropical transition. Also, the XT technique is only used once extratropical transition begins; the Dvorak technique is still used if the system begins dissipating without transition. Once the cyclone has completed transition and become cold-core, the technique is no longer used.
Structure
Surface pressure and wind distribution
The wind field of an extratropical cyclone constricts with distance in relation to surface-level pressure, with the lowest pressure being found near the center, and the highest winds typically just on the cold/poleward side of warm fronts, occlusions, and cold fronts, where the pressure gradient force is highest. The area poleward and west of the cold and warm fronts connected to extratropical cyclones is known as the cold sector, while the area equatorward and east of its associated cold and warm fronts is known as the warm sector.
The wind flow around an extratropical cyclone is counterclockwise in the northern hemisphere, and clockwise in the southern hemisphere, due to the Coriolis effect (this manner of rotation is generally referred to as cyclonic). Near this center, the pressure gradient force (from the pressure at the center of the cyclone compared to the pressure outside the cyclone) and the Coriolis force must be in an approximate balance for the cyclone to avoid collapsing in on itself as a result of the difference in pressure. The central pressure of the cyclone will lower with increasing maturity, while outside of the cyclone, the sea-level pressure is about average. In most extratropical cyclones, the part of the cold front ahead of the cyclone will develop into a warm front, giving the frontal zone (as drawn on surface weather maps) a wave-like shape. Due to their appearance on satellite images, extratropical cyclones can also be referred to as frontal waves early in their life cycle. In the United States, an old name for such a system is "warm wave".
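The balance described above between the pressure gradient force and the Coriolis force is the geostrophic approximation. A minimal sketch of the implied wind speed follows, with the air density and example gradient assumed for illustration (near the cyclone core, curvature corrections to this balance become important):

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, s^-1
RHO = 1.25        # assumed near-surface air density, kg m^-3

def geostrophic_wind(dp_pa: float, dn_m: float, lat_deg: float) -> float:
    """Geostrophic wind speed V = (1 / (rho * f)) * dp/dn from the balance of
    the pressure gradient force and the Coriolis force."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return (dp_pa / dn_m) / (RHO * f)

# Example: a 10 hPa pressure difference over 500 km at 45 deg latitude
print(f"{geostrophic_wind(1000.0, 500e3, 45.0):.1f} m/s")  # about 15.5 m/s
```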
In the northern hemisphere, once a cyclone occludes, a trough of warm air aloft—or "trowal" for short—will be caused by strong southerly winds on its eastern periphery rotating aloft around its northeast, and ultimately into its northwestern periphery (also known as the warm conveyor belt), forcing a surface trough to continue into the cold sector on a similar curve to the occluded front. The trowal creates the portion of an occluded cyclone known as its comma head, due to the comma-like shape of the mid-tropospheric cloudiness that accompanies the feature. It can also be the focus of locally heavy precipitation, with thunderstorms possible if the atmosphere along the trowal is unstable enough for convection.
Vertical structure
Extratropical cyclones slant back into colder air masses and strengthen with height, sometimes exceeding 30,000 feet (approximately 9 km) in depth.
Above the surface of the Earth, the air temperature near the center of the cyclone is increasingly colder than the surrounding environment. These characteristics are the direct opposite of those found in their counterparts, tropical cyclones; thus, they are sometimes called "cold-core lows". Various charts can be examined to check the characteristics of a cold-core system with height, such as the 700 mb chart, which lies at about 3,000 m (10,000 ft) altitude. Cyclone phase diagrams are used to tell whether a cyclone is tropical, subtropical, or extratropical.
Cyclone evolution
There are two models of cyclone development and life cycles in common use: the Norwegian model and the Shapiro–Keyser model.
Norwegian cyclone model
Of the two theories on extratropical cyclone structure and life cycle, the older is the Norwegian Cyclone Model, developed during World War I. In this theory, cyclones develop as they move up and along a frontal boundary, eventually occluding and reaching a barotropically cold environment. It was developed completely from surface-based weather observations, including descriptions of clouds found near frontal boundaries. This theory still retains merit, as it is a good description for extratropical cyclones over continental landmasses.
Shapiro–Keyser model
A second competing theory for extratropical cyclone development over the oceans is the Shapiro–Keyser model, developed in 1990. Its main differences with the Norwegian Cyclone Model are the fracture of the cold front, treating warm-type occlusions and warm fronts as the same, and allowing the cold front to progress through the warm sector perpendicular to the warm front. This model was based on oceanic cyclones and their frontal structure, as seen in surface observations and in previous projects which used aircraft to determine the vertical structure of fronts across the northwest Atlantic.
Warm seclusion
A warm seclusion is the mature phase of the extratropical cyclone life cycle. This was conceptualized after the ERICA field experiment of the late 1980s, which produced observations of intense marine cyclones that indicated an anomalously warm low-level thermal structure, secluded (or surrounded) by a bent-back warm front and a coincident chevron-shaped band of intense surface winds. The Norwegian Cyclone Model, as developed by the Bergen School of Meteorology, largely observed cyclones at the tail end of their lifecycle and used the term occlusion to identify the decaying stages.
Warm seclusions may have cloud-free, eye-like features at their center (reminiscent of tropical cyclones), significant pressure falls, hurricane-force winds, and moderate to strong convection. The most intense warm seclusions often attain pressures less than 950 millibars (28.05 inHg) with a definitive lower to mid-level warm core structure. A warm seclusion, the result of a baroclinic lifecycle, occurs at latitudes well poleward of the tropics.
As latent heat flux releases are important for their development and intensification, most warm seclusion events occur over the oceans; they may impact coastal nations with hurricane force winds and torrential rain. Climatologically, the Northern Hemisphere sees warm seclusions during the cold season months, while the Southern Hemisphere may see a strong cyclone event such as this during all times of the year.
In all tropical basins, except the Northern Indian Ocean, the extratropical transition of a tropical cyclone may result in reintensification into a warm seclusion. For example, Hurricane Maria (2005) and Hurricane Cristobal (2014) each re-intensified into a strong baroclinic system and achieved warm seclusion status at maturity (or lowest pressure).
Motion
Extratropical cyclones are generally driven, or "steered", by deep westerly winds in a general west to east motion across both the Northern and Southern hemispheres of the Earth. This general motion of atmospheric flow is known as "zonal". Where this general trend is the main steering influence of an extratropical cyclone, it is known as a "zonal flow regime".
When the general flow pattern buckles from a zonal pattern to the meridional pattern, a slower movement in a north or southward direction is more likely. Meridional flow patterns feature strong, amplified troughs and ridges, generally with more northerly and southerly flow.
Changes in direction of this nature are most commonly observed as a result of a cyclone's interaction with other low pressure systems, troughs, ridges, or with anticyclones. A strong and stationary anticyclone can effectively block the path of an extratropical cyclone. Such blocking patterns are quite normal, and will generally result in a weakening of the cyclone, the weakening of the anticyclone, a diversion of the cyclone towards the anticyclone's periphery, or a combination of all three to some extent depending on the precise conditions. It is also common for an extratropical cyclone to strengthen as the blocking anticyclone or ridge weakens in these circumstances.
Where an extratropical cyclone encounters another extratropical cyclone (or almost any other kind of cyclonic vortex in the atmosphere), the two may combine to become a binary cyclone, where the vortices of the two cyclones rotate around each other (known as the "Fujiwhara effect"). This most often results in a merging of the two low pressure systems into a single extratropical cyclone, or can less commonly result in a mere change of direction of either one or both of the cyclones. The precise results of such interactions depend on factors such as the size of the two cyclones, their strength, their distance from each other, and the prevailing atmospheric conditions around them.
Effects
General
Extratropical cyclones can bring little rain and light surface winds, or they can be dangerous, with torrential rain and winds exceeding hurricane force; for this reason they are sometimes referred to as windstorms in Europe. The band of precipitation associated with the warm front is often extensive. In mature extratropical cyclones, an area known as the comma head on the northwest periphery of the surface low can be a region of heavy precipitation, frequent thunderstorms, and thundersnow. Cyclones tend to move along a predictable path at a moderate rate of progress. During fall, winter, and spring, the atmosphere over continents can be cold enough through the depth of the troposphere to cause snowfall.
Severe weather
Squall lines, or solid bands of strong thunderstorms, can form ahead of cold fronts and lee troughs due to the presence of significant atmospheric moisture and strong upper level divergence, leading to hail and high winds. When significant directional wind shear exists in the atmosphere ahead of a cold front in the presence of a strong upper-level jet stream, tornado formation is possible. Although tornadoes can form anywhere on Earth, the greatest number occur in the Great Plains in the United States, because downsloped winds off the north–south oriented Rocky Mountains, which can form a dry line, aid their development at any strength.
Explosive development of extratropical cyclones can be sudden. The storm known in Great Britain and Ireland as the "Great Storm of 1987" deepened rapidly and produced record winds, resulting in the loss of 19 lives, 15 million trees, widespread damage to homes, and an estimated economic cost of £1.2 billion (US$2.3 billion).
Although most tropical cyclones that become extratropical quickly dissipate or are absorbed by another weather system, they can still retain winds of hurricane or gale force. In 1954, Hurricane Hazel became extratropical over North Carolina as a strong Category 3 storm. The Columbus Day Storm of 1962, which evolved from the remains of Typhoon Freda, caused heavy damage in Oregon and Washington, with widespread damage equivalent to at least a Category 3. In 2005, Hurricane Wilma began to lose tropical characteristics while still sporting Category 3-force winds (and became fully extratropical as a Category 1 storm).
In summer, extratropical cyclones are generally weak, but some of the systems can cause significant floods overland because of torrential rainfall. The July 2016 North China cyclone never brought gale-force sustained winds, but it caused devastating floods in mainland China, resulting in at least 184 deaths and ¥33.19 billion (US$4.96 billion) of damage.
An emerging topic is the co-occurrence of wind and precipitation extremes, so-called compound extreme events, induced by extratropical cyclones. Such compound events account for 3–5% of the total number of cyclones.
Climate and general circulation
In the classic analysis by Edward Lorenz (the Lorenz energy cycle), extratropical cyclones (so-called atmospheric transients) act as a mechanism for converting the potential energy created by pole-to-equator temperature gradients into eddy kinetic energy. In the process, the pole-to-equator temperature gradient is reduced (i.e., energy is transported poleward to warm the higher latitudes).
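In the standard four-reservoir form of the Lorenz cycle (stated here from the textbook formulation rather than the source), zonal-mean available potential energy feeds eddy available potential energy via poleward eddy heat fluxes, which is then converted to eddy kinetic energy as warm air rises and cold air sinks:

$$
P_M \;\xrightarrow{C(P_M,P_E)}\; P_E \;\xrightarrow{C(P_E,K_E)}\; K_E \;\xrightarrow{C(K_E,K_M)}\; K_M,
\qquad
C(P_E,K_E) \propto -\,\overline{\omega'\alpha'}
$$

where ω′ is the pressure vertical velocity and α′ the specific-volume perturbation; the conversion is positive when relatively warm, buoyant air (α′ > 0) rises (ω′ < 0).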
The existence of such transients is also closely related to the formation of the Icelandic and Aleutian Lows — the two most prominent general-circulation features in the mid- to sub-polar northern latitudes. The two lows are formed by both the transport of kinetic energy and the latent heating (the energy released when water changes phase from vapor to liquid during precipitation) from mid-latitude cyclones.
Historic storms
The most intense extratropical cyclone on record was a cyclone in the Southern Ocean in October 2022. An analysis by the European Centre for Medium-Range Weather Forecasts and a subsequent analysis published in Geophysical Research Letters both estimated record-low central pressures for the storm. The same Geophysical Research Letters article notes at least five other extratropical cyclones in the Southern Ocean with comparably low pressures.
In the North Atlantic Ocean, the most intense extratropical cyclone was the Braer Storm, which reached a central pressure of about 914 mb (27.0 inHg) in early January 1993. Before the Braer Storm, an extratropical cyclone near Greenland in December 1986 reached a comparably low minimum pressure; an analysis by the West German Meteorological Service suggested its pressure may have been even lower than the Braer Storm's.
The most intense extratropical cyclone across the North Pacific Ocean occurred in November 2014, when a cyclone partially related to Typhoon Nuri reached a record low pressure for the basin. In October 2021, the most intense Pacific Northwest windstorm on record occurred off the coast of Oregon. One of the strongest nor'easters on record occurred in January 2018.
Extratropical cyclones have been responsible for some of the most damaging floods in European history. The Great Storm of 1703 killed over 8,000 people, and the North Sea flood of 1953 killed over 2,500 and destroyed 3,000 houses. In 2002, floods in Europe caused by two Genoa lows caused $27.115 billion in damages and 232 fatalities, the most damaging European flood since at least 1985. In late December 1999, Cyclones Lothar and Martin caused 140 deaths combined and over $23 billion in damages in Central Europe, the costliest European windstorms in history.
In October 2012, Hurricane Sandy transitioned into an extratropical cyclone off the coast of the Northeastern United States. The storm killed over 100 people and caused $65 billion in damages, the second costliest tropical cyclone at the time. Other extratropical cyclones have been related to major tornado outbreaks. The tornado outbreaks of April 1965, April 1974 and April 2011 were all large, violent, and deadly tornado outbreaks related to extratropical cyclones. Similarly, winter storms in March 1888, November 1950 and March 1993 were responsible for over 300 deaths each.
In December 1960 a nor'easter caused at least 286 deaths in the Northeastern United States, making it one of the deadliest nor'easters on record. Sixty-two years later, in 2022, a winter storm caused $8.5 billion in damages and 106 deaths across the United States and Canada.
In September 1954, the extratropical remnants of Typhoon Marie caused the Tōya Maru to run aground and capsize in the Tsugaru Strait. 1,159 out of the 1,309 on board were killed, making it one of the deadliest typhoons in Japanese history. In July 2016, a cyclone in Northern China left 184 dead, 130 missing, and caused over $4.96 billion in damages.
For older extratropical storms occurring before the 20th century, new paleotempestological methods can be used to assess their intensity. Cross-referencing environmental and historical records in Western Europe has highlighted the intense storms of 1351-1352, 1469, 1645, 1711 and 1751, which caused severe damage and long-lasting flooding along much of Europe's coastline.
| Physical sciences | Atmospheric circulation | null |
8210888 | https://en.wikipedia.org/wiki/Medical%20procedure | Medical procedure | A medical procedure is a course of action intended to achieve a result in the delivery of healthcare.
A medical procedure with the intention of determining, measuring, or diagnosing a patient condition or parameter is also called a medical test. Other common kinds of procedures are therapeutic (i.e., intended to treat, cure, or restore function or structure), such as surgical and physical rehabilitation procedures.
Definition
"An activity directed at or performed on an individual with the object of improving health, treating disease or injury, or making a diagnosis." - International Dictionary of Medicine and Biology
"The act or conduct of diagnosis, treatment, or operation." - Stedman's Medical Dictionary by Thomas Lathrop Stedman
"A series of steps by which a desired result is accomplished." - Dorland's Medical Dictionary by William Alexander Newman Dorland
"The sequence of steps to be followed in establishing some course of action." - Mosby's Medical, Nursing, & Allied Health Dictionary
List of medical procedures
Propaedeutic
Auscultation
Medical inspection (body features)
Palpation
Percussion (medicine)
Vital signs measurement, such as blood pressure, body temperature, or pulse (or heart rate)
Diagnostic
Lab tests
Biopsy test
Blood test
Stool test
Urinalysis
Cardiac stress test
Electrocardiography
Electrocorticography
Electroencephalography
Electromyography
Electroneuronography
Electronystagmography
Electrooculography
Electroretinography
Endoluminal capsule monitoring
Endoscopy
Colonoscopy
Colposcopy
Cystoscopy
Gastroscopy
Laparoscopy
Laryngoscopy
Ophthalmoscopy
Otoscopy
Sigmoidoscopy
Esophageal motility study
Evoked potential
Magnetoencephalography
Medical imaging
Angiography
Aortography
Cerebral angiography
Coronary angiography
Lymphangiography
Pulmonary angiography
Ventriculography
Chest photofluorography
Computed tomography
Echocardiography
Electrical impedance tomography
Fluoroscopy
Magnetic resonance imaging
Diffuse optical imaging
Diffusion tensor imaging
Diffusion-weighted imaging
Functional magnetic resonance imaging
Positron emission tomography
Radiography
Scintillography
SPECT
Ultrasonography
Contrast-enhanced ultrasound
Gynecologic ultrasonography
Intravascular ultrasound
Obstetric ultrasonography
Thermography
Virtual colonoscopy
Neuroimaging
Posturography
Therapeutic
Thrombosis prophylaxis
Precordial thump
Politzerization
Hemodialysis
Hemofiltration
Plasmapheresis
Apheresis
Extracorporeal membrane oxygenation (ECMO)
Cancer immunotherapy
Cancer vaccine
Cervical conization
Chemotherapy
Cytoluminescent therapy
Insulin potentiation therapy
Low-dose chemotherapy
Monoclonal antibody therapy
Photodynamic therapy
Radiation therapy
Targeted therapy
Tracheal intubation
Unsealed source radiotherapy
Virtual reality therapy
Physical therapy/Physiotherapy
Speech therapy
Phototherapy
Hydrotherapy
Heat therapy
Shock therapy
Insulin shock therapy
Electroconvulsive therapy
Symptomatic treatment
Fluid replacement therapy
Palliative care
Hyperbaric oxygen therapy
Oxygen therapy
Gene therapy
Enzyme replacement therapy
Intravenous therapy
Phage therapy
Respiratory therapy
Vision therapy
Electrotherapy
Transcutaneous electrical nerve stimulation (TENS)
Laser therapy
Combination therapy
Occupational therapy
Immunization
Vaccination
Immunosuppressive therapy
Psychotherapy
Drug therapy
Acupuncture
Antivenom
Magnetic therapy
Craniosacral therapy
Chelation therapy
Hormonal therapy
Hormone replacement therapy
Opiate replacement therapy
Cell therapy
Stem cell treatments
Intubation
Nebulization
Inhalation therapy
Particle therapy
Proton therapy
Fluoride therapy
Cold compression therapy
Animal-Assisted Therapy
Negative Pressure Wound Therapy
Nicotine replacement therapy
Oral rehydration therapy
Surgical
Ablation
Amputation
Biopsy
Cardiopulmonary resuscitation (CPR)
Cryosurgery
Endoscopic surgery
Facial rejuvenation
General surgery
Hand surgery
Hemilaminectomy
Image-guided surgery
Knee cartilage replacement therapy
Laminectomy
Laparoscopic surgery
Lithotomy
Lithotriptor
Lobotomy
Neovaginoplasty
Radiosurgery
Stereotactic surgery
Vaginoplasty
Xenotransplantation
Anesthesia
Dissociative anesthesia
General anesthesia
Local anesthesia
Topical anesthesia (surface)
Epidural (extradural) block
Spinal anesthesia (subarachnoid block)
Regional anesthesia
Other
Interventional radiology
Screening (medicine)
| Biology and health sciences | Medical procedures: General | Health |
18597893 | https://en.wikipedia.org/wiki/Sleep%20deprivation | Sleep deprivation | Sleep deprivation, also known as sleep insufficiency or sleeplessness, is the condition of not having adequate duration and/or quality of sleep to support decent alertness, performance, and health. It can be either chronic or acute and may vary widely in severity. All known animals sleep or exhibit some form of sleep behavior, and the importance of sleep is self-evident for humans, as nearly a third of a person's life is spent sleeping. Sleep deprivation is common as it affects about one-third of the population.
The National Sleep Foundation recommends that adults aim for 7–9 hours of sleep per night, while children and teenagers require even more. For healthy individuals with normal sleep, the appropriate sleep duration for school-aged children is between 9 and 11 hours. Acute sleep deprivation occurs when a person sleeps less than usual or does not sleep at all for a short period, typically lasting one to two days. However, if the sleepless pattern persists without external factors, it may lead to chronic sleep issues. Chronic sleep deprivation occurs when a person routinely sleeps less than the amount required for proper functioning. The amount of sleep needed can depend on sleep quality, age, pregnancy, and level of sleep deprivation. Sleep deprivation is linked to various adverse health outcomes, including cognitive impairments, mood disturbances, and increased risk for chronic conditions. A meta-analysis published in Sleep Medicine Reviews indicates that individuals who experience chronic sleep deprivation are at a higher risk for developing conditions such as obesity, diabetes, and cardiovascular diseases.
Insufficient sleep has been linked to weight gain, high blood pressure, diabetes, depression, heart disease, and strokes. Sleep deprivation can also lead to high anxiety, irritability, erratic behavior, poor cognitive functioning and performance, and psychotic episodes. A chronic sleep-restricted state adversely affects the brain and cognitive function. However, in a subset of cases, sleep deprivation can paradoxically lead to increased energy and alertness; although its long-term consequences have never been evaluated, sleep deprivation has even been used as a treatment for depression.
To date, most sleep deprivation studies have focused on acute sleep deprivation, suggesting that acute sleep deprivation can cause significant damage to cognitive, emotional, and physical functions and brain mechanisms. Few studies have compared the effects of acute total sleep deprivation and chronic partial sleep restriction. A complete absence of sleep over a long period is not frequent in humans (unless they have fatal insomnia or specific issues caused by surgery); it appears that brief microsleeps cannot be avoided. Long-term total sleep deprivation has caused death in lab animals.
Terminology
Sleep deprivation vs sleep restriction
Reviews differentiate between having no sleep over a short-term period, such as one night ('sleep deprivation'), and having less than required sleep over a longer period ('sleep restriction'). Sleep deprivation was seen as more impactful in the short term, but sleep restriction had similar effects over a longer period. A 2022 study found that in most cases the changes induced by chronic or acute sleep loss waxed or waned across the waking day.
Sleep debt
Sleep debt refers to the build-up of lost sleep relative to one's optimum. Sleep deprivation is known to be cumulative: the fatigue and sleep lost as a result of, for example, staying awake all night carry over to the following day, and not getting enough sleep for a couple of days builds up a deficiency that causes the symptoms of sleep deprivation to appear. A well-rested and healthy individual will generally spend less time in the REM stage of sleep; studies have shown an inverse relationship between time spent in the REM stage of sleep and subsequent wakefulness during waking hours. Short-term insomnia can be induced by stress or when the body experiences changes in environment and regimen.
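The carry-over can be illustrated with a toy running-total calculation; the eight-hour nightly requirement and the floor at zero are assumptions made for the sketch, not clinical parameters:

```python
def cumulative_sleep_debt(hours_slept: list[float], need: float = 8.0) -> list[float]:
    """Running sleep debt: each night's shortfall (or surplus) carries over."""
    debt, history = 0.0, []
    for h in hours_slept:
        debt = max(0.0, debt + (need - h))  # surplus pays debt down, never below zero
        history.append(debt)
    return history

# A week of mostly short nights: the deficit accumulates instead of resetting daily
print(cumulative_sleep_debt([6.0, 6.5, 5.0, 7.0, 8.0, 9.0, 6.0]))
# [2.0, 3.5, 6.5, 7.5, 7.5, 6.5, 8.5]
```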
Insomnia
Insomnia is a sleep disorder where people have difficulty falling asleep, or staying asleep for as long as desired. Insomnia may be a factor in causing sleep deprivation.
Effects and consequences
Introduction and overview
Effects of sleep deprivation can include
reduced ability to put an emotional event in perspective
inattentiveness (including reduced driving ability)
reduced working memory
mood effects
feeling older
microsleeps.
Negative effects
Brain
Temporary
One study suggested, based on neuroimaging, that 35 hours of total sleep deprivation in healthy controls negatively affected the brain's ability to put an emotional event into the proper perspective and make a controlled, suitable response to the event.
According to the latest research, lack of sleep may cause more harm than previously thought and may lead to the permanent loss of brain cells. The negative effects of sleep deprivation on alertness and cognitive performance suggest decreases in brain activity and function. These changes primarily occur in two regions: the thalamus, a structure involved in alertness and attention, and the prefrontal cortex, a region subserving alertness, attention, and higher-order cognitive processes. Interestingly, the effects of sleep deprivation appear to be constant across "night owls" and "early birds", or different sleep chronotypes, as revealed by fMRI and graph theory.
Lasting
Studies on rodents show that the response to neuronal injury from acute sleep deprivation is adaptive below about three hours of sleep loss per night but becomes maladaptive beyond that point, after which apoptosis occurs. Studies in mice show neuronal death (in the hippocampus, locus coeruleus, and medial prefrontal cortex) after two days of REM sleep deprivation. However, mice do not model the effects in humans well, since they spend only a third as long in REM sleep as humans, and caspase-3, the main effector of apoptosis, kills three times as many cells in humans as in mice. Also not accounted for in nearly all of the studies is that acute REM sleep deprivation induces lasting (>20 days) neuronal apoptosis in mice, and the apoptosis rate increases on the day after deprivation ends; because experiments nearly always measure apoptosis on the day the sleep deprivation ends, it is often undercounted in mice. For these reasons, both the time before cells degenerate and the extent of degeneration may be greatly underestimated in humans.
Such histological studies cannot be performed on humans for ethical reasons, but long-term studies show that sleep quality is more associated with gray matter volume reduction than age, occurring in areas like the precuneus.
Sleep is necessary to repair cellular damage caused by reactive oxygen species and DNA damage. During long-term sleep deprivation, cellular damage aggregates up to a tipping point that triggers cellular degeneration and apoptosis.
REM sleep deprivation increases noradrenaline levels (incidentally leaving the sleep-deprived person stressed), because the neurons in the locus coeruleus that produce it never cease firing. The excess noradrenaline increases the activity of the Na⁺/K⁺-ATPase pump, which activates the intrinsic pathway of apoptosis and prevents autophagy, further driving the mitochondrial pathway of apoptosis.
Sleep outside of the REM phase may allow enzymes to repair brain-cell damage caused by free radicals. High metabolic activity while awake damages the enzymes themselves, preventing efficient repair. One rat study observed the first evidence of brain damage occurring as a direct result of sleep deprivation.
Cognitive and neurobehavioural effects
A 2009 review found that sleep loss has a wide range of cognitive and neurobehavioral effects, including unstable attention, slowed response times, declines in memory performance, reduced learning of cognitive tasks, deterioration of performance in tasks requiring divergent thinking, perseveration with ineffective solutions, performance deterioration as task duration increases, and growing neglect of activities judged to be nonessential.
Attention
Attentional lapses also extend into more critical domains in which the consequences can be life or death; car crashes and industrial disasters can result from inattentiveness attributable to sleep deprivation. To empirically measure the magnitude of attention deficits, researchers typically employ the psychomotor vigilance task (PVT), which requires the subject to press a button in response to a light at random intervals. Failure to press the button in response to the stimulus (light) is recorded as an error, attributable to the microsleeps that occur as a product of sleep deprivation.
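A minimal console version of a PVT-style task is sketched below. The 500 ms lapse threshold follows common PVT practice; the trial count and interval range are arbitrary choices, and anticipatory (too-early) responses are not handled:

```python
import random
import time

def run_pvt(trials: int = 5, lapse_threshold: float = 0.5) -> None:
    """Toy psychomotor vigilance task: time responses to randomly timed prompts
    and count lapses (responses slower than the threshold, in seconds)."""
    reaction_times = []
    print("Press Enter as soon as 'GO!' appears.")
    for _ in range(trials):
        time.sleep(random.uniform(2.0, 10.0))  # random inter-stimulus interval
        start = time.monotonic()
        input("GO! ")                          # subject responds by pressing Enter
        reaction_times.append(time.monotonic() - start)
    lapses = sum(rt > lapse_threshold for rt in reaction_times)
    mean_rt = sum(reaction_times) / len(reaction_times)
    print(f"mean RT: {mean_rt:.3f} s, lapses (> {lapse_threshold:.1f} s): {lapses}")

if __name__ == "__main__":
    run_pvt()
```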
Crucially, individuals' subjective evaluations of their fatigue often do not predict actual performance on the PVT. While totally sleep-deprived individuals are usually aware of the degree of their impairment, lapses from chronic (lesser) sleep deprivation can build up over time so that they are equal in number and severity to the lapses occurring from total (acute) sleep deprivation. Chronically sleep-deprived people, however, continue to rate themselves considerably less impaired than totally sleep-deprived participants. Since people usually evaluate their capability on tasks like driving subjectively, their evaluations may lead them to the false conclusion that they can perform tasks that require constant attention when their abilities are in fact impaired.
Driving ability
According to a 2000 study, sleep deprivation can have some of the same hazardous effects as being drunk. People who drove after being awake for 17–19 hours performed worse than those with a blood alcohol level of 0.05 percent, which is the legal limit for drunk driving in most western European countries and Australia. Another study suggested that performance begins to degrade after 16 hours awake, and 21 hours awake was equivalent to a blood alcohol content of 0.08 percent, which is the blood alcohol limit for drunk driving in Canada, the U.S., and the U.K.
The fatigue of drivers of goods trucks and passenger vehicles has come to the attention of authorities in many countries, where specific laws have been introduced with the aim of reducing the risk of traffic accidents due to driver fatigue. Rules concerning minimum break lengths, maximum shift lengths, and minimum time between shifts are common in the driving regulations used in different countries and regions, such as the drivers' working hours regulations in the European Union and hours of service regulations in the United States. The American Academy of Sleep Medicine (AASM) reports that one in every five serious motor vehicle injuries are related to driver fatigue.
The National Sleep Foundation identifies several warning signs that a driver is dangerously fatigued. These include rolling down the window, turning up the radio, having trouble keeping eyes open, head-nodding, drifting out of their lane, and daydreaming. At particular risk are lone drivers between midnight and 6:00 a.m.
Sleep deprivation can negatively impact overall performance and has led to major fatal accidents. Due largely to the February 2009 crash of Colgan Air Flight 3407, which killed 50 people and was partially attributed to pilot fatigue, the FAA reviewed its procedures to ensure that pilots are sufficiently rested. Air traffic controllers were under scrutiny when, in 2010, there were 10 incidents of controllers falling asleep while on shift. The common practice of turn-around shifts caused sleep deprivation and was a contributing factor to all air traffic control incidents. The FAA reviewed its practices for shift changes, and the findings showed that controllers were not well rested. A 2004 study also found medical residents with less than four hours of sleep a night made more than twice as many errors as the 11% of surveyed residents who slept for more than seven hours a night.
Impacts on reasoning and decision-making
Twenty-four hours of continuous sleep deprivation results in the choice of less difficult math tasks without a decrease in subjective reports of effort applied to the task. Naturally occurring sleep loss also affects the choice of everyday tasks, such that low-effort tasks are most commonly selected. Adolescents who experience less sleep show a decreased willingness to engage in sports activities that require effort through fine motor coordination and attention to detail.
Astronauts have reported performance errors and decreased cognitive ability during periods of extended working hours and wakefulness, as well as sleep loss caused by circadian rhythm disruption and environmental factors.
Working memory
Deficits in attention and working memory are among the most important effects of sleep deprivation; such lapses in mundane routines can lead to unfortunate results, from forgetting ingredients while cooking to missing a sentence while taking notes. Performance on tasks that require attention appears to be correlated with the number of hours of sleep received each night, declining as a function of hours of sleep deprivation. Working memory is tested by methods such as choice-reaction-time tasks.
Mood
Sleep deprivation can have a negative impact on mood. Staying up all night or taking an unexpected night shift can make one feel irritable. Once one catches up on sleep, one's mood will often return to baseline or normal. Even partial sleep deprivation can have a significant impact on mood. In one study, subjects reported increased sleepiness, fatigue, confusion, tension, and total mood disturbance, which all recovered to their baseline after one to two full nights of sleep.
Depression and sleep are in a bidirectional relationship. Poor sleep can lead to the development of depression, and depression can cause insomnia, hypersomnia, or obstructive sleep apnea. About 75% of adult patients with depression can present with insomnia. Sleep deprivation, whether total or not, can induce significant anxiety, and longer sleep deprivations tend to result in an increased level of anxiety.
Sleep deprivation has also shown some positive effects on mood and can be used to treat depression. Chronotype can affect how sleep deprivation influences mood. Those with morningness (advanced sleep period or "lark") preference become more depressed after sleep deprivation, while those with eveningness (delayed sleep period or "owl") preference show an improvement in mood.
Mood and mental states can affect sleep as well. Increased agitation and arousal from anxiety or stress can keep one more aroused, awake, and alert.
Subjective age
One study found that sleepiness increases the subjective sense of being old, with extreme sleepiness leading people to feel 10 years older. Other studies have also shown a correlation between relatively old subjective age and poor sleep quality.
Fatigue
Sleep deprivation and disruption is associated with subsequent fatigue. Fatigue has different effects and characteristics from sleep deprivation.
Sleep
Propensity
Sleep propensity can be defined as the readiness to transition from wakefulness to sleep or the ability to stay asleep if already sleeping. Sleep deprivation increases this propensity, which can be measured by polysomnography (PSG) as a reduction in sleep latency (the time needed to fall asleep). An indicator of sleep propensity can also be seen in the shortening of the transition from light stages of non-REM sleep to deeper slow-wave oscillations.
On average, the latency in healthy adults decreases by a few minutes after a night without sleep, and the latency from sleep onset to slow-wave sleep is halved. Sleep latency is generally measured with the multiple sleep latency test (MSLT). In contrast, the maintenance of wakefulness test (MWT) also uses sleep latency, but this time as a measure of the capacity of the participants to stay awake (when asked to) instead of falling asleep.
Impact on the sleep-wake cycle
Some research shows that sleep deprivation dysregulates the sleep-wake cycle. Multiple studies that identified the role of the hypothalamus and multiple neural systems controlling circadian rhythms and homeostasis have been helpful in understanding sleep deprivation better.
To describe the temporal course of the sleep-wake cycle, the two-process model of sleep regulation can be used. This model proposes a homeostatic process (Process S) and a circadian process (Process C) that interact to define the timing and intensity of sleep. Process S represents the drive for sleep, increasing during wakefulness and decreasing during sleep toward a defined threshold level, while Process C is the circadian oscillator that modulates these threshold levels. During sleep deprivation, homeostatic pressure accumulates to the point that waking functions will be degraded even at the highest circadian drive for wakefulness.
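A toy numerical sketch of the two-process model follows; the time constants, asymptotes, and sinusoidal Process C are illustrative values in the spirit of Borbély's formulation, not calibrated parameters:

```python
import math

TAU_RISE, TAU_DECAY = 18.2, 4.2  # hours; illustrative Process S time constants
UPPER, LOWER = 1.0, 0.0          # asymptotes of homeostatic sleep pressure

def process_s(s: float, dt: float, awake: bool) -> float:
    """One step of homeostatic Process S: exponential rise toward the upper
    asymptote while awake, exponential decay toward the lower during sleep."""
    if awake:
        return UPPER - (UPPER - s) * math.exp(-dt / TAU_RISE)
    return LOWER + (s - LOWER) * math.exp(-dt / TAU_DECAY)

def process_c(t_hours: float) -> float:
    """Circadian Process C as a simple 24 h sinusoid (peak assumed near 18:00)."""
    return 0.5 + 0.5 * math.cos(2.0 * math.pi * (t_hours - 18.0) / 24.0)

# One day in 1 h steps: awake for 16 h, asleep for 8 h
s = 0.2
for hour in range(24):
    s = process_s(s, 1.0, awake=hour < 16)
    if hour in (7, 15, 23):
        print(f"hour {hour + 1:2d}: sleep pressure S = {s:.2f}, "
              f"circadian drive C = {process_c(hour + 1):.2f}")
```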
Microsleeps
Microsleeps are periods of brief sleep that most frequently occur when a person has a significant level of sleep deprivation. Microsleeps usually last for a few seconds, usually no longer than 15 seconds, and happen most frequently when a person is trying to stay awake when they are feeling sleepy. The person usually falls into microsleep while doing a monotonous task like driving, reading a book, or staring at a computer. Microsleeps are similar to blackouts, and a person experiencing them is not consciously aware that they are occurring.
An even lighter type of sleep has been seen in rats that have been kept awake for long periods of time. In a process known as local sleep, specific localized brain regions went into periods of short (~80 ms) but frequent (~40/min) NREM-like states. Despite the on-and-off periods where neurons shut off, the rats appeared to be awake, although they performed poorly at tests.
Cardiovascular morbidity
Decreased sleep duration is associated with many adverse cardiovascular consequences. The American Heart Association has stated that sleep restriction is a risk factor for adverse cardiometabolic profiles and outcomes. The organization recommends healthy sleep habits for ideal cardiac health, along with other well-known factors like blood pressure, cholesterol, diet, glucose, weight, smoking, and physical activity. The Centers for Disease Control and Prevention has noted that adults who sleep less than seven hours per day are more likely to have chronic health conditions, including heart attack, coronary heart disease, and stroke, compared to those with an adequate amount of sleep.
In a study that followed over 160,000 healthy, non-obese adults, the subjects who self-reported sleep duration less than six hours a day were at increased risk for developing multiple cardiometabolic risk factors. They presented with increased central obesity, elevated fasting glucose, hypertension, low high-density lipoprotein, hypertriglyceridemia, and metabolic syndrome. The presence or lack of insomnia symptoms did not modify the effects of sleep duration in this study.
The United Kingdom Biobank studied nearly 500,000 adults who had no cardiovascular disease, and the subjects who slept less than six hours a day were associated with a 20 percent increase in the risk of developing myocardial infarction (MI) over a seven-year follow-up period. Interestingly, a long sleep duration of more than nine hours a night was also a risk factor.
Immunosuppression
Disruption of the immune system is among the many health consequences of sleep deprivation. While the mechanism is not clearly understood, researchers believe that sleep is essential to providing sufficient energy for the immune system to work and to allowing inflammation to take place during sleep. Also, just as sleep can reinforce memory in a person's brain, it can help consolidate the memory of the immune system, or adaptive immunity.
Sleep quality is directly related to immunity. A team led by Professor Cohen of Carnegie Mellon University in the United States found that even a slight disturbance of sleep may affect the body's response to the cold virus: those with better sleep quality had significantly higher levels of blood T and B lymphocytes, the main cellular agents of immune function in the human body, than those with poor sleep quality.
An adequate amount of sleep improves the effects of vaccines that utilize adaptive immunity. When vaccines expose the body to a weakened or deactivated antigen, the body initiates an immune response. The immune system learns to recognize that antigen and attacks it when exposed again in the future. Studies have found that people who don't sleep the night after getting a vaccine are less likely to develop a proper immune response to the vaccine and sometimes even require a second dose. People who are sleep deprived in general also do not provide their bodies with sufficient time for an adequate immunological memory to form and, thus, can fail to benefit from vaccination.
People who sleep less than six hours a night are more susceptible to infection and are more likely to catch a cold or flu. A lack of sleep can also prolong the recovery time of patients in the intensive care unit (ICU).
Weight gain
A lack of sleep can cause an imbalance in several hormones critical to weight regulation. Sleep deprivation increases the level of ghrelin (the hunger hormone) and decreases the level of leptin (the fullness hormone), resulting in an increased feeling of hunger and a desire for high-calorie foods. Sleep loss is also associated with decreased growth hormone and elevated cortisol levels, which are connected to obesity. People who do not get sufficient sleep can also feel sleepy and fatigued during the day and get less exercise. Obesity can cause poor sleep quality as well: individuals who are overweight or obese can experience obstructive sleep apnea, gastroesophageal reflux disease (GERD), depression, asthma, and osteoarthritis, all of which can disrupt a good night's sleep.
In rats, prolonged, complete sleep deprivation increased both food intake and energy expenditure, with a net effect of weight loss and ultimately death. This study hypothesizes that the moderate chronic sleep debt associated with habitual short sleep is associated with increased appetite and energy expenditure, with the equation tipped towards food intake rather than expenditure in societies where high-calorie food is freely available.
Type 2 diabetes
It has been suggested that people experiencing short-term sleep restrictions process glucose more slowly than individuals receiving a full 8 hours of sleep, increasing the likelihood of developing type 2 diabetes. Poor sleep quality is linked to high blood sugar levels in diabetic and prediabetic patients, but the causal relationship is not clearly understood. Researchers suspect that sleep deprivation affects insulin, cortisol, and oxidative stress, which subsequently influence blood sugar levels. Sleep deprivation can increase the level of ghrelin and decrease the level of leptin. People who get insufficient amounts of sleep are more likely to crave food in order to compensate for the lack of energy. This habit can raise blood sugar and put them at risk of obesity and diabetes.
In 2005, a study of over 1400 participants showed that participants who habitually slept fewer hours were more likely to have associations with type 2 diabetes. However, because this study was merely correlational, the direction of cause and effect between little sleep and diabetes is uncertain. The authors point to an earlier study that showed that experimental rather than habitual restriction of sleep resulted in impaired glucose tolerance (IGT).
Other effects
Sleep deprivation may facilitate or intensify:
aching muscles
confusion, memory lapses or loss
depression
development of false memory
hypnagogic and hypnopompic hallucinations during falling asleep and waking, which are entirely normal
hand tremor
headaches
malaise
stye
periorbital puffiness, commonly known as "bags under eyes" or eye bags
increased blood pressure
increased stress hormone levels
increased risk of type 2 diabetes
lowering of immunity, increased susceptibility to illness
increased risk of fibromyalgia
irritability
nystagmus (rapid involuntary rhythmic eye movement)
obesity
seizures
mania
Sleep inertia
tachycardia risk. One study found that a single night of sleep deprivation may cause tachycardia, a condition in which the heart rate exceeds 100 beats per minute, on the following day.
temper tantrums in children
violent behavior
yawning
Sleep deprivation may cause symptoms similar to:
attention-deficit hyperactivity disorder (ADHD)
psychosis
Positive effects
In a subset of cases, sleep deprivation can paradoxically lead to increased energy and alertness.
Other
See the Uses section below for possible benefits of sleep deprivation in treating depression and insomnia.
Causes
People aged 18 to 64 need seven to nine hours of sleep per night. Sleep deprivation occurs when this is not achieved. Causes of this can be as follows:
Environmental factors
Environmental factors significantly influence sleep quality and can contribute to sleep deprivation in various ways. Noise pollution from traffic, construction, and loud neighbors can disrupt sleep by causing awakenings and preventing deeper sleep stages. Similarly, light exposure, particularly from artificial sources like screens, interferes with the body's natural circadian rhythms by suppressing melatonin production, making it challenging to fall asleep. Air quality, odours, and temperature can also affect sleep quality and duration.
To mitigate the effects of these environmental influences, individuals can consider strategies, such as using soundproofing measures, installing blackout curtains, adjusting room temperatures, investing in comfortable bedding, and improving air quality with purifiers. By addressing these environmental factors, individuals can enhance their sleep hygiene and overall health.
Insomnia
Insomnia, one of the six types of dyssomnia, affects 21–37% of the adult population. Many of its symptoms are easily recognizable, including excessive daytime sleepiness; frustration or worry about sleep; problems with attention, concentration, or memory; extreme mood changes or irritability; lack of energy or motivation; poor performance at school or work; and tension headaches or stomach aches.
Insomnia can be grouped into primary and secondary, or comorbid, insomnia.
Primary insomnia is a sleep disorder not attributable to a medical, psychiatric, or environmental cause. There are three main types of primary insomnia: psychophysiological insomnia, idiopathic insomnia, and sleep state misperception (paradoxical insomnia). Psychophysiological insomnia is anxiety-induced. Idiopathic insomnia generally begins in childhood and lasts for the rest of a person's life. It is suggested that idiopathic insomnia is a neurochemical problem in a part of the brain that controls the sleep-wake cycle, resulting in either under-active sleep signals or over-active wake signals. Sleep state misperception is diagnosed when people get enough sleep but inaccurately perceive that their sleep is insufficient.
Secondary insomnia, or comorbid insomnia, occurs concurrently with other medical, neurological, psychological, and psychiatric conditions. Causation is not necessarily implied. Causes can be from depression, anxiety, and personality disorders.
Sleep apnea
Sleep apnea is a serious sleep-related breathing disorder that produces symptoms of both insomnia and sleep deprivation, among others such as excessive daytime sleepiness, abrupt awakenings, and difficulty concentrating. It can cause partial or complete obstruction of the upper airways during sleep. One billion people worldwide are affected by obstructive sleep apnea, and the disorder affects 1 to 10 percent of Americans. Those with sleep apnea may experience symptoms such as awakening gasping or choking, restless sleep, morning headaches, morning confusion or irritability, and restlessness. It has many serious health outcomes if left untreated. Positive airway pressure therapy using CPAP (continuous positive airway pressure), APAP, or BPAP devices is considered the first-line treatment option for sleep apnea.
Central sleep apnea is caused by a failure of the central nervous system to signal the body to breathe during sleep. Treatments similar to obstructive sleep apnea may be used, as well as other treatments such as adaptive servo ventilation and certain medications. Some medications, such as opioids, may contribute to or cause central sleep apnea.
Self-imposed
Sleep deprivation can sometimes be self-imposed due to a lack of desire to sleep or the habitual use of stimulant drugs. Revenge bedtime procrastination is the practice of staying up late after a busy day to reclaim a sense of free time, making the day feel longer at the cost of sleep.
Caffeine
Consumption of caffeine in large quantities can have negative effects on one's sleep cycle.
Caffeine consumption, usually in the form of coffee, is one of the most widely used stimulants in the world. While there are short-term performance benefits to caffeine consumption, overuse can lead to insomnia symptoms or worsen pre-existing insomnia. Consuming caffeine to stay awake at night may lead to sleeplessness, anxiety, frequent nighttime awakenings, and overall poorer sleep quality. Daytime caffeine consumption also reduces levels of 6-sulfatoxymelatonin, the main metabolite of melatonin, which is one of the mechanisms by which sleep is disrupted.
Studying
The U.S. National Sleep Foundation cites a 1996 paper showing that college/university-aged students get an average of less than 6 hours of sleep each night. A 2018 study highlights the need for a good night's sleep for students, finding that college students who averaged eight hours of sleep for the five nights of finals week scored higher on their final exams than those who did not.
In the study, 70.6% of students reported obtaining less than 8 hours of sleep, and up to 27% of students may be at risk for at least one sleep disorder. Sleep deprivation is common in first-year college students as they adjust to the stress and social activities of college life.
Estevan et al. studied the relationships between sleep and test performance. They found that students tend to sleep less than usual the night before an exam and that exam performance was positively correlated with sleep duration.
A study performed by the Department of Psychology at the National Chung Cheng University in Taiwan concluded that freshmen received the least amount of sleep during the week.
Studies of later start times in schools have consistently reported benefits to adolescent sleep, health, and learning using a wide variety of methodological approaches. In contrast, there are no studies showing that early start times have any positive impact on sleep, health, or learning. Data from international studies demonstrate that "synchronized" start times for adolescents are far later than the start times in the overwhelming majority of educational institutions. In 1997, University of Minnesota researchers compared students who started school at 7:15 a.m. with those who started at 8:40 a.m. They found that students who started at 8:40 got higher grades and more sleep on weekday nights than those who started earlier. One in four U.S. high school students admits to falling asleep in class at least once a week.
It is known that during human adolescence, circadian rhythms and, therefore, sleep patterns typically undergo marked changes. Electroencephalogram (EEG) studies indicate a 50% reduction in deep (stage 4) sleep and a 75% reduction in the peak amplitude of delta waves during NREM sleep in adolescence. School schedules are often incompatible with a corresponding delay in sleep offset, leading to a less than optimal amount of sleep for the majority of adolescents.
Mental illness
Chronic sleep problems affect 50% to 80% of patients in a typical psychiatric practice, compared with 10% to 18% of adults in the general U.S. population. Sleep problems are particularly common in patients with anxiety, depression, bipolar disorder, and attention deficit hyperactivity disorder (ADHD).
The specific causal relationships between sleep loss and effects on psychiatric disorders have been most extensively studied in patients with mood disorders. Shifts into mania in bipolar patients are often preceded by periods of insomnia, and sleep deprivation has been shown to induce a manic state in about 30% of patients. Sleep deprivation may represent a final common pathway in the genesis of mania, and manic patients usually have a continuously reduced need for sleep.
The symptoms of sleep deprivation parallel those of schizophrenia, including both positive and cognitive symptoms.
Hospital stay
A study performed nationwide in the Netherlands found that general ward patients staying at the hospital experienced shorter total sleep (83 min. less), more night-time awakenings, and earlier awakenings compared to sleeping at home. Over 70% experienced being woken up by external causes, such as hospital staff (35.8%). Sleep-disturbing factors included the noise of other patients, medical devices, pain, and toilet visits. Sleep deprivation is even more severe in ICU patients, where the naturally occurring nocturnal peak of melatonin secretion was found to be absent, possibly causing the disruption of the normal sleep-wake cycle. However, as the personal characteristics and the clinical picture of hospital patients are so diverse, possible solutions to improve sleep and circadian rhythmicity should be tailored to the individual and to the possibilities of the hospital ward. Multiple interventions could be considered to address patient-specific factors, improve hospital routines, or improve the hospital environment.
Time online
A 2018 study published in the Journal of Economic Behavior and Organization found that broadband internet connection was associated with sleep deprivation. The study concluded that people with a broadband connection tend to sleep 25 minutes less than those without one; hence, they are less likely to get the scientifically recommended 7–9 hours of sleep. Another study, conducted on 435 non-medical staff at King Saud University Medical City, reported that nine out of ten respondents used their smartphones at bedtime, with social media being the most used service (80.5%). Participants who spent more than 60 minutes using their smartphones at bedtime were 7.4 times more likely to have poor sleep quality than participants who spent less than 15 minutes. Overall, internet usage in the hour before bedtime has been found to disrupt sleeping patterns.
Shift work
Many businesses, such as airlines and hospitals, are operational 24/7, with workers performing their duties in different shifts. Shift work patterns cause sleep deprivation and lead to poor concentration, detrimental health effects, and fatigue. Shift work can disrupt the normal circadian rhythms of biologic functions associated with the sleep/wake cycle, and both sleep length and quality can be affected. A shift-work sleep disorder has been diagnosed in approximately 10% of shift workers aged 18 to 65 according to the International Classification of Sleep Disorders, version 2 (ICSD-2). Shift work remains an unspoken challenge within industries, often disregarded by employers and employees alike, leading to an increase in occupational injuries. A worker experiencing fatigue poses a potential danger, not only to themselves but also to others around them. Both employers and employees must acknowledge the risks associated with sleep deprivation and on-the-job fatigue to effectively mitigate the chances of occupational injuries.
Assessment
Patients with sleep deprivation may present with complaints of symptoms and signs of insufficient sleep, such as fatigue, sleepiness, drowsy driving, and cognitive difficulties. Sleep insufficiency can easily go unrecognized and undiagnosed unless patients are specifically asked about it by their clinicians.
Several questions are critical in evaluating sleep duration and quality, as well as the cause of sleep deprivation. Sleep patterns (typical bed time or rise time on weekdays and weekends), shift work, and frequency of naps can reveal the direct cause of poor sleep, and quality of sleep should be discussed to rule out any diseases such as obstructive sleep apnea and restless leg syndrome.
Sleep diaries
Sleep diaries are useful in providing detailed information about sleep patterns. They are inexpensive, readily available, and easy to use. The diaries can be as simple as a 24-hour log to note the time of being asleep or can be detailed to include other relevant information.
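As a rough illustration (the entry layout and example timestamps below are invented, not a standard diary format), a minimal Python sketch shows how a simple diary of bed and rise times yields nightly sleep durations:

# Minimal sleep diary sketch. The field layout and example entries are
# invented for illustration; real diaries often record much more detail.
from datetime import datetime

def hours_slept(bed, rise):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(rise, fmt) - datetime.strptime(bed, fmt)
    return delta.total_seconds() / 3600  # duration in hours

diary = [("2024-03-01 23:30", "2024-03-02 06:45"),
         ("2024-03-02 00:15", "2024-03-02 07:00")]
for bed, rise in diary:
    print(bed, "->", rise, f"({hours_slept(bed, rise):.2f} h)")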
Sleep questionnaires
Sleep questionnaires such as the Sleep Timing Questionnaire (STQ) and the Tayside Children's Sleep Questionnaire can be used instead of sleep diaries if there is any concern for patient adherence.
Sleep quality can be assessed using the Pittsburgh Sleep Quality Index (PSQI), a self-report questionnaire designed to measure sleep quality and disturbances over a one-month period.
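The PSQI's arithmetic is straightforward once its seven component scores (each 0–3) have been derived from the questionnaire items. The sketch below assumes those component scores as inputs, omits the item-level scoring rules, and applies the conventional cutoff of a global score above 5 indicating poor sleep quality:

# PSQI scoring sketch. Assumes the seven 0-3 component scores have already
# been derived from the questionnaire items; item-level rules are omitted.

def psqi_global(component_scores):
    assert len(component_scores) == 7
    assert all(0 <= s <= 3 for s in component_scores)
    return sum(component_scores)  # global score ranges from 0 to 21

scores = [1, 2, 1, 0, 1, 0, 2]  # example component scores
total = psqi_global(scores)
print(total, "poor sleep quality" if total > 5 else "good sleep quality")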
Actigraphy
Actigraphy is a useful, objective wrist-worn tool if the validity of self-reported sleep diaries or questionnaires is questionable. Actigraphy works by recording movements and using computerized algorithms to estimate total sleep time, sleep onset latency, the amount of wake after sleep onset, and sleep efficiency. Some devices have light sensors to detect light exposure.
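The estimation step can be pictured with a minimal sketch: a naive threshold rule classifies each one-minute epoch of movement counts as sleep or wake, from which the summary measures named above follow directly. Validated devices use weighted scoring algorithms (such as Cole–Kripke) rather than this simple rule, and the threshold and counts here are assumptions:

# Simplified actigraphy scoring sketch: classify 1-minute epochs of movement
# counts as sleep/wake with a naive threshold, then derive summary measures.
# Real algorithms use weighted moving windows; this only shows the principle.

def score_actigraphy(counts, threshold=40):
    asleep = [c < threshold for c in counts]          # True = scored as sleep
    onset = asleep.index(True)                        # first epoch scored asleep
    tst = sum(asleep)                                 # total sleep time (min)
    waso = sum(1 for a in asleep[onset:] if not a)    # wake after sleep onset
    efficiency = 100.0 * tst / len(counts)            # % of time in bed asleep
    return {"sleep_onset_latency": onset, "total_sleep_time": tst,
            "wake_after_sleep_onset": waso, "sleep_efficiency": efficiency}

night = [120, 95, 60, 30, 10, 5, 0, 0, 55, 8, 2, 0]  # toy movement counts
print(score_actigraphy(night))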
Wearable devices
Wearable devices such as Fitbits and Apple Watches monitor various body signals, including heart rate, skin temperature, and movement, to provide information about sleep patterns. They operate continuously, collecting extensive data which can be used to offer insights on sleep improvement. These devices are user-friendly and have increased awareness about the significance of quality sleep for health.
Prevention
Although there are numerous causes of sleep deprivation, there are some fundamental measures that promote quality sleep, as suggested by organizations such as the Centers for Disease Control and Prevention, the National Institutes of Health, the National Institute on Aging, and the American Academy of Family Physicians.
Sleep hygiene
Sleep hygiene, first medically defined by Hauri in 1977, was historically the standard approach for promoting healthy sleep habits, but evidence that has emerged since the 2010s suggests that sleep hygiene advice alone is ineffective, both for people with insomnia and for people without.
Sleep hygiene recommendations include
setting a fixed sleep schedule
taking naps with caution
maintaining a bedroom environment that promotes sleep (cool temperature, limited exposure to light and noise)
comfortable mattresses and pillows
exercising daily
avoiding alcohol, cigarettes and caffeine
avoiding heavy meals in the evening
winding down and avoiding electronic use or physical activities close to bedtime
getting out of bed if unable to fall asleep.
CBT
For long-term involuntary sleep deprivation, cognitive behavioral therapy for insomnia (CBT-i) is recommended as a first-line treatment after the exclusion of a physical diagnosis (e.g., sleep apnea).
CBT-i contains five different components:
cognitive therapy
stimulus control
sleep restriction
sleep hygiene
relaxation.
As this approach has minimal adverse effects and long-term benefits, it is often preferred to (chronic) drug therapy.
Management
Measures to increase alertness
There are several strategies that help increase alertness and counteract the effects of sleep deprivation.
Caffeine is often used over short periods to boost wakefulness when acute sleep deprivation is experienced; however, caffeine is less effective if taken routinely.
Other strategies recommended by the American Academy of Sleep Medicine include
prophylactic sleep before deprivation,
naps,
other stimulants,
and combinations thereof.
However, the American Academy of Sleep Medicine has said that the only sure and safe way to combat sleep deprivation is to increase nightly sleep time.
Uses
Treating depression
Studies show that sleep restriction has some potential for treating depression. Those with depression tend to have earlier occurrences of REM sleep with an increased number of rapid eye movements; therefore, monitoring patients' EEG and awakening them during occurrences of REM sleep appear to have a therapeutic effect, alleviating depressive symptoms. This kind of treatment is known as wake therapy. Although as many as 60% of patients show an immediate recovery when sleep-deprived, most patients relapse the following night. The effect has been shown to be linked to an increase in brain-derived neurotrophic factor (BDNF). A comprehensive evaluation of the human metabolome in sleep deprivation in 2014 found that 27 metabolites are increased after 24 waking hours and suggested serotonin, tryptophan, and taurine may contribute to the antidepressive effect.
The incidence of relapse can be decreased by combining sleep deprivation with medication or a combination of light therapy and phase advance (going to bed substantially earlier than one's normal time). Many tricyclic antidepressants suppress REM sleep, providing additional evidence for a link between mood and sleep. Similarly, tranylcypromine has been shown to completely suppress REM sleep at adequate doses.
Treating insomnia
Sleep deprivation can be implemented for a short period of time in the treatment of insomnia. Some common sleep disorders have been shown to respond to cognitive behavioral therapy for insomnia, a multicomponent process composed of stimulus control therapy, sleep restriction therapy (SRT), and sleep hygiene therapy. One of the components is a controlled regime of "sleep restriction" in order to restore the homeostatic drive to sleep and encourage normal "sleep efficiency". Stimulus control therapy is intended to limit the range of behaviors performed in bed, conditioning the body to associate the bed with sleep; the main goal of stimulus control and sleep restriction therapy is to create this association. Although sleep restriction therapy shows efficacy when applied as an element of cognitive-behavioral therapy, its efficacy is yet to be proven when used alone. Sleep hygiene therapy is intended to help patients develop and maintain good sleeping habits, but it is not helpful when used as a monotherapy without stimulus control therapy and sleep restriction therapy.
Light stimulation affects the suprachiasmatic nucleus of the hypothalamus, which controls circadian rhythm, and inhibits the secretion of melatonin from the pineal gland. Light therapy can improve sleep quality, improve sleep efficiency, and extend sleep duration by helping to establish and consolidate regular sleep-wake cycles. It is a natural, simple, low-cost treatment that does not lead to residual effects or tolerance. Adverse reactions include headaches, eye fatigue, and even mania.
In addition to the cognitive behavioral treatment of insomnia, there are generally four classes of drugs used to treat it medically: barbiturates, benzodiazepines, benzodiazepine receptor agonists, and melatonin receptor agonists. Barbiturates are not considered a primary treatment because they have a low therapeutic index, while melatonin agonists have been shown to have a higher therapeutic index.
Military uses
Military training
Sleep deprivation has become hardwired into the military culture. It is prevalent in the entire force and especially severe for servicemembers deployed in high-conflict environments.
Sleep deprivation has been used by the military in training programs to prepare personnel for combat experiences when proper sleep schedules are not realistic. It is used to create schedule patterns that go beyond a typical 24-hour day. Sleep deprivation is pivotal in training games such as "keep in memory" exercises, in which personnel, under intense physical and mental stress, practice memorizing everything they can and then, days later, describing in as much detail as possible what they remember seeing. Sleep deprivation is also used in training to accustom soldiers to functioning on only a few hours or minutes of sleep, taken whenever available.
DARPA initiated sleep research to create a highly resilient soldier capable of sustaining extremely prolonged wakefulness, inspired by the white-crowned sparrow's week-long sleeplessness during migration, at a time when it was not yet understood that migrating birds actually sleep with one half of their brain at a time. This pursuit aimed both to produce a "super soldier" able "to go for a minimum of seven days without sleep, and in the longer term perhaps at least double that time frame, while preserving high levels of mental and physical performance", and to enhance productivity in sleep-deprived personnel. Military experiments on sleep have been conducted on combatants and prisoners, such as those in Guantánamo, where controlled lighting is combined with torture techniques to manipulate sensory experiences. Crary highlights how constant illumination and the removal of day-night distinctions create what he defines as a "time of indifference," utilizing light management as a form of psychological control.
However, studies have since evaluated the impact of this sleep deprivation imprint on military culture. Personnel surveys reveal common challenges such as inadequate sleep, fatigue, and impaired daytime functioning, impacting operational effectiveness and post-deployment reintegration. These sleep issues elevate the risk of severe mental health disorders, including PTSD and depression, so early intervention is crucial. Though promising, cognitive-behavioral and imagery-rehearsal therapies for insomnia remain challenging to implement. Several high-profile military accidents caused in part or fully by sleep deprivation of personnel have been documented. The military has prioritized sleep education, with recent Army guidelines equating the importance of sleep to that of nutrition and exercise. The Navy, particularly influenced by retired Captain John Cordle, has actively experimented with watch schedules to align shipboard life with sailors' circadian needs, leading to improved sleep patterns, especially in submarines, supported by ongoing research efforts at the Naval Postgraduate School. Watch schedules with longer and more reliable resting intervals are nowadays the norm on U.S. submarines and a recommended option for surface ships.
In addition to sleep deprivation, circadian misalignment, as commonly experienced by submarine crews, causes several long-term health issues and a decrease in cognitive performance.
To facilitate abusive control
Sleep deprivation can be used to disorient abuse victims and prime them for abusive control.
Interrogation
Sleep deprivation can be used as a means of interrogation, which has resulted in court trials over whether or not the technique is a form of torture.
Under one interrogation technique, a subject might be kept awake for several days and, when finally allowed to fall asleep, suddenly awakened and questioned. Menachem Begin, the Prime Minister of Israel from 1977 to 1983, described his experience of sleep deprivation as a prisoner of the NKVD in the Soviet Union as follows:
Sleep deprivation was one of the five techniques used by the British government in the 1970s. The European Court of Human Rights ruled that the five techniques "did not occasion suffering of the particular intensity and cruelty implied by the word torture ... [but] amounted to a practice of inhuman and degrading treatment", in breach of the European Convention on Human Rights.
The United States Justice Department released four memos in August 2002 describing interrogation techniques used by the Central Intelligence Agency. They first described 10 techniques used in the interrogation of Abu Zubaydah, described as a terrorist logistics specialist, including sleep deprivation. Memos signed by Steven G. Bradbury in May 2005 claimed that forced sleep deprivation for up to 180 hours (7.5 days) by shackling a diapered prisoner to the ceiling did not constitute torture, nor did the combination of multiple interrogation methods (including sleep deprivation) constitute torture under United States law. These memoranda were repudiated and withdrawn during the first months of the Obama administration.
The question of the extreme use of sleep deprivation as torture has advocates on both sides of the issue. In 2006, Australian Federal Attorney-General Philip Ruddock argued that sleep deprivation does not constitute torture. Nicole Bieske, a spokeswoman for Amnesty International Australia, has stated the opinion of her organization as follows: "At the very least, sleep deprivation is cruel, inhumane and degrading. If used for prolonged periods of time it is torture."
Changes in American sleep habits
National Geographic Magazine has reported that the demands of work, social activities, and the availability of 24-hour home entertainment and Internet access have caused people to sleep less now than in premodern times. USA Today reported in 2007 that most adults in the USA get about an hour less than the average sleep time 40 years ago.
Other researchers have questioned these claims. A 2004 editorial in the journal Sleep stated that, according to the available data, the average number of hours of sleep in a 24-hour period has not changed significantly in recent decades among adults. Furthermore, the editorial suggests that there is a range of normal sleep time required by healthy adults, and many indicators used to suggest chronic sleepiness among the population as a whole do not stand up to scientific scrutiny.
A comparison of data collected from the Bureau of Labor Statistics' American Time Use Survey from 1965 to 1985 and 1998–2001 has been used to show that the median amount of sleep, napping, and resting done by the average adult American has changed by less than 0.7%, from a median of 482 minutes per day from 1965 through 1985 to 479 minutes per day from 1998 through 2001.
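The quoted figure can be verified with a one-line computation using the two medians cited above:

# Check the reported change in median daily sleep/rest time between the two
# American Time Use Survey periods cited above.
before, after = 482, 479                    # median minutes per day
print(f"{(before - after) / before:.2%}")   # -> 0.62%, i.e. less than 0.7%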
Longest periods without sleep
Randy Gardner holds the scientifically documented record for the longest period of time a human being has intentionally gone without sleep not using stimulants of any kind. Gardner stayed awake for 264 hours (11 days), breaking the previous record of 260 hours held by Tom Rounds of Honolulu. Lieutenant Commander John J. Ross of the U.S. Navy Medical Neuropsychiatric Research Unit later published an account of this event, which became well known among sleep-deprivation researchers.
The Guinness World Record stands at 449 hours (18 days, 17 hours), held by Maureen Weston of Peterborough, Cambridgeshire, in April 1977, in a rocking-chair marathon.
Claims of total sleep deprivation lasting years have been made several times, but none are scientifically verified. Claims of partial sleep deprivation are better documented. For example, Rhett Lamb of St. Petersburg, Florida, was initially reported to not sleep at all but actually had a rare condition permitting him to sleep only one to two hours per day in the first three years of his life. He had a rare abnormality called an Arnold–Chiari malformation, where brain tissue protrudes into the spinal canal and the skull puts pressure on the protruding part of the brain. The boy was operated on at All Children's Hospital in St. Petersburg in May 2008. Two days after surgery, he slept through the night.
French sleep expert Michel Jouvet and his team reported the case of a patient who was quasi-sleep-deprived for four months, as confirmed by repeated polygraphic recordings showing less than 30 minutes (of stage-1 sleep) per night, a condition they named "agrypnia". The 27-year-old man had Morvan's fibrillary chorea, a rare disease that leads to involuntary movements, and in this particular case, extreme insomnia. The researchers found that treatment with 5-HTP restored almost normal sleep stages. However, some months after this recovery, the patient died during a relapse that was unresponsive to 5-HTP. The cause of death was pulmonary edema. Despite the extreme insomnia, psychological investigation showed no sign of cognitive deficits, except for some hallucinations.
Fatal insomnia is a neurodegenerative disease that eventually results in a complete inability to go past stage 1 of NREM sleep. In addition to insomnia, patients may experience panic attacks, paranoia, phobias, hallucinations, rapid weight loss, and dementia. Death usually occurs between 7 and 36 months from onset.
| Biology and health sciences | Symptoms and signs | Health |
18597983 | https://en.wikipedia.org/wiki/Seaweed | Seaweed | Seaweed, or macroalgae, refers to thousands of species of macroscopic, multicellular, marine algae. The term includes some types of Rhodophyta (red), Phaeophyta (brown) and Chlorophyta (green) macroalgae. Seaweed species such as kelps provide essential nursery habitat for fisheries and other marine species and thus protect food sources; other species, such as planktonic algae, play a vital role in capturing carbon and producing at least 50% of Earth's oxygen.
Natural seaweed ecosystems are sometimes under threat from human activity. For example, mechanical dredging of kelp destroys the resource and dependent fisheries. Other forces also threaten some seaweed ecosystems; for example, a wasting disease in predators of purple urchins has led to an urchin population surge which has destroyed large kelp forest regions off the coast of California.
Humans have a long history of cultivating seaweeds for their uses. In recent years, seaweed farming has become a global agricultural practice, providing food, source material for various chemical uses (such as carrageenan), cattle feeds and fertilizers. Due to their importance in marine ecologies and for absorbing carbon dioxide, recent attention has been on cultivating seaweeds as a potential climate change mitigation strategy for biosequestration of carbon dioxide, alongside other benefits like nutrient pollution reduction, increased habitat for coastal aquatic species, and reducing local ocean acidification. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" as a mitigation tactic.
Taxonomy
"Seaweed" lacks a formal definition, but seaweed generally lives in the ocean and is visible to the naked eye. The term refers to both flowering plants submerged in the ocean, like eelgrass, as well as larger marine algae. Generally, it is one of several groups of multicellular algae; red, green and brown. They lack one common multicellular ancestor, forming a polyphyletic group. In addition, blue-green algae (Cyanobacteria) are occasionally considered in seaweed literature.
The number of seaweed species is still a topic of discussion among scientists, but it is most likely that there are several thousand species of seaweed.
Genera
The following table lists a few example genera of seaweed.
Anatomy
Seaweed's appearance resembles that of non-woody terrestrial plants. Its anatomy includes:
Thallus: algal body
Lamina or blade: flattened structure that is somewhat leaf-like
Sorus: spore cluster
Pneumatocyst or air bladder: a flotation-assisting organ on the blade
Float (in kelp): a flotation-assisting organ between the lamina and stipe
Stipe: stem-like structure, may be absent
Holdfast: basal structure providing attachment to a substrate
Haptera: finger-like extension of the holdfast that anchors to a benthic substrate
The stipe and blade are collectively known as the frond.
Ecology
Two environmental requirements dominate seaweed ecology: seawater (or at least brackish water) and light sufficient to support photosynthesis. Another common requirement is an attachment point, and therefore seaweed most commonly inhabits the littoral zone (nearshore waters) and, within that zone, rocky shores more than sand or shingle. In addition, there are a few genera (e.g., Sargassum and Gracilaria) that do not live attached to the sea floor but float freely.
Seaweed occupies various ecological niches. At the highest levels of the shore, seaweed is wetted only by the tops of sea spray, while some species attach to a substrate several meters deep. In some areas, littoral seaweed colonies can extend miles out to sea. The deepest-living seaweed are some species of red algae. Others have adapted to live in tidal rock pools. In this habitat, seaweed must withstand rapidly changing temperature and salinity and occasional drying.
Macroalgae and macroalgal detritus have also been shown to be an important food source for benthic organisms, because macroalgae shed old fronds. Close to shore, these shed fronds tend to be utilized by benthos in the intertidal zone. Alternatively, pneumatocysts (gas-filled "bubbles") can keep the macroalgal thallus afloat, and the fronds are then transported by wind and currents from the coast into the deep ocean; benthic organisms at depths of several hundred meters have also been shown to utilize these macroalgal remnants.
As macroalgae take up carbon dioxide and release oxygen during photosynthesis, macroalgal fronds can also contribute to carbon sequestration in the ocean when they drift offshore into the deep ocean basins and sink to the sea floor without being remineralized by organisms. The importance of this process for blue carbon storage is currently a topic of discussion among scientists.
Biogeographic expansion
Nowadays a number of vectors (e.g., transport on ship hulls, exchanges among shellfish farmers, global warming, and the opening of trans-oceanic canals) combine to enhance the transfer of exotic seaweeds to new environments. Since the opening of the Suez Canal, the situation has been particularly acute in the Mediterranean Sea, a 'marine biodiversity hotspot' that now registers over 120 newly introduced seaweed species, the largest number in the world.
Production
As of 2019, 35,818,961 tonnes of seaweed were produced worldwide, of which 97.38% was produced in Asian countries.
Farming
Uses
Seaweed has a variety of uses, for which it is farmed or foraged.
Food
Seaweed is consumed across the world, particularly in East Asia, e.g., Japan, China, Korea, Taiwan and Southeast Asia, e.g. Brunei, Singapore, Thailand, Burma, Cambodia, Vietnam, Indonesia, the Philippines, and Malaysia, as well as in South Africa, Belize, Peru, Chile, the Canadian Maritimes, Scandinavia, South West England, Ireland, Wales, Hawaii and California, and Scotland.
Gim (김, Korea), nori (Japan) and zicai (China) are sheets of dried Porphyra used in soups, sushi or onigiri (rice balls). Gamet in the Philippines, from dried Pyropia, is also used as a flavoring ingredient for soups, salads and omelettes. Chondrus crispus ('Irish moss' or carrageenan moss) is used in food additives, along with Kappaphycus and Gigartinoid seaweed. Porphyra is used in Wales to make laverbread (sometimes with oat flour). In northern Belize, seaweed is mixed with milk, nutmeg, cinnamon and vanilla to make "sweet".
Alginate, agar and carrageenan are gelatinous seaweed products collectively known as hydrocolloids or phycocolloids. Hydrocolloids are food additives. The food industry exploits their gelling, water-retention, emulsifying and other physical properties. Agar is used in foods such as confectionery, meat and poultry products, desserts and beverages and moulded foods. Carrageenan is used in salad dressings and sauces, dietetic foods, and as a preservative in meat and fish, dairy items and baked goods.
Seaweeds are used as animal feeds. They have long been grazed by sheep, horses and cattle in Northern Europe, even though their nutritional benefits are questionable: their protein content is low, and their arsenic and iodine contents are high, the former being toxic and the latter nutritious.
They are also valued as feed in fish production. Adding seaweed to livestock feed can substantially reduce methane emissions from cattle, but only their feedlot emissions; as of 2021, feedlot emissions account for 11% of overall emissions from cattle.
Medicine and herbs
Alginates are used in wound dressings (see alginate dressing) and dental moulds. In microbiology, agar is used as a culture medium. Carrageenans, alginates and agaroses, together with other macroalgal polysaccharides, have biomedical applications. Delisea pulchra may interfere with bacterial colonization. Sulfated saccharides from red and green algae inhibit some enveloped DNA and RNA viruses.
Seaweed extract is used in some diet pills. Other seaweed pills exploit the same effect as gastric banding, expanding in the stomach to make it feel fuller.
Climate change mitigation
Other uses
Other seaweed may be used as fertilizer, compost for landscaping, or to combat beach erosion through burial in beach dunes.
Seaweed is under consideration as a potential source of bioethanol.
Alginates are used in industrial products such as paper coatings, adhesives, dyes, gels, explosives and in processes such as paper sizing, textile printing, hydro-mulching and drilling. Seaweed is an ingredient in toothpaste, cosmetics and paints. Seaweed is used for the production of bio yarn (a textile).
Several of these resources can be obtained from seaweed through biorefining.
Seaweed collecting is the process of collecting, drying and pressing seaweed. It was a popular pastime in the Victorian era and remains a hobby today. In some emerging countries, seaweed is harvested daily to support communities.
Seaweed is sometimes used to build roofs on houses on Læsø in Denmark.
Health risks
Rotting seaweed is a potent source of hydrogen sulfide, a highly toxic gas, and has been implicated in some incidents of apparent hydrogen sulfide poisoning. It can cause vomiting and diarrhea.
The so-called "stinging seaweed" Microcoleus lyngbyaceus is a filamentous cyanobacterium that contains toxins including lyngbyatoxin-a and debromoaplysiatoxin. Direct skin contact can cause seaweed dermatitis, characterized by painful, burning lesions that last for days.
Threats
The bacterial disease ice-ice infects Kappaphycus (a red seaweed), turning its branches white. The disease has caused heavy crop losses in the Philippines, Tanzania and Mozambique.
Sea urchin barrens have replaced kelp forests in multiple areas. They are "almost immune to starvation". Lifespans can exceed 50 years. When stressed by hunger, their jaws and teeth enlarge, and they form "fronts" and hunt for food collectively.
| Biology and health sciences | Other organisms | null |
1250090 | https://en.wikipedia.org/wiki/Blood%20culture | Blood culture | A blood culture is a medical laboratory test used to detect bacteria or fungi in a person's blood. Under normal conditions, the blood does not contain microorganisms: their presence can indicate a bloodstream infection such as bacteremia or fungemia, which in severe cases may result in sepsis. By culturing the blood, microbes can be identified and tested for resistance to antimicrobial drugs, which allows clinicians to provide an effective treatment.
To perform the test, blood is drawn into bottles containing a liquid formula that enhances microbial growth, called a culture medium. Usually, two containers are collected during one draw, one designed for aerobic organisms, which require oxygen, and one for anaerobic organisms, which do not. These two containers are referred to as a set of blood cultures. Two sets of blood cultures are sometimes collected from two different blood draw sites. If an organism only appears in one of the two sets, it is more likely to represent contamination with skin flora than a true bloodstream infection. False negative results can occur if the sample is collected after the person has received antimicrobial drugs or if the bottles are not filled with the recommended amount of blood. Some organisms do not grow well in blood cultures and require special techniques for detection.
The containers are placed in an incubator for several days to allow the organisms to multiply. If microbial growth is detected, a Gram stain is conducted from the culture bottle to confirm that organisms are present and provide preliminary information about their identity. The blood is then subcultured, meaning it is streaked onto an agar plate to isolate microbial colonies for full identification and antimicrobial susceptibility testing. Because it is essential that bloodstream infections are diagnosed and treated quickly, rapid testing methods have been developed using technologies like polymerase chain reaction and MALDI-TOF MS.
Procedures for culturing the blood were published as early as the mid-19th century, but these techniques were labour-intensive and bore little resemblance to contemporary methods. Detection of microbial growth involved visual examination of the culture bottles until automated blood culture systems, which monitor gases produced by microbial metabolism, were introduced in the 1970s. In developed countries, manual blood culture methods have largely been made obsolete by automated systems.
Medical uses
Blood is normally sterile. The presence of bacteria in the blood is termed bacteremia, and the presence of fungi is called fungemia. Minor damage to the skin or mucous membranes, which can occur in situations like toothbrushing or defecation, can introduce bacteria into the bloodstream, but this bacteremia is normally transient and is rarely detected in cultures because the immune system and reticuloendothelial system quickly sequester and destroy the organisms. Bacteria can enter the blood from infections such as cellulitis, urinary tract infections, and pneumonia; and infections within the vascular system, such as bacterial endocarditis or infections associated with intravenous lines, may result in a constant bacteremia. Fungemia occurs most commonly in people with poorly functioning immune systems. If bacteria or fungi are not cleared from the bloodstream, they can spread to other organs and tissues, or evoke an immune response that leads to a systemic inflammatory condition called sepsis, which can be life-threatening.
When sepsis is suspected, it is necessary to draw blood cultures to identify the causative agent and provide targeted antimicrobial therapy. People who are hospitalized and have a fever, a low body temperature, a high white blood cell count or a low count of granulocytes (a category of white blood cells) commonly have cultures drawn to detect a possible bloodstream infection. Blood cultures are used to detect bloodstream infections in febrile neutropenia, a common complication of chemotherapy in which fever occurs alongside a severely low count of neutrophils (white blood cells that defend against bacterial and fungal pathogens). Bacteremia is common in some types of infections, such as meningitis, septic arthritis and epidural abscesses, so blood cultures are indicated in these conditions. In infections less strongly associated with bacteremia, blood culture may still be indicated if the individual is at high risk of acquiring an intravascular infection or if cultures cannot be promptly obtained from the main site of infection (for example, a urine culture in pyelonephritis or a sputum culture in severe community-acquired pneumonia). Blood culture can identify an underlying microbial cause in cases of endocarditis and fever of unknown origin.
The pathogens most frequently identified in blood cultures include Staphylococcus aureus, Escherichia coli and other members of the family Enterobacteriaceae, Enterococcus species, Pseudomonas aeruginosa and Candida albicans. Coagulase-negative staphylococci (CNS) are also commonly encountered, although it is often unclear whether these organisms, which constitute part of the normal skin flora, are true pathogens or merely contaminants. In blood cultures taken from newborn babies and children, CNS can indicate significant infections. The epidemiology of bloodstream infections varies with time and place; for instance, Gram-positive organisms overtook Gram-negative organisms as the predominant cause of bacteremia in the United States during the 1980s and 1990s, and rates of fungemia have greatly increased in association with a growing population of people receiving immunosuppressive treatments such as chemotherapy. Gram-negative sepsis is more common in Central and South America, Eastern Europe, and Asia than in North America and Western Europe; and in Africa, Salmonella enterica is a leading cause of bacteremia.
Procedure
Collection
Blood cultures are typically drawn through venipuncture. Collecting the sample from an intravenous line is not recommended, as this is associated with higher contamination rates, although cultures may be collected from both venipuncture and an intravenous line to diagnose catheter-associated infections. Prior to the blood draw, the top of each collection bottle is disinfected using an alcohol swab to prevent contamination. The skin around the puncture site is then cleaned and left to dry; some protocols recommend disinfection with an alcohol-based antiseptic followed by either chlorhexidine or an iodine-based preparation, while others consider using only an alcohol-containing antiseptic to be sufficient. If blood must be drawn for other tests at the same time as a blood culture, the culture bottles are drawn first to minimize the risk of contamination. Because antimicrobial therapy can cause false negative results by inhibiting the growth of microbes, it is recommended that blood cultures are drawn before antimicrobial drugs are given, although this may be impractical in people who are critically ill.
A typical blood culture collection involves drawing blood into two bottles, which together form one "culture" or "set". One bottle is designed to enhance the growth of aerobic organisms, and the other is designed to grow anaerobic organisms. In children, infection with anaerobic bacteria is uncommon, so a single aerobic bottle may be collected to minimize the amount of blood required. It is recommended that at least two sets are collected from two separate venipuncture locations. This helps to distinguish infection from contamination, as contaminants are less likely to appear in more than one set than true pathogens. Additionally, the collection of larger volumes of blood increases the likelihood that microorganisms will be detected if present.
Blood culture bottles contain a growth medium, which encourages microorganisms to multiply, and an anticoagulant that prevents blood from clotting. Sodium polyanethol sulfonate (SPS) is the most commonly used anticoagulant because it does not interfere with the growth of most organisms. The exact composition of the growth medium varies, but aerobic bottles use a broth that is enriched with nutrients, such as brain-heart infusion or trypticase soy broth, and anaerobic bottles typically contain a reducing agent such as thioglycollate. The empty space in an anaerobic bottle is filled with a gas mixture that does not contain oxygen.
Many commercially manufactured bottles contain a resin that absorbs antibiotics to reduce their action on the microorganisms in the sample. Bottles intended for paediatric use are designed to accommodate lower blood volumes and have additives that enhance the growth of pathogens more commonly found in children. Other specialized bottles may be used to detect fungi and mycobacteria. In low and middle income countries, pre-formulated culture bottles can be prohibitively expensive, and it may be necessary to prepare the bottles manually. It can be difficult to access the proper supplies and facilities, and in some regions, it may not be possible to perform blood cultures at all.
It is important that the bottles are neither underfilled nor overfilled: underfilling can lead to false negative results as fewer organisms are present in the sample, while overfilling can inhibit microbial growth because the ratio of growth medium to blood is comparatively lower. A 1:10 to 1:5 ratio of blood to culture medium is suggested to optimize microbial growth. For routine blood cultures in adults, the Clinical and Laboratory Standards Institute (CLSI) recommends the collection of two sets of bottles from two different draws, with 20–30 mL of blood drawn in each set. In children, the amount of blood to be drawn is often based on the child's age or weight. If endocarditis is suspected, a total of six bottles may be collected.
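To make the dilution guidance concrete, the sketch below computes the acceptable blood volume for a given volume of culture medium under the recommended 1:10 to 1:5 blood-to-medium ratio; the 40 mL medium volume is an assumed example, as actual bottle volumes vary by manufacturer:

# Sketch: acceptable blood draw volume for a given broth volume, using the
# recommended 1:10 to 1:5 blood-to-medium ratio. The 40 mL medium volume is
# an assumed example; actual bottle specifications vary by manufacturer.

def acceptable_blood_volume(medium_ml, low=1/10, high=1/5):
    """Return (min_ml, max_ml) of blood satisfying the low..high ratio."""
    return medium_ml * low, medium_ml * high

lo, hi = acceptable_blood_volume(40.0)
print(f"For 40 mL of medium, draw between {lo:.0f} and {hi:.0f} mL of blood")
# -> between 4 and 8 mL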
Culturing
After the blood is collected, the bottles are incubated at body temperature to encourage the growth of microorganisms. Bottles are usually incubated for up to five days in automated systems, although most common bloodstream pathogens are detected within 48 hours. The incubation time may be extended further if manual blood culture methods are used or if slower-growing organisms, such as certain bacteria that cause endocarditis, are suspected. In manual systems, the bottles are visually examined for indicators of microbial growth, which might include cloudiness, the production of gas, the presence of visible microbial colonies, or a change in colour from the digestion of blood, which is called hemolysis. Some manual blood culture systems indicate growth using a compartment that fills with fluid when gases are produced, or a miniature agar plate which is periodically inoculated by tipping the bottle. To ensure that positive blood cultures are not missed, a sample from the bottle is often inoculated onto an agar plate (subcultured) at the end of the incubation period regardless of whether or not indicators of growth are observed.
In developed countries, manual culture methods have largely been replaced by automated systems that provide continuous computerized monitoring of the culture bottles. These systems, such as the BACTEC, BacT/ALERT and VersaTrek, consist of an incubator in which the culture bottles are continuously mixed. Growth is detected by sensors that measure the levels of gases inside the bottle—most commonly carbon dioxide—which serve as an indicator of microbial metabolism. An alarm or a visual indicator alerts the microbiologist to the presence of a positive blood culture bottle. If the bottle remains negative at the end of the incubation period, it is generally discarded without being subcultured.
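The detection principle can be sketched simply: flag a bottle as positive when successive gas readings rise persistently above an early baseline. Commercial instruments use proprietary curve-analysis algorithms, so the threshold rule and numbers below are illustrative assumptions only:

# Simplified growth-detection sketch: flag a culture bottle positive when the
# CO2 signal rises persistently above its early baseline. Commercial systems
# use proprietary curve-shape algorithms; this only illustrates the idea.

def is_positive(co2_readings, baseline_n=5, rise_factor=1.5, sustain=3):
    baseline = sum(co2_readings[:baseline_n]) / baseline_n
    run = 0
    for reading in co2_readings[baseline_n:]:
        run = run + 1 if reading > rise_factor * baseline else 0
        if run >= sustain:          # sustained rise -> microbial metabolism
            return True
    return False

negative = [10, 11, 10, 10, 11, 10, 11, 10, 11, 10]
positive = [10, 11, 10, 10, 11, 12, 16, 22, 30, 41]
print(is_positive(negative), is_positive(positive))  # -> False True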
A technique called the lysis-centrifugation method can be used for improved isolation of slow-growing or fastidious organisms, such as fungi, mycobacteria, and Legionella. Rather than incubating the blood in a bottle filled with growth medium, this method involves collecting blood into a tube containing an agent that destroys (lyses) red and white blood cells, then spinning the sample in a centrifuge. This process concentrates the solid contents of the sample, including microorganisms if present, into a pellet, which is used to inoculate the subculture media. While lysis-centrifugation offers greater sensitivity than conventional blood culture methods, it is prone to contamination because it requires extensive manipulation of the sample.
Identification
If growth is detected, a microbiologist will perform a Gram stain on a sample of blood from the bottle for a rapid preliminary identification of the organism. The Gram stain classifies bacteria as Gram-positive or Gram-negative and provides information about their shape—whether they are rod-shaped (referred to as bacilli), spherical (referred to as cocci), or spiral-shaped (spirochetes)—as well as their arrangement. Gram-positive cocci in clusters, for example, are typical of Staphylococcus species. Yeast and other fungi may also be identified from the Gram stain. A Gram stain identifying microbial growth from a blood culture is considered a critical result and must immediately be reported to the clinician. The Gram stain provides information about the possible identity of the organism, which assists the clinician in the selection of a more appropriate antimicrobial treatment before the full culture and sensitivity results are complete.
In traditional methods, the blood is then subcultured onto agar plates to isolate the organism for further testing. The Gram stain results inform microbiologists about what types of agar plates should be used and what tests might be appropriate to identify the organism. In some cases, no organisms are seen on the Gram stain despite the culture bottle showing indicators of growth or being reported as positive by automated instruments. This may represent a false positive result, but it is possible that organisms are present but cannot easily be visualized microscopically. Positive bottles with negative Gram stains are subcultured before being returned to the incubator, often using special culture media that promotes the growth of slow-growing organisms.
It typically takes 24 to 48 hours for sufficient growth to occur on the subculture plates for definitive identification to be possible. At this point, the microbiologist will assess the appearance of the bacterial or fungal colonies and carry out tests that provide information about the metabolic and biochemical features of the organism, which permit identification to the genus or species level. For example, the catalase test can distinguish streptococci and staphylococci (two genera of Gram-positive cocci) from each other, and the coagulase test can differentiate Staphylococcus aureus, a common culprit of bloodstream infections, from the less pathogenic coagulase-negative staphylococci.
Microorganisms may also be identified using automated systems, such as instruments that perform panels of biochemical tests, or matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), in which microbial proteins are ionized and characterized on the basis of their mass-to-charge ratios; each microbial species exhibits a characteristic pattern of proteins when analyzed through mass spectrometry.
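Conceptually, this identification step is a similarity search against a reference library. The sketch below represents each spectrum as a binned intensity vector and picks the reference organism with the highest cosine similarity; the profiles are toy values, and commercial systems use more elaborate peak-matching scores:

# Sketch of spectral matching as used conceptually in MALDI-TOF
# identification: represent each spectrum as a binned intensity vector and
# choose the closest reference by cosine similarity. Values are toy data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

reference = {                        # toy binned m/z intensity profiles
    "Staphylococcus aureus": [5, 0, 9, 1, 7, 0],
    "Escherichia coli":      [0, 8, 1, 6, 0, 9],
}

def identify(spectrum):
    return max(reference, key=lambda org: cosine(spectrum, reference[org]))

unknown = [4, 1, 8, 0, 6, 1]         # toy spectrum from a positive culture
print(identify(unknown))             # -> Staphylococcus aureus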
Because bloodstream infections can be life-threatening, timely diagnosis and treatment is critical, and to this end several rapid identification methods have been developed. MALDI-TOF can be used to identify organisms directly from positive blood culture bottles after separation and concentration procedures, or from preliminary growth on the agar plate within a few hours of subculturing. Genetic methods such as polymerase chain reaction (PCR) and microarrays can identify microorganisms by detection of DNA sequences specific to certain species in blood culture samples. Several systems designed for the identification of common blood culture pathogens are commercially available. Some biochemical and immunologic tests can be performed directly on positive blood cultures, such as the tube coagulase test for identification of S. aureus or latex agglutination tests for Streptococcus pneumoniae, and unlike PCR and MALDI-TOF, these methods may be practical for laboratories in low and middle income countries. It is also possible to directly inoculate microbial identification panels with blood from a positive culture bottle, although this is not as reliable as testing subcultured bacteria because additives from the growth media can interfere with the results.
Even faster diagnosis could be achieved through bypassing culture entirely and detecting pathogens directly from blood samples. A few direct testing systems are commercially available as of 2018, but the technology is still in its infancy. Most panels detect only a limited number of pathogens, and the sensitivity can be poor compared to conventional blood culture methods. Culturing remains necessary in order to carry out full antimicrobial sensitivity testing.
Antibiotic susceptibility testing
Antimicrobial treatment of bloodstream infections is initially empiric, meaning it is based on the clinician's suspicion about the causative agent of the disease and local patterns of antimicrobial resistance. Carrying out antibiotic susceptibility testing (AST) on pathogens isolated from a blood culture allows clinicians to provide a more targeted treatment and to discontinue broad-spectrum antibiotics, which can have undesirable side effects. In traditional AST methods, such as the disk diffusion test, pure colonies of the organism are selected from the subculture plate and used to inoculate a secondary medium. These methods require overnight incubation before results can be obtained. There are automated systems which use pre-formulated antibiotic panels, measure microbial growth automatically, and determine the sensitivity results using algorithms; some of these can provide results in as little as five hours, but others require overnight incubation as well.
Rapid administration of effective antimicrobial drugs is crucial in the treatment of sepsis, so several methods have been developed to provide faster antibiotic sensitivity results. Conventional AST methods can be carried out on young growth from the subculture plate, pellets of microorganisms obtained from concentration and purification of the positive blood culture, or directly from the culture bottle. Because direct testing methods do not isolate the organisms, they do not provide accurate results if more than one microorganism is present, although this is an infrequent occurrence in blood cultures. Another source of error is the difficulty in standardizing the amount of bacteria in the sample (the inoculum), which has a profound effect on the test results.
Genetic testing can be used for rapid detection of certain antimicrobial resistance markers. Methods such as PCR and microarrays, which can be performed directly on positive blood culture samples, detect DNA sequences associated with genes that confer resistance, such as the mecA gene found in methicillin-resistant Staphylococcus aureus or the vanA and vanB genes of vancomycin-resistant enterococci. MALDI-TOF has been explored as a rapid antimicrobial sensitivity testing method; approaches include measuring microbial growth in the presence of antibiotics, detecting the breakdown of antibiotics by microbial enzymes, and identifying protein spectra associated with bacterial strains that exhibit antibiotic resistance. Some of these methods can be performed on pellets from positive blood culture bottles. However, the lack of established methodologies for AST by MALDI-TOF limits its use in clinical practice, and direct AST by MALDI-TOF, unlike genetic testing methods, had not been approved by the Food and Drug Administration as of 2018.
Limitations
Blood cultures are subject to both false positive and false negative errors. In automated culture systems, identification of positive bottles is based on the detection of gases produced by cellular metabolism, so samples with high numbers of white blood cells may be reported as positive when no bacteria are present. Inspection of the growth curve produced by the instrument can help to distinguish between true and false positive cultures, but Gram staining and subculturing are still necessary for any sample that is flagged as positive.
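As a rough illustration of how an instrument might flag a bottle, the sketch below watches a CO2-related signal over time and flags the bottle when the reading-to-reading rise exceeds a baseline-derived threshold. The threshold logic and the numbers are invented for illustration and are not taken from any actual instrument:

```python
def flag_positive(readings, baseline_points=5, factor=3.0):
    """Flag a culture bottle as positive when the reading-to-reading rise
    exceeds `factor` times the noise seen in the initial baseline readings.
    Returns the index of the first suspicious reading, or None."""
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    noise = max(abs(d) for d in deltas[:baseline_points]) or 1e-9
    for i, delta in enumerate(deltas[baseline_points:], start=baseline_points):
        if delta > factor * noise:
            return i + 1  # index of the reading that triggered the flag
    return None

# Flat baseline followed by an accelerating CO2 signal (arbitrary units).
signal = [10.0, 10.1, 10.0, 10.2, 10.1, 10.1, 10.4, 11.0, 12.5, 15.0]
print(flag_positive(signal))  # flags during the upswing
```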
Blood cultures can become contaminated with microorganisms from the skin or the environment, which multiply inside the culture bottle, giving the false impression that those organisms are present in the blood. Contamination of blood cultures can lead to unnecessary antibiotic treatment and longer hospital stays. The frequency of contamination can be reduced by following established protocols for blood culture collection, but it cannot be eliminated; for instance, bacteria can survive in deeper layers of the skin even after meticulous disinfection of the blood draw site. The CLSI defines an acceptable contamination rate as no greater than 3% of all blood cultures. The frequency of contamination varies widely between institutions and between different departments in the same hospital; studies have found rates ranging from 0.8 to 12.5 percent.
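Tracking a laboratory's contamination rate against the CLSI benchmark is simple arithmetic; a minimal sketch (the counts are invented):

```python
def contamination_rate(contaminated: int, total: int) -> float:
    """Contaminated cultures as a percentage of all cultures drawn."""
    return 100.0 * contaminated / total

rate = contamination_rate(contaminated=42, total=1800)
print(f"{rate:.1f}% ({'within' if rate <= 3.0 else 'above'} the CLSI 3% benchmark)")
```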
When faced with a positive blood culture result, clinicians must decide whether the finding represents contamination or genuine infection. Some organisms, such as S. aureus or Streptococcus pneumoniae, are usually considered to be pathogenic when detected in a blood culture, while others are more likely to represent contamination with skin flora; but even common skin organisms such as coagulase-negative staphylococci can cause bloodstream infections under certain conditions. When such organisms are present, interpretation of the culture result involves taking into account the person's clinical condition and whether or not multiple cultures are positive for the same organism.
False negatives may be caused by drawing blood cultures after the person has received antibiotics or collecting an insufficient amount of blood. The volume of blood drawn is considered the most important variable in ensuring that pathogens are detected: the more blood that is collected, the more pathogens are recovered. However, if the amount of blood collected far exceeds the recommended volume, bacterial growth may be inhibited by natural inhibitors present in the blood and an inadequate amount of growth medium in the bottle. Over-filling of blood culture bottles may also contribute to iatrogenic anemia.
Not all pathogens are easily detected by conventional blood culture methods. Particularly fastidious organisms, such as Brucella and Mycobacterium species, may require prolonged incubation times or special culture media. Some organisms are exceedingly difficult to culture or do not grow in culture at all, so serology testing or molecular methods such as PCR are preferred if infection with these organisms is suspected.
History
Early blood culture methods were labour-intensive. One of the first known procedures, published in 1869, recommended that leeches be used to collect blood from the patient. A microbiology textbook from 1911 noted that decontamination of the draw site and equipment could take over an hour, and that due to a lack of effective methods for preserving blood, the cultures would sometimes have to be prepared at the patient's bedside. In addition to subculturing the broth, some protocols specified that the blood be mixed with melted agar and the mixture poured into a petri dish. In 1915, a blood culture collection system consisting of glass vacuum tubes containing glucose broth and an anticoagulant was described. Robert James Valentine Pulvertaft published a seminal work on blood cultures in 1930, specifying—among other insights—an optimal blood-to-broth ratio of 1:5, which is still accepted today. The use of SPS as an anticoagulant and preservative was introduced in the 1930s and 40s and resolved some of the logistical issues with earlier methods. From the 1940s through the 1980s, a great deal of research was carried out on broth formulations and additives, with the goal of creating a growth medium that could accommodate all common bloodstream pathogens.
In 1947, M.R. Castañeda invented a "biphasic" culture bottle for the identification of Brucella species, which contained both broth and an agar slant, allowing the agar to be easily subcultured from the broth; this was a precursor of some contemporary systems for manual blood cultures. E.G. Scott in 1951 published a protocol described as "the advent of the modern blood culture set". Scott's method involved inoculating blood into two rubber-sealed glass bottles: one for aerobes and one for anaerobes. The aerobic bottle contained trypticase soy broth and an agar slant, and the anaerobic bottle contained thioglycollate broth. The lysis-centrifugation method was introduced in 1917 by Mildred Clough, but it was rarely used in clinical practice until commercial systems were developed in the mid-1970s.
Automated blood culture systems first became available in the 1970s. The earliest of these—the BACTEC systems, produced by Johnston Laboratories (now Becton Dickinson)—used culture broths containing nutrients labelled with radioactive isotopes. Microbes that fed on these substrates would produce radioactive carbon dioxide, and growth could be detected by monitoring its concentration. Before this technique was applied to blood cultures, it had been proposed by NASA as a method for detecting life on Mars. Throughout the 1970s and 80s several manufacturers attempted to detect microbial growth by measuring changes in the electrical conductivity of the culture medium, but none of these methods were commercially successful.
A major issue with the early BACTEC systems was that they produced radioactive waste, which required special disposal procedures, so in 1984 a new generation of BACTEC instruments was released that used spectrophotometry to detect CO2. The BacT/ALERT system, which indirectly detects production of CO2 by measuring the decrease in the medium's pH, was approved for use in the US in 1991. Unlike the BACTEC systems available at the time, the BacT/ALERT did not require a needle to be introduced into the bottle for sampling; this reduced the frequency of contamination and made it the first system to provide truly continuous monitoring of blood cultures. This non-invasive measurement method was adopted in 1992 by the BACTEC 9000 series, which used fluorescent indicators to detect pH changes. The Difco ESP, a direct predecessor of the contemporary VersaTREK system which detects gas production by measuring pressure changes, was also first approved in 1992. By 1996, an international study found that 55% of 466 laboratories surveyed were using the BACTEC or BacT/ALERT systems, with other automated systems accounting for 10% of the total.
| Biology and health sciences | Diagnostics | Health |
1250206 | https://en.wikipedia.org/wiki/Intermetallic | Intermetallic | An intermetallic (also called intermetallic compound, intermetallic alloy, ordered intermetallic alloy, long-range-ordered alloy) is a type of metallic alloy that forms an ordered solid-state compound between two or more metallic elements. Intermetallics are generally hard and brittle, with good high-temperature mechanical properties. They can be classified as stoichiometric or nonstoichiometric.
The term "intermetallic compounds" applied to solid phases has long been in use. However, Hume-Rothery argued that it misleads, suggesting a fixed stoichiometry and a clear decomposition into species.
Definitions
Research definition
In 1967 Schulze defined intermetallic compounds as solid phases containing two or more metallic elements, with optionally one or more non-metallic elements, whose crystal structure differs from that of the other constituents. This definition includes:
Electron (or Hume-Rothery) compounds
Size-packing phases, e.g. Laves phases, Frank–Kasper phases and Nowotny phases
Zintl phases
The definition of metal includes:
Post-transition metals, i.e. aluminium, gallium, indium, thallium, tin, lead, and bismuth.
Metalloids, e.g. silicon, germanium, arsenic, antimony and tellurium.
Homogeneous and heterogeneous solid solutions of metals, and interstitial compounds such as carbides and nitrides are excluded under this definition. However, interstitial intermetallic compounds are included, as are alloys of intermetallic compounds with a metal.
Common use
In common use, the research definition, including post-transition metals and metalloids, is extended to include compounds such as cementite, Fe3C. These compounds, sometimes termed interstitial compounds, can be stoichiometric, and share properties with the above intermetallic compounds.
Complexes
The term intermetallic is used to describe compounds involving two or more metals such as the cyclopentadienyl complex Cp6Ni2Zn4.
B2
A B2 intermetallic compound has equal numbers of atoms of two metals such as aluminum and iron, arranged as two interpenetrating simple cubic lattices of the component metals.
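As a sketch of the geometry, the fractional coordinates of a B2 (CsCl-type) cell can be generated directly: one metal sits at the cube corners and the other at the body centre, so each element on its own forms a simple cubic sublattice. Fe–Al is used here as the example pair, and the lattice parameter of roughly 2.9 Å is an assumption chosen to be near typical literature values:

```python
# B2 (CsCl-type) structure: element A at the cube corner, element B at the
# body centre; each sublattice taken alone is simple cubic.
A_SITE = (0.0, 0.0, 0.0)   # fractional coordinates
B_SITE = (0.5, 0.5, 0.5)

def b2_cartesian(a_element, b_element, lattice_parameter_angstrom):
    """Return (element, x, y, z) positions in angstroms for one unit cell."""
    positions = []
    for element, site in ((a_element, A_SITE), (b_element, B_SITE)):
        positions.append((element, *(c * lattice_parameter_angstrom for c in site)))
    return positions

# FeAl with an assumed lattice parameter of ~2.9 angstroms.
for atom in b2_cartesian("Fe", "Al", 2.9):
    print(atom)
```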
Properties
Intermetallic compounds are generally brittle at room temperature and have high melting points. Cleavage or intergranular fracture modes are typical of intermetallics because they have too few independent slip systems for general plastic deformation. However, some intermetallics, such as Nb–15Al–40Ti, have ductile fracture modes, and others can exhibit improved ductility when alloyed with elements, such as boron, that increase grain boundary cohesion. Intermetallics may offer a compromise between ceramic and metallic properties when hardness and/or resistance to high temperatures is important enough to sacrifice some toughness and ease of processing. They can also display desirable magnetic and chemical properties, due to their strong internal order and mixed (metallic and covalent/ionic) bonding, respectively. Intermetallics have given rise to various novel materials developments.
Applications
Examples include alnico and the hydrogen storage materials in nickel metal hydride batteries. Ni3Al, which is the hardening phase in the familiar nickel-base superalloys, and the various titanium aluminides have attracted interest for turbine blade applications; the titanium aluminides are also used in small quantities for grain refinement of titanium alloys. Silicides, intermetallics involving silicon, serve as barrier and contact layers in microelectronics. Others include:
Magnetic materials e.g. alnico, sendust, Permendur, FeCo, Terfenol-D
Superconductors e.g. A15 phases, niobium-tin
Hydrogen storage e.g. AB5 compounds (nickel metal hydride batteries)
Shape memory alloys e.g. Cu-Al-Ni (alloys of Cu3Al and nickel), Nitinol (NiTi)
Coating materials e.g. NiAl
High-temperature structural materials e.g. nickel aluminide, Ni3Al
Dental amalgams, which are alloys of intermetallics Ag3Sn and Cu3Sn
Gate contact/barrier layer for microelectronics e.g. TiSi2
Laves phases (AB2), e.g., MgCu2, MgZn2 and MgNi2.
The unintended formation of intermetallics can cause problems. For example, intermetallics of gold and aluminium can be a significant cause of wire bond failures in semiconductor devices and other microelectronics devices. The management of intermetallics is a major issue in the reliability of solder joints between electronic components.
Intermetallic particles
Intermetallic particles often form during solidification of metallic alloys, and can be used as a dispersion strengthening mechanism.
History
Examples of intermetallics through history include:
Roman yellow brass, CuZn
Chinese high tin bronze, Cu31Sn8
Type metal, SbSn
Chinese white copper, CuNi
German type metal is described as breaking like glass, without bending, softer than copper, but more fusible than lead. The chemical formula does not agree with the one above; however, the properties match those of an intermetallic compound or an alloy of one.
| Physical sciences | Alloys and ceramic compounds | Chemistry |
1250419 | https://en.wikipedia.org/wiki/Cummingtonite | Cummingtonite | Cummingtonite is a metamorphic amphibole with the chemical composition (Mg,Fe)7Si8O22(OH)2, magnesium iron silicate hydroxide.
Monoclinic cummingtonite is compositionally similar to, and polymorphic with, orthorhombic anthophyllite, which is a much more common form of magnesium-rich amphibole; the orthorhombic form is metastable.
Cummingtonite shares few compositional similarities with alkali amphiboles such as arfvedsonite and glaucophane-riebeckite. There is little solubility between these minerals because of their different crystal habits and the inability of alkali elements to substitute for ferro-magnesian elements within the amphibole structure.
Name and discovery
Cummingtonite was named after the town of Cummington, Massachusetts, where it was discovered in 1824. It is also found in Sweden, South Africa, Scotland, and New Zealand.
Chemistry
Cummingtonite is a member of the cummingtonite-grunerite solid solution series, which ranges from Mg7Si8O22(OH)2 for magnesiocummingtonite to the iron-rich grunerite endmember Fe7Si8O22(OH)2. Cummingtonite is used to describe minerals of this formula with between 30 and 70 per cent of the iron endmember. Thus, cummingtonite is the series intermediate.
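The naming convention for the series can be expressed as a small sketch, classifying a specimen by its percentage of the iron endmember using the cut-offs just described (the function and its name are illustrative, not a standard mineralogical tool):

```python
def classify_series_member(fe_endmember_percent: float) -> str:
    """Name a cummingtonite-grunerite series member from the percentage of
    the Fe7Si8O22(OH)2 (grunerite) component."""
    if not 0.0 <= fe_endmember_percent <= 100.0:
        raise ValueError("percentage must be between 0 and 100")
    if fe_endmember_percent < 30.0:
        return "magnesiocummingtonite"
    if fe_endmember_percent <= 70.0:
        return "cummingtonite"
    return "grunerite"

print(classify_series_member(45.0))  # cummingtonite (the series intermediate)
print(classify_series_member(85.0))  # grunerite
```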
Manganese also substitutes for (Mg,Fe) within cummingtonite amphibole, replacing B-site atoms. These minerals are found in high-grade metamorphic banded iron formations and form a compositional series between tirodite and dannemorite.
Calcium, sodium and potassium concentrations in cummingtonite are low. Cummingtonite tends toward more calcium substitution than related anthophyllite. Similarly, cummingtonite has lower ferric iron and aluminium than anthophyllite.
Amosite is a rare asbestiform variety of grunerite that was mined as asbestos only in the eastern part of the Transvaal Province of South Africa. The origin of the name is Amosa, the acronym for the mining company "Asbestos Mines of South Africa".
Occurrence
Cummingtonite is commonly found in metamorphosed magnesium-rich rocks and occurs in amphibolites. Usually it coexists with hornblende or actinolite, magnesium clinochlore chlorite, talc, serpentine-antigorite minerals or metamorphic pyroxene. Magnesium-rich cummingtonite can also coexist with anthophyllite.
Cummingtonite has also been found in some felsic volcanic rocks such as dacites. Manganese-rich species can be found in metamorphosed Mn-rich rock units. The grunerite end member is characteristic of the metamorphosed iron formations of the Lake Superior region and the Labrador Trough. With prograde metamorphism, cummingtonite and grunerite give way to members of the olivine and pyroxene series.
| Physical sciences | Silicate minerals | Earth science |
1250925 | https://en.wikipedia.org/wiki/Gerbillinae | Gerbillinae | Gerbillinae is one of the subfamilies of the rodent family Muridae and includes the gerbils, jirds, and sand rats. Once known collectively as desert rats, the subfamily comprises about 110 species of African, Indian, and Asian rodents, all of which are adapted to arid habitats. Most are primarily active during the day, making them diurnal (though some species, including the common household pet, exhibit crepuscular behavior), and almost all are omnivorous.
The gerbil got its name as a diminutive form of "jerboa", an unrelated group of rodents occupying a similar ecological niche. Gerbils are typically between long, including the tail, which makes up about half of their total length. One species, the great gerbil (Rhombomys opimus), originally native to Turkmenistan, can grow to more than . The average adult gerbil weighs about .
One species, the Mongolian gerbil (Meriones unguiculatus), also known as the clawed jird, is a gentle and hardy animal that has become a popular small house pet. It is also used in some scientific research.
Classification
SUBFAMILY GERBILLINAE
Genus Ammodillus
Ammodile, Ammodillus imbellis
Tribe Desmodilliscini
Genus Desmodilliscus
Pouched gerbil, Desmodilliscus braueri
Genus Pachyuromys
Fat-tailed gerbil, Pachyuromys duprasi
Tribe Gerbillini
Subtribe Gerbillina
Genus Dipodillus
Botta's gerbil, Dipodillus bottai
North African gerbil, Dipodillus campestris
Wagner's gerbil, Dipodillus dasyurus
Harwood's gerbil, Dipodillus harwoodi
James's gerbil, Dipodillus jamesi
Lowe's gerbil, Dipodillus lowei
Mackilligin's gerbil, Dipodillus mackilligini
Greater short-tailed gerbil, Dipodillus maghrebi
Rupicolous gerbil, Dipodillus rupicola
Lesser short-tailed gerbil, Dipodillus simoni
Somalian gerbil, Dipodillus somalicus
Khartoum gerbil, Dipodillus stigmonyx
Kerkennah Islands gerbil, Dipodillus zakariai
Genus Gerbillus
Subgenus Hendecapleura
Pleasant gerbil, Gerbillus amoenus
Brockman's gerbil, Gerbillus brockmani
Black-tufted gerbil, Gerbillus famulus
Algerian gerbil, Gerbillus garamantis
Grobben's gerbil, Gerbillus grobbeni
Pygmy gerbil, Gerbillus henleyi
Mauritanian gerbil, Gerbillus mauritaniae (sometimes considered a separate genus Monodia)
Harrison's gerbil, Gerbillus mesopotamiae
Darfur gerbil, Gerbillus muriculus
Balochistan gerbil, Gerbillus nanus
Large Aden gerbil, Gerbillus poecilops
Principal gerbil, Gerbillus principulus
Least gerbil, Gerbillus pusillus
Sand gerbil, Gerbillus syrticus
Waters's gerbil, Gerbillus watersi
Subgenus Gerbillus
Berbera gerbil, Gerbillus acticola
Agag gerbil, Gerbillus agag
Anderson's gerbil, Gerbillus andersoni
Swarthy gerbil, Gerbillus aquilus
Burton's gerbil, Gerbillus burtoni
Cheesman's gerbil, Gerbillus cheesmani
Dongola gerbil, Gerbillus dongolanus
Somalia gerbil, Gerbillus dunni
Flower's gerbil, Gerbillus floweri
Lesser Egyptian gerbil, Gerbillus gerbillus
Indian hairy-footed gerbil, Gerbillus gleadowi
Western gerbil, Gerbillus hesperinus
Hoogstraal's gerbil, Gerbillus hoogstraali
Lataste's gerbil, Gerbillus latastei
Sudan gerbil, Gerbillus nancillus
Nigerian gerbil, Gerbillus nigeriae
Occidental gerbil, Gerbillus occiduus
Pale gerbil, Gerbillus perpallidus
Cushioned gerbil, Gerbillus pulvinatus
Greater Egyptian gerbil, Gerbillus pyramidum
Rosalinda gerbil, Gerbillus rosalinda
Tarabul's gerbil, Gerbillus tarabuli
Genus Microdillus
Somali pygmy gerbil, Microdillus peeli
Subtribe Rhombomyina
Genus Brachiones
Przewalski's gerbil, Brachiones przewalskii
Genus Meriones
Subgenus Meriones
Tamarisk jird, Meriones tamariscinus
Subgenus Parameriones
Persian jird, Meriones persicus
King jird, Meriones rex
Subgenus Pallasiomys
Arabian jird, Meriones arimalius
Cheng's jird, Meriones chengi
Sundevall's jird, Meriones crassus
Dahl's jird, Meriones dahli
Moroccan jird, Meriones grandis
Libyan jird, Meriones libycus
Midday jird, Meriones meridianus
Buxton's jird, Meriones sacramenti
Shaw's jird, Meriones shawi
Tristram's jird, Meriones tristrami
Mongolian jird (Mongolian gerbil), Meriones unguiculatus
Vinogradov's jird, Meriones vinogradovi
Zarudny's jird, Meriones zarudnyi
Subgenus Cheliones
Indian desert jird, Meriones hurrianae
Genus Psammomys
Fat sand rat, Psammomys obesus
Thin sand rat, Psammomys vexillaris
Genus Rhombomys
Great gerbil, Rhombomys opimus
Incertae sedis
Genus Sekeetamys
Bushy-tailed jird, Sekeetamys calurus
Tribe Gerbillurini
Genus Desmodillus
Cape short-eared gerbil, Desmodillus auricularis
Genus Gerbilliscus
Cape gerbil, Gerbilliscus afra
Boehm's gerbil, Gerbilliscus boehmi
Highveld gerbil, Gerbilliscus brantsii
Guinean gerbil, Gerbilliscus guineae
Gorongoza gerbil, Gerbilliscus inclusus
Kemp's gerbil, Gerbilliscus kempi
Bushveld gerbil, Gerbilliscus leucogaster
Black-tailed gerbil, Gerbilliscus nigricaudus
Phillips's gerbil, Gerbilliscus phillipsi
Fringe-tailed gerbil, Gerbilliscus robustus
Savanna gerbil, Gerbilliscus validus
Genus Gerbillurus
Hairy-footed gerbil, Gerbillurus paeba
Namib brush-tailed gerbil, Gerbillurus setzeri
Dune hairy-footed gerbil, Gerbillurus tytonis
Bushy-tailed hairy-footed gerbil, Gerbillurus vallinus
Genus Tatera
Indian gerbil, Tatera indica
Tribe Taterillini
Genus Taterillus
Robbins's tateril, Taterillus arenarius
Congo gerbil, Taterillus congicus
Emin's gerbil, Taterillus emini
Gracile tateril, Taterillus gracilis
Harrington's gerbil, Taterillus harringtoni
Lake Chad gerbil, Taterillus lacustris
Petter's gerbil, Taterillus petteri
Senegal gerbil, Taterillus pygargus
Tranieri's tateril, Taterillus tranieri
| Biology and health sciences | Rodents | Animals |
1251051 | https://en.wikipedia.org/wiki/Carnotaurus | Carnotaurus | Carnotaurus is a genus of theropod dinosaur that lived in South America during the Late Cretaceous period, probably sometime between 72 and 69 million years ago. The only species is Carnotaurus sastrei. Known from a single well-preserved skeleton, it is one of the best-understood theropods from the Southern Hemisphere. The skeleton, found in 1984, was uncovered in the Chubut Province of Argentina from rocks of the La Colonia Formation. Carnotaurus is a derived member of the Abelisauridae, a group of large theropods that occupied the large predatorial niche in the southern landmasses of Gondwana during the late Cretaceous. Within the Abelisauridae, the genus is often considered a member of the Brachyrostra, a clade of short-snouted forms restricted to South America.
Carnotaurus was a lightly built, bipedal predator, measuring in length and weighing . As a theropod, Carnotaurus was highly specialized and distinctive. It had two thick horns above the eyes, a feature not seen in any other carnivorous dinosaur, and a very deep skull sitting on a muscular neck. Carnotaurus was further characterized by small, vestigial forelimbs and long, slender hind limbs. The skeleton is preserved with extensive skin impressions, showing a mosaic of small, non-overlapping scales approximately 5 mm in diameter. The mosaic was interrupted by large bumps that lined the sides of the animal, and there are no hints of feathers.
The distinctive horns and the muscular neck may have been used in fighting others of its species. According to separate studies, rivaling individuals may have combated each other with quick head blows, by slow pushes with the upper sides of their skulls, or by ramming each other head-on, using their horns as shock absorbers. The feeding habits of Carnotaurus remain unclear: some studies suggested the animal was able to hunt down very large prey such as sauropods, while other studies found it preyed mainly on relatively small animals. Its brain cavity suggests an acute sense of smell, while hearing and sight were less well developed. Carnotaurus was probably well adapted for running and was possibly one of the fastest large theropods.
Discovery
The only skeleton (holotype MACN-CH 894) was unearthed in 1984 by an expedition led by Argentinian paleontologist José Bonaparte. This expedition also recovered the peculiar spiny sauropod Amargasaurus. It was the eighth expedition within the project named "Jurassic and Cretaceous Terrestrial Vertebrates of South America", which started in 1976 and was sponsored by the National Geographic Society. The skeleton is well-preserved and articulated (still connected together), with only the posterior two thirds of the tail, much of the lower leg, and the hind feet being destroyed by weathering. The skeleton belonged to an adult individual, as indicated by the fused sutures in the braincase. It was found lying on its right side, showing a typical death pose with the neck bent back over the torso. Unusually, it is preserved with extensive skin impressions. In view of the significance of these impressions, a second expedition was started to reinvestigate the original excavation site, leading to the recovery of several additional skin patches. The skull was deformed during fossilization, with the snout bones of the left side displaced forwards relative to the right side, the nasal bones pushed upwards, and the pushed backwards onto the . Deformation also exaggerated the upward curvature of the upper jaw. The snout was more strongly affected by deformation than the rear part of the skull, possibly due to the higher rigidity of the latter. In top or bottom view, the upper jaws were less U-shaped than the lower jaws, resulting in an apparent mismatch. This mismatch is the result of deformation acting from the sides, which affected the upper jaws but not the lower jaws, possibly due to the greater flexibility of the joints within the latter.
The skeleton was collected on a farm named "Pocho Sastre" near Bajada Moreno in the Telsen Department of Chubut Province, Argentina. Because it was embedded in a large hematite concretion, a very hard kind of rock, preparation was complicated and progressed slowly. In 1985, Bonaparte published a note presenting Carnotaurus sastrei as a new genus and species and briefly describing the skull and lower jaw. The generic name Carnotaurus is derived from the Latin carno [carnis] ("flesh") and taurus ("bull") and can be translated as "meat-eating bull", an allusion to the animal's bull-like horns. The specific name sastrei honors Angel Sastre, the owner of the ranch where the skeleton was found. A comprehensive description of the whole skeleton followed in 1990. After Abelisaurus, Carnotaurus was the second member of the family Abelisauridae that was discovered. For years, it was by far the best-understood member of its family, and also the best-understood theropod from the Southern Hemisphere. It was not until the 21st century that similarly well-preserved abelisaurids were described, including Aucasaurus, Majungasaurus and Skorpiovenator, allowing scientists to re-evaluate certain aspects of the anatomy of Carnotaurus. The holotype skeleton is displayed in the Argentine Museum of Natural Sciences, Bernardino Rivadavia; replicas can be seen in this and other museums around the world. Sculptors Stephen and Sylvia Czerkas manufactured a life-sized sculpture of Carnotaurus that was previously on display at the Natural History Museum of Los Angeles County. This sculpture, ordered by the museum during the mid-1980s, is probably the first life restoration of a theropod showing accurate skin.
Description
Carnotaurus was a large but lightly built predator. The only known individual was about in length, making Carnotaurus one of the largest abelisaurids. Ekrixinatosaurus and possibly Abelisaurus, which are highly incomplete, might have been similar or larger in size. A 2016 study found that only Pycnonemosaurus, at , was longer than Carnotaurus; it was estimated at . Its mass is estimated to have been , , , , and in separate studies that used different estimation methods. Carnotaurus was a highly specialized theropod, as seen especially in characteristics of the skull, the vertebrae and the forelimbs. The pelvis and hind limbs, on the other hand, remained relatively conservative, resembling those of the more basal Ceratosaurus. Both the pelvis and hind limb were long and slender. The left femur (thigh bone) of the individual measures 103 cm in length, but shows an average diameter of only 11 cm.
Skull
The skull, measuring in length, was proportionally shorter and deeper than in any other large carnivorous dinosaur. The snout was moderately broad, not as tapering as seen in more basal theropods like Ceratosaurus, and the jaws were curved upwards. A prominent pair of horns protruded obliquely above the eyes. These horns, formed by the frontal bones, were thick and cone-shaped, internally solid, somewhat vertically flattened in cross-section, and measured in length. Bonaparte, in 1990, suggested that these horns would probably have formed the bony cores of much longer keratinous sheaths. Mauricio Cerroni and colleagues, in 2020, agreed that the horns supported keratinous sheaths, but argued that these sheaths would not have been greatly longer than the bony cores.
As in other dinosaurs, the skull was perforated by six major skull openings on each side. The frontmost of these openings, the external naris (bony nostril), was subrectangular and directed sidewards and forwards, but was not sloping in side view as in some other ceratosaurs such as Ceratosaurus. This opening was formed by the nasal and premaxilla only, while in some related ceratosaurs the maxilla also contributed to this opening. Between the bony nostril and the orbit (eye opening) was the antorbital fenestra. In Carnotaurus, this opening was higher than long, while it was longer than high in related forms such as Skorpiovenator and Majungasaurus. The antorbital fenestra was bounded by a larger depression, the antorbital fossa, which was formed by recessed parts of the maxilla in front and the lacrimal behind. As in all abelisaurids, this depression was small in Carnotaurus. The lower front corner of the antorbital fossa contained a smaller opening, the promaxillary fenestra, which led into an air-filled cavity within the maxilla. The eye was situated in the upper part of the keyhole-shaped orbit. This upper part was proportionally small and subcircular, and separated from the lower part of the orbit by the forward-projecting postorbital. It was slightly rotated forward, probably permitting some degree of binocular vision. The keyhole-like shape of the orbit was possibly related to the marked skull shortening, and is also found in related short-snouted abelisaurids. As in all abelisaurids, the frontal (on the skull roof between the eyes) was excluded from the orbit. Behind the orbit were two openings, the infratemporal fenestra on the side and the supratemporal fenestra on the top of the skull. The infratemporal fenestra was tall, short, and kidney-shaped, while the supratemporal fenestra was short and square-shaped. Another opening, the external mandibular fenestra, was located in the lower jaw – in Carnotaurus, this opening was comparatively large.
On each side of the upper jaws there were four premaxillary and twelve maxillary teeth, while the lower jaws were equipped with fifteen dentary teeth per side. The teeth had been described as being long and slender, as opposed to the very short teeth seen in other abelisaurids. However, Cerroni and colleagues, in their 2020 description of the skull, stated that all erupted teeth had been severely damaged during excavation and were later reconstructed with plaster (Bonaparte, in 1990, only noted that some lower jaw teeth had been fragmented). Reliable information on the shape of the teeth is therefore limited to replacement teeth and tooth roots that are still enclosed by the jaw, and can be studied using CT imaging. The replacement teeth had low, flattened crowns, were closely spaced, and inclined forwards at approximately 45°. In his 1990 description, Bonaparte noted that the lower jaw was shallow and weakly constructed, with the dentary (the foremost jaw bone) connected to the hindmost jaw bones by only two contact points; this contrasts with the robust-looking skull. Cerroni and colleagues instead found multiple but loose connections between the dentary and the hindmost jaw bones. This articulation, therefore, was very flexible but not necessarily weak. The bottom margin of the dentary was convex, while it was straight in Majungasaurus.
The lower jaw was found with ossified hyoid bones, in the position they would be in if the animal was alive. These slender bones, supporting the tongue musculature and several other muscles, are rarely found in dinosaurs because they are often cartilaginous and not connected to other bones and therefore get lost easily. In Carnotaurus, three hyoid bones are preserved: a pair of curved, rod-like ceratobranchials that articulate with a single, trapezoidal element, the basihyal. Carnotaurus is the only known non-avian theropod from which a basihyal is known. The back of the skull had well-developed, air-filled chambers surrounding the braincase, as in other abelisaurids. Two separate chamber systems were present, the paratympanic system, which was connected to the middle ear cavity, as well as chambers resulting from outgrowths of the air sacs of the neck.
A number of autapomorphies (distinguishing features) can be found in the skull, including the pair of horns and the very short and deep skull. The maxilla had excavations above the promaxillary fenestra, which would have been excavated by the antorbital air sinus (air passages in the snout). The nasolacrimal duct, which transported eye fluid, exited on the medial (inner) surface of the lacrimal through a canal of uncertain function. Other proposed autapomorphies include a deep and long, air-filled excavation in the and an elongated depression on the of the .
Vertebrae
The vertebral column consisted of ten cervical (neck), twelve dorsal, six fused sacral and an unknown number of caudal (tail) vertebrae. The neck was nearly straight, rather than having the S-curve seen in other theropods, and also unusually wide, especially towards its base. The top of the neck's spinal column featured a double row of enlarged, upwardly directed bony processes called epipophyses, creating a smooth trough on the top of the neck vertebrae. These processes were the highest points of the spine, towering above the unusually low spinous processes. The epipophyses probably provided attachment areas for a markedly strong neck musculature. A similar double row was also present in the tail, formed there by highly modified caudal ribs, in front view protruding upwards in a V-shape, their inner sides creating a smooth, flat, top surface of the front tail vertebrae. The end of each caudal rib was furnished with a forward projecting hook-shaped expansion that connected to the caudal rib of the preceding vertebra.
Forelimbs
The forelimbs were proportionally shorter than in any other large carnivorous dinosaur, including tyrannosaurids. The forearm was only a quarter the size of the upper arm. There were no carpals in the hand, so the metacarpals articulated directly with the forearm. The hand showed four basic digits, though apparently only the middle two of these ended in finger bones, while the fourth consisted of a single splint-like metacarpal that may have represented an external 'spur'. The fingers themselves were fused and immobile, and may have lacked claws. Carnotaurus differed from all other abelisaurids in having proportionally shorter and more robust forelimbs, and in having the fourth, splint-like metacarpal as the longest bone in the hand. A 2009 study suggests that the arms were vestigial in abelisaurids, because the nerve fibers responsible for stimulus transmission were reduced to an extent seen in today's emus and kiwis, which also have vestigial forelimbs.
Skin
Carnotaurus was the first theropod dinosaur discovered with a significant number of fossil skin impressions. These impressions, found beneath the skeleton's right side, come from different body parts, including the lower jaw, the front of the neck, the shoulder girdle, and the rib cage. The largest patch of skin corresponds to the anterior part of the tail. Originally, the right side of the skull was also covered with large patches of skin—this was not recognized when the skull was prepared, and these patches were accidentally destroyed. However, the surface texture of several skull bones allows for inferences about their probable covering. A hummocky surface with grooves, pits, and small openings is found on the sides and front of the snout and indicates a scaly covering, possibly with flat scales as in today's crocodilians. The top of the snout was sculptured with numerous small holes and spikes – this texture can probably be correlated with a cornified pad (horny covering). Such a pad also occurred in Majungasaurus but was absent in Abelisaurus and Rugops. A row of large scales probably surrounded the eye, as indicated by a hummocky surface with longitudinal grooves on the lacrimal and postorbital bones.
The skin was built up of a mosaic of polygonal, non-overlapping scales measuring approximately 5 mm in diameter. This mosaic was divided by thin, parallel grooves. Scalation was similar across different body parts with the exception of the head, which apparently showed a different, irregular pattern of scales. There is no evidence of feathers. Larger bump-like structures were distributed over the sides of the neck, back and tail in irregular rows. These bumps were in diameter and up to in height and often showed a low midline ridge. They were set apart from each other and became larger towards the animal's top. The bumps probably represent feature scales – clusters of condensed scutes – similar to those seen on the soft frill running along the body midline in hadrosaurid ("duck-billed") dinosaurs. These structures did not contain bone. Stephen Czerkas (1997) suggested that these structures may have protected the animal's sides while fighting members of the same species (conspecifics) and other theropods, arguing that similar structures can be found on the neck of the modern iguana, where they provide limited protection in combat.
More recent studies of the skin of Carnotaurus, published in 2021, suggest that previous depictions of the scales on the body are inaccurate: the larger feature scales were randomly distributed along the body, not arranged in the discrete rows seen in older artistic depictions and illustrations. There is also no sign of progressive size variation in the feature scales across different areas of the body. The basement scales of Carnotaurus were by comparison highly variable, ranging from small and elongated, to large and polygonal, to circular-to-lenticular in the thoracic, scapular, and tail regions, respectively. This scale differentiation may have been related to shedding excess body heat (thermoregulation), given the animal's large body size and active lifestyle.
Classification
Carnotaurus is one of the best-understood genera of the Abelisauridae, a family of large theropods restricted to the ancient southern supercontinent Gondwana. Abelisaurids were the dominant predators in the Late Cretaceous of Gondwana, replacing the carcharodontosaurids and occupying the ecological niche filled by the tyrannosaurids in the northern continents. Several notable traits that evolved within this family, including shortening of the skull and arms as well as peculiarities in the cervical and caudal vertebrae, were more pronounced in Carnotaurus than in any other abelisaurid.
Though relationships within the Abelisauridae are debated, Carnotaurus is consistently shown to be one of the most derived members of the family by cladistic analyses. Its nearest relative might have been Aucasaurus or Majungasaurus. A 2008 review, in contrast, suggested that Carnotaurus was not closely related to either genus, and instead proposed Ilokelesia as its sister taxon. Juan Canale and colleagues, in 2009, erected the new clade Brachyrostra to include Carnotaurus but not Majungasaurus; this classification has been followed by a number of studies since.
Carnotaurus is eponymous for two subgroups of the Abelisauridae: the Carnotaurinae and the Carnotaurini. Paleontologists do not universally accept these groups. The Carnotaurinae was defined to include all derived abelisaurids with the exclusion of Abelisaurus, which is considered a basal member in most studies. However, a 2008 review suggested that Abelisaurus was a derived abelisaurid instead. Carnotaurini was proposed to name the clade formed by Carnotaurus and Aucasaurus; only those paleontologists who consider Aucasaurus as the nearest relative of Carnotaurus use this group. A 2024 study recovered Carnotaurini as a valid clade consisting of Carnotaurus, Aucasaurus, Niebla and Koleken.
Below is a cladogram published by Canale and colleagues in 2009.
Paleobiology
Function of the horns
Carnotaurus is the only known carnivorous bipedal animal with a pair of horns on the frontal bone. The use of these horns is not entirely clear. Several interpretations have revolved around use in fighting conspecifics or in killing prey, though a use in display for courtship or recognition of members of the same species is possible as well.
Greg Paul (1988) proposed that the horns were butting weapons and that the small orbits would have minimized the possibility of the eyes being hurt while fighting. Gerardo Mazzetta and colleagues (1998) suggested that Carnotaurus used its horns in a way similar to rams. They calculated that the neck musculature was strong enough to absorb the force of two individuals colliding with their heads frontally at a speed of 5.7 m/s each. Fernando Novas (2009) interpreted several skeletal features as adaptations for delivering blows with the head. He suggested that the shortness of the skull might have made head movements quicker by reducing the moment of inertia, while the muscular neck would have allowed strong head blows. He also noted an enhanced rigidity and strength of the spinal column that may have evolved to withstand shocks conducted by the head and neck.
Other studies suggest that rival Carnotaurus individuals did not deliver rapid head blows, but pushed slowly against each other with the upper sides of their skulls. Mazzetta and colleagues, in 2009, argued that the horns may have been a device for the distribution of compression forces without damage to the brain. This is supported by the flattened upper sides of the horns, the strongly fused bones of the top of the skull, and the inability of the skull to survive rapid head blows. Rafael Delcourt, in 2018, suggested that the horns could have been used either in slow headbutting and shoving, as seen in the modern marine iguana, or in blows to the opponent's neck and flanks, as seen in the modern giraffe. The latter possibility had been previously proposed for the related Majungasaurus in a 2011 conference paper.
Gerardo Mazzetta and colleagues (1998) proposed that the horns might also have been used to injure or kill small prey. Though the horn cores are blunt, they may have had a form similar to modern bovid horns if there was a keratinous covering. However, this would be the only reported example of horns being used as hunting weapons in animals.
Jaw function and diet
Analyses of the jaw structure of Carnotaurus by Mazzetta and colleagues, in 1998, 2004, and 2009, suggest that the animal was capable of quick bites, but not strong ones. Quick bites are more important than strong bites when capturing small prey, as shown by studies of modern-day crocodiles. These researchers also noted a high degree of flexibility (kinesis) within the skull and especially the lower jaw, somewhat similar to modern snakes. Elasticity of the jaw would have allowed Carnotaurus to swallow small prey items whole. In addition, the front part of the lower jaw was hinged, and thus able to move up and down. When pressed downwards, the teeth would have projected forward, allowing Carnotaurus to spike small prey items; when the teeth were curved upwards, the now backward-projecting teeth would have hindered the caught prey from escaping. Mazzetta and colleagues also found that the skull was able to withstand forces that appear when tugging on large prey items. Carnotaurus may therefore have fed mainly on relatively small prey, but it was also able to hunt large dinosaurs. In 2009, Mazzetta and colleagues estimated a bite force of around 3,341 newtons. A 2022 study estimating bite force for 33 different dinosaurs suggests that the bite force of Carnotaurus was around 3,392 newtons at the anterior portion of the jaws, slightly higher than the previous estimate. The bite force at the back of the jaws, meanwhile, was estimated at 7,172 newtons.
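The difference between the anterior and posterior estimates follows from simple lever mechanics: the jaw muscles supply a roughly fixed torque about the jaw joint, so bite force falls off in proportion to the distance of the bite point from the joint. The sketch below illustrates only that torque-balance reasoning; the distances are invented for illustration and are not measurements from the studies:

```python
def bite_force_at(distance_m: float, muscle_torque_nm: float) -> float:
    """Bite force from a fixed muscle torque about the jaw joint:
    torque = force x out-lever distance, so force = torque / distance."""
    return muscle_torque_nm / distance_m

# Invented numbers: a torque chosen so the posterior estimate matches the
# study's ~7,172 N at a hypothetical 0.25 m from the joint.
torque = 7172 * 0.25
print(bite_force_at(0.25, torque))  # ~7,172 N near the back of the tooth row
print(bite_force_at(0.53, torque))  # ~3,383 N near the front, close to the anterior estimate
```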
This interpretation was questioned by François Therrien and colleagues (2005), who found that the biting force of Carnotaurus was twice that of the American alligator, which may have the strongest bite of any living tetrapod. These researchers also noted analogies with modern Komodo dragons: the flexural strength of the lower jaw decreases towards the tip linearly, indicating that the jaws were not suited for high precision catching of small prey but for delivering slashing wounds to weaken big prey. As a consequence, according to this study, Carnotaurus must have mainly preyed upon large animals, possibly by ambush. Cerroni and colleagues, in 2020, argued that flexibility was restricted to the lower jaw, while the thickened skull roof and the ossification of several cranial joints suggest that the skull had no or only little kinesis.
Robert Bakker (1998) found that Carnotaurus mainly fed upon very large prey, especially sauropods. As he noted, several adaptations of the skull—the short snout, the relatively small teeth and the strong back of the skull (occiput)—had independently evolved in Allosaurus. These features suggest that the upper jaw was used like a serrated club to inflict wounds; big sauropods would have been weakened by repeated attacks.
Locomotion
Mazzetta and colleagues (1998, 1999) presumed that Carnotaurus was a swift runner, arguing that the thigh bone was adapted to withstand high bending moments while running; the ability of an animal's leg to withstand those forces limits its top speed. The running adaptations of Carnotaurus would have been better than those of a human, although not nearly as good as those of an ostrich. Scientists calculate that Carnotaurus had a top speed of up to per hour.
In dinosaurs, the most important locomotor muscle was located in the tail. This muscle, called the caudofemoralis, attaches to the fourth trochanter, a prominent ridge on the thigh bone, and pulls the thigh bone backwards when contracted. Scott Persons and Phil Currie (2011) argued that in the tail vertebrae of Carnotaurus, the caudal ribs did not protrude horizontally ("T-shaped"), but were angled against the vertical axis of the vertebrae, forming a "V". This would have provided additional space for a caudofemoralis muscle larger than in any other theropod—the muscle mass was calculated at per leg. Therefore, Carnotaurus could have been one of the fastest large theropods. While the caudofemoralis muscle was enlarged, the epaxial muscles situated above the caudal ribs would have been proportionally smaller. These muscles, called the longissimus and spinalis muscle, were responsible for tail movement and stability. To maintain tail stability in spite of reduction of these muscles, the caudal ribs bear forward projecting processes interlocking the vertebrae with each other and with the pelvis, stiffening the tail. As a consequence, the ability to make tight turns would have been diminished, because the hip and tail had to be turned simultaneously, unlike in other theropods.
Brain and senses
Cerroni and Paulina-Carabajal, in 2019, used a CT scan to study the endocranial cavity that contained the brain. The volume of the endocranial cavity was 168.8 cm3, although the brain would only have filled a fraction of this space. The authors used two different brain size estimates, assuming a brain size of 50% and 37% of the endocranial cavity, respectively. This results in a reptile encephalization quotient (a measure of intelligence) larger than that of the related Majungasaurus but smaller than in tyrannosaurids. The pineal gland, which produces hormones, might have been smaller than in other abelisaurids, as indicated by a low dural expansion – a space on top of the forebrain in which the pineal gland is thought to have been located.
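The two brain-size assumptions translate directly into volume estimates, a worked computation using only the figures just given (converting volume to mass or to an encephalization quotient would require further assumptions not stated here):

```python
ENDOCRANIAL_VOLUME_CM3 = 168.8  # from the CT study described above

# The study's two assumed brain-to-endocast fill fractions.
for fill_fraction in (0.50, 0.37):
    brain_volume = ENDOCRANIAL_VOLUME_CM3 * fill_fraction
    print(f"{fill_fraction:.0%} fill -> {brain_volume:.1f} cm3 of brain tissue")
```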
The olfactory bulbs, which housed the sense of smell, were large, while the optic lobes, which were responsible for sight, were relatively small. This indicates that the sense of smell might have been better developed than the sense of sight, while the opposite is the case in modern birds. The front end of the olfactory tracts and bulbs were curved downwards, a feature only shared by Indosaurus; in other abelisaurids, these structures were oriented horizontally. As hypothesized by Cerroni and Paulina-Carabajal, this downward-curvature, together with the large size of the bulbs, might indicate that Carnotaurus relied more on the sense of smell than other abelisaurids. The flocculus, a brain lobe thought to be correlated with gaze stabilization (coordination between eyes and body), was large in Carnotaurus and other South American abelisaurids. This could indicate that these forms frequently used quick movements of the head and body. Hearing might have been poorly developed in Carnotaurus and other abelisaurids, as indicated by the short lagena of the inner ear. The hearing range was estimated to be below 3 kHz.
Age and paleoenvironment
Originally, the rocks in which Carnotaurus was found were assigned to the upper part of the Gorro Frigio Formation, which was considered to be approximately 100 million years old (Albian or Cenomanian stage). Later, they were realized to pertain to the much younger La Colonia Formation, dating to the Campanian and Maastrichtian stages (83.6 to 66 million years ago). Novas, in a 2009 book, gave a narrower time span of 72 to 69.9 million years ago (lower Maastrichtian stage). Carnotaurus therefore was the latest South American abelisaurid known. By the Late Cretaceous, South America was already isolated from both Africa and North America.
The La Colonia Formation is exposed over the southern slope of the North Patagonian Massif. Most vertebrate fossils, including Carnotaurus, come from the formation's middle section (called the middle facies association). This part likely represents the deposits of an environment of estuaries, tidal flats or coastal plains. The climate would have been seasonal with both dry and humid periods. The most common vertebrates collected include ceratodontid lungfish, turtles, plesiosaurs, crocodiles, dinosaurs, lizards, snakes and mammals. Other dinosaurs include Koleken inakayali, which is closely related to Carnotaurus; the saltasauroid titanosaur Titanomachya gimenezi; an unnamed ankylosaur; and an unnamed hadrosauroid, among others. Some of the snakes that have been found belong to the families Boidae and Madtsoidae, such as Alamitophis argentinus. Turtles are represented by at least five taxa, four from Chelidae (Pleurodira) and one from Meiolaniidae (Cryptodira). Plesiosaurs include two elasmosaurs (Kawanectes and Chubutinectes) and a polycotylid (Sulcusuchus). Mammals are represented by Reigitherium bunodontum and Coloniatherium cilinskii, the former of which was considered the first record of a South American docodont, and the possible gondwanatherians or multituberculates Argentodites coloniensis and Ferugliotherium windhauseni. Remains of an enantiornithine and, possibly, of a neornithine bird have been discovered.
| Biology and health sciences | Theropods | Animals |
1251566 | https://en.wikipedia.org/wiki/Transplanting | Transplanting | In agriculture and gardening, transplanting or replanting is the technique of moving a plant from one location to another. Most often this takes the form of starting a plant from seed in optimal conditions, such as in a greenhouse or protected nursery bed, then replanting it in another, usually outdoor, growing location. The agricultural machine that does this is called a transplanter. This is common in market gardening and truck farming, where setting out or planting out are synonymous with transplanting. In the horticulture of some ornamental plants, transplants are used infrequently and carefully because they carry with them a significant risk of killing the plant.
Transplanting has a variety of applications, including:
Extending the growing season by starting plants indoors, before outdoor conditions are favorable;
Protecting young plants from diseases and pests until they are sufficiently established;
Avoiding germination problems by setting out seedlings instead of direct seeding.
Different species and varieties react differently to transplanting; for some, it is not recommended. In all cases, avoiding transplant shock—the stress or damage received in the process—is the principal concern. Plants raised in protected conditions usually need a period of acclimatization, known as hardening off (see also frost hardiness). Also, root disturbance should be minimized. The stage of growth at which transplanting takes place, the weather conditions during transplanting, and treatment immediately after transplanting are other important factors.
Transplant production systems
Commercial growers employ what are called containerized and non-containerized transplant production.
Containerized transplants or plugs allow separately grown plants to be transplanted with the roots and soil intact. They are typically grown in peat pots (pots made of compressed peat), soil blocks (compressed blocks of soil), paper pots, or multiple-cell containers such as plastic packs (four to twelve cells) or larger plug trays made of plastic or styrofoam.
Non-containerized transplants are typically grown in greenhouse ground beds or benches, outdoors in-ground with row covers and hotbeds, and in-ground in the open field. The plants are pulled with bare roots for transplanting; such bare-root transplants are less expensive than containerized transplants but give lower yields due to poorer plant reestablishment.
Containerized stock
Containerized planting stock is classified by the type and size of container used. A great variety of containers has been used, with various degrees of success. Some containers are designed to be planted with the tree: the tar paper pot, the Alberta peat sausage, the Walters square bullet, and paper pot systems are filled with rooting medium and planted along with the tree (Tinus and McDonald 1979). Also planted with the tree are containers that hold no loose rooting medium but instead consist of a molded block of growing medium, as with Polyloam, Tree Start, and BR-8 Blocks.
Designs of containers for raising planting stock have been many and various. Containerized white spruce stock is now the norm. Most containers are tube-like; both diameter and volume affect white spruce growth (Hocking and Mitchell 1975, Carlson and Endean 1976). White spruce grown in a container with a 1:1 height:diameter ratio produced significantly greater dry weight than white spruce grown in containers of 3:1 and 6:1 height:diameter configurations. Total dry weight and shoot length increased with increasing container volume.
The larger the bag, the fewer that can be deployed per unit area. However, the biological advantage of size has been enough to drive a pronounced swing towards larger containers in British Columbia (Coates et al. 1994). The number of PSB211 (2 cm top diameter, 11 cm long) styroblock plugs ordered in British Columbia decreased from 14,246,000 in 1981 to zero in 1990, while orders for PSB415 (4 cm top diameter, 15 cm long) styroblock plugs increased in the same period from 257,000 to 41,008,000, even though large stock is more expensive than small stock to raise, distribute, and plant.
Other containers are not planted with the tree; the Styroblock, Superblock, Copperblock, and Miniblock container systems, for example, produce Styroplug seedlings with roots in a cohesive plug of growing medium. The plug cavities vary in volume through various combinations of top diameter and depth, from 39 to 3260 mL, but those most commonly used, at least in British Columbia, are in the range of 39 mL to 133 mL (Van Eerden and Gates 1990). The BC-CFS Styroblock plug, developed in 1969/70, has become the dominant stock type for interior spruce in British Columbia (Van Eerden and Gates 1990, Coates et al. 1994). Plug sizes are indicated by a 3-figure designation, of which the first figure gives the top diameter and the other two figures the depth of the plug cavity, both dimensions being approximations in centimetres. The demand for larger plugs has been increasing strongly (Table 6.24; Coates et al. 1994). Stock raised in some sizes of plug can vary in age class. In British Columbia, for example, PSB 415 and PSB 313 plugs are raised as 1+0 or 2+0. PSB 615 plugs are seldom raised other than as 2+0.
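The designation scheme is easy to mechanize; a minimal sketch that decodes codes such as PSB415 according to the convention just described (the function name and return format are illustrative):

```python
def decode_plug_designation(code: str):
    """Decode a styroblock plug code such as 'PSB415': the first digit is the
    approximate top diameter (cm) and the remaining two digits the cavity
    depth (cm)."""
    digits = "".join(ch for ch in code if ch.isdigit())
    if len(digits) != 3:
        raise ValueError(f"expected a 3-figure designation, got {code!r}")
    return {"top_diameter_cm": int(digits[0]), "depth_cm": int(digits[1:])}

print(decode_plug_designation("PSB415"))  # {'top_diameter_cm': 4, 'depth_cm': 15}
print(decode_plug_designation("PSB211"))  # {'top_diameter_cm': 2, 'depth_cm': 11}
```

These decoded values match the dimensions quoted above for the PSB211 and PSB415 plugs.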
Initially, the intention was to leave the plugs in situ in the Styroblocks until immediately before planting. But this led to logistic problems and reduced the efficiency of planting operations. Studies to compare the performance of extracted, packaged stock versus in situ stock seem not to have been carried out, but packaged stock has performed well and given no indication of distress.
Forestry
Field storage
As advocated by Coates et al. (1994), thawed planting stock taken to the field should optimally be kept cool at 1 °C to 2 °C in relative humidities over 90% (Ronco 1972a). For a few days, storage temperatures around 4.5 °C and humidities about 50% can be tolerated. Binder and Fielder (1988) recommended that boxed seedlings retrieved from cold storage should not be exposed to temperatures above 10 °C. Refrigerator vans commonly used for transportation and on-site storage normally maintain seedlings at 2 °C to 4 °C (Mitchell et al. 1980). Ronco (1972a, b) cautioned against using dry ice (solid carbon dioxide) to cool seedlings; he claimed that respiration and water transport in seedlings are disrupted by high concentrations of gaseous carbon dioxide.
Coniferous planting stock is often held in frozen storage, mostly at −2 °C, for extended periods, and then cool-stored (+2 °C) to thaw the root plug prior to outplanting. Thawing is necessary if frozen seedlings cannot be separated from one another, and has been advocated by some to avoid a possible loss of contact between plug and soil as ice in the plug melts and the plug shrinks. Physiological activity is also greater under cool rather than frozen storage, but seedlings of interior spruce and Engelmann spruce that were planted while still frozen showed only brief and transient physiological effects, including changes in xylem water potential (Camm et al. 1995, Silem and Guy 1998). After 1 growing season, growth parameters did not differ between seedlings planted frozen and those planted thawed.
Studies of storage and planting practices have generally focussed on the effects of duration of frozen storage and the effects of subsequent cool storage (e.g., Ritchie et al. 1985, Chomba et al. 1993, Harper and Camm 1993). Reviews of cold storage techniques have paid little attention to the thawing process (Camm et al. 1994), or have merely noted that the rate of thawing is unlikely to cause damage (McKay 1997).
Kooistra and Bakker (2002) noted several lines of evidence suggesting that cool storage can have negative effects on seedling health. The rate of respiration is faster during cool storage than in frozen storage, depleting carbohydrate reserves more rapidly. Carbohydrate reserves are certainly depleted in the absence of light during cool storage, and to an indeterminate extent if seedlings are exposed to light, which is unusual (Wang and Zwiacek 1999). As well, Silem and Guy (1998), for instance, found that interior spruce seedlings had significantly lower total carbohydrate reserves if stored for 2 weeks at 2 °C than if thawed rapidly for 24 hours at 15 °C. Seedlings can rapidly lose cold hardiness in cool storage through increased respiration and consumption of intracellular sugars that function as cryoprotectants (Ogren 1997). Also, depletion of carbohydrate reserves impairs the ability of seedlings to produce new root growth. Finally, storage moulds are much more of a problem during cool than frozen storage.
Kooistra and Bakker (2002) therefore tested the hypothesis that such thawing is unnecessary. Seedlings of 3 species, including interior spruce, were planted with frozen root plugs (frozen seedlings) and with thawed root plugs (thawed seedlings). Thawed root plugs warmed to soil temperature in about 20 minutes; frozen root plugs took about 2 hours, because ice in the plug had to melt before the temperature could rise above zero. The size of the root plug influenced thawing time. These outplantings were into warm soil by boreal standards, and seedlings with frozen plugs might fare differently if outplanted into soil at temperatures more typical of planting sites in spring and at high elevations. Variable fluorescence did not differ between thawed and frozen seedlings. Bud break was no faster among thawed interior spruce seedlings than among frozen ones. Field performance did not differ between thawed and frozen seedlings.
| Technology | Horticulture | null |
1251821 | https://en.wikipedia.org/wiki/Plant%20embryonic%20development | Plant embryonic development | Plant embryonic development, also plant embryogenesis, is a process that occurs after the fertilization of an ovule to produce a fully developed plant embryo. This is a pertinent stage in the plant life cycle that is followed by dormancy and germination. The zygote produced after fertilization must undergo various cellular divisions and differentiations to become a mature embryo. An end-stage embryo has five major components: the shoot apical meristem, hypocotyl, root meristem, root cap, and cotyledons. Unlike embryonic development in animals, and specifically in humans, plant embryonic development results in an immature form of the plant, lacking most structures such as leaves, stems, and reproductive structures. However, both plants and animals, including humans, pass through a phylotypic stage that evolved independently and that causes a developmental constraint limiting morphological diversification.
Morphogenic events
Embryogenesis occurs naturally as a result of single or double fertilization of the ovule, giving rise to two distinct structures: the plant embryo and the endosperm, which together go on to develop into a seed. The zygote goes through various cellular differentiations and divisions in order to produce a mature embryo. These morphogenic events form the basic cellular pattern for the development of the shoot-root body and the primary tissue layers; they also program the regions of meristematic tissue formation. The following morphogenic events are only particular to eudicots, and not monocots.
Plant
Following fertilization, the zygote and endosperm are present within the ovule, as seen in stage I of the illustration on this page. Then the zygote undergoes an asymmetric transverse cell division that gives rise to two cells - a small apical cell resting above a large basal cell.
These two cells are very different, and give rise to different structures, establishing polarity in the embryo.
Apical cell: The small apical cell is on the top and contains most of the cytoplasm, the aqueous substance found within cells, from the original zygote. It gives rise to the hypocotyl, shoot apical meristem, and cotyledons.
Basal cell: The large basal cell is on the bottom and consists of a large vacuole; it gives rise to the hypophysis and the suspensor.
Eight cell stage
After two rounds of longitudinal division and one round of transverse division, an eight-celled embryo is the result. Stage II, in the illustration above, indicates what the embryo looks like during the eight cell stage. According to Laux et al., there are four distinct domains during the eight cell stage. The first two domains contribute to the embryo proper. The apical embryo domain gives rise to the shoot apical meristem and cotyledons. The second domain, the central embryo domain, gives rise to the hypocotyl, root apical meristem, and parts of the cotyledons. The third domain, the basal embryo domain, contains the hypophysis. The hypophysis will later give rise to the radicle and the root cap. The last domain, the suspensor, is the region at the very bottom, which connects the embryo to the endosperm for nutritional purposes.
Sixteen cell stage
Additional cell divisions occur, which leads to the sixteen cell stage. The four domains are still present, but they are more defined with the presence of more cells. The important aspect of this stage is the introduction of the protoderm, which is meristematic tissue that will give rise to the epidermis. The protoderm is the outermost layer of cells in the embryo proper.
Globular stage
The name of this stage is indicative of the embryo's appearance at this point in embryogenesis; it is spherical or globular. Stage III, in the photograph above, depicts what the embryo looks like during the globular stage. Label 1 indicates the location of the endosperm. The important component of the globular phase is the introduction of the rest of the primary meristematic tissue. The protoderm was already introduced during the sixteen cell stage. According to Evert and Eichhorn, the ground meristem and procambium are initiated during the globular stage. The ground meristem will go on to form the ground tissue, which includes the pith and cortex. The procambium will eventually form the vascular tissue, which includes the xylem and phloem.
Heart stage
According to Evert and Eichhorn, the heart stage is a transition period where the cotyledons finally start to form and elongate. It is given this name in eudicots because most plants from this group have two cotyledons, giving the embryo a heart-shaped appearance. The shoot apical meristem is between the cotyledons. Stage IV, in the illustration above, indicates what the embryo looks like at this point in development. Label 5 indicates the position of the cotyledons.
Torpedo stage
The torpedo stage is defined by the continued growth of the cotyledons and axis elongation. In addition, programmed cell death must occur during this stage. Programmed cell death is carried out throughout the entire growth process, as in any other development, but in the torpedo stage parts of the suspensor complex must be terminated. The suspensor complex is shortened because at this point in development most of the nutrition from the endosperm has been utilized, and there must be space for the mature embryo. After the suspensor complex is gone, the embryo is fully developed. Stage V, in the illustration above, indicates what the embryo looks like at this point in development.
Maturation
The second phase, or postembryonic development, involves the maturation of cells, which involves cell growth and the storage of macromolecules (such as oils, starches and proteins) required as a 'food and energy supply' during germination and seedling growth. In this stage, the seed coat hardens to help protect the embryo and store available nutrients. The appearance of a mature embryo is seen in Stage VI, in the illustration above.
Dormancy
The end of embryogenesis is defined by an arrested development phase, or stop in growth. This phase usually coincides with a necessary component of growth called dormancy. Dormancy is a period in which a seed cannot germinate, even under optimal environmental conditions, until a specific requirement is met. Breaking dormancy, or finding the specific requirement of the seed, can be rather difficult. For example, a seed coat can be extremely thick. According to Evert and Eichhorn, very thick seed coats must undergo a process called scarification, in order to deteriorate the coating. In other cases, seeds must experience stratification. This process exposes the seed to certain environmental conditions, like cold or smoke, to break dormancy and initiate germination.
The role of auxin
Auxin is a hormone related to the elongation and regulation of plants. It also plays an important role in the establishment of polarity within the plant embryo. Research has shown that the hypocotyls of both gymnosperms and angiosperms show auxin transport to the root end of the embryo. Researchers hypothesized that the embryonic pattern is regulated by the auxin transport mechanism and the polar positioning of cells within the ovule. The importance of auxin was shown in this research when carrot embryos at different stages were subjected to auxin transport inhibitors, which left them unable to progress to later stages of embryogenesis. During the globular stage of embryogenesis, the embryos continued spherical expansion. In addition, oblong embryos continued axial growth without the introduction of cotyledons. During the heart embryo stage of development, there were additional growth axes on hypocotyls. Further auxin transport inhibition research, conducted on Brassica juncea, shows that after germination the cotyledons were fused rather than two separate structures.
Alternative forms of embryogenesis
Somatic embryogenesis
Somatic embryos are formed from plant cells that are not normally involved in the development of embryos, i.e. ordinary plant tissue. No endosperm or seed coat is formed around a somatic embryo. Applications of this process include: clonal propagation of genetically uniform plant material; elimination of viruses; provision of source tissue for genetic transformation; generation of whole plants from single cells called protoplasts; and development of synthetic seed technology. Cells derived from competent source tissue are cultured to form an undifferentiated mass of cells called a callus. Plant growth regulators in the tissue culture medium can be manipulated to induce callus formation and subsequently changed to induce embryos to form from the callus. The ratio of different plant growth regulators required to induce callus or embryo formation varies with the type of plant. Asymmetrical cell division also seems to be important in the development of somatic embryos, and while failure to form the suspensor cell is lethal to zygotic embryos, it is not lethal for somatic embryos.
Androgenesis
The process of androgenesis allows a mature plant embryo to form from a reduced, or immature, pollen grain. Androgenesis usually occurs under stressful conditions. Embryos that result from this mechanism can germinate into fully functional plants. As mentioned, the embryo results from a single pollen grain. A pollen grain consists of three cells: one vegetative cell containing two generative cells. According to Maraschin et al., androgenesis must be triggered during the asymmetric division of microspores. However, once the vegetative cell starts to make starch and proteins, androgenesis can no longer occur. Maraschin et al. indicate that this mode of embryogenesis consists of three phases. The first phase is the acquisition of embryonic potential, which is the repression of gametophyte formation so that the differentiation of cells can occur. Then, during the initiation of cell divisions, multicellular structures begin to form, which are contained by the exine wall. The last step of androgenesis is pattern formation, where the embryo-like structures are released from the exine wall, in order for pattern formation to continue.
After these three phases occur, the rest of the process falls in line with the standard embryogenesis events.
Plant growth and buds
Embryonic tissue is made up of actively growing cells, and the term is normally used to describe the early formation of tissue in the first stages of growth. It can refer to different stages of the sporophyte and gametophyte plant, including the growth of embryos in seedlings, to meristematic tissues, which are in a persistently embryonic state, and to the growth of new buds on stems.
In both gymnosperms and angiosperms, the young plant contained in the seed begins as a developing egg-cell formed after fertilization (sometimes without fertilization in a process called apomixis) and becomes a plant embryo.
This embryonic condition also occurs in the buds that form on stems. The buds have tissue that has differentiated but not grown into complete structures. They can be in a resting state, lying dormant over winter or when conditions are dry, and then commence growth when conditions become suitable. Before they start growing into stems, leaves, or flowers, the buds are said to be in an embryonic state.
| Biology and health sciences | Plant reproduction | Biology |
1251925 | https://en.wikipedia.org/wiki/Soil%20fertility | Soil fertility | Soil fertility refers to the ability of soil to sustain agricultural plant growth, i.e. to provide plant habitat and result in sustained and consistent yields of high quality. It also refers to the soil's ability to supply plant/crop nutrients in the right quantities and qualities over a sustained period of time. A fertile soil has the following properties:
The ability to supply essential plant nutrients and water in adequate amounts and proportions for plant growth and reproduction; and
The absence of toxic substances that may inhibit plant growth, e.g. excess Fe2+, which leads to nutrient toxicity.
The following properties contribute to soil fertility in most situations:
Sufficient soil depth for adequate root growth and water retention;
Good internal drainage, allowing sufficient aeration for optimal root growth (although some plants, such as rice, tolerate waterlogging);
Topsoil, or horizon O, with sufficient soil organic matter for healthy soil structure and soil moisture retention;
Soil pH in the range 5.5 to 7.0 (suitable for most plants but some prefer or tolerate more acid or alkaline conditions);
Adequate concentrations of essential plant nutrients in plant-available forms;
Presence of a range of microorganisms that support plant growth.
In lands used for agriculture and other human activities, maintenance of soil fertility typically requires the use of soil conservation practices. This is because soil erosion and other forms of soil degradation generally result in a decline in quality with respect to one or more of the aspects indicated above.
Soil fertility and quality of land have been impacted by the effects of colonialism and slavery both in the U.S. and globally. The introduction of harmful land practices such as intensive and non-prescribed burnings and deforestation by colonists created long-lasting negative results to the environment.
Soil fertility and depletion have different origins and consequences in various parts of the world. The intentional creation of dark earth in the Amazon promotes the important relationship between indigenous communities and their land. In African and Middle Eastern regions, humans and the environment are also altered due to soil depletion.
Soil fertilization
Bioavailable phosphorus (available to soil life) is the element in soil that is most often lacking. Nitrogen and potassium are also needed in substantial amounts. For this reason these three elements are always identified on a commercial fertilizer analysis. For example, a 10-10-15 fertilizer has 10 percent nitrogen, 10 percent available phosphorus (P2O5) and 15 percent water-soluble potassium (K2O). Sulfur is the fourth element that may be identified in a commercial analysis—e.g. 21-0-0-24 which would contain 21% nitrogen and 24% sulfate.
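As a minimal worked example of reading a fertilizer grade (the 25 kg bag mass and the function name are assumptions chosen for illustration, not part of any labeling standard):

```python
def nutrient_mass_kg(bag_kg: float, n: float, p2o5: float, k2o: float) -> dict:
    """Convert an N-P-K grade (percent by weight) into kilograms per bag."""
    return {
        "N": bag_kg * n / 100,
        "P2O5": bag_kg * p2o5 / 100,
        "K2O": bag_kg * k2o / 100,
    }

# A 25 kg bag of 10-10-15 carries 2.5 kg N, 2.5 kg P2O5 and 3.75 kg K2O.
print(nutrient_mass_kg(25, 10, 10, 15))
```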
Inorganic fertilizers are generally less expensive and have higher concentrations of nutrients than organic fertilizers. Also, since nitrogen, phosphorus and potassium generally must be in the inorganic forms to be taken up by plants, inorganic fertilizers are generally immediately bioavailable to plants without modification. However, studies suggest that chemical fertilizers have adverse health impacts on humans, including the development of chronic disease from the toxins. As for the environment, over-reliance on inorganic fertilizers disrupts the natural nutrient balance in the soil, resulting in lower soil quality, loss of organic matter, and higher chances of erosion in the soil.
Additionally, the water-soluble nitrogen in inorganic fertilizers does not provide for the long-term needs of the plant and creates water pollution. Slow-release fertilizers may reduce leaching loss of nutrients and may make the nutrients that they provide available over a longer period of time.
Soil fertility is a complex process that involves the constant cycling of nutrients between organic and inorganic forms. As plant material and animal wastes are decomposed by micro-organisms, they release inorganic nutrients to the soil solution, a process referred to as mineralization. Those nutrients may then undergo further transformations which may be aided or enabled by soil micro-organisms. Like plants, many micro-organisms require or preferentially use inorganic forms of nitrogen, phosphorus or potassium and will compete with plants for these nutrients, tying up the nutrients in microbial biomass, a process often called immobilization. The balance between immobilization and mineralization processes depends on the balance and availability of major nutrients and organic carbon to soil microorganisms. Natural processes such as lightning strikes may fix atmospheric nitrogen by converting it to nitrogen dioxide (NO2). Denitrification may occur under anaerobic conditions (flooding) in the presence of denitrifying bacteria. Nutrient cations, including potassium and many micronutrients, are held in relatively strong bonds with the negatively charged portions of the soil in a process known as cation exchange.
Phosphorus is a primary factor of soil fertility, as it is a key plant nutrient in the soil. It is essential for cell division and plant development, especially in seedlings and young plants. However, phosphorus is becoming increasingly hard to find, and its reserves are starting to be depleted due to its excessive use as a fertilizer. The widespread use of phosphorus in fertilizers has led to pollution and eutrophication. Recently the term peak phosphorus has been coined, due to the limited occurrence of rock phosphate in the world.
A wide variety of materials have been described as soil conditioners due to their ability to improve soil quality, including biochar, which offers multiple soil health benefits.
Food-waste compost has been found to improve soils more than manure-based compost.
Light and CO2 limitations
Photosynthesis is the process whereby plants use light energy to drive chemical reactions which convert CO2 into sugars. As such, all plants require access to both light and carbon dioxide to produce energy, grow and reproduce.
While plant growth is typically limited by nitrogen, phosphorus and potassium, low levels of carbon dioxide can also act as a limiting factor. Peer-reviewed and published scientific studies have shown that increasing CO2 is highly effective at promoting plant growth up to concentrations of roughly 300 ppm; further increases in CO2 can, to a very small degree, continue to increase net photosynthetic output.
Soil depletion
Soil depletion occurs when the components which contribute to fertility are removed and not replaced, and the conditions which support soil's fertility are not maintained. This leads to poor crop yields. In agriculture, depletion can be due to excessively intense cultivation and inadequate soil management. Depletion may occur through a variety of other effects, including overtillage (which damages soil structure), underuse of nutrient inputs which leads to mining of the soil nutrient bank, and salinization of soil.
Colonial Impacts on Soil Depletion
Soil fertility can be severely challenged when land-use changes rapidly. For example, in Colonial New England, colonists made a number of decisions that depleted the soils, including: allowing herd animals to wander freely, not replenishing soils with manure, and a sequence of events that led to erosion. William Cronon wrote that "...the long-term effect was to put those soils in jeopardy. The removal of the forest, the increase in destructive floods, the soil compaction and close-cropping wrought by grazing animals, ploughing—all served to increase erosion." Cronon continues, explaining, “Where mowing was unnecessary and grazing among living trees was possible, settlers saved labor by simply burning the forest undergrowth...and turning loose their cattle...In at least one ill-favored area, the inhabitants of neighboring towns burned so frequently and graze so intensively that…the timber was greatly injured, and the land became hard to subdue...In the long run, cattle tended to encourage the growth of woody, thorn-bearing plants which they could not eat and which, once established, were very difficult to remove”. These practices were methods of simplifying labor for colonial settlers in new lands when they were not familiar with traditional Indigenous agricultural methods. Those Indigenous communities were not consulted but rather forced out of their homelands so European settlers could commodify their resources. The practice of intensive land burning and turning loose cattle ruined soil fertility and prohibited sustainable crop growth.
While colonists utilized fire to clear land, certain prescribed burning practices are common and valuable to increase biodiversity and in turn, benefit soil fertility. Without consideration of the intensity, seasonality, and frequency of the burns, the conservation of biodiversity and the overall health of the soil can be negatively impacted by fire.
In addition to soil erosion through using too much or too little fire, colonial agriculture also resulted in topsoil depletion. Topsoil depletion occurs when the nutrient-rich organic topsoil, which takes hundreds to thousands of years to build up under natural conditions, is eroded or depleted of its original organic material. The Dust Bowl in the Great Plains of North America is a well-known example: about one-half of the original topsoil of the Great Plains has disappeared since the beginning of agricultural production there in the 1880s. Outside the context of colonialism, topsoil depletion has historically been implicated in the collapse of many past civilizations.
Soil Depletion and Enslavement
As historian David Silkenat explains, the goals of Southern plantation and slave owners, instead of measuring productivity based on outputs per acre, were to maximize the amount of labor that could be extracted from the enslaved workforce. The landscape was seen as disposable, and the African slaves were seen as expendable. Once these Southern farmers forced slaves to leach soils and engage in mass deforestation, they would discard the land and move towards more fertile prospects. The forced slave practices created extensive destruction on the land. The environmental impact included draining swamps, clearing forests for monocropping and to fuel steamships, and introducing invasive species, all leading to fragile ecosystems. In the aftermath, these practices left hillsides eroded, rivers clogged with sterile soil, and native species extinct. Silkenat summarizes this phenomenon of the relationship between enslavement and soil: “Although typically treated separately, slavery and the environment naturally intersect in complex and powerful ways, leaving lasting effects from the period of emancipation through modern-day reckonings with racial justice…the land too fell victim to the slave owner’s lash”.
Global Soil Depletion
One of the most widespread occurrences of soil depletion is in tropical zones where the nutrient content of soils is low. The depletion of soil has affected the state of plant life and crops in agriculture in many countries. In the Middle East, for example, many countries find it difficult to grow produce because of droughts, lack of soil, and lack of irrigation. Three Middle Eastern countries show a decline in crop production; the highest rates of productivity decline are found in hilly and dryland areas.
Many countries in Africa also undergo a depletion of fertile soil. In regions of dry climate like Sudan and the countries that make up the Sahara Desert, droughts and soil degradation are common. Cash crops such as tea, maize, and beans require a variety of nutrients in order to grow healthy. Soil fertility has declined in the farming regions of Africa, and artificial and natural fertilizers have been used to regain the nutrients of ground soil.
Dark Earths
South America
The details of Indigenous societies prior to European colonization in 1492 within the Amazonian regions of South America, particularly the size of the communities and the depth of interactions with the environment, are continually debated. Central to the debate is the influence of Dark Earth. Dark Earth is a type of soil found in the Amazon that has a darker color, higher organic carbon content, and higher fertility than soil in other regions of South America which makes it highly coveted even today. Dark Earth deposits have been found, through ethnographic and archaeological studies, to have been created through ancient Indigenous practices by intentional soil management.
Ethnoarchaeologist Morgan Schmidt outlines how this carbon-rich soil was intentionally created by communities in the Amazon. While Dark Earth, and other anthropic soils, can be found all throughout the world, Amazonian Dark Earth is particularly significant because “it contrasts too sharply with the especially poor fertility of typical highly weathered tropical upland soils in the Amazon”. There is much evidence to suggest that the development of ancient agricultural societies in the Amazon was strongly influenced by the formation of Dark Earth. As a result, Amazonian societies benefitted from the dark earth in terms of agricultural success and enhanced food production. Soil analyses have been completed on the modern and ancient Kuikuro Indigenous Territory in the Upper Xingu River basin in southeastern Amazonia through archaeological and ethnographic research to determine the human relation to the soil. The “results demonstrate the intentional creation of dark earth, highlighting how Indigenous knowledge can provide strategies for sustainable rainforest management”.
Africa
In Egypt, earthworms of the Nile River Valley contributed to the significant fertility of the soils. As a result, Cleopatra declared the earthworm a sacred animal to recognize the animal’s positive impact. No one, including farmers, was “allowed to harm or remove an earthworm for fear of offending the deity of fertility”. In Ghana and Liberia, it is a long-standing practice to combine different types of waste to create fertile soil that is referred to as African Dark Earths. This soil contains high concentrations of calcium, phosphorus, and carbon.
Humans and Soil
Albert Howard is credited as the first Westerner to publish Native techniques of sustainable agriculture. As noted by Howard in 1944, “In all future studies of disease we must, therefore, always begin with the soil. This must be gotten into good condition first of all and then the reaction of the soil, the plant, animal, and man observed. Many diseases will then automatically disappear...Soil fertility is the basis of the public health system of the future...”. Howard connects the health crises of crops to the impacts of livestock and human health, ultimately spreading the message that humans must respect and restore the soil for the benefit of the human and non-human world. He continues that industrial agriculture disrupts the delicate balance of nature and irrevocably robs the soil of its fertility.
Irrigation effects
Irrigation is a process by which crops are watered by man-made means, such as bringing in water from pipes, canals, or sprinklers. Irrigation is used when the natural rainfall patterns of a region are not sufficient to maintain crops. Ancient civilizations heavily relied on irrigation, and today about 18% of the world's cropland is irrigated. The quality of irrigation water is very important for maintaining soil fertility and tilth, and for enabling plants to use more of the soil depth. When soil is irrigated with highly alkaline water, unwanted sodium salts build up in the soil, making the soil's draining capacity very poor, so plant roots cannot penetrate deep into the soil for optimum growth in alkali soils. When soil is irrigated with low pH / acidic water, the useful salts (Ca, Mg, K, P, S, etc.) are removed by water draining from the acidic soil, and in addition aluminium and manganese salts that are harmful to plants are dissolved from the soil, impeding plant growth. When soil is irrigated with high-salinity water, or when sufficient water does not drain out of the irrigated soil, the soil converts into saline soil or loses its fertility. Saline water raises the osmotic pressure that plant roots must overcome, which impedes the uptake of water and nutrients.
Topsoil loss takes place in alkali soils due to erosion by rainwater surface flows or drainage, as these soils form colloids (fine mud) in contact with water. Plants absorb only water-soluble inorganic salts from the soil for their growth. Soil as such does not lose fertility just by growing crops; it loses fertility through the accumulation of unwanted inorganic salts, and the depletion of wanted ones, by improper irrigation and acidic rain water (quantity and quality of water). The fertility of many soils that are not suitable for plant growth can be enhanced many times over, gradually, by providing adequate irrigation water of suitable quality and good drainage from the soil.
| Physical sciences | Pedology | null |
1252036 | https://en.wikipedia.org/wiki/Damask | Damask | Damask (/ˈdæməsk/; Arabic: دمشق) is a woven, reversible patterned fabric. Damasks are woven by periodically reversing the action of the warp and weft threads. The pattern is most commonly created with a warp-faced satin weave and the ground with a weft-faced or sateen weave. Yarns used to create damasks include silk, wool, linen, cotton, and synthetic fibers, but damask is best shown in cotton and linen. Over time, damask has become a broader term for woven fabrics with a reversible pattern, not just silks.
There are a few types of damask: true, single, compound, and twill. True damask is made entirely of silk. Single damask has only one set of warps and wefts and thus is made of up to two colors. Compound damask has more than one set of warps and wefts and can include more than two colors. Twill damasks include a twill-woven ground or pattern.
History
A damask weave is one of the five basic weaving techniques (the others being tabby, twill, lampas, and tapestry) of the Byzantine and Middle Eastern weaving centers of the early Middle Ages. Damask was named after the city of Damascus, Syria, a large trading center on the Silk Road.
Damask in China
In China, draw looms with a large number of heddles were developed to weave damasks with complicated patterns. The Chinese may have produced damasks as early as the Tang dynasty (618–907). Damasks became scarce after the 9th century outside Islamic Spain, but were revived in some places in the 13th century. Trade logs between the British East India Company and China often demonstrate an ongoing trade of Chinese silks, especially damask. Damask is documented as being the heaviest Chinese silk.
Damask in Europe
The word damask first appeared in a Western European language in mid-14th-century French records. Shortly after its appearance in French, damasks were being woven on draw looms in Italy. From the 14th to 16th century, most damasks were woven in one colour with a glossy warp-faced satin pattern against a duller ground. Two-colour damasks had contrasting colour warps and wefts, and polychrome damasks added gold and other metallic threads, or additional colours, as supplemental brocading wefts. Medieval damasks were usually woven in silk, but weavers also produced wool and linen damasks.
Damask and Nomads
In daily nomadic life this form of weaving was generally employed by women, specifically in occupations such as carpet-making. Women collected raw material from pasture animals and dyes from local flora, such as berries, insects, or grasses, to use in production. Each woman would create a specialized pattern sequence and color scheme that aligned with her personal identity and ethnic group. These techniques were passed down generationally from mother to daughter.
Modern usage
In the 19th century, the invention of the Jacquard loom, which was automated with a system of punched cards, made weaving damask faster and cheaper.
Modern damasks are woven on computerized Jacquard looms. Damask weaves are commonly produced in monochromatic (single-colour) weaves in silk, linen or synthetic fibres such as rayon and feature patterns of flowers, fruit and other designs. The long floats of satin-woven warp and weft threads cause soft highlights on the fabric which reflect light differently according to the position of the observer. Damask weaves appear most commonly in table linens and furnishing fabrics, but they are also used for clothing. The damask weave is prevalent in the fashion industry due to its versatility and high-quality finish. Damask is often used for mid-to-high-quality garments and is associated with higher-quality brands and labels.
| Technology | Weaving | null |
1252256 | https://en.wikipedia.org/wiki/Inelastic%20scattering | Inelastic scattering | In chemistry, nuclear physics, and particle physics, inelastic scattering is a process in which the internal states of a particle or a system of particles change after a collision. Often, this means the kinetic energy of the incident particle is not conserved (in contrast to elastic scattering). Additionally, relativistic collisions which involve a transition from one type of particle to another are referred to as inelastic even if the outgoing particles have the same kinetic energy as the incoming ones. Processes which are governed by elastic collisions at a microscopic level will appear to be inelastic if a macroscopic observer only has access to a subset of the degrees of freedom. In Compton scattering, for instance, the two particles in the collision transfer energy, causing a loss of energy in the measured particle.
Electrons
When an electron is the incident particle, the probability of inelastic scattering, depending on the energy of the incident electron, is usually smaller than that of elastic scattering. Thus in the case of gas electron diffraction (GED), reflection high-energy electron diffraction (RHEED), and transmission electron diffraction, because the energy of the incident electron is high, the contribution of inelastic electron scattering can be ignored. Deep inelastic scattering of electrons from protons provided the first direct evidence for the existence of quarks.
Photons
When a photon is the incident particle, there is an inelastic scattering process called Raman scattering. In this scattering process, the incident photon interacts with matter (gas, liquid, and solid) and the frequency of the photon is shifted towards red or blue. A red shift can be observed when part of the energy of the photon is transferred to the interacting matter, where it adds to its internal energy in a process called Stokes Raman scattering. The blue shift can be observed when internal energy of the matter is transferred to the photon; this process is called anti-Stokes Raman scattering.
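For reference, the magnitude of these shifts is conventionally reported as a wavenumber difference between the incident and scattered light (a standard spectroscopic convention, included here for context rather than stated in the text above):

$$\Delta\tilde{\nu} = \frac{1}{\lambda_{\mathrm{incident}}} - \frac{1}{\lambda_{\mathrm{scattered}}}$$

Here Δν̃ is positive for Stokes (red-shifted) scattering and negative for anti-Stokes (blue-shifted) scattering.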
Inelastic scattering is seen in the interaction between an electron and a photon. When a high-energy photon collides with a free electron (more precisely, weakly bound since a free electron cannot participate in inelastic scattering with a photon) and transfers energy, the process is called Compton scattering. Furthermore, when an electron with relativistic energy collides with an infrared or visible photon, the electron gives energy to the photon. This process is called inverse Compton scattering.
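The energy transfer in Compton scattering is captured by the standard wavelength-shift relation (a textbook result, included here for context):

$$\lambda' - \lambda = \frac{h}{m_{e} c}\,\left(1 - \cos\theta\right)$$

where λ and λ′ are the photon wavelengths before and after scattering, θ is the photon scattering angle, and h/(mec) ≈ 2.43 × 10−12 m is the Compton wavelength of the electron.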
Neutrons
Neutrons undergo many types of scattering, including both elastic and inelastic scattering. Whether elastic or inelastic scattering occurs depends on the speed of the neutron, whether fast or thermal, or somewhere in between. It also depends on the nucleus it strikes and its neutron cross section. In inelastic scattering, the neutron interacts with the nucleus and the kinetic energy of the system is changed. This often activates the nucleus, putting it into an excited, unstable, short-lived energy state which causes it to quickly emit some kind of radiation to bring it back down to a stable or ground state. Alpha particles, beta particles, gamma rays, and protons may be emitted. Particles scattered in this type of nuclear reaction may cause the nucleus to recoil in the other direction.
Molecular collisions
Inelastic scattering is common in molecular collisions. Any collision which leads to a chemical reaction will be inelastic, but the term inelastic scattering is reserved for those collisions which do not result in reactions. There is a transfer of energy between the translational mode (kinetic energy) and rotational and vibrational modes.
If the transferred energy is small compared to the incident energy of the scattered particle, one speaks of quasielastic scattering.
| Physical sciences | Particle physics: General | Physics |
1252279 | https://en.wikipedia.org/wiki/Carnosauria | Carnosauria | Carnosauria is an extinct group of carnivorous theropod dinosaurs that lived during the Jurassic and Cretaceous periods.
While Carnosauria was historically considered largely synonymous with Allosauroidea, some recent studies have revived Carnosauria as a clade including both Allosauroidea and Megalosauroidea (which is sometimes recovered as paraphyletic with respect to Allosauroidea), and thus including the majority of non-coelurosaurian members of the theropod clade Tetanurae. Other researchers have found Allosauroidea and Megalosauroidea to be unrelated groups.
Distinctive characteristics of carnosaurs include large eye sockets, a long narrow skull and modifications of the legs and pelvis such as the thigh (femur) being longer than the shin (tibia).
Carnosaurs first appeared in the Middle Jurassic, around 176 mya. The last definite carnosaurs, the carcharodontosaurs, became extinct in the Turonian epoch of the Cretaceous, roughly 90 mya; remains from the late Maastrichtian (70–66 mya) Bauru Group in Brazil that were reported as carcharodontosaurid were later interpreted as those of abelisaurids. The phylogenetically problematic megaraptorans, which may or may not be carnosaurs, became extinct around 66 mya. Unquillosaurus, discovered in rocks dated to 75-70 mya, might potentially also be a carnosaur.
History of study
Carnosauria has traditionally been used as a dumping ground for all large theropods. Even non-dinosaurs, such as the rauisuchian Teratosaurus, were once considered carnosaurs. However, analysis in the 1980s and 1990s revealed that other than size, the group shared very few characteristics, making it polyphyletic. Most former carnosaurs (such as the megalosaurids, the spinosaurids, and the ceratosaurs) were reclassified as more primitive theropods. Others (such as the tyrannosaurids) that were more closely related to birds were placed in Coelurosauria. Modern cladistic analysis defines Carnosauria as those tetanurans sharing a more recent common ancestor with Allosaurus than with modern birds.
Anatomy
Carnosaurs share certain distinctive features, one of which is a triangular-shaped pubic boot. They also have 3 fingers per hand, with the second and third digits being approximately equal in length. The femur is larger than the tibia. Another defining feature of carnosaurs is that the chevron bases on their tails have anterior and posterior bone growth. The largest carnosaurs can reach up to 10 meters in length. The length of the body from the tail to the hip is between 54% and 62% of the total body length, and the length of the body from the head to the hip is between 38% and 46% of the total body length. Carnosaurs scaled their limbs relative to their body in a way similar to how other large theropods, like the tyrannosaurids, did. During the Cretaceous, some carnosaurs grew to sizes similar to those of the largest tyrannosaurids. These large carnosaurs lived in the same time period as the other large theropods found in the upper Morrison and Tendaguru formations.
Carnosaurs maintained a similar center of mass across all sizes, which is found to be between 37% and 58% of the femoral length anterior to the hip. Other similarities across all carnosaurs include the structure of their hind limb and pelvis. The pelvis in particular is thought to be built to reduce stress regardless of body size. In particular, the way the femur is inclined reduces bending and torsion stress. Furthermore, like other animals with tails, carnosaurs possess a caudofemoralis longus (CFL) muscle that allowed them to flex their tails. Larger carnosaurs are found to have a lower CFL muscle-to-body-mass proportion than smaller carnosaurs.
In addition to body similarities, most carnosaurs, especially most allosauroids, are also united by certain skull features. Some of the defining ones include a smaller mandibular fenestra, a short quadrate bone, and a short connection between the braincase and the palate. Allosauroid skulls are about 2.5 to 3 times longer than they are tall. Their narrow skulls, along with their serrated teeth, allow carnosaurs to better slice flesh off of their prey. Carnosaur teeth are flat and have equally-sized denticles on both edges. The flat sides of the teeth face the sides of the skull, while the edges align on the same plane as the skull. From analyses of the skulls of different carnosaurs, the volume of the cranial vault ranges from 95 milliliters in Sinraptor to 250 milliliters in Giganotosaurus.
Allosaurus and Concavenator preserve skin impressions showing their integument. In Allosaurus, skin impressions showing small scales measuring 1-3 mm are known from the side of the torso and the mandible. Another skin impression from the ventral side of the neck preserves broad scutate scales. An impression from the base of the tail preserves larger scales around 2 cm in diameter. However, it has been noted that these may be sauropod scales due to their similarity and the fact that non-theropod remains were discovered associated with the tail of this particular Allosaurus specimen. Concavenator preserves rectangular scutate scales on the underside of the tail, as well as scutate scales on the feet along with small scales. A series of knobs on the ulna of Concavenator have been interpreted by some authors as quill knobs theorized to have supported primitive quills; however this interpretation has been questioned, and they have been suggested to represent traces of ligaments instead.
Classification
Within Carnosauria, there is a slightly more exclusive clade, Allosauroidea. The clade Allosauroidea was originally named by Othniel Charles Marsh, but it was given a formal definition by Phil Currie and Zhao, and later used as a stem-based taxon by Paul Sereno in 1997. Sereno was the first to provide a stem-based definition for the Allosauroidea in 1998, defining the clade as "All neotetanurans closer to Allosaurus than to Neornithes." Kevin Padian used a node-based definition in his 2007 study which defined the Allosauroidea as Allosaurus, Sinraptor, their most recent common ancestor, and all of its descendants. Thomas R. Holtz and colleagues and Phil Currie and Ken Carpenter, among others, have followed this node-based definition. Depending on the study, Carnosauria and Allosauroidea are sometimes considered synonymous. In such cases, several researchers have elected to use Allosauroidea over Carnosauria.
Conventional phylogeny
The following family tree illustrates the position of Carnosauria within Theropoda. It is a simplified version of the tree presented in a synthesis of the relationships of the major theropod groups based on various studies conducted in the 2010s.
The cladogram presented below illustrates the interrelationships between the four major groups (or families) of carnosaurs. It is a simplified version of the tree presented in the 2012 analysis by Carrano, Benson and Sampson after they excluded three "wildcard" taxa: Poekilopleuron, Xuanhanosaurus, and Streptospondylus.
Alternative hypotheses
The composition of the clade Carnosauria has been controversial among scientists since at least 2010. Different clades have been recovered by different authors, and a scientific consensus has yet to emerge.
One such clade is Neovenatoridae, a proposed clade of carcharodontosaurian carnosaurs uniting some primitive members of the group such as Neovenator with the Megaraptora, a group of theropods with controversial affinities. Other studies recover megaraptorans as basal coelurosaurs unrelated to carcharodontosaurs. Other theropods with uncertain affinities such as Gualicho, Chilantaisaurus and Deltadromeus are also sometimes included.
Neovenatoridae, as formulated by these authors, contained Neovenator, Chilantaisaurus, and a newly named clade: Megaraptora. Megaraptora contained Megaraptor, Fukuiraptor, Orkoraptor, Aerosteon, and Australovenator. These genera were allied with the other neovenatorids on the basis of several features spread throughout the skeleton, particularly the large amount of pneumatization present. The pneumatic ilium of Aerosteon was particularly notable, as Neovenator was the only other taxon known to have that trait at the time. Neovenatorids were envisioned as the latest-surviving allosauroids, which were able to persist well into the Late Cretaceous due to their low profile and coelurosaur-like adaptations. Later studies supported this hypothesis, such as Carrano, Benson & Sampson's large study of tetanuran relationships in 2012, and Zanno & Makovicky's description of the newly discovered theropod Siats in 2013, which they placed within Megaraptora. Fukuiraptor and Australovenator were consistently found to be close relatives of each other; this was also the case for Aerosteon and Megaraptor. Orkoraptor was a "wildcard" taxon difficult to place with certainty.
Phylogenetic studies conducted by Benson, Carrano and Brusatte (2010) and Carrano, Benson and Sampson (2012) recovered the group Megaraptora and a few other taxa as members of the Neovenatoridae. This would make neovenatorids the latest-surviving allosauroids; at least one megaraptoran, Orkoraptor, lived near the end of the Mesozoic era, dating to the early Maastrichtian stage of the latest Cretaceous period, about 70 million years ago.
The cladogram below follows a 2016 analysis by Sebastián Apesteguía, Nathan D. Smith, Rubén Juarez Valieri, and Peter J. Makovicky based on the dataset of Carrano et al. (2012).
Subsequent analyses have contradicted the above hypothesis. Novas and colleagues conducted an analysis in 2012 which found that Neovenator was closely related to carcharodontosaurids, and simultaneously found Megaraptor and related genera to be coelurosaurs closely related to tyrannosaurids. However, Novas et al. subsequently found that megaraptorans lacked most of the key features in the hands of derived coelurosaurs including Guanlong and Deinonychus. Instead, their hands retain a number of primitive characteristics seen in basal tetanurans such as Allosaurus. Nevertheless, there are still a number of other traits that support megaraptorans as members of the Coelurosauria. Other taxa like Deltadromeus and Gualicho have been alternatively recovered as coelurosaurs or noasaurid ceratosaurs.
Several recent analyses do not find a relationship between Neovenator and megaraptorans, which suggests that the latter were not carnosaurs or allosauroids. As a result of these findings, and the fact that Neovenator itself is the only uncontroversial neovenatorid, the family Neovenatoridae sees little use in recent publications.
In 2019, Rauhut and Pol described Asfaltovenator vialidadi, a basal allosauroid displaying a mosaic of primitive and derived features seen within Tetanurae. Their phylogenetic analysis found traditional Megalosauroidea to represent a basal grade of carnosaurs, paraphyletic with respect to Allosauroidea. Because the authors amended the definition of Allosauroidea to include all theropods that are closer to Allosaurus fragilis than to either Megalosaurus bucklandii or Neornithes, the Piatnitzkysauridae was found to fall within Allosauroidea. A cladogram displaying the relationships they recovered is shown below.
The relationship between allosauroids and megalosauroids was also supported by a provisional analysis published by Andrea Cau in 2021. This publication is also the origin of the hypothesis that several "compsognathids" from Europe may have been juvenile carnosaurs. The results of this analysis differ from those of Rauhut and Pol in that Cau finds Megalosauroidea to be monophyletic and the sister-taxon of Allosauroidea within Carnosauria. An abbreviated version of this phylogeny is shown below.
In 2024, Andrea Cau published a paper presenting an analysis of theropod ontogeny which suggested that several theropods that were traditionally considered coelurosaurs may be juvenile allosauroids or megalosauroids. These included Aorun, Juravenator, Sciurumimus, Scipionyx, and Compsognathus. This hypothesis has not been universally accepted, and it notably conflicts with Cau's 2021 publication by finding Megalosauroidea as monophyletic and the sister taxon of Avetheropoda, a grouping which includes both carnosaurs (or allosauroids) and coelurosaurs. Notably, this analysis also treats the abelisauroid genus Kryptops as a chimera and suggests that the postcranial remains of this taxon belong to a carnosaur (possibly Sauroniops). An abbreviated version of the cladogram from that analysis is shown below.
Paleobiology and behavior
Multiple severe injuries have been found on allosauroid remains, which implies that allosauroids were frequently in dangerous situations and supports the hypothesis of an active, predatory lifestyle. Despite the multitude of injuries, only a few of those injuries show signs of infection. For those injuries that did become infected, the infections were usually local to the site of the injury, implying that the allosauroid immune response was able to quickly stop any infection from spreading to the rest of the body. This type of immune response is similar to modern reptilian immune responses; reptiles secrete fibrin near infected areas and localize the infection before it can spread via the bloodstream.
The injuries were also found to be mostly healed. This healing may indicate that allosauroids had an intermediate metabolic rate, similar to non-avian reptiles, which means they require fewer nutrients in order to survive. A lower nutrient requirement means allosauroids do not need to undertake frequent hunts, which lowers their risk of sustaining traumatic injuries.
Although the remains of other large theropods like tyrannosaurids bear evidence of fighting within their species and with other predators, the remains of allosauroids do not bear much evidence of injuries from theropod combat. Most notably, despite a good fossil record, allosauroid skulls lack the distinctive face-biting wounds that are common in tyrannosaurid skulls, leaving open the question of whether allosauroids engaged in interspecies and intraspecies fighting. Remains of the allosauroid Mapusaurus are also often found in groups, which could imply the existence of social behavior. While there are alternative explanations for the groupings, like predator traps or habitat reduction due to drought, the frequency of finding allosauroid remains in groups supports the social animal theory. As social animals, allosauroids would share the burden of hunting, allowing injured members of the pack to recover faster.
Paleobiogeography
The paleobiogeographical history of allosauroids closely follows the order that Pangaea separated into the modern continents. By the Middle Jurassic period, tetanurans had spread to every continent and diverged into the allosauroids and the coelurosaurs. Allosauroids first appeared in the Middle Jurassic period and were the first giant taxa (weighing more than 2 tons) in theropod history. Along with members of the superfamily Megalosauroidea, allosauroids were the apex predators that occupied the Middle Jurassic to the early Late Cretaceous periods. Allosauroids have been found in North America, South America, Europe, Africa, and Asia. Specifically, a world-wide dispersal of carcharodontosaurids likely happened in the Early Cretaceous. It has been hypothesized that the dispersal involved Italy's Apulia region (the “heel” of the Italian peninsula), which was connected to Africa by a land bridge during the Early Cretaceous period; various dinosaur footprints found in Apulia support this theory.
Allosauroids were present in both the northern and southern continents during the Jurassic and Early Cretaceous, but they were later displaced by the tyrannosauroids in North America and Asia during the Late Cretaceous. This is likely due to regional extinction events, which, along with increased species isolation through the severing of land connections between the continents, differentiated many dinosaurs in the Late Cretaceous.
| Biology and health sciences | Theropods | Animals |
1252991 | https://en.wikipedia.org/wiki/Orbital%20hybridisation | Orbital hybridisation | In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies.
History and uses
Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C–H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called hybrid orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C–H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures.
Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity.
Overview
Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen.
Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N(s + √3 pσ), where N is a normalisation constant (here 1/2) and pσ is a p orbital directed along the C–H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is √3 in this example. Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character, or the weight of the p component, is N2λ2 = 3/4.
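As a sanity check on these numbers, the normalisation constant and the s/p weights can be computed directly from the form of the wavefunction given above. The short Python sketch below assumes nothing beyond the coefficient ratio λ = √3:

    import math

    # sp3 hybrid: psi = N * (s + lam * p_sigma), with lam = sqrt(3)
    lam = math.sqrt(3)

    # Normalisation requires N**2 * (1 + lam**2) = 1
    N = 1 / math.sqrt(1 + lam**2)
    print(N)  # 0.5, matching the value of 1/2 quoted above

    # Weights of the s and p components in the electron density
    s_weight = N**2            # 0.25 -> 25% s character
    p_weight = N**2 * lam**2   # 0.75 -> 75% p character
    print(s_weight, p_weight)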
Types of hybridisation
sp3
Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms.
Carbon's ground state configuration is 1s2 2s2 2p2, with two singly occupied 2p orbitals. This configuration suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H–C–H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation.
The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals.
The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds.
According to quantum mechanics the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids.
In CH4, four sp3 hybrid orbitals are overlapped by hydrogen 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength.
In orbital diagram terms, the excited-state configuration 1s2 2s1 2p3 thus translates into four equivalent, singly occupied sp3 hybrid orbitals.
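Written out as linear combinations, the four sp3 hybrids follow a standard sign pattern over the s, px, py, and pz orbitals. The sketch below uses the textbook coefficient matrix (an assumption here, since the original diagrams are not reproduced) and verifies that the four hybrids are orthonormal and that their p-vector parts are separated by the tetrahedral angle of about 109.47°:

    import numpy as np

    # Rows: coefficients of (s, px, py, pz) for each sp3 hybrid
    H = 0.5 * np.array([
        [1,  1,  1,  1],
        [1,  1, -1, -1],
        [1, -1,  1, -1],
        [1, -1, -1,  1],
    ])

    # Orthonormal: H @ H.T should be the identity matrix
    print(np.allclose(H @ H.T, np.eye(4)))  # True

    # Angle between the p-vector parts of two hybrids
    p1, p2 = H[0, 1:], H[1, 1:]
    cos_theta = p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))
    print(np.degrees(np.arccos(cos_theta)))  # ~109.47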
sp2
Other carbon compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons.
For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised.
Mixing the 2s orbital with the 2px and 2py orbitals forms a total of three sp2 orbitals, with one p orbital remaining. In ethene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data.
sp
The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridisation. In this model, the 2s orbital is mixed with only one of the three p orbitals, resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
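The same coefficient-vector check recovers the interorbital angles for sp2 and sp hybrids; the combinations below are again the standard textbook ones rather than anything given in this article:

    import numpy as np

    def angle_deg(h1, h2):
        """Angle between the p-vector parts (coefficients after the s term)."""
        p1, p2 = h1[1:], h2[1:]
        c = p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    # sp2 hybrids: coefficients of (s, px, py)
    sp2 = np.array([
        [1/np.sqrt(3),  np.sqrt(2/3),  0.0],
        [1/np.sqrt(3), -1/np.sqrt(6),  1/np.sqrt(2)],
        [1/np.sqrt(3), -1/np.sqrt(6), -1/np.sqrt(2)],
    ])
    print(angle_deg(sp2[0], sp2[1]))  # ~120 degrees

    # sp hybrids: coefficients of (s, pz)
    sp = np.array([
        [1/np.sqrt(2),  1/np.sqrt(2)],
        [1/np.sqrt(2), -1/np.sqrt(2)],
    ])
    print(angle_deg(sp[0], sp[1]))    # 180 degrees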
Hybridisation and molecule shape
Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories.
spx hybridisation
Because the valence orbitals of main group elements are the one s and three p orbitals, which correspond to the octet rule, spx hybridisation is used to model the shapes of these molecules.
spxdy hybridisation
Because the valence orbitals of transition metals are the five d, one s, and three p orbitals, which correspond to the 18-electron rule, spxdy hybridisation is used to model the shapes of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridisation due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons.
sdx hybridisation
In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules.
Hybridisation of hypervalent molecules
Octet expansion
In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations.
In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding.
Resonance
In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule.
Hybridisation in computational VB theory
While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined a priori but is instead variationally optimized to find the lowest energy solution and then reported. This means that all artificial constraints, specifically two constraints, on orbital hybridisation are lifted:
that hybridisation is restricted to integer values (isovalent hybridisation)
that hybrid orbitals are orthogonal to one another (hybridisation defects)
This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values.
Isovalent hybridisation
Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described.
The hybridization of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed towards electropositive substituents".
For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character; it does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen, since the 2p subshell of oxygen only contains three p orbitals.
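The 104.5° angle and the sp4 description are linked by Coulson's orthogonality relation for two equivalent hybrids, 1 + λ2 cos θ = 0. A minimal sketch, assuming only that relation and the interorbital angle quoted above:

    import math

    # Coulson: for two equivalent orthogonal sp^x hybrids at angle theta,
    #   1 + x * cos(theta) = 0  ->  x = -1 / cos(theta)
    theta = math.radians(104.5)  # interorbital angle in water, from the text
    x = -1 / math.cos(theta)
    print(x)  # ~4.0, i.e. sp4 hybrids

    # s character = 1 / (1 + x), p character = x / (1 + x)
    print(1 / (1 + x), x / (1 + x))  # ~0.20 and ~0.80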
Hybridisation defects
Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals, which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3, consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg.
However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane.
Photoelectron spectra
One misconception concerning orbital hybridization is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule, which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state is an ionization energy; this yields two values, in agreement with experimental results.
Localized vs canonical molecular orbitals
Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is therefore equivalent to the delocalized orbital description for the ground state total energy and electron density, as well as for the molecular geometry that corresponds to the minimum total energy value.
Two localized representations
Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals.
For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital).
| Physical sciences | Bond structure | Chemistry |
1253292 | https://en.wikipedia.org/wiki/Cefalexin | Cefalexin | Cefalexin, also spelled cephalexin, is an antibiotic that can treat a number of bacterial infections. It kills gram-positive and some gram-negative bacteria by disrupting the growth of the bacterial cell wall. Cefalexin is a β-lactam antibiotic within the class of first-generation cephalosporins. It works similarly to other agents within this class, including intravenous cefazolin, but can be taken by mouth.
Cefalexin can treat certain bacterial infections, including those of the middle ear, bone and joint, skin, and urinary tract. It may also be used for certain types of pneumonia and strep throat and to prevent bacterial endocarditis. Cefalexin is not effective against infections caused by methicillin-resistant Staphylococcus aureus (MRSA), most Enterococcus, or Pseudomonas. Like other antibiotics, cefalexin cannot treat viral infections, such as the flu, common cold or acute bronchitis. Cefalexin can be used in those who have mild or moderate allergies to penicillin. However, it is not recommended in those with severe penicillin allergies.
Common side effects include stomach upset and diarrhea. Allergic reactions or infections with Clostridioides difficile, a cause of diarrhea, are also possible. Use during pregnancy or breastfeeding does not appear to be harmful to the fetus. It can be used in children and those over 65 years of age. Those with kidney problems may require a decrease in dose.
Cefalexin was developed in 1967. It was first marketed in 1969 under the brand name Keflex. It is available as a generic medication. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 101st most commonly prescribed medication in the United States, with more than 6 million prescriptions. In Canada, it was the fifth most common antibiotic used in 2013. In Australia, it was one of the top 10 most prescribed medications between 2017 and 2023.
Medical uses
Cefalexin can treat a number of bacterial infections including otitis media, streptococcal pharyngitis, bone and joint infections, pneumonia, cellulitis, and urinary tract infections. It may be used to prevent bacterial endocarditis. It can also be used for the prevention of recurrent urinary-tract infections.
Cefalexin does not treat methicillin-resistant Staphylococcus aureus infections.
Cefalexin is a useful alternative to penicillins in patients with penicillin intolerance. For example, penicillin is the treatment of choice for respiratory tract infections caused by Streptococcus, but cefalexin may be used as an alternative in penicillin-intolerant patients. Caution must be exercised when administering cephalosporin antibiotics to penicillin-sensitive patients, because cross-sensitivity with β-lactam antibiotics has been reported in up to 10% of patients with a documented penicillin allergy.
Pregnancy and breastfeeding
Cefalexin is in pregnancy category A in Australia, meaning that no evidence of harm has been found after use by many pregnant women. Use during breastfeeding is generally safe.
Adverse effects
The most common adverse effects of cefalexin, like other oral cephalosporins, are gastrointestinal (stomach area) disturbances and hypersensitivity reactions. Gastrointestinal disturbances include nausea, vomiting, and diarrhea, the latter being the most common. Hypersensitivity reactions include skin rashes, urticaria, fever, and anaphylaxis. Pseudomembranous colitis and Clostridioides difficile have been reported with use of cefalexin. Less common and more serious side effects include bruising of the skin and yellowing of the skin or eye whites.
Signs and symptoms of an allergic reaction include rash, itching, swelling, trouble breathing, or red, blistered, swollen, or peeling skin. Overall, cefalexin allergy occurs in less than 0.1% of patients. Evidence suggests that it is seen in 1% to 10% of patients with a penicillin allergy.
Interactions
Like other β-lactam antibiotics, renal excretion of cefalexin is delayed by probenecid. It is also not recommended to take cefalexin with dofetilide, live cholera vaccine, warfarin, or cholestyramine. Alcohol consumption reduces the rate at which it is absorbed. Cefalexin also interacts with metformin, an antidiabetic drug, which can lead to higher concentrations of metformin in the body. Histamine H2 receptor antagonists like cimetidine and ranitidine may reduce the efficacy of cefalexin by delaying its absorption and altering its antimicrobial pharmacodynamics. Zinc and zinc supplements also interact with cefalexin and may reduce the amount of cefalexin in the body.
Pharmacology
Mechanism of action
Cefalexin is a β-lactam antibiotic of the cephalosporin family. It is bactericidal and acts by inhibiting synthesis of the peptidoglycan layer of the bacterial cell wall. As cefalexin closely resembles d-alanyl-d-alanine, an amino acid ending on the peptidoglycan layer of the cell wall, it can irreversibly bind to the active site of penicillin-binding proteins (PBPs), which are essential for the synthesis of the cell wall. It is most active against gram-positive cocci, and has moderate activity against some gram-negative bacilli. However, some bacterial cells have the enzyme β-lactamase, which hydrolyzes the β-lactam ring, rendering the drug inactive. This contributes to antibacterial resistance towards cefalexin.
Pharmacokinetics
Cefalexin is rapidly and almost completely absorbed from the gastrointestinal tract with oral administration. Absorption is slightly reduced when it is taken with food and the medication can be taken without regard for meals. Peak levels of cefalexin occur about 1 hour after administration. Maximal levels of cefalexin increase approximately linearly over a dose range of 250 to 1,000 mg.
Like most other cephalosporins, cefalexin is not metabolized or otherwise inactivated in the body.
The elimination half-life of cefalexin is approximately 30 to 60 minutes in people with normal renal function. Therapeutic levels of cefalexin with oral administration are maintained for 6 to 8 hours. More than 90% of cefalexin is excreted unchanged in the urine within 8 hours.
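These figures describe ordinary first-order elimination. As an illustration only (a generic one-compartment sketch, not dosing guidance; the one-hour half-life is simply the upper end of the range quoted above), the fraction of the peak level remaining after a given time can be computed as:

    import math

    def fraction_remaining(t_hours, half_life_hours):
        """First-order decay: C(t)/C0 = exp(-k*t), with k = ln(2)/half-life."""
        k = math.log(2) / half_life_hours
        return math.exp(-k * t_hours)

    # With a 1-hour half-life, under 1% of the peak level remains by 8 hours,
    # consistent with most of the drug being excreted within that window.
    for t in (1, 4, 8):
        print(t, round(fraction_remaining(t, 1.0), 4))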
Society and culture
It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies cefalexin as highly important for human medicine.
Names
Cefalexin is the International Nonproprietary Name (INN) and the Australian Approved Name (AAN), while cephalexin is the British Approved Name (BAN) and the United States Adopted Name (USAN). Brand names for cefalexin include Keflex, Acfex, Cephalex, Ceporex, L-Xahl, Medoxine, Ospexin, Torlasporin, Bio-Cef, Panixine DisperDose, and Novo-Lexin.
Veterinary uses
Dogs
According to Plumb's Veterinary Medication Guides, cefalexin can be used in treating skin, respiratory tract, and urinary tract infections. Specifically, it can treat pyoderma in dogs. The U.S. Food and Drug Administration (FDA) has approved it for use in humans and dogs but not for other species. Like other drugs approved for human use, cefalexin may be prescribed by veterinarians for animals in certain situations.
Cefalexin (Lexylan) is indicated for the treatment of cattle, dogs, and cats in the European Union.
| Biology and health sciences | Antibiotics | Health |
19653842 | https://en.wikipedia.org/wiki/Organism | Organism | An organism is any living thing that functions as an individual. Such a definition raises more problems than it solves, not least because the concept of an individual is also difficult. Many criteria, few of them widely accepted, have been proposed to define what an organism is. Among the most common is that an organism has autonomous reproduction, growth, and metabolism. This would exclude viruses, despite the fact that they evolve like organisms. Other problematic cases include colonial organisms; a colony of eusocial insects is organised adaptively, and has germ-soma specialisation, with some insects reproducing, others not, like cells in an animal's body. The body of a siphonophore, a jelly-like marine animal, is composed of organism-like zooids, but the whole structure looks and functions much like an animal such as a jellyfish, the parts collaborating to provide the functions of the colonial organism.
The evolutionary biologists David Queller and Joan Strassmann state that "organismality", the qualities or attributes that define an entity as an organism, has evolved socially as groups of simpler units (from cells upwards) came to cooperate without conflicts. They propose that cooperation should be used as the "defining trait" of an organism. This would treat many types of collaboration, including the fungus/alga partnership of different species in a lichen, or the permanent sexual partnership of an anglerfish, as an organism.
Etymology
The term "organism" (from the Ancient Greek , derived from , meaning , , or ) first appeared in the English language in the 1660s with the now-obsolete meaning of an organic structure or organization. It is related to the verb "organize". In his 1790 Critique of Judgment, Immanuel Kant defined an organism as "both an organized and a self-organizing being".
Whether criteria exist, or are needed
Among the criteria that have been proposed for being an organism are:
autonomous reproduction, growth, and metabolism
noncompartmentability – structure cannot be divided without losing functionality. Richard Dawkins stated this as "the quality of being sufficiently heterogeneous in form to be rendered non-functional if cut in half". However, many organisms can be cut into pieces which then grow into whole organisms.
individuality – the entity has simultaneous holdings of genetic uniqueness, genetic homogeneity and autonomy
an immune response, separating self from foreign
"anti-entropy", the ability to maintain order, a concept first proposed by Erwin Schrödinger; or in another form, that Claude Shannon's information theory can be used to identify organisms as capable of self-maintaining their information content
Other scientists think that the concept of the organism is inadequate in biology; that the concept of individuality is problematic; and, from a philosophical point of view, they question whether such a definition is necessary.
Problematic cases include colonial organisms: for instance, a colony of eusocial insects fulfills criteria such as adaptive organisation and germ-soma specialisation. If such criteria make the colony an organism, then the same argument, or a criterion of high co-operation and low conflict, would include some mutualistic partnerships (e.g. lichens) and sexual partnerships (e.g. anglerfish) as organisms. If group selection occurs, then a group could be viewed as a superorganism, optimized by group adaptation.
Another view is that attributes like autonomy, genetic homogeneity and genetic uniqueness should be examined separately, rather than requiring that an organism possess all of them. On this view, there are multiple dimensions to biological individuality, resulting in several types of organism.
Organisms at differing levels of biological organisation
Differing levels of biological organisation give rise to potentially different understandings of the nature of organisms. A unicellular organism is a microorganism such as a protist, bacterium, or archaean, composed of a single cell, which may contain functional structures called organelles. A multicellular organism such as an animal, plant, fungus, or alga is composed of many cells, often specialised. A colonial organism such as a siphonophore is a being which functions as an individual but is composed of communicating individuals. A superorganism is a colony, such as of ants, consisting of many individuals working together as a single functional or social unit. A mutualism is a partnership of two or more species which each provide some of the needs of the other. A lichen consists of fungi and algae or cyanobacteria, with a bacterial microbiome; together, they are able to flourish as a kind of organism, the components having different functions, in habitats such as dry rocks where neither could grow alone. The evolutionary biologists David Queller and Joan Strassmann state that "organismality" has evolved socially, as groups of simpler units (from cells upwards) came to cooperate without conflicts. They propose that cooperation should be used as the "defining trait" of an organism.
Samuel Díaz‐Muñoz and colleagues (2016) accept Queller and Strassmann's view that organismality can be measured wholly by degrees of cooperation and of conflict. They state that this situates organisms in evolutionary time, so that organismality is context dependent. They suggest that highly integrated life forms, which are not context dependent, may evolve through context-dependent stages towards complete unification.
Boundary cases
Viruses
Viruses are not typically considered to be organisms, because they are incapable of autonomous reproduction, growth, metabolism, or homeostasis. Although viruses have a few enzymes and molecules like those in living organisms, they have no metabolism of their own; they cannot synthesize the organic compounds from which they are formed. In this sense, they are similar to inanimate matter. Viruses have their own genes, and they evolve. Thus, an argument that viruses should be classed as living organisms is their ability to undergo evolution and replicate through self-assembly. However, some scientists argue that viruses neither evolve nor self-reproduce. Instead, viruses are evolved by their host cells, meaning that there was co-evolution of viruses and host cells. If host cells did not exist, viral evolution would be impossible. As for reproduction, viruses rely on hosts' machinery to replicate. The discovery of viruses with genes coding for energy metabolism and protein synthesis fuelled the debate about whether viruses are living organisms, but the genes have a cellular origin. Most likely, they were acquired through horizontal gene transfer from viral hosts.
There is an argument for viewing viruses as cellular organisms. Some researchers perceive viruses not as virions alone, which they believe are just spores of an organism, but as a virocell: an ontologically mature viral organism that has a cellular structure. Such a virus is the result of the infection of a cell and shows all of the major physiological properties of other organisms (metabolism, growth, and reproduction), and is therefore, on this view, effectively alive.
Organism-like colonies
The philosopher Jack A. Wilson examines some boundary cases to demonstrate that the concept of organism is not sharply defined. In his view, sponges, lichens, siphonophores, slime moulds, and eusocial colonies such as those of ants or naked mole-rats all lie in the boundary zone between being definite colonies and definite organisms (or superorganisms).
Synthetic organisms
Scientists and bio-engineers are experimenting with different types of synthetic organism, ranging from chimaeras composed of cells from two or more species and cyborgs including electromechanical limbs, to hybrots containing both electronic and biological elements and other combinations of systems that have variously evolved and been designed.
An evolved organism takes its form by the partially understood mechanisms of evolutionary developmental biology, in which the genome directs an elaborated series of interactions to produce successively more elaborate structures. The existence of chimaeras and hybrids demonstrates that these mechanisms are "intelligently" robust in the face of radically altered circumstances at all levels from molecular to organismal.
Synthetic organisms already take diverse forms, and their diversity will increase. What they all have in common is a teleonomic or goal-seeking behaviour that enables them to correct errors of many kinds so as to achieve whatever result they are designed for. Such behaviour is reminiscent of intelligent action by organisms; intelligence is seen as an embodied form of cognition.
| Biology and health sciences | Biology | null |
19653902 | https://en.wikipedia.org/wiki/Weed | Weed | A weed is a plant considered undesirable in a particular situation, growing where it conflicts with human preferences, needs, or goals. Plants with characteristics that make them hazardous, aesthetically unappealing, difficult to control in managed environments, or otherwise unwanted in farm land, orchards, gardens, lawns, parks, recreational spaces, residential and industrial areas, may all be considered weeds. The concept of weeds is particularly significant in agriculture, where the presence of weeds in fields used to grow crops may cause major losses in yields. Invasive species, plants introduced to an environment where their presence negatively impacts the overall functioning and biodiversity of the ecosystem, may also sometimes be considered weeds.
Taxonomically, the term "weed" has no botanical significance, because a plant that is a weed in one context is not a weed when growing in a situation where it is wanted. Some plants that are widely regarded as weeds are intentionally grown in gardens and other cultivated settings. For this reason, some plants are sometimes called beneficial weeds. Similarly, volunteer plants from a previous crop are regarded as weeds when growing in a subsequent crop. Thus, alternative nomenclature for the same plants might be hardy pioneers, cosmopolitan species, volunteers, "spontaneous urban vegetation," etc.
Although whether a plant is a weed depends on context, plants commonly defined as weeds broadly share biological characteristics that allow them to thrive in disturbed environments and to be particularly difficult to destroy or eradicate. In particular, weeds are adapted to thrive under human management in the same way as intentionally grown plants. Since the origins of agriculture on Earth, agricultural weeds have co-evolved with human crops and agricultural systems, and some have been domesticated into crops themselves after their fitness in agricultural settings became apparent.
More broadly, the term "weed" is occasionally applied pejoratively to species outside the plant kingdom, species that can survive in diverse environments and reproduce quickly; in this sense it has even been applied to humans.
Weed control is important in agriculture and horticulture. Methods include hand cultivation with hoes, powered cultivation with cultivators, smothering with mulch or soil solarization, lethal wilting with high heat, burning, or chemical attack with herbicides and cultural methods such as crop rotation and fallowing land to reduce the weed population.
History
It has long been assumed that weeds, in the sense of rapidly-evolving plants taking advantage of human-disturbed environments, evolved in response to the Neolithic agricultural revolution approximately 12,000 years ago. However, researchers have found evidence of "proto-weeds" behaving in similar ways at Ohalo II, a 23,000-year-old archeological site in Israel.
The idea of "weeds" as a category of undesirable plant has not been universal throughout history. Before 1200 A.D., little evidence exists of concern with weed control or of agricultural practices solely intended to control weeds. A possible reason for this is that for much of human history, women and children were an abundant source of cheap labor to control weeds, and not directly acknowledged. Weeds are assumed to have existed since the beginning of agriculture, and accepted as an "inevitable nuisance."
Though the plants are not named using a specific term denoting a "weed" in the contemporary sense, plants that may be interpreted as "weeds" are referenced in the Bible.
Some early Roman writers referenced weeding activities in agricultural fields, but weed control in the pre-modern era was probably an incidental effect of plowing. Ancient Egyptians, Assyrians, and Sumerians had no specific word for "weeds," seeing all plants as having some use. The English word "weed" can be traced back to the Old English weod, which refers to woad, rather than a category of plant as in the modern usage; in early medieval European herbals, each plant is regarded as having its own "virtues".
By the sixteenth century, the concept of a "weed" was better defined as a "noxious" or undesirable type of plant, as referenced metaphorically in William Shakespeare's works; one example of a Shakespearean reference to weeds is found in Sonnet 69.
In London during this period, poor women were paid low wages to weed gardens and courtyards.
After the Reformation, Christian theology that emphasized the degradation of nature after the Fall of Man, and humankind's role and duty to dominate and subdue nature, became more developed and widespread. Various European writers designated certain plants as "vermin" and "filth," though many plants identified as such were valued by gardeners or by herbalists and apothecaries, and some questioned the idea that any plant could be without purpose or value. Laws mandating the control of weeds emerged as early as the seventeenth century; in 1691 a law in New York required the removal of "poysonous and Stincking Weeds" in front of houses.
In the nineteenth century, manual labor was used to control weeds in European towns and cities, and chemical methods of weed control emerged. For example, a French journal in 1831 documented a mixture of sulfur, lime and water boiled in an iron cauldron as an effective herbicide to prevent grass from growing among cobblestones.
The cultural association between weeds and moral or spiritual degradation persisted into the late nineteenth century in American cities. Urban expansion and development created ideal habitats for weeds in nineteenth-century America. Reformers consequently saw weeds as a part of the larger problem of filth, disease, and moral corruption that plagued urban environments, and weeds were seen as a refuge for "tramps" and other criminal or undesirable people. The St. Louis Post-Dispatch credited weeds with causing diphtheria, scarlet fever, and typhoid. In St. Louis between 1905 and 1910, weeds came to be viewed as a major public health hazard, believed to cause typhoid and malaria, and legal precedents for weed control were set that helped facilitate the adoption of weed control laws throughout the country.
Ecological significance
"Weed" as a category of plant overlaps with the closely related concepts of ruderal and pioneer species. Pioneer species are specifically adapted to disturbed environments, where the existing plant and soil community has been disrupted or damaged in some way. Adaptation to disturbance can give weeds advantages over desirable crops, pastures, or ornamental plants. The nature of the habitat and its disturbances will affect or even determine which types of weed communities become dominant. In weed ecology some authorities speak of the relationship between "the three Ps": plant, place, perception. These have been very variously defined, but the weed traits listed by H.G. Baker are widely cited.
Examples of such ruderal or pioneer species include plants that are adapted to naturally-occurring disturbed environments such as dunes and other windswept areas with shifting soils, alluvial flood plains, river banks and deltas, and areas that are burned repeatedly. Since human agricultural and horticultural practices often mimic these natural disturbances that weedy species have adapted for, some weeds are effectively preadapted to grow and proliferate in human-disturbed areas such as agricultural fields, lawns, gardens, roadsides, and construction sites. As agricultural practices continue and develop, weeds evolve further, with humans exerting evolutionary pressure upon weeds through manipulating their habitat and attempting to control weed populations.
Due to their ability to survive and thrive in conditions challenging or hostile to other plants, weeds have been considered extremophiles.
Adaptability
Due to their evolutionary heritage as disturbance-adapted pioneers, most weeds exhibit remarkably high phenotypic plasticity, meaning that individual plants hold the potential to adapt their morphology, growth, and appearance in response to their conditions. The potential within a single individual to adapt to a wide variety of conditions is sometimes referred to as an "all-purpose genotype." Disturbance-adapted plants typically grow rapidly and reproduce quickly, with some annual weeds having multiple generations in a single growing season. They commonly have seeds that persist in the soil seed bank for many years. Perennial weeds often have underground stems that spread under the soil surface or, like ground ivy (Glechoma hederacea), have creeping stems that root and spread out over the ground. These traits make many disturbance-adapted plants highly successful as weeds.
On top of the ability of individual plants to adapt to their conditions, weed populations also evolve much more quickly than older models of evolution account for. Once established in an agricultural setting, weeds have been observed to undergo evolutionary changes to adapt to selective pressures imposed by human management. Some examples include changes in seed dormancy, changes in seasonal life cycles, changes in plant morphology, and the evolution of resistance to herbicides. Rapid life cycles, large populations, and ability to spread large numbers of seeds long distances also allow weed species with these general characteristics to evolve quickly.
Dispersal
The concept of weeds also overlaps with the concept of invasive species, both in the sense that human activities tend to introduce weeds outside their native range, and that an introduced species may be considered a weed. Many weed species have moved out of their natural geographic ranges and spread around the world in tandem with human migrations and commerce. Weed seeds are often collected and transported with crops after the harvesting of grains, so humans are a vector of transport as well as a producer of the disturbed environments to which weed species are well adapted, resulting in many weeds having a close association with human activities.
Some plants become dominant when introduced into new environments because the animals and plants in their original environment that compete with them or feed on them are absent; in what is sometimes called the "natural enemies hypothesis", plants freed from these specialist consumers may become dominant. An example is Klamath weed, which threatened millions of hectares of prime grain and grazing land in North America after it was accidentally introduced. The Klamathweed Beetle, a species that specializes in consuming the plant, was imported during World War II. Within several years Klamath weed was reduced to a rare roadside weed. In locations where predation and mutually competitive relationships are absent, weeds have increased resources available for growth and reproduction. The weediness of some species that are introduced into new environments may be caused by their production of allelopathic chemicals which indigenous plants are not yet adapted to, a scenario sometimes called the "novel weapons hypothesis". These chemicals may limit the growth of established plants or the germination and growth of seeds and seedlings. Weed growth can also inhibit the growth of later-successional species in ecological succession.
Introduced species have been observed to undergo rapid evolutionary change to adapt to their new environments, with changes in plant height, size, leaf shape, dispersal ability, reproductive output, vegetative reproduction ability, level of dependence on the mycorrhizal network, and degree of phenotypic plasticity appearing on timescales of decades to centuries. Invasive species can be more adaptable in their new environments than in their native environments, occupying broader ranges in areas where they are invasive than in areas where they are native. Hybridization between similar species can produce novel invasive plants that are better adapted to their surroundings. Polyploidy is also observed to be strongly selected for among some invasive populations, such as Solidago canadensis in China. Many weed species are now found almost worldwide, with novel adaptations that suit regional populations to their environments.
Negative impacts
Some negative impacts of weeds are functional: they interfere with food and fiber production in agriculture, wherein they must be controlled to prevent lost or diminished crop yields. In other settings, they interfere with other cosmetic, decorative, or recreational goals, such as in lawns, landscape architecture, playing fields, and golf courses. In the case of invasive species, they can be of concern for environmental reasons, when introduced species outcompete native plants and cause broader damage to ecosystem health and functioning.
Some weed species have been classified as noxious weeds by government authorities because, if left unchecked, they often compete with native or crop plants or cause harm to livestock. They are often foreign species accidentally or imprudently imported into a region where there are few natural controls to limit their population and spread.
In a range of contexts, weeds can have negative impacts by:
competing with the desired plants for the resources that a plant typically needs, namely, direct sunlight, soil nutrients, water, and (to a lesser extent) space for growth,
providing hosts and vectors for plant pathogens, giving them greater opportunity to infect and degrade the quality of the desired plants;
providing food or shelter for animal pests such as seed-eating birds and Tephritid fruit flies that otherwise could hardly survive seasonal shortages;
offering irritation to the skin or digestive tracts of people or animals, either physical irritation via thorns, prickles, or burs, or chemical irritation via natural poisons or irritants in the weed (for example, the poisons found in Nerium species);
causing root damage to engineering works such as drains, road surfaces, and foundations,
in the case of aquatic plants, obstructing or clogging streams and waterways, which interferes with boating, irrigation systems, fishing, and hydroelectric power.
Positive impacts
While the term "weed" generally has a negative connotation, many plants known as weeds can have beneficial properties. A number of weeds, such as the dandelion (Taraxacum) and lamb's quarter, are edible, and their leaves or roots may be used for food or herbal medicine. Burdock is common over much of the world, and is sometimes used to make soup and medicine in East Asia. Some weeds attract beneficial insects, which in turn can protect crops from harmful pests. Weeds can also prevent pest insects from finding a crop, because their presence disrupts the incidence of positive cues which pests use to locate their food. Weeds may also act as a "living mulch", providing ground cover that reduces moisture loss and prevents erosion. Weeds may also improve soil fertility; dandelions, for example, bring up nutrients like calcium and nitrogen from deep in the soil with their tap root, and clover hosts nitrogen-fixing bacteria in its roots, fertilizing the soil directly. The dandelion is also one of several species which break up hardpan in overly-cultivated fields, helping crops grow deeper root systems. Some garden flowers originated as weeds in cultivated fields and have been selectively bred for their garden-worthy flowers or foliage. An example of a crop weed that is grown in gardens is the corncockle, (Agrostemma githago), which was a common weed in European wheat fields, but is now sometimes grown as a garden plant.
Ecological role
As pioneer species, weeds begin the process of ecological succession after a disturbance event. Their rapid, aggressive growth quickly stabilizes newly exposed bare soil, preventing erosion, and has substantially slowed topsoil loss due to anthropogenic disturbances.
In climate change adaptation
It has been suggested that weeds, with their aggressive ability to adapt, could provide humans with vital tools and knowledge for climate change adaptation. Some researchers argue that researching weed species could offer valuable insights for crop breeding, or that weeds themselves hold potential as hardy, climate-change-resistant crops. Adaptable weeds could also be a source of transgenic genes which could confer useful traits upon crops.
Weed species have been used in the restoration of land in Australia using a method called natural sequence farming. This method allows non-native weeds to stabilize and restore degraded areas where native species are not yet capable of regenerating themselves.
Weeds as adaptable species
An alternate definition often used by biologists is any species, not just plants, that can quickly adapt to any environment. Some traits of weedy species are the ability to reproduce quickly, disperse widely, live in a variety of habitats, establish a population in unfamiliar places, succeed in disturbed ecosystems, and resist eradication once established. Such species often do well in human-dominated environments where other species are not able to adapt. Common examples include the common pigeon, brown rat, and raccoon. Other weedy species have been able to expand their range without actually living in human environments, as human activity has damaged the ecosystems of other species. These include the coyote, the white-tailed deer, and the brown-headed cowbird.
In response to the idea that humans may face extinction due to environmental degradation, paleontologist David Jablonski counters by arguing that humans are a weed species. Like other weedy species, humans are widely dispersed in a wide variety of environments, and are highly unlikely to go extinct no matter how much damage the environment faces.
Plants often considered to be weeds
White clover is considered by some to be a weed in lawns, but in many other situations is a desirable source of fodder, honey and soil nitrogen.
A short list of some plants that often are considered to be weeds follows:
Amaranth – ("pigweed") annual with copious long-lasting seeds, also a highly edible and resilient food source
Bermuda grass – perennial, spreading by runners, rhizomes and seeds.
Bindweed
Broadleaf plantain – perennial, spreads by seeds that persist in the soil for many years
Burdock – biennial
Common lambsquarters – annual
Cogongrass (Imperata cylindrica) – one of the most damaging pest weeds in the world, infesting vast areas in the tropics
Creeping charlie – perennial, fast-spreading plants with long creeping stems
Dandelion – perennial, wind-spread, fast-growing, and drought-tolerant
Goldenrod – perennial
Japanese knotweed
Kudzu – perennial
Leafy spurge – perennial, with underground stems
Milk thistle – annual or biennial
Poison ivy – perennial
Ragweed – annual
Sorrel – annual or perennial
Striga
St John's wort – perennial
Sumac – woody perennial
Tree of heaven – woody perennial
Wild carrot – biennial
Wood sorrel – perennial
Yellow nutsedge – perennial
Many invasive weeds were introduced deliberately in the first place, and may have not been considered nuisances at the time, but rather beneficial.
Weed control
Weed control encompasses a range of methods used by humans to stop, reduce or prevent the growth and reproduction of weeds within agricultural or other managed environments. Some weed control is preventative, implementing protocols to stop weeds from invading new areas. Cultural weed control involves shaping the managed environment to make it less favorable for weeds. Once weeds are present in an area, a wide variety of means to destroy the weeds and their seeds can be employed. Since weeds are highly adaptable, relying on a single method to control weeds soon results in the invasion or adaptation of weeds that are not susceptible. Integrated pest management as it applies to weeds refers to a plan of controlling weeds that integrates multiple methods of weed control and prevention.
Methods of preventative weed control include cleaning equipment, stopping existing weeds in nearby areas from producing seed, and avoiding seed or manure that could be contaminated with weeds. A wide variety of cultural weed control methods are used, including cover cropping, crop rotation, selecting the most competitive cultivars of crops, mulching, planting with optimal density, and intercropping.
Mechanical methods of weed control involve physically cutting, uprooting, or otherwise destroying weeds. On small farms, hand weeding is the dominant means of weed control, but as larger farms dominate agriculture, this method becomes less feasible. On many operations, however, some hand-weeding may be an unavoidable component of weed control. Tillage, mowing, and burning are common examples of mechanical weed control on larger scales. New technology increases the range of mechanical weed control options. One newly emerging form of mechanical weed control uses electricity to kill weeds.
Mechanical weed control has been increasingly replaced by the use of herbicides. The reliance on herbicides has resulted in the rapid evolution of herbicide resistance in weeds, making previously effective herbicide treatments useless for the control of weeds. In particular, glyphosate, which was once considered a revolutionary breakthrough in weed control, was relied upon heavily when it was first introduced to agriculture, resulting in rapid emergence of resistance. As of 2023, 58 weed species have developed resistance to glyphosate.
Herbicide resistance in weeds has rapidly developed into new, increasingly challenging forms as the plants continually evolve. Non-target site resistance, or NTSR, is particularly difficult to counteract, since it may confer resistance to multiple herbicides at once, including herbicides the plants' ancestors were never exposed to. Various methods of adjusting herbicide application to avoid resistance, such as rotating herbicides used and tank mixing herbicides, have all been questioned in terms of their efficacy for preventing resistance from arising.
Understanding the habit of weeds is important for non-chemical methods of weed control, such as plowing, surface scuffling, promotion of more beneficial cover crops, and prevention of seed accumulation in fields. For example, amaranth is an edible plant that is considered a weed by mainstream modern agriculture. It produces copious seeds (up to 1 million per plant) that last many years, and is an early-emergent fast grower. Those seeking to control amaranth quote the mantra "This year’s seeds become next year’s weeds!". However, another view of amaranth values the plant as a resilient food source.
Some people have appreciated weeds for their tenacity, their wildness and even the work and connection to nature they provide, as Christopher Lloyd wrote in The Well-Tempered Garden.
Under climate change
As anthropogenic climate change increases temperatures and atmospheric carbon dioxide, many weeds are expected to become harder to control and to expand their ranges, at the expense of less "weedy" species. For example, kudzu, the infamous invasive vine found throughout the Southeastern United States, is expected to spread northward due to climate change. Increased competitive strength of agricultural weeds in future climate conditions threaten future ability to grow crops. Existing weed management practices will likely fail under future changes in climate conditions, meaning new agricultural techniques will be needed for global food security. Suggested techniques are holistic, transitioning away from reliance on herbicide, and include aggressive adaptation of agroforestry and use of allelopathic crop residues to suppress weeds.
| Biology and health sciences | Botany | null |
19654281 | https://en.wikipedia.org/wiki/Pangaea | Pangaea | Pangaea or Pangea ( ) was a supercontinent that existed during the late Paleozoic and early Mesozoic eras. It assembled from the earlier continental units of Gondwana, Euramerica and Siberia during the Carboniferous approximately 335 million years ago, and began to break apart about 200 million years ago, at the end of the Triassic and beginning of the Jurassic. Pangaea was C-shaped, with the bulk of its mass stretching between Earth's northern and southern polar regions and surrounded by the superocean Panthalassa and the Paleo-Tethys and subsequent Tethys Oceans. Pangaea is the most recent supercontinent to have existed and the first to be reconstructed by geologists.
Origin of the concept
The name "Pangaea" is derived from Ancient Greek pan (, "all, entire, whole") and Gaia or Gaea (, "Mother Earth, land"). The first to suggest that the continents were once joined and later separated may have been Abraham Ortelius in 1596. The concept that the continents once formed a contiguous land mass was hypothesised, with corroborating evidence, by Alfred Wegener, the originator of the scientific theory of continental drift, in three 1912 academic journal articles written in German titled Die Entstehung der Kontinente (The Origin of Continents). He expanded upon his hypothesis in his 1915 book of the same title, in which he postulated that, before breaking up and drifting to their present locations, all the continents had formed a single supercontinent that he called the Urkontinent.
Wegener used the name "Pangaea" once in the 1920 edition of his book, referring to the ancient supercontinent as "the Pangaea of the Carboniferous". He used the Germanized form Pangäa, but the name entered German and English scientific literature (in 1922 and 1926, respectively) in the Latinized form Pangaea, especially during a symposium of the American Association of Petroleum Geologists in November 1926.
Wegener originally proposed that the breakup of Pangaea was caused by centrifugal forces from Earth's rotation acting on the high continents. However, this mechanism was easily shown to be physically implausible, which delayed acceptance of the Pangaea hypothesis. Arthur Holmes proposed the more plausible mechanism of mantle convection, which, together with evidence provided by the mapping of the ocean floor following the Second World War, led to the development and acceptance of the theory of plate tectonics. This theory provides the widely accepted explanation for the existence and breakup of Pangaea.
Evidence of existence
The geography of the continents bordering the Atlantic Ocean was the first evidence suggesting the existence of Pangaea. The seemingly close fit of the coastlines of North and South America with Europe and Africa was remarked on almost as soon as these coasts were charted. Careful reconstructions showed that the mismatch along the matched contours was small, and it was argued that a fit this close was far too unlikely to be attributed to coincidence.
Additional evidence for Pangaea is found in the geology of adjacent continents, including matching geological trends between the eastern coast of South America and the western coast of Africa. The polar ice cap of the Carboniferous covered the southern end of Pangaea. Glacial deposits, specifically till, of the same age and structure are found on many separate continents that would have been together in the continent of Pangaea. The continuity of mountain chains provides further evidence, such as the Appalachian Mountains chain extending from the southeastern United States to the Scandinavian Caledonides of Europe; these are now believed to have formed a single chain, the Central Pangean Mountains.
Fossil evidence for Pangaea includes the presence of similar and identical species on continents that are now great distances apart. For example, fossils of the therapsid Lystrosaurus have been found in South Africa, India and Antarctica, alongside members of the Glossopteris flora, whose distribution would have ranged from the polar circle to the equator if the continents had been in their present position; similarly, the freshwater reptile Mesosaurus has been found in only localized regions of the coasts of Brazil and West Africa.
Geologists can also determine the movement of continental plates by examining the orientation of magnetic minerals in rocks. When rocks are formed, they take on the magnetic orientation of the Earth, showing which direction the poles lie relative to the rock; this determines latitudes and orientations (though not longitudes). Magnetic differences between samples of sedimentary and intrusive igneous rock whose ages vary by millions of years are due to a combination of magnetic polar wander (with a cycle of a few thousand years) and the drifting of continents over millions of years. The polar wander component, which is identical for all contemporaneous samples, can be subtracted, leaving the portion that shows continental drift and can be used to help reconstruct earlier continental latitudes and orientations.
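To make the latitude estimate concrete, the standard relation comes from the geocentric axial dipole model, which links the magnetic inclination I frozen into a rock to the latitude λ at which the rock formed; the sketch below is an illustration added here, not taken from the source text:

```latex
% Geocentric axial dipole model: inclination I recorded in a rock
% versus the (paleo)latitude \lambda at which it formed
\[ \tan I = 2\tan\lambda \]
```

A measured inclination of about 49°, for example, corresponds to a formation latitude of about 30°. Because the dipole field is symmetric about the rotation axis, no comparable relation exists for longitude, which is why paleomagnetism constrains latitudes and orientations but not longitudes.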
Formation
Pangaea is the most recent supercontinent reconstructed from the geologic record and therefore is by far the best understood. The formation of supercontinents and their breakup appears to be cyclical through Earth's history. There may have been several others before Pangaea.
Paleomagnetic measurements help geologists determine the latitude and orientation of ancient continental blocks, and newer techniques may help determine longitudes. Paleontology helps determine ancient climates, confirming latitude estimates from paleomagnetic measurements, and the distribution of ancient forms of life provides clues on which continental blocks were close to each other at particular geological moments. However, reconstructions of continents prior to Pangaea, including the ones in this section, remain partially speculative, and different reconstructions will differ in some details.
Previous supercontinents
The fourth-last supercontinent, called Columbia or Nuna, appears to have assembled in the period 2.0–1.8 billion years ago (Ga). Columbia/Nuna broke up, and the next supercontinent, Rodinia, formed from the accretion and assembly of its fragments. Rodinia lasted from about 1.3 billion years ago until about 750 million years ago, but its configuration and geodynamic history are not nearly as well understood as those of the later supercontinents, Pannotia and Pangaea.
According to one reconstruction, when Rodinia broke up, it split into three pieces: proto-Laurasia, proto-Gondwana, and the smaller Congo Craton. Proto-Laurasia and proto-Gondwana were separated by the Proto-Tethys Ocean. Proto-Laurasia split apart to form the continents of Laurentia, Siberia, and Baltica. Baltica moved to the east of Laurentia, and Siberia moved northeast of Laurentia. The split created two oceans, the Iapetus Ocean and Paleoasian Ocean.
Most of these landmasses coalesced again to form the relatively short-lived supercontinent Pannotia, which included large areas of land near the poles and a small strip connecting the polar masses near the equator. Pannotia lasted until 540 Ma, near the beginning of the Cambrian, and then broke up, giving rise to the continents of Laurentia, Baltica, and the southern supercontinent Gondwana.
Formation of Euramerica (Laurussia)
In the Cambrian, Laurentia—which would later become North America—sat on the equator with three bordering oceans: the Panthalassic Ocean to the north and west, the Iapetus Ocean to the south, and the Khanty Ocean to the east. In the early Ordovician, around 480 Ma, the microcontinent Avalonia—a landmass incorporating fragments of what would become eastern Newfoundland, the southern British Isles, and parts of Belgium, northern France, Nova Scotia, New England, South Iberia, and northwest Africa—broke free from Gondwana and began its journey to Laurentia. Baltica, Laurentia, and Avalonia all came together by the end of the Ordovician to form a landmass called Euramerica or Laurussia, closing the Iapetus Ocean. The collision resulted in the formation of the northern Appalachians. Siberia sat near Euramerica, with the Khanty Ocean between the two continents. While all this was happening, Gondwana drifted slowly towards the South Pole. This was the first step of the formation of Pangaea.
Collision of Gondwana with Euramerica
The second step in the formation of Pangaea was the collision of Gondwana with Euramerica. By the middle of the Silurian, 430 Ma, Baltica had already collided with Laurentia, forming Euramerica, an event called the Caledonian orogeny. As Avalonia inched towards Laurentia, the seaway between them, a remnant of the Iapetus Ocean, was slowly shrinking. Meanwhile, southern Europe broke off from Gondwana and began to move towards Euramerica across the Rheic Ocean. It collided with southern Baltica in the Devonian.
By the late Silurian, Annamia (Indochina) and the South China Craton split from Gondwana and moved northward, shrinking the Proto-Tethys Ocean and opening the Paleo-Tethys Ocean to the south. In the Devonian Gondwana moved towards Euramerica, causing the Rheic Ocean to shrink. In the Early Carboniferous, northwest Africa had touched the southeastern coast of Euramerica, creating the southern portion of the Appalachian Mountains, the Meseta Mountains, and the Mauritanide Mountains, an event called the Variscan orogeny. South America moved northward to southern Euramerica, while the eastern portion of Gondwana (India, Antarctica, and Australia) headed toward the South Pole from the equator. North and South China were on independent continents. The Kazakhstania microcontinent had collided with Siberia. (Siberia had been a separate continent for millions of years since the breakup of Pannotia.)
The Variscan orogeny raised the Central Pangaean Mountains, which were comparable to the modern Himalayas in scale. With Pangaea stretching from the South Pole across the equator and well into the Northern Hemisphere, an intense megamonsoon climate was established, except for a perpetually wet zone immediately around the central mountains.
Formation of Laurasia
Western Kazakhstania collided with Baltica in the late Carboniferous, closing the Ural Ocean and the western Proto-Tethys (Uralian orogeny), causing the formation of the Ural Mountains and Laurasia. This was the last step of the formation of Pangaea. Meanwhile, South America had collided with southern Laurentia, closing the Rheic Ocean and completing the Variscan orogeny with the formation of the southernmost part of the Appalachians and the Ouachita Mountains. By this time, Gondwana was positioned near the South Pole, and glaciers formed in Antarctica, India, Australia, southern Africa, and South America. The North China Craton collided with Siberia by the Jurassic, completely closing the Proto-Tethys Ocean.
By the Early Permian, the Cimmerian plate split from Gondwana and moved towards Laurasia, thus closing the Paleo-Tethys Ocean and forming the Tethys Ocean at its southern end. By this point nearly all of the major landmasses had joined into a single mass. By the Triassic, Pangaea had rotated slightly, and the Cimmerian plate was still travelling across the shrinking Paleo-Tethys until the Middle Jurassic. By the Late Triassic, the Paleo-Tethys had closed from west to east, creating the Cimmerian Orogeny. Pangaea, which looked like a C, with the Tethys Ocean inside the C, had rifted by the Middle Jurassic.
Life
Pangaea existed as a supercontinent for 160 million years, from its assembly around 335 Ma (Early Carboniferous) to its breakup 175 Ma (Middle Jurassic). During this interval, important developments in the evolution of life took place. The seas of the Early Carboniferous were dominated by rugose corals, brachiopods, bryozoans, sharks, and the first bony fish. Life on land was dominated by lycopsid forests inhabited by insects and other arthropods and the first tetrapods. By the time Pangaea broke up, in the Middle Jurassic, the seas swarmed with molluscs (particularly ammonites), ichthyosaurs, sharks and rays, and the first ray-finned bony fishes, while life on land was dominated by forests of cycads and conifers in which dinosaurs flourished and in which the first true mammals had appeared.
The evolution of life in this time reflected the conditions created by the assembly of Pangaea. The union of most of the continental crust into one landmass reduced the extent of sea coasts. Increased erosion from uplifted continental crust increased the importance of floodplain and delta environments relative to shallow marine environments. Continental assembly and uplift also meant increasingly arid land climates, favoring the evolution of amniote animals and seed plants, whose eggs and seeds were better adapted to dry climates. The early drying trend was most pronounced in western Pangaea, which became a center of the evolution and geographical spread of amniotes.
Coal swamps typically form in perpetually wet regions close to the equator. The assembly of Pangaea disrupted the Intertropical Convergence Zone and created an extreme monsoon climate that reduced the deposition of coal to its lowest level in the last 300 million years. During the Permian, coal deposition was largely restricted to the North and South China microcontinents, which were among the few areas of continental crust that had not joined with Pangaea. The extremes of climate in the interior of Pangaea are reflected in bone growth patterns of pareiasaurs and the growth patterns in gymnosperm forests.
The lack of oceanic barriers is thought to have favored cosmopolitanism, in which successful species attain wide geographical distribution. Cosmopolitanism was also driven by mass extinctions, including the Permian–Triassic extinction event, the most severe in the fossil record, and also the Triassic–Jurassic extinction event. These events resulted in disaster fauna showing little diversity and high cosmopolitanism, including Lystrosaurus, which opportunistically spread to every corner of Pangaea following the Permian–Triassic extinction event. On the other hand, there is evidence that many Pangaean species were provincial, with a limited geographical range, despite the absence of geographical barriers. This may be due to the strong variations in climate by latitude and season produced by the extreme monsoon climate. For example, cold-adapted pteridosperms (early seed plants) of Gondwana were blocked from spreading throughout Pangaea by the equatorial climate, and northern pteridosperms ended up dominating Gondwana in the Triassic.
Mass extinctions
The tectonics and geography of Pangaea may have worsened the Permian–Triassic extinction event or other mass extinctions. For example, the reduced area of continental shelf environments may have left marine species vulnerable to extinction. However, no evidence for a species-area effect has been found in more recent and better characterized portions of the geologic record. Another possibility is that reduced seafloor spreading associated with the formation of Pangaea, and the resulting cooling and subsidence of oceanic crust, may have reduced the number of islands that could have served as refugia for marine species. Species diversity may have already been reduced prior to mass extinction events due to mingling of species possible when formerly separate continents were merged. However, there is strong evidence that climate barriers continued to separate ecological communities in different parts of Pangaea. The eruptions of the Emeishan Traps may have eliminated South China, one of the few continental areas not merged with Pangaea, as a refugium.
Rifting and break-up
There were three major phases in the break-up of Pangaea.
Opening of the Atlantic
The Atlantic Ocean did not open uniformly; rifting began in the north-central Atlantic. The first breakup of Pangaea is proposed for the late Ladinian (230 Ma) with initial spreading in the opening central Atlantic. Then the rifting proceeded along the eastern margin of North America, the northwest African margin and the High, Saharan and Tunisian Atlas Mountains.
Another phase began in the Early-Middle Jurassic (about 175 Ma), when Pangaea began to rift from the Tethys Ocean in the east to the Pacific Ocean in the west. The rifting that took place between North America and Africa produced multiple failed rifts. One rift resulted in the North Atlantic Ocean. The South Atlantic did not open until the Cretaceous, when Laurasia started to rotate clockwise and moved northward with North America to the north and Eurasia to the south. The clockwise motion of Laurasia led much later to the closing of the Tethys Ocean and the widening of the "Sinus Borealis", which later became the Arctic Ocean. Meanwhile, on the other side of Africa and along the adjacent margins of east Africa, Antarctica and Madagascar, rifts formed that led to the formation of the southwestern Indian Ocean in the Cretaceous.
Break-up of Gondwana
The second major phase in the break-up of Pangaea began in the Early Cretaceous (150–140 Ma), when Gondwana separated into multiple continents (Africa, South America, India, Antarctica, and Australia). Subduction at the Tethyan Trench probably caused Africa, India and Australia to move northward, causing the opening of a "South Indian Ocean". In the Early Cretaceous, Atlantica, today's South America and Africa, separated from eastern Gondwana. Then in the Middle Cretaceous, Gondwana fragmented to open up the South Atlantic Ocean as South America started to move westward away from Africa. The South Atlantic did not develop uniformly; rather, it rifted from south to north.
Also, at the same time, Madagascar and Insular India began to separate from Antarctica and moved northward, opening up the Indian Ocean. Madagascar and India separated from each other 100–90 Ma in the Late Cretaceous. India continued to move northward toward Eurasia at 15 centimeters (6 in) per year (a plate tectonic record), closing the eastern Tethys Ocean, while Madagascar stopped and became locked to the African Plate. New Zealand, New Caledonia and the rest of Zealandia began to separate from Australia, moving eastward toward the Pacific and opening the Coral Sea and Tasman Sea.
Opening of the Norwegian Sea and break-up of Australia and Antarctica
The third major and final phase of the break-up of Pangaea occurred in the early Cenozoic (Paleocene to Oligocene). Laurasia split when Laurentia broke from Eurasia, opening the Norwegian Sea about 60–55 Ma. The Atlantic and Indian Oceans continued to expand, closing the Tethys Ocean.
Meanwhile, Australia split from Antarctica and moved quickly northward, just as India had done more than 40 million years before. Australia is currently on a collision course with eastern Asia. Both Australia and India are currently moving northeast at 5–6 centimeters (2–3 in) per year. Antarctica has been near or at the South Pole since the formation of Pangaea about 280 Ma. India started to collide with Asia beginning about 35 Ma, forming the Himalayan orogeny and closing the Tethys Ocean; this collision continues today. The African Plate started to change directions, from west to northwest toward Europe, and South America began to move in a northward direction, separating it from Antarctica and allowing complete oceanic circulation around Antarctica for the first time. This motion, together with decreasing atmospheric carbon dioxide concentrations, caused a rapid cooling of Antarctica and allowed glaciers to form. This glaciation eventually coalesced into the kilometers-thick ice sheets seen today. Other major events took place during the Cenozoic, including the opening of the Gulf of California, the uplift of the Alps, and the opening of the Sea of Japan. The break-up of Pangaea continues today in the Red Sea Rift and East African Rift.
Climate change after Pangaea
The breakup of Pangaea was accompanied by outgassing of large quantities of carbon dioxide from continental rifts. This produced a Mesozoic CO2 high that contributed to the very warm climate of the Early Cretaceous. The opening of the Tethys Ocean also contributed to the warming of the climate. The very active mid-ocean ridges associated with the breakup of Pangaea raised sea levels to the highest in the geological record, flooding much of the continents.
The expansion of the temperate climate zones that accompanied the breakup of Pangaea may have contributed to the diversification of the angiosperms.
| Physical sciences | Geological history | null |
10599506 | https://en.wikipedia.org/wiki/Community%20%28ecology%29 | Community (ecology) | In ecology, a community is a group or association of populations of two or more different species occupying the same geographical area at the same time, also known as a biocoenosis, biotic community, biological community, ecological community, or life assemblage. The term community has a variety of uses. In its simplest form it refers to groups of organisms in a specific place or time, for example, "the fish community of Lake Ontario before industrialization".
Community ecology or synecology is the study of the interactions between species in communities on many spatial and temporal scales, including the distribution, structure, abundance, demography, and interactions of coexisting populations. The primary focus of community ecology is on the interactions between populations as determined by specific genotypic and phenotypic characteristics. It is important to understand the origin, maintenance, and consequences of species diversity when evaluating community ecology.
Community ecology also takes into account abiotic factors that influence species distributions or interactions (e.g. annual temperature or soil pH). For example, the plant communities inhabiting deserts are very different from those found in tropical rainforests due to differences in annual precipitation. Humans can also affect community structure through habitat disturbance, such as the introduction of invasive species.
On a deeper level, the meaning and value of the community concept in ecology is up for debate. Communities have traditionally been understood on a fine scale in terms of local processes constructing (or destructing) an assemblage of species, such as the way climate change is likely to affect the make-up of grass communities. Recently this local community focus has been criticized. Robert Ricklefs, a professor of biology at the University of Missouri and author of Disintegration of the Ecological Community, has argued that it is more useful to think of communities on a regional scale, drawing on evolutionary taxonomy and biogeography, where some species or clades evolve and others go extinct. Today, community ecology focuses on experiments and mathematical models, whereas it once focused primarily on describing patterns of organisms. For example, taxonomic subdivisions of communities are called populations, while functional partitions are called guilds.
Organization
Niche
Within the community, each species occupies a niche. A species' niche determines how it interacts with the environment around it and its role within the community. By having different niches, species are able to coexist; this is known as niche partitioning. For example, two species may partition a niche by hunting at different times of day or by taking different prey.
Niche partitioning reduces competition between species such that species are able to coexist because they suppress their own growth more than they limit the growth of other species (i.e., the competition within a species is greater than the competition between species, or intraspecific competition is greater than interspecific).
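This verbal criterion has a standard mathematical form in the Lotka–Volterra competition model. The sketch below is an illustration added here, not part of the original text, with N_i the abundances, r_i the intrinsic growth rates, K_i the carrying capacities and α_ij the per-capita effect of species j on species i:

```latex
% Lotka-Volterra two-species competition: N_i abundance, r_i growth rate,
% K_i carrying capacity, \alpha_{ij} per-capita effect of species j on i
\[ \frac{dN_1}{dt} = r_1 N_1 \frac{K_1 - N_1 - \alpha_{12} N_2}{K_1},
\qquad
\frac{dN_2}{dt} = r_2 N_2 \frac{K_2 - N_2 - \alpha_{21} N_1}{K_2} \]
```

Stable coexistence requires α12 < K1/K2 and α21 < K2/K1; with equal carrying capacities this reduces to α12, α21 < 1, which is precisely the statement that each species limits itself more strongly than it limits its competitor.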
The number of niches present in a community determines the number of species present. If two species have the same niche (e.g., the same food demands) then one species outcompetes the other. The more niches filled, the higher the biodiversity of the community.
Trophic level
A species' trophic level is its position in the food chain or web. At the bottom of the food web are autotrophs, also known as primary producers. Producers supply their own energy through photosynthesis or chemosynthesis; plants, for example, are primary producers. The next level is the herbivores (primary consumers), which feed on vegetation for their energy. Herbivores are in turn consumed by omnivores or carnivores, the secondary and tertiary consumers. Additional levels on the trophic scale arise when smaller omnivores or carnivores are eaten by larger ones. At the top of the food web is the apex predator, a species not consumed by any other in the community. Herbivores, omnivores and carnivores are all heterotrophs.
A basic example of a food chain is: grass → rabbit → fox. Food chains become more complex when more species are present, often forming food webs. Energy is passed up through trophic levels, and some energy is lost at each level due to ecological inefficiencies.
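As a rough worked example, using the common ten-percent rule of thumb for transfer efficiency (our illustration, not a figure from the text), the energy available falls off geometrically along the grass → rabbit → fox chain:

```latex
% Ten-percent rule of thumb: roughly 10% of the energy at one trophic
% level is passed on to the next, so available energy falls geometrically
\[ E_n \approx E_0 \times 0.1^{\,n} \]
% e.g. 10000 kcal (grass) -> 1000 kcal (rabbits) -> 100 kcal (foxes)
```

This geometric decay is why food chains rarely support more than a handful of trophic levels.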
The trophic level of an organism can change based on the other species present. For example, tuna can be an apex predator eating the smaller fish, such as mackerel. However, in a community where a shark species is present the shark becomes the apex predator, feeding on the tuna.
Decomposers also play a role in the trophic pyramid, returning an energy source and nutrients to the plant species in the community. Decomposers such as fungi and bacteria recycle energy back to the base of the food web by feeding on dead organisms from all trophic levels.
Guild
A guild is a group of species in the community that utilize the same resources in a similar way. Organisms in the same guild experience competition due to their shared resource. Closely related species are often in the same guild, due to traits inherited through common descent from their common ancestor. However, guilds are not exclusively composed of closely related species.
Carnivores, omnivores and herbivores are all basic examples of guilds. A more precise guild would be vertebrates that forage for ground-dwelling arthropods, which would include certain birds and mammals. Flowering plants that share the same pollinator also form a guild.
Influential species
Certain species have a greater influence on the community through their direct and indirect interactions with other species. The populations of influential species are affected by abiotic and biotic disturbances. These species are important in identifying ecological communities. The loss of such a species results in large changes to the community, often reducing its stability. Climate change and the introduction of invasive species can affect the functioning of key species and thus have knock-on effects on community processes. Industrialization and the introduction of chemical pollutants into environments have forever altered communities and even entire ecosystems.
Foundation species
Foundation species largely influence the population dynamics and processes of a community by creating physical changes to the environment itself. These species can occupy any trophic level but tend to be producers. The red mangrove is a foundation species in marine communities: its roots provide nursery grounds for young fish, such as snappers.
Whitebark pine (Pinus albicaulis) is a foundation species. After fire disturbance, the tree provides shade (due to its dense growth), enabling the regrowth of other plant species in the community. This growth prompts the return of the invertebrates and microbes needed for decomposition. Whitebark pine seeds also provide food for grizzly bears.
Keystone species
Keystone species have a disproportionately large influence on the community relative to most other species. Keystone species tend to occupy the higher trophic levels, often as the apex predator, and their removal causes top-down trophic cascades. Wolves, as apex predators, are a keystone species.
In Yellowstone National Park, the loss of the wolf population through overhunting resulted in a loss of biodiversity in the community. The wolves had controlled the number of elk in the park through predation. Without the wolves, the elk population increased drastically, resulting in overgrazing. This negatively affected the other organisms in the park; the increased grazing by elk removed food sources from other animals present. Wolves have since been reintroduced to return the park community to optimal functioning. See Wolf reintroduction and History of wolves in Yellowstone for more details on this case study.
A marine example of a keystone species is Pisaster ochraceus. This starfish controls the abundance of Mytilus californianus, allowing enough resources for the other species in the community.
Ecological engineers
An ecosystem engineer is a species that maintains, modifies and creates aspects of a community. They cause physical changes to the habitat and alter the resources available to the other organisms present.
Dam-building beavers are ecological engineers. By cutting trees to build dams, they alter the flow of water in a community. These changes influence the vegetation of the riparian zone, and studies show that biodiversity is increased. Burrowing by the beavers creates channels, increasing the connections between habitats and aiding the movement of other organisms in the community, such as frogs.
Theories of community structure
Community structure is the composition of the community. It is often measured through biological networks such as food webs, which map the networks of species and the energy that links them together through trophic interactions.
Holistic theory
Holistic theory refers to the idea that a community is defined by the interactions between the organisms in it. All species are interdependent, each playing a vital role in the working of the community. Because of this, communities are repeatable and easy to identify, with similar abiotic factors controlling throughout.
Frederic Clements developed the holistic (or organismic) concept of community, treating the community as if it were a superorganism or discrete unit with sharp boundaries. Clements proposed this theory after noticing that certain plant species were regularly found together in habitats; he concluded that the species were dependent on each other. On this view, the formation of communities is non-random and involves coevolution.
Holistic theory stems from the broader idea of holism, which refers to a system with many parts, all of which are required for the system to function.
Individualistic theory
Henry Gleason developed the individualistic (also known as open or continuum) concept of community, with the abundance of a population of a species changing gradually along complex environmental gradients. Each species changes independently in relation to other species present along the gradient. Association of species is random and due to coincidence. Varying environmental conditions and each species' probability of arriving and becoming established along the gradient influence the community composition.
Individualistic theory proposes that communities can exist as continuous entities, in addition to the discrete groups referred to in the holistic theory.
Neutral theory
Stephen P. Hubbell introduced the neutral theory of ecology (not to be confused with the neutral theory of molecular evolution). Within the community (or metacommunity), species are functionally equivalent, and the abundance of a population of a species changes by stochastic demographic processes (i.e., random births and deaths).
Equivalence of the species in the community leads to ecological drift. Ecological drift leads to species' populations randomly fluctuating, whilst the overall number of individuals in the community remains constant.
When an individual dies, there is an equal chance of each species colonising that plot. Stochastic changes can cause species within the community to go extinct, however, this can take a long time if there are many individuals of that species.
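The zero-sum dynamics just described are easy to sketch in code. The toy model below is our own minimal illustration of ecological drift (not Hubbell's published implementation): at each step one random individual dies and is replaced by the offspring of another randomly chosen individual, so total community size stays constant while species abundances wander.

```python
import random

def neutral_drift(community, steps, seed=0):
    """Zero-sum ecological drift: one death, one birth per step."""
    rng = random.Random(seed)
    community = list(community)
    for _ in range(steps):
        dead = rng.randrange(len(community))    # a random individual dies
        parent = rng.randrange(len(community))  # a random individual reproduces
        community[dead] = community[parent]     # total size stays constant
    return community

# Two functionally equivalent species, initially equally abundant.
start = ["A"] * 50 + ["B"] * 50
end = neutral_drift(start, steps=10_000)
print(end.count("A"), end.count("B"))  # abundances fluctuate over time
```

Run long enough, one species drifts to fixation and the other goes extinct, though, as noted above, this takes longer the more individuals the species has.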
Species can coexist because they are similar; resources and conditions apply a filter to the type of species that are present in the community. Each population has the same adaptive value (competitive and dispersal abilities) and the same resource demands. Local and regional composition represent a balance between speciation or dispersal (which increase diversity) and random extinctions (which decrease diversity).
Interspecific interactions
Species interact in various ways: competition, predation, parasitism, mutualism, commensalism, etc. The organization of a biological community with respect to ecological interactions is referred to as community structure.
Competition
Species can compete with each other for finite resources. It is considered an important limiting factor of population size, biomass and species richness. Many types of competition have been described, but proving the existence of these interactions is a matter of debate. Direct competition has been observed between individuals, populations and species, but there is little evidence that competition has been the driving force in the evolution of large groups.
Interference competition: occurs when an individual of one species directly interferes with an individual of another species. This can be for food or for territory. Examples include a lion chasing a hyena from a kill, or a plant releasing allelopathic chemicals to impede the growth of a competing species.
Apparent competition: occurs when two species share a predator. For example, a cougar preys on woodland caribou and deer. The populations of both species can be depressed by predation without direct exploitative competition.
Exploitative competition: occurs via the consumption of resources. When an individual of one species consumes a resource (e.g., food, shelter, sunlight, etc.), that resource is no longer available for consumption by a member of a second species. Exploitative competition is thought to be more common in nature, but care must be taken to distinguish it from apparent competition. An example of exploitative competition is between herbivores consuming the same vegetation, such as rabbits and deer both eating meadow grass. Exploitative competition varies:
complete symmetric - all individuals receive the same amount of resources, irrespective of their size
perfect size symmetric - all individuals exploit the same amount of resource per unit biomass
absolute size-asymmetric - the largest individuals exploit all the available resource.
The degree of size asymmetry has major effects on the structure and diversity of ecological communities.
Predation
Predation is the hunting of another species for food. It is a positive-negative interaction: the predator species benefits while the prey species is harmed. Some predators kill their prey before eating them, an approach known as kill-and-consume predation; for example, a hawk catching and killing a mouse.
Other predators are parasites that feed on the prey while it is alive; for example, a vampire bat feeding on a cow. Parasitism can, however, lead to the death of the host organism over time.
Another example is the feeding of herbivores on plants, for example, a cow grazing. Herbivory is a type of predation in which the plant (the prey in this example) may attempt to dissuade the predator by pumping toxins into its leaves. This may cause the predator to consume other parts of the plant or not consume the plant at all.
Predation may affect the population size of predators and prey and the number of species coexisting in a community.
Predation can be specialist; for example, the least weasel preys solely on the field vole. It can also be generalist; the polar bear, for example, primarily eats seals but can switch its diet to birds when the seal population is low.
Species can be solitary or group predators. The advantage of hunting in a group is that bigger prey can be taken; however, the food source must be shared. Wolves are group predators, whilst tigers are solitary.
Predation is density-dependent, often leading to population cycles. When prey is abundant, the predator population increases, eating more of the prey species and causing the prey population to decline. For lack of food, the predator population then declines, and with less predation the prey population increases again. See Lotka–Volterra equations for more details. A well-known example of this is the lynx–hare population cycles seen in the north.
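For reference, the Lotka–Volterra predator–prey equations mentioned above take the following standard form (notation ours): x is prey abundance, y predator abundance, α the prey growth rate, β the predation rate, δ the predator's conversion efficiency and γ the predator death rate.

```latex
% Lotka-Volterra predator-prey model: x prey, y predators
\[ \frac{dx}{dt} = \alpha x - \beta x y,
\qquad
\frac{dy}{dt} = \delta x y - \gamma y \]
```

Its solutions cycle around the equilibrium (x*, y*) = (γ/δ, α/β), reproducing the offset predator and prey oscillations described above, such as the lynx–hare cycles.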
Predation can result in coevolution through an evolutionary arms race: prey adapt to avoid predators, and predators evolve in response. For example, a prey species may develop a toxin that kills its predator, and the predator may in turn evolve resistance to the toxin, making it no longer lethal.
Mutualism
Mutualism is an interaction between species in which both species benefit.
An example is Rhizobium bacteria growing in nodules on the roots of legumes. This relationship between plant and bacteria is endosymbiotic, with the bacteria living in nodules on the roots of the legume. The plant provides compounds made during photosynthesis to the bacteria for use as an energy source, whilst Rhizobium, a nitrogen-fixing bacterium, provides amino acids or ammonium to the plant.
Insects pollinating the flowers of angiosperms are another example. Many plants depend on pollination by a pollinator, which transfers pollen from the male flower to the female flower's stigma. This fertilises the flower and enables the plant to reproduce. Bees, such as honeybees, are the most commonly known pollinators. Bees get nectar from the plant to use as an energy source, and untransferred pollen provides protein for the bee. The plant benefits through fertilisation, whilst the bee is provided with food.
Commensalism
Commensalism is a type of relationship among organisms in which one organism benefits while the other organism is neither benefited nor harmed. The organism that benefited is called the commensal while the other organism that is neither benefited nor harmed is called the host.
For example, an epiphytic orchid attaches to a tree for support; this benefits the orchid but neither harms nor benefits the tree. This type of commensalism, in which the orchid lives permanently on the tree, is called inquilinism.
Phoresy is another type of commensalism, in which the commensal uses the host solely for transport. Many mite species rely on another organism, such as a bird or mammal, for dispersal.
Metabiosis is the final form of commensalism, in which the commensal relies on the host to prepare an environment suitable for life. For example, kelp has a root-like system, called a holdfast, that attaches it to the seabed. Once rooted, it provides molluscs, such as sea snails, with a home that protects them from predation.
Amensalism
The opposite of commensalism is amensalism, an interspecific relationship in which a product of one organism has a negative effect on another organism but the original organism is unaffected.
An example is the interaction between tadpoles of the common frog and a freshwater snail. The tadpoles consume large amounts of micro-algae, making algae less abundant and of lower quality for the snail. The tadpole therefore has a negative effect on the snail without gaining any noticeable advantage from it; the tadpoles would obtain the same amount of food with or without the presence of the snail.
An older, taller tree can inhibit the growth of smaller trees. A new sapling growing in the shade of a mature tree struggles to get light for photosynthesis. The mature tree also has a well-developed root system, helping it outcompete the sapling for nutrients. Growth of the sapling is therefore impeded, often resulting in death. The relationship between the two trees is amensalism; the mature tree is unaffected by the presence of the smaller one.
Parasitism
Parasitism is an interaction in which one organism, the host, is harmed while the other, the parasite, benefits.
Parasitism is a symbiosis, a long-term bond in which the parasite feeds on the host or takes resources from it. Parasites can live within the body, such as a tapeworm, or on the body's surface, such as head lice.
Malaria is a result of a parasitic relationship between a female Anopheles mosquito and Plasmodium.
Mosquitoes acquire the parasite by feeding on an infected vertebrate. Inside the mosquito, the Plasmodium develops in the wall of the midgut; once development is complete, the parasite moves to the salivary glands, where it can be passed on to a vertebrate species, for example humans. The mosquito acts as a vector for malaria. The parasite tends to reduce the mosquito's lifespan and inhibit its production of offspring.
A second example of parasitism is brood parasitism.
Cuckoos regularly practise this type of parasitism. Cuckoos lay their eggs in the nests of other bird species. The host, unable to tell the difference, provides for the cuckoo chick as if it were its own. The cuckoo chicks eject the host's young from the nest, meaning they get a greater level of care and resources from the parents. Rearing young is costly and can reduce the success of future offspring, so the cuckoo avoids this cost through brood parasitism.
In a similar way to predation, parasitism can lead to an evolutionary arms race. The host evolves to protect themselves from the parasite and the parasite evolves to overcome this restriction.
Neutralism
Neutralism is an interaction between species that has no noticeable effect on either species involved. Due to the interconnectedness of communities, true neutralism is rare, and examples in ecological systems are hard to prove because of the indirect effects that species can have on each other.
| Biology and health sciences | Ecology | Biology |
10607261 | https://en.wikipedia.org/wiki/Folding%20%28chemistry%29 | Folding (chemistry) | In chemistry, folding is the process by which a molecule assumes its shape or conformation. The process can also be described as intramolecular self-assembly, a type of molecular self-assembly, where the molecule is directed to form a specific shape through noncovalent interactions, such as hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi-pi interactions, and electrostatic effects.
The most active area of interest in the folding of molecules is protein folding, in which a protein's specific sequence of amino acids determines its folded shape. The shape of the folded protein can be used to understand its function and to design drugs that influence the processes in which it is involved.
There is also a great deal of interest in the construction of artificial folding molecules or foldamers. They are studied as models of biological molecules and for potential application to the development of new functional materials.
| Physical sciences | Supramolecular chemistry | Chemistry |
10608031 | https://en.wikipedia.org/wiki/Ediacaran%20biota | Ediacaran biota | The Ediacaran (formerly Vendian) biota is a taxonomic period classification that consists of all life forms that were present on Earth during the Ediacaran Period. These were enigmatic tubular and frond-shaped, mostly sessile, organisms. Trace fossils of these organisms have been found worldwide, and represent the earliest known complex multicellular organisms. The term "Ediacara biota" has received criticism from some scientists due to its alleged inconsistency, arbitrary exclusion of certain fossils, and inability to be precisely defined.
The Ediacaran biota may have undergone evolutionary radiation in a proposed event called the Avalon explosion. This was after the Earth had thawed from the Cryogenian period's extensive glaciation. This biota largely disappeared with the rapid increase in biodiversity known as the Cambrian explosion. Most of the currently existing body plans of animals first appeared in the fossil record of the Cambrian rather than the Ediacaran. For macroorganisms, the Cambrian biota appears to have almost completely replaced the organisms that dominated the Ediacaran fossil record, although relationships are still a matter of debate.
The organisms of the Ediacaran Period first appeared after the Earth thawed from the Cryogenian glaciations and flourished until the cusp of the Cambrian, when the characteristic communities of fossils vanished. A diverse Ediacaran community was discovered in 1995 in Sonora, Mexico, and is approximately 555 million years in age, roughly coeval with Ediacaran fossils of the Ediacara Hills in South Australia and the White Sea on the coast of Russia. While rare fossils that may represent survivors have been found as late as the Middle Cambrian (510–500 Mya), the earlier fossil communities disappear from the record at the end of the Ediacaran, leaving only curious fragments of once-thriving ecosystems. Multiple hypotheses exist to explain the disappearance of this biota, including preservation bias, a changing environment, the advent of predators and competition from other life-forms. A sampling, reported in 2018, of late Ediacaran strata across the scattered remnants of Baltica suggests that the flourishing of the organisms coincided with conditions of low overall productivity, a very high percentage of it produced by bacteria, which may have led to high concentrations of dissolved organic material in the oceans.
Determining where Ediacaran organisms fit in the tree of life has proven challenging; it is not even established that most of them were animals, with suggestions that they were lichens (fungus-alga symbionts), algae, protists known as foraminifera, fungi or microbial colonies, or hypothetical intermediates between plants and animals. The morphology and habit of some taxa (e.g. Funisia dorothea) suggest relationships to Porifera or Cnidaria (e.g. Auroralumina). Kimberella may show a similarity to molluscs, and other organisms have been thought to possess bilateral symmetry, although this is controversial. Most macroscopic fossils are morphologically distinct from later life-forms: they resemble discs, tubes, mud-filled bags or quilted mattresses. Due to the difficulty of deducing evolutionary relationships among these organisms, some palaeontologists have suggested that these represent completely extinct lineages that do not resemble any living organism. Palaeontologist Adolf Seilacher proposed a separate subkingdom level category Vendozoa (now renamed Vendobionta) in the Linnaean hierarchy for the Ediacaran biota. If these enigmatic organisms left no descendants, their strange forms might be seen as a "failed experiment" in multicellular life, with later multicellular life evolving independently from unrelated single-celled organisms. A 2018 study confirmed that one of the period's most-prominent and iconic fossils, Dickinsonia, included cholesterol, suggesting affinities to animals, fungi, or red algae.
History
The first Ediacaran fossils discovered were the disc-shaped Aspidella terranovica in 1868. Their discoverer, Scottish geologist Alexander Murray, found them useful aids for correlating the age of rocks around Newfoundland. However, since they lay below the "Primordial Strata" of the Cambrian that was then thought to contain the very first signs of animal life, a proposal four years after their discovery by Elkanah Billings that these simple forms represented fauna was dismissed by his peers. Instead, they were interpreted as gas escape structures or inorganic concretions. No similar structures elsewhere in the world were then known and the one-sided debate soon fell into obscurity. In 1933, Georg Gürich discovered specimens in Namibia but assigned them to the Cambrian Period. In 1946, Reg Sprigg noticed "jellyfishes" in the Ediacara Hills of Australia's Flinders Ranges, which were at the time believed to be Early Cambrian.
It was not until the British discovery of the iconic Charnia that the Precambrian was seriously considered as containing life. This frond-shaped fossil was found in England's Charnwood Forest, first by a 15-year-old girl in 1956 (Tina Negus, who was not believed) and then the next year by a group of three schoolboys including 15-year-old Roger Mason. Due to the detailed geological mapping of the British Geological Survey, there was no doubt these fossils sat in Precambrian rocks. Palaeontologist Martin Glaessner finally, in 1959, made the connection between this and the earlier finds, and with a combination of improved dating of existing specimens and an injection of vigour into the search, many more instances were recognised.
All specimens discovered until 1967 were in coarse-grained sandstone that prevented the preservation of fine details, making interpretation difficult. S.B. Misra's discovery of fossiliferous ash-beds at the Mistaken Point assemblage in Newfoundland changed all this, as the delicate detail preserved by the fine ash allowed the description of features that were previously indiscernible. It was also the first discovery of Ediacarans in deep-water sediments.
Poor communication, combined with the difficulty in correlating globally distinct formations, led to a plethora of different names for the biota.
In 1960 the French name "Ediacarien" – after the Ediacara Hills – was added to the competing terms "Sinian" and "Vendian" for terminal-Precambrian rocks, and these names were also applied to the life-forms. "Ediacaran" and "Ediacarian" were subsequently applied to the epoch or period of geological time and its corresponding rocks. In March 2004, the International Union of Geological Sciences ended the inconsistency by formally naming the terminal period of the Neoproterozoic after the Australian locality.
The term "Ediacaran biota" and similar ("Ediacara" / "Ediacaran" / "Ediacarian" / "Vendian" and "fauna" / "biota") has, at various times, been used in a geographic, stratigraphic, taphonomic, or biological sense, with the latter the most common in modern literature.
Preservation
Microbial mats
Microbial mats are areas of sediment stabilised by the presence of colonies of microbes that secrete sticky fluids or otherwise bind the sediment particles. They appear to migrate upwards when covered by a thin layer of sediment but this is an illusion caused by the colony's growth; individuals do not, themselves, move. If too thick a layer of sediment is deposited before they can grow or reproduce through it, parts of the colony will die leaving behind fossils with a characteristically wrinkled ("elephant skin") and tubercular texture.
Some Ediacaran strata with the texture characteristics of microbial mats contain fossils, and Ediacaran fossils are almost always found in beds that contain these microbial mats. Although microbial mats were once widespread before the Cambrian substrate revolution, the evolution of grazing organisms vastly reduced their numbers. These communities are now limited to inhospitable refugia, such as the stromatolites found in Hamelin Pool Marine Nature Reserve in Shark Bay, Western Australia, where the salt levels can be twice those of the surrounding sea.
Fossilization
The preservation of Ediacaran fossils is of interest, since as soft-bodied organisms they would normally not fossilize. Further, unlike later soft-bodied fossil biota such as the Burgess Shale or Solnhofen Limestone, the Ediacaran biota is not found in a restricted environment subject to unusual local conditions: they are global. The processes that were operating must therefore have been systemic and worldwide. Something about the Ediacaran Period permitted these delicate creatures to be left behind; the fossils may have been preserved by virtue of rapid covering by ash or sand, trapping them against the mud or microbial mats on which they lived. Their preservation was possibly enhanced by the high concentration of silica in the oceans before silica-secreting organisms such as sponges and diatoms became prevalent. Ash beds provide more detail and can readily be dated to the nearest million years or better using radiometric dating. However, it is more common to find Ediacaran fossils under sandy beds deposited by storms or in turbidites formed by high-energy bottom-scraping ocean currents. Soft-bodied organisms today rarely fossilize during such events, but the presence of widespread microbial mats probably aided preservation by stabilising their impressions in the sediment below.
Scale of preservation
The rate of cementation of the overlying substrate relative to the rate of decomposition of the organism determines whether the top or bottom surface of an organism is preserved. Most disc-shaped fossils decomposed before the overlying sediment was cemented, whereupon ash or sand slumped in to fill the void, leaving a cast of the organism's underside. Conversely, quilted fossils tended to decompose after the cementation of the overlying sediment; hence their upper surfaces are preserved. Their more resistant nature is reflected in the fact that, in rare occasions, quilted fossils are found within storm beds as the high-energy sedimentation did not destroy them as it would have the less-resistant discs. Further, in some cases, the bacterial precipitation of minerals formed a "death mask", ultimately leaving a positive, cast-like impression of the organism.
Morphology
The Ediacaran biota exhibited a vast range of morphological characteristics. Size ranged from millimetres to metres; complexity from "blob-like" to intricate; rigidity from sturdy and resistant to jelly-soft. Almost all forms of symmetry were present. These organisms differed from earlier, mainly microbial, fossils in having an organised, differentiated multicellular construction and centimetre-plus sizes.
These disparate morphologies can be broadly grouped into form taxa:
"Embryos" Recent discoveries of Precambrian multicellular life have been dominated by reports of embryos, particularly from the Doushantuo Formation in China. Some finds generated intense media excitement though some have claimed they are instead inorganic structures formed by the precipitation of minerals on the inside of a hole. Other "embryos" have been interpreted as the remains of the giant sulfur-reducing bacteria akin to Thiomargarita, a view that, while it had enjoyed a notable gain of supporters as of 2007, has since suffered following further research comparing the potential Doushantuo embryos' morphologies with those of Thiomargarita specimens, both living and in various stages of decay. A recent discovery of comparable Ediacaran fossil embryos from the Portfjeld Formation in Greenland has significantly expanded the paleogeograpical occurrence of Doushantuo-type fossil "embryos" with similar biotic forms now reported from differing paleolatitudes.
Microfossils dating from just 3 million years after the end of the Cryogenian glaciations may represent embryonic 'resting stages' in the life cycle of the earliest known animals. An alternative proposal is that these structures represent adult stages of the multicellular organisms of this period. Microfossils of Caveasphaera are thought to foreshadow the evolutionary origin of animal-like embryology.
Discs Circular fossils, such as Ediacaria, Cyclomedusa, and Rugoconites led to the initial identification of Ediacaran fossils as cnidaria, which include jellyfish and corals. Further examination has provided alternative interpretations of all disc-shaped fossils: not one is now confidently recognised as a jellyfish. Alternate explanations include holdfasts and protists; the patterns displayed where two meet have led to many 'individuals' being identified as microbial colonies, and yet others may represent scratch marks formed as stalked organisms spun around their holdfasts.
Bags Fossils such as Pteridinium preserved within sediment layers resemble "mud-filled bags". The scientific community is a long way from reaching a consensus on their interpretation.
Toroids The fossil Vendoglossa tuberculata from the Nama Group, Namibia, has been interpreted as a dorso-ventrally compressed stem-group metazoan, with a large gut cavity and a transversely ridged ectoderm. The organism is in the shape of a flattened torus, with the long axis of its toroidal body running through the approximate center of the presumed gut cavity.
Quilted organisms The organisms considered in Seilacher's revised definition of the Vendobionta share a "quilted" appearance and resembled an inflatable mattress. Sometimes these quilts would be torn or ruptured prior to preservation; such damaged specimens provide valuable clues in the reconstruction process. For example, the three (or more) petaloid fronds of Swartpuntia germsi could only be recognised in a posthumously damaged specimen – usually multiple fronds were hidden as burial squashed the organisms flat. These organisms appear to form two groups: the fractal rangeomorphs and the simpler erniettomorphs. Including such fossils as the iconic Charnia and Swartpuntia, the group is both the most iconic of the Ediacaran biota and the most difficult to place within the existing tree of life. Lacking any mouth, gut, reproductive organs, or indeed any evidence of internal anatomy, their lifestyle was somewhat peculiar by modern standards; the most widely accepted hypothesis holds that they sucked nutrients out of the surrounding seawater by osmotrophy or osmosis. However, others argue against this.
Non-Vendobionts
These are possible early forms of living phyla, which some definitions of the Ediacaran biota therefore exclude. The earliest such fossil is the reputed bilaterian Vernanimalcula, claimed by some, however, to represent the infilling of an egg-sac or acritarch. In 2020, Ikaria wariootia was claimed to represent one of the oldest organisms with anterior and posterior differentiation. Later examples are almost universally accepted as bilaterians and include the mollusc-like Kimberella, Spriggina and the shield-shaped Parvancorina, whose affinities are currently debated. A suite of fossils known as the small shelly fossils are represented in the Ediacaran, most famously by Cloudina, a shelly tube-like fossil that often shows evidence of predatory boring, suggesting that, while predation may not have been common in the Ediacaran Period, it was at least present. Organic microfossils known as small carbonaceous fossils are found in Ediacaran sediments, including the spiral-shaped Cochleatina, which spans the Ediacaran–Cambrian boundary. Ediacaria also survived well into the Cambrian. Representatives of modern taxa existed in the Ediacaran, some of which are recognisable today. Sponges, red and green algae, protists and bacteria are all easily recognisable, with some pre-dating the Ediacaran by nearly three billion years. Possible arthropods have also been described. Surface trails left by Treptichnus bear similarities to those of modern priapulids. Fossils of the hard-shelled foraminifera Platysolenites are known from the latest Ediacaran of western Siberia, coexisting with Cloudina and Namacalathus.
Filaments
Filament-shaped structures have been observed in Precambrian fossils on many occasions. Frondose fossils in Newfoundland have been observed with filamentous extensions along bedding planes, inferred to be stolonic outgrowths. A study of Brazilian Ediacaran fossils found filamentous microfossils, suggested to be eukaryotes or large sulfur-oxidizing bacteria (SOBs). Fungus-like filaments found in the Doushantuo Formation have been interpreted as eukaryotes and possibly fungi, providing possible evidence for the evolution and terrestrialization of fungi ~635 Ma.
Trace fossils With the exception of some very simple vertical burrows the only Ediacaran burrows are horizontal, lying on or just below the surface of the seafloor. Such burrows have been taken to imply the presence of motile organisms with heads, which would probably have had a bilateral symmetry. This could place them in the bilateral clade of animals but they could also have been made by simpler organisms feeding as they slowly rolled along the sea floor. Putative "burrows" dating as far back as may have been made by animals that fed on the undersides of microbial mats, which would have shielded them from a chemically unpleasant ocean; however their uneven width and tapering ends make a biological origin so difficult to defend that even the original proponent no longer believes they are authentic.
The burrows observed imply simple behaviour, and the complex efficient feeding traces common from the start of the Cambrian are absent. Some Ediacaran fossils, especially discs, have been interpreted tentatively as trace fossils but this hypothesis has not gained widespread acceptance. As well as burrows, some trace fossils have been found directly associated with an Ediacaran fossil. Yorgia and Dickinsonia are often found at the end of long pathways of trace fossils matching their shape; these fossils are thought to be associated with ciliary feeding but the precise method of formation of these disconnected and overlapping fossils largely remains a mystery. The potential mollusc Kimberella is associated with scratch marks, perhaps formed by a radula.
Classification and interpretation
Classification of the Ediacarans is inevitably difficult, hence a variety of theories exist as to their placement on the tree of life.
Martin Glaessner proposed in The Dawn of Animal Life (1984) that the Ediacaran biota were recognizable crown group members of modern phyla, but were unfamiliar because they had yet to evolve the characteristic features we use in modern classification.
In 1998 Mark McMenamin claimed Ediacarans did not possess an embryonic stage, and thus could not be animals. He believed that they independently evolved a nervous system and brains, meaning that "the path toward intelligent life was embarked upon more than once on this planet".
In 2018 analysis of ancient sterols was taken as evidence that one of the period's most-prominent and iconic fossils, Dickinsonia, was an early animal.
Cnidarians
Since the most primitive eumetazoans—multi-cellular animals with tissues—are cnidarians, and the first recognized Ediacaran fossil Charnia looks very much like a sea pen, the first attempt to categorise these fossils designated them as jellyfish and sea pens. However, more recent discoveries have established that many of the circular forms formerly considered "cnidarian medusa" are actually holdfasts – sand-filled vesicles occurring at the base of the stem of upright frond-like Ediacarans. A notable example is the form known as Charniodiscus, a circular impression later found to be attached to the long 'stem' of a frond-like organism that now bears the name.
The link between frond-like Ediacarans and sea pens has been thrown into doubt by multiple lines of evidence; chiefly the derived nature of the most frond-like pennatulacean octocorals, their absence from the fossil record before the Tertiary, and the apparent cohesion between segments in Ediacaran frond-like organisms. Some researchers have suggested that an analysis of "growth poles" discredits the pennatulacean nature of Ediacaran fronds.
Protozoans
Adolf Seilacher has suggested that in the Ediacaran, animals took over from giant protists as the dominant life form. The modern xenophyophores are giant single-celled protozoans found throughout the world's oceans, largely on the abyssal plain. Genomic evidence suggests that the xenophyophores are a specialised group of Foraminifera.
Unique phyla
Seilacher has suggested that the Ediacaran organisms represented a unique and extinct grouping of related forms descended from a common ancestor (clade) and created the kingdom Vendozoa, named after the now-obsolete Vendian era. He later excluded fossils identified as metazoans and relaunched the phylum "Vendobionta", which he described as "quilted" cnidarians lacking stinging cells. This absence precludes the current cnidarian method of feeding, so Seilacher suggested that the organisms may have survived by symbiosis with photosynthetic or chemoautotrophic organisms. Mark McMenamin saw such feeding strategies as characteristic for the entire biota, and referred to the marine biota of this period as a "Garden of Ediacara".
Lichen hypothesis
Greg Retallack has proposed that Ediacaran organisms were lichens. He argues that thin sections of Ediacaran fossils show lichen-like compartments and hypha-like wisps of ferruginized clay, and that Ediacaran fossils have been found in strata that he interprets as desert soils.
The suggestion has been disputed by other scientists; some have described the evidence as ambiguous and unconvincing, for instance noting that Dickinsonia fossils have been found on rippled surfaces (suggesting a marine environment), while trace fossils like Radulichnus could not have been caused by needle ice as Retallack has proposed. Ben Waggoner notes that the suggestion would place the root of the Cnidaria back from around 900 mya to between 1500 mya and 2000 mya, contradicting much other evidence. Matthew Nelsen, examining phylogenies of ascomycete fungi and chlorophyte algae (components of lichens), calibrated for time, finds no support for the hypothesis that lichens predated the vascular plants.
Other interpretations
Several classifications have been used to accommodate the Ediacaran biota at some point, from algae, to protozoans, to fungi, to bacterial or microbial colonies, to hypothetical intermediates between plants and animals.
A new extant genus discovered in 2014, Dendrogramma, which at the time of discovery appeared to be a basal metazoan but of unknown taxonomic placement, had been noted to have similarities with the Ediacaran fauna. It has since been found to be a siphonophore, possibly even sections of a more complex species.
Origin
It took almost 4 billion years from the formation of the Earth for Ediacaran fossils to first appear, 655 million years ago. While putative fossils are reported from , the first uncontroversial evidence for life is found , and cells with nuclei certainly existed by .
It could be that no special explanation is required: the slow process of evolution simply required 4 billion years to accumulate the necessary adaptations. Indeed, there does seem to be a slow increase in the maximum level of complexity seen over this time, with more and more complex forms of life evolving as time progresses, with traces of earlier semi-complex life such as Nimbia, found in the Twitya formation, and older rocks dating to in Kazakhstan.
On the early Earth, reactive elements, such as iron and uranium, existed in a reduced form that would react with any free oxygen produced by photosynthesising organisms. Oxygen would not be able to build up in the atmosphere until all the iron had rusted (producing banded iron formations), and all the other reactive elements had been oxidised. Donald Canfield detected records of the first significant quantities of atmospheric oxygen just before the first Ediacaran fossils appeared – and the presence of atmospheric oxygen was soon heralded as a possible trigger for the Ediacaran radiation. Oxygen seems to have accumulated in two pulses; the rise of small, sessile (stationary) organisms seems to correlate with an early oxygenation event, with larger and mobile organisms appearing around the second pulse of oxygenation. However, the assumptions underlying the reconstruction of atmospheric composition have attracted some criticism, with widespread anoxia having little effect on life where it occurs in the Early Cambrian and the Cretaceous.
Periods of intense cold have also been suggested as a barrier to the evolution of multicellular life.
The earliest known embryos, from China's Doushantuo Formation, appear just a million years after the Earth emerged from a global glaciation, suggesting that ice cover and cold oceans may have prevented the emergence of multicellular life.
In early 2008, a team analysed the range of basic body structures ("disparity") of Ediacaran organisms from three different fossil beds: Avalon in Canada, to ; White Sea in Russia, to ; and Nama in Namibia, to , immediately before the start of the Cambrian. They found that, while the White Sea assemblage had the most species, there was no significant difference in disparity between the three groups, and concluded that before the beginning of the Avalon timespan these organisms must have gone through their own evolutionary "explosion", which may have been similar to the famous Cambrian explosion.
Preservation bias
The paucity of Ediacaran fossils after the Cambrian could simply be due to conditions no longer favoring the fossilization of Ediacaran organisms, which may have continued to thrive unpreserved for a considerable time. However, if they were common, more than the occasional specimen might be expected in exceptionally preserved fossil assemblages (Konservat-Lagerstätten) such as the Burgess Shale and Chengjiang. Although no reports of Ediacara-type organisms in the Cambrian period are widely accepted at present, a few disputed reports have been made, as well as unpublished observations of 'vendobiont' fossils from 535 Ma Orsten-type deposits in China.
Predation and grazing
It has been suggested that by the Early Cambrian, organisms higher in the food chain caused the microbial mats to largely disappear. If these grazers first appeared as the Ediacaran biota started to decline, then it may suggest that they destabilised the microbial mats in a "Cambrian substrate revolution", leading to displacement or detachment of the biota; or that the destruction of the microbial substrate destabilized the ecosystem, causing extinctions.
Alternatively, skeletonized animals could have fed directly on the relatively undefended Ediacaran biota.
However, if the interpretation of the Ediacaran age Kimberella as a grazer is correct then this suggests that the biota had already had limited exposure to "predation".
Competition
Increased competition due to the evolution of key innovations among other groups, perhaps as a response to predation, may have driven the Ediacaran biota from their niches. However, the supposed "competitive exclusion" of brachiopods by bivalve molluscs was eventually deemed to be a coincidental result of two unrelated trends.
Change in environmental conditions
Great changes were happening at the end of the Precambrian and the start of the Early Cambrian. The breakup of the supercontinents, rising sea levels (creating shallow, "life-friendly" seas), a nutrient crisis, fluctuations in atmospheric composition, including oxygen and carbon dioxide levels, and changes in ocean chemistry (promoting biomineralisation) could all have played a part.
Assemblages
Late Ediacaran macrofossils are recognized globally in at least 52 formations and a variety of depositional conditions. These formations are commonly grouped into three main types, known as assemblages and named after typical localities. Each assemblage tends to occupy its own time period and region of morphospace, and after an initial burst of diversification (or extinction) changes little for the rest of its existence.
Avalon assemblage
The Avalon assemblage is defined at Mistaken Point on the Avalon Peninsula of Canada, the oldest locality with a large quantity of Ediacaran fossils.
The assemblage is easily dated because it contains many fine ash-beds, which are a good source of zircons used in the uranium-lead method of radiometric dating. These fine-grained ash beds also preserve exquisite detail. Constituents of this biota appear to persist until the extinction of all Ediacarans at the base of the Cambrian.
One interpretation of the biota is as deep-sea-dwelling rangeomorphs such as Charnia, all of which share a fractal growth pattern. They were probably preserved in situ (without post-mortem transportation), although this point is not universally accepted. The assemblage, while less diverse than the White Sea or Nama assemblages, resembles Carboniferous suspension-feeding communities, which may suggest filter feeding as the assemblage is often found in water too deep for photosynthesis.
White Sea assemblage
The White Sea or Ediacaran assemblage is named after Russia's White Sea or Australia's Ediacara Hills and is marked by much higher diversity than the Avalon or Nama assemblages. In Australia, they are typically found in red gypsiferous and calcareous paleosols formed on loess and flood deposits in an arid cool temperate paleoclimate. Most fossils are preserved as imprints in microbial beds, but a few are preserved within sandy units.
Nama assemblage
The Nama assemblage is best represented in Namibia. It is marked by extreme biotic turnover, with rates of extinction exceeding rates of origination for the whole period. Three-dimensional preservation is most common, with organisms preserved in sandy beds containing internal bedding. Dima Grazhdankin believes that these fossils represent burrowing organisms, while Guy Narbonne maintains they were surface dwellers. These beds are sandwiched between units comprising interbedded sandstones, siltstones and shales—with microbial mats, where present, usually containing the fossils. The environment is interpreted as sand bars formed at the mouth of a delta's distributaries. Mattress-like vendobionts (Ernietta, Pteridinium, Rangea) in these sandstones form a very different assemblage from vermiform fossils (Cloudina, Namacalathus) of Ediacaran "wormworld" in marine dolomite of Namibia.
Significance of assemblages
Since they are globally distributed – described on all continents except Antarctica – geographical boundaries do not appear to be a factor; the same fossils are found at all palaeolatitudes (the latitude where the fossil was created, accounting for continental drift, an application of paleomagnetism) and in separate sedimentary basins. An analysis of one of the White Sea fossil beds, where the layers cycle from continental seabed to inter-tidal to estuarine and back again a few times, found that a specific set of Ediacaran organisms was associated with each environment. However, while there is some delineation in organisms adapted to different environments, the three assemblages are more distinct temporally than paleoenvironmentally. Because of this, the three assemblages are often separated by temporal boundaries rather than environmental ones.
As the Ediacaran biota represent an early stage in multicellular life's history, it is unsurprising that not all possible modes of life are occupied. It has been estimated that of 92 potentially possible modes of life – combinations of feeding style, tiering and motility — no more than a dozen are occupied by the end of the Ediacaran. Just four are represented in the Avalon assemblage.
| Biology and health sciences | General classifications | Animals |
10610469 | https://en.wikipedia.org/wiki/Theory%20of%20tides | Theory of tides | The theory of tides is the application of continuum mechanics to interpret and predict the tidal deformations of planetary and satellite bodies and their atmospheres and oceans (especially Earth's oceans) under the gravitational loading of another astronomical body or bodies (especially the Moon and Sun).
History
Australian Aboriginal astronomy
The Yolngu people of northeastern Arnhem Land in the Northern Territory of Australia identified a link between the Moon and the tides, which they mythically attributed to the Moon filling with water and emptying out again.
Classical era
The tides received relatively little attention in the civilizations around the Mediterranean Sea, as the tides there are relatively small, and the areas that experience tides do so unreliably. A number of theories were advanced, however, from comparing the movements to breathing or blood flow to theories involving whirlpools or river cycles. A similar "breathing earth" idea was considered by some Asian thinkers. Plato reportedly believed that the tides were caused by water flowing in and out of undersea caverns. Crates of Mallus attributed the tides to "the counter-movement (ἀντισπασμός) of the sea" and Apollodorus of Corcyra to "the refluxes from the Ocean". An ancient Indian Purana text dated to 400–300 BC refers to the ocean rising and falling because of heat expansion from the light of the Moon.
Ultimately the link between the Moon (and Sun) and the tides became known to the Greeks, although the exact date of discovery is unclear; references to it are made in sources such as Pytheas of Massilia in 325 BC and Pliny the Elder's Natural History in 77 AD. Although the schedule of the tides and the link to lunar and solar movements was known, the exact mechanism that connected them was unclear. The classicist Thomas Little Heath claimed that both Pytheas and Posidonius connected the tides with the moon, "the former directly, the latter through the setting up of winds". Seneca mentions in De Providentia the periodic motion of the tides controlled by the lunar sphere. Eratosthenes (3rd century BC) and Posidonius (1st century BC) both produced detailed descriptions of the tides and their relationship to the phases of the Moon, Posidonius in particular making lengthy observations of the sea on the Spanish coast, although little of their work survived. The influence of the Moon on tides was mentioned in Ptolemy's Tetrabiblos as evidence of the reality of astrology. Seleucus of Seleucia is thought to have theorized around 150 BC that tides were caused by the Moon as part of his heliocentric model.
Aristotle, judging from discussions of his beliefs in other sources, is thought to have believed the tides were caused by winds driven by the Sun's heat, and he rejected the theory that the Moon caused the tides. An apocryphal legend claims that he committed suicide in frustration with his failure to fully understand the tides. Heraclides also held "the sun sets up winds, and that these winds, when they blow, cause the high tide and, when they cease, the low tide". Dicaearchus also "put the tides down to the direct action of the sun according to its position". Philostratus discusses tides in Book Five of Life of Apollonius of Tyana (circa 217-238 AD); he was vaguely aware of a correlation of the tides with the phases of the Moon but attributed them to spirits moving water in and out of caverns, which he connected with the legend that spirits of the dead cannot move on at certain phases of the Moon.
Medieval period
The Venerable Bede discusses the tides in The Reckoning of Time and shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. However, he made no progress regarding the question of how exactly the Moon created the tides.
Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. Dante references the Moon's influence on the tides in his Divine Comedy.
Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi, in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. Some theorized that the influence was caused by lunar rays heating the ocean's floor.
Modern era
Simon Stevin in his 1608 De spiegheling der Ebbenvloet (The Theory of Ebb and Flood) dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made. In 1609, Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, which he compared to magnetic attraction, basing his argument upon ancient observations and correlations.
In 1616, Galileo Galilei wrote Discourse on the Tides. He strongly and mockingly rejects the lunar theory of the tides, and tries to explain the tides as the result of the Earth's rotation and revolution around the Sun, believing that the oceans moved like water in a large basin: as the basin moves, so does the water. Therefore, as the Earth revolves, the force of the Earth's rotation causes the oceans to "alternately accelerate and retardate". His view on the oscillation and "alternately accelerated and retardated" motion of the Earth's rotation is a "dynamic process" that deviated from the previous dogma, which proposed "a process of expansion and contraction of seawater." However, Galileo's theory was erroneous. In subsequent centuries, further analysis led to the current tidal physics. Galileo tried to use his tidal theory to prove the movement of the Earth around the Sun. Galileo theorized that because of the Earth's motion, borders of the oceans like the Atlantic and Pacific would show one high tide and one low tide per day. The Mediterranean Sea had two high tides and low tides, though Galileo argued that this was a product of secondary effects and that his theory would hold in the Atlantic. However, Galileo's contemporaries noted that the Atlantic also had two high tides and low tides per day, which led to Galileo omitting this claim from his 1632 Dialogue.
René Descartes theorized that the tides (alongside the movement of planets, etc.) were caused by aetheric vortices, without reference to Kepler's theories of gravitation by mutual attraction; this was extremely influential, with numerous followers of Descartes expounding on this theory throughout the 17th century, particularly in France. However, Descartes and his followers acknowledged the influence of the Moon, speculating that pressure waves from the Moon via the aether were responsible for the correlation.
Newton, in the Principia, provides a correct explanation for the tidal force, which can be used to explain tides on a planet covered by a uniform ocean but which takes no account of the distribution of the continents or ocean bathymetry.
Dynamic theory
While Newton explained the tides by describing the tide-generating forces and Daniel Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Pierre-Simon Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides takes into account friction, resonance and natural periods of ocean basins. It predicts the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed.
The equilibrium theory—based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects—could not explain the real ocean tides. Since measurements have confirmed the dynamic theory, it can now account for many previously puzzling observations, such as how the tides interact with deep-sea ridges, and how chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium theory predicts a tide wave of less than half a meter in height, while the dynamic theory explains why tides can reach up to 15 meters.
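The sub-half-meter equilibrium figure can be checked with a back-of-the-envelope estimate from the degree-2 lunar tidal potential. Using standard values for the lunar-to-Earth mass ratio, Earth's radius a, and the lunar distance d (these constants are not from the source and are quoted for illustration), the maximum equilibrium elevation at the sublunar point is roughly

$$\zeta_{\max} \;\approx\; \frac{M_{\text{Moon}}}{M_\oplus}\left(\frac{a}{d}\right)^{3} a \;\approx\; 0.0123 \times \left(\frac{6.37\times10^{6}\,\text{m}}{3.84\times10^{8}\,\text{m}}\right)^{3} \times 6.37\times10^{6}\,\text{m} \;\approx\; 0.36\ \text{m},$$

comfortably below the multi-meter tides observed at many coasts, which only the dynamic theory accounts for.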
Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.
Laplace's tidal equations
In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamics equations, but they can also be derived from energy integrals via Lagrange's equation.
For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively), satisfy Laplace's tidal equations:

$$\frac{\partial \zeta}{\partial t} + \frac{1}{a\cos\varphi}\left[\frac{\partial}{\partial\lambda}(uD) + \frac{\partial}{\partial\varphi}\left(vD\cos\varphi\right)\right] = 0,$$

$$\frac{\partial u}{\partial t} - v\left(2\Omega\sin\varphi\right) + \frac{1}{a\cos\varphi}\frac{\partial}{\partial\lambda}\left(g\zeta + U\right) = 0,$$

$$\frac{\partial v}{\partial t} + u\left(2\Omega\sin\varphi\right) + \frac{1}{a}\frac{\partial}{\partial\varphi}\left(g\zeta + U\right) = 0,$$
where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential.
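To make the structure of these equations concrete, here is a minimal numerical sketch in Python (NumPy) integrating them with forward Euler on a coarse latitude–longitude grid. The depth, resolution, time step and toy M2-like forcing are all assumed values chosen for illustration; a real tidal model would use staggered grids, bathymetry, dissipation and careful pole treatment.

```python
import numpy as np

# Forward-Euler integration of Laplace's tidal equations (LTE) for a
# uniform-depth fluid sheet on a rotating sphere. Illustrative only.

Omega = 7.2921e-5   # planetary rotation rate (rad/s)
g     = 9.81        # surface gravity (m/s^2)
a     = 6.371e6     # planetary radius (m)
D     = 4000.0      # assumed mean sheet thickness (m)

nlam, nphi = 72, 36
lam = np.linspace(0.0, 2*np.pi, nlam, endpoint=False)   # longitude
phi = np.deg2rad(np.linspace(-75.0, 75.0, nphi))        # latitude, poles excluded
LAM, PHI = np.meshgrid(lam, phi, indexing="ij")
dlam, dphi = lam[1] - lam[0], phi[1] - phi[0]

zeta = np.zeros((nlam, nphi))   # tidal elevation
u = np.zeros_like(zeta)         # eastward velocity
v = np.zeros_like(zeta)         # northward velocity

def d_dlam(f):
    # centred difference in longitude, periodic wrap-around
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0*dlam)

def d_dphi(f):
    # centred difference in latitude, one-sided at the edges
    out = np.empty_like(f)
    out[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / (2.0*dphi)
    out[:, 0] = (f[:, 1] - f[:, 0]) / dphi
    out[:, -1] = (f[:, -1] - f[:, -2]) / dphi
    return out

def U_tide(t, amp=1.0, omega=1.40519e-4):
    # toy M2-like forcing: semidiurnal, degree-2 spatial pattern;
    # the amplitude is arbitrary for this sketch
    return amp * np.cos(PHI)**2 * np.cos(2.0*LAM + omega*t)

f_cor = 2.0*Omega*np.sin(PHI)   # Coriolis parameter
dt = 20.0                       # time step (s), small enough for stability
for n in range(500):
    P = g*zeta + U_tide(n*dt)   # the combined g*zeta + U term
    du = ( f_cor*v - d_dlam(P)/(a*np.cos(PHI)) ) * dt
    dv = (-f_cor*u - d_dphi(P)/a) * dt
    dzeta = -(d_dlam(u*D) + d_dphi(v*D*np.cos(PHI))) / (a*np.cos(PHI)) * dt
    u, v, zeta = u + du, v + dv, zeta + dzeta

print(f"max |zeta| after {500*dt/3600:.1f} h: {np.abs(zeta).max():.3e} m")
```

Each loop iteration applies the three equations directly: the two momentum updates balance the Coriolis term against the gradient of gζ + U, and the continuity update accumulates the divergence of the volume flux into the elevation.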
William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity.
Tidal analysis and prediction
Harmonic analysis
Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. This position changed in the 1860s when the local circumstances of tidal phenomena were more fully brought into account by William Thomson's application of Fourier analysis to the tidal motions as harmonic analysis. Thomson's work in this field was further developed and extended by George Darwin, applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used, for example: M: moon/lunar; S: sun/solar; K: moon-sun/lunisolar.
Darwin's harmonic developments of the tide-generating forces were later improved when A.T. Doodson, applying the lunar theory of E.W. Brown, developed the tide-generating potential (TGP) in harmonic form, distinguishing 388 tidal frequencies. Doodson's work was carried out and published in 1921. Doodson devised a practical system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers, a system still in use.
Since the mid-twentieth century further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. The calculations of tide predictions using the harmonic constituents are laborious, and from the 1870s to about the 1960s they were carried out using a mechanical tide-predicting machine, a special-purpose form of analog computer. More recently, digital computers using the method of matrix inversion have been used to determine the tidal harmonic constituents directly from tide gauge records.
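As a concrete illustration of this approach, the sketch below (Python/NumPy) synthesises a month of hourly "gauge" data from four standard constituents and then recovers their amplitudes and phases by linear least squares, the matrix-inversion method mentioned above. The constituent speeds are the standard values; the amplitudes, phases and noise level are invented for the example.

```python
import numpy as np

# Harmonic tidal analysis sketch: fit A_i, phi_i in
#   h(t) = sum_i A_i * cos(omega_i * t - phi_i)
# by rewriting each term as A cos(phi) cos(wt) + A sin(phi) sin(wt),
# which is linear in the unknowns and solvable by least squares.

speeds = {             # degrees per mean solar hour (standard values)
    "M2": 28.9841042,  # principal lunar semidiurnal
    "S2": 30.0000000,  # principal solar semidiurnal
    "K1": 15.0410686,  # lunisolar diurnal
    "O1": 13.9430356,  # lunar diurnal
}
true_amp   = {"M2": 1.20, "S2": 0.40, "K1": 0.15, "O1": 0.10}   # m, assumed
true_phase = {"M2": 110.0, "S2": 35.0, "K1": 200.0, "O1": 180.0}  # deg, assumed

t = np.arange(0.0, 30*24.0, 1.0)   # 30 days of hourly "observations"
record = sum(true_amp[k] * np.cos(np.deg2rad(speeds[k]*t - true_phase[k]))
             for k in speeds)
record += np.random.default_rng(0).normal(0.0, 0.05, t.size)  # gauge noise

# Design matrix: one cosine and one sine column per constituent.
cols = []
for k in speeds:
    w = np.deg2rad(speeds[k] * t)
    cols += [np.cos(w), np.sin(w)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, record, rcond=None)

for i, k in enumerate(speeds):
    c, s = coef[2*i], coef[2*i + 1]
    amp = np.hypot(c, s)                          # A = sqrt(c^2 + s^2)
    phase = np.degrees(np.arctan2(s, c)) % 360.0  # phi = atan2(s, c)
    print(f"{k}: amplitude {amp:.3f} m, phase {phase:.1f} deg")
```

Run on real tide-gauge records, the same fit yields the station's harmonic constants, which can then be used to predict future tides indefinitely.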
Tidal constituents
Tidal constituents combine to give an endlessly varying aggregate because of their different and incommensurable frequencies: the effect is visualized in an animation of the American Mathematical Society illustrating the way in which the components used to be mechanically combined in the tide-predicting machine. Amplitudes (half of peak-to-peak amplitude) of tidal constituents vary from station to station; example locations include Eastport, Maine (ME), Biloxi, Mississippi (MS), San Juan, Puerto Rico (PR), Kodiak, Alaska (AK), San Francisco, California (CA), and Hilo, Hawaii (HI). The constituents are grouped into semi-diurnal, diurnal, long-period, and short-period classes.
Doodson numbers
In order to specify the different harmonic components of the tide-generating potential, Doodson devised a practical system which is still in use, involving what are called the Doodson numbers based on the six Doodson arguments or Doodson variables. The number of different tidal frequency components is large, but each corresponds to a specific linear combination of six frequencies using small-integer multiples, positive or negative. In principle, these basic angular arguments can be specified in numerous ways; Doodson's choice of his six "Doodson arguments" has been widely used in tidal work. In terms of these Doodson arguments, each tidal frequency can then be specified as a sum made up of a small integer multiple of each of the six arguments. The resulting six small integer multipliers effectively encode the frequency of the tidal argument concerned, and these are the Doodson numbers: in practice all except the first are usually biased upwards by +5 to avoid negative numbers in the notation. (In the case that the biased multiple exceeds 9, the system adopts X for 10, and E for 11.)
The Doodson arguments are specified in the following way, in order of decreasing frequency:
τ is mean lunar time, the Greenwich hour angle of the mean Moon plus 12 hours.
s is the mean longitude of the Moon.
h is the mean longitude of the Sun.
p is the longitude of the Moon's mean perigee.
N′ is the negative of the longitude of the Moon's mean ascending node on the ecliptic.
p₁ or pₛ is the longitude of the Sun's mean perigee.
In these expressions, the symbols l, l′, F and D refer to an alternative set of fundamental angular arguments (usually preferred for use in modern lunar theory), in which:
l is the mean anomaly of the Moon (distance from its perigee).
l′ is the mean anomaly of the Sun (distance from its perigee).
F is the Moon's mean argument of latitude (distance from its node).
D is the Moon's mean elongation (distance from the Sun).
It is possible to define several auxiliary variables on the basis of combinations of these.
In terms of this system, each tidal constituent frequency can be identified by its Doodson numbers. The strongest tidal constituent "M2" has a frequency of 2 cycles per lunar day, its Doodson numbers are usually written 255.555, meaning that its frequency is composed of twice the first Doodson argument, and zero times all of the others. The second strongest tidal constituent "S2" is influenced by the sun, and its Doodson numbers are 273.555, meaning that its frequency is composed of twice the first Doodson argument, +2 times the second, -2 times the third, and zero times each of the other three. This aggregates to the angular equivalent of mean solar time +12 hours. These two strongest component frequencies have simple arguments for which the Doodson system might appear needlessly complex, but each of the hundreds of other component frequencies can be briefly specified in a similar way, showing in the aggregate the usefulness of the encoding.
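The encoding just described is mechanical enough to capture in a few lines of code. The following Python sketch (the function names are my own, not a standard library's) converts between the six signed multipliers and the dotted Doodson-number string, reproducing the M2 and S2 examples above; it assumes the first multiplier is non-negative, as it is for real constituents.

```python
# Doodson-number encoding: six small integer multipliers of the Doodson
# arguments (tau, s, h, p, N', p1); all but the first are biased by +5,
# with X standing for 10 and E for 11.

DIGITS = "0123456789XE"

def encode_doodson(multipliers):
    """Six integer multipliers -> Doodson number string like '255.555'."""
    first, rest = multipliers[0], multipliers[1:]
    codes = [DIGITS[first]] + [DIGITS[m + 5] for m in rest]
    return "".join(codes[:3]) + "." + "".join(codes[3:])

def decode_doodson(number):
    """Doodson number string -> the six integer multipliers."""
    vals = [DIGITS.index(c) for c in number.replace(".", "")]
    return [vals[0]] + [v - 5 for v in vals[1:]]

# The two strongest constituents, as described in the text:
assert encode_doodson([2, 0, 0, 0, 0, 0]) == "255.555"    # M2
assert encode_doodson([2, 2, -2, 0, 0, 0]) == "273.555"   # S2
print(decode_doodson("273.555"))   # -> [2, 2, -2, 0, 0, 0]
```

The bias of +5 (and the X/E digits) exists purely so that each multiplier, which in practice ranges over small negative and positive integers, fits into a single character of the compact dotted notation.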
| Physical sciences | Geophysics | Earth science |
260844 | https://en.wikipedia.org/wiki/Mergus | Mergus | Mergus is the genus of the typical mergansers, fish-eating ducks in the subfamily Anatinae. The genus name is a Latin word used by Pliny the Elder and other Roman authors to refer to an unspecified waterbird.
The common merganser (Mergus merganser) and red-breasted merganser (M. serrator) have broad ranges in the northern hemisphere. The Brazilian merganser (M. octosetaceus) is a South American duck, and one of the six most threatened waterfowl in the world, with possibly fewer than 250 birds in the wild. The scaly-sided merganser or "Chinese merganser" (M. squamatus) is an endangered species. It lives in temperate East Asia, breeding in the north and wintering in the south.
The hooded merganser (Lophodytes cucullatus, formerly known as Mergus cucullatus) is not of this genus but is closely related. The other "aberrant" merganser, the smew (Mergellus albellus), is phylogenetically closer to goldeneyes (Bucephala).
Although they are seaducks, most of the mergansers prefer riverine habitats, with only the red-breasted merganser being common at sea. These large fish-eaters typically have black-and-white, brown and/or green hues in their plumage, and most have somewhat shaggy crests. All have serrated edges to their long and thin bills that help them grip their prey. Along with the smew and hooded merganser, they are therefore often known as "sawbills". The goldeneyes, on the other hand, feed mainly on mollusks, and therefore have a more typical duck-bill.
Mergus ducks are also classified as "diving ducks" because they submerge completely when looking for food. In other traits, however, the genera Mergus, Lophodytes, Mergellus, and Bucephala are very similar: uniquely among all Anseriformes, they do not have notches at the hind margin of their sternum, but holes surrounded by bone.
Taxonomy
The genus Mergus was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The genus name is the Latin word for an unidentified waterbird mentioned by Pliny the Elder and other authors; some sources have identified the original mergus as referring to either a cormorant or Scopoli's shearwater. The type species was designated as Mergus serrator Linnaeus, 1758 (the red-breasted merganser) by Thomas Campbell Eyton in 1838.
Recent species
The genus contains four living species and two recently extinct species.
Fossil species
Some fossil members of this genus have been described:
Mergus miscellus is known from the Middle Miocene Calvert Formation (Barstovian, c.14 million years ago) of Virginia, USA.
Mergus connectens lived in the Early Pleistocene about 2–1 million years ago, in Central and Eastern Europe.
The Early Oligocene booby "Sula" ronzoni was at first mistakenly believed to be a typical merganser. A Late Serravallian (13–12 million years ago) fossil sometimes attributed to Mergus, found in the Sajóvölgyi Formation of Mátraszőlős, Hungary, probably belongs to Mergellus. The affiliations of the mysterious "Anas" albae from the Messinian (c. 7–5 million years ago) of Hungary are undetermined; it was initially believed to be a typical merganser too.
| Biology and health sciences | Anseriformes | Animals |
260857 | https://en.wikipedia.org/wiki/Vireo | Vireo | The vireos make up a family, Vireonidae, of small to medium-sized passerine birds found in the New World (Canada to Argentina, including Bermuda and the West Indies) and Southeast Asia. The family contains 62 species and is divided into eight genera. "Vireo" is a Latin word referring to a green migratory bird, perhaps the female golden oriole, possibly the European greenfinch.
They are typically dull-plumaged and greenish in color, the smaller species resembling wood warblers apart from their heavier bills. They range in size from the Chocó vireo, dwarf vireo and lesser greenlet, all at around 10 cm and 8 g, to the peppershrikes and shrike-vireos at up to 17 cm and 40 g.
Distribution and habitat
Most species are found in Middle America and northern South America. Thirteen species of true vireos occur farther north, in the United States, Bermuda and Canada; of these all but Hutton's vireo are migratory. Members of the family seldom fly long distances except in migration. They inhabit forest environments, with different species preferring forest canopies, undergrowth, or mangrove swamps.
A few species in the genus Vireo have appeared on the eastern side of the Atlantic as vagrants to the Western Palearctic.
Behaviour
The resident species occur in pairs or family groups that maintain territories all year (except Hutton's vireo, which joins mixed feeding flocks). Most of the migrants defend winter territories against conspecifics. The exceptions are the complex comprising the red-eyed vireo, the yellow-green vireo, the black-whiskered vireo, and the Yucatan vireo, which winter in small wandering flocks.
Voice
Males of most species are persistent singers. Songs are usually rather simple, monotonous in some species of the Caribbean littoral and islands, and most elaborate and pleasant to human ears in the Chocó vireo and the peppershrikes.
Breeding
The nests of many tropical species are unknown. Of those that are known, all build a cup-shaped nest that hangs from branches. The female does most of the incubation, spelled by the male except in the red-eyed vireo complex.
Feeding
All members of the family eat some fruit but mostly insects and other arthropods. They take prey from leaves and branches; true vireos also flycatch, and the gray vireo takes 5 percent of its prey from the ground.
Systematics
The family Vireonidae is related to the crow-like birds in family Corvidae and the shrikes in family Laniidae as part of superfamily Corvoidea. As currently circumscribed the family is made up of eight genera.
Traditionally the family was considered to include four New World genera containing the true vireos (Vireo), the greenlets (Hylophilus), the shrike-vireos (Vireolanius) and the peppershrikes (Cyclarhis). However, phylogenetic studies found Hylophilus to be polyphyletic, with the greenlets split into three distinct groups: the "scrub" greenlets in a restricted Hylophilus, the "canopy" greenlets in resurrected genus Pachysylvia and the tawny-crowned greenlet in new genus Tunchiornis.
In addition, biochemical studies have identified two babbler genera (Pteruthius and Erpornis) which may be Old World members of this family. Observers have commented on the vireo-like behaviour of the Pteruthius shrike-babblers, but apparently no-one suspected the biogeographically unlikely possibility of vireo relatives in Asia. Some recent taxonomic treatments, such as the IOC taxonomy followed here, include Pteruthius and Erpornis in Vireonidae, whereas others place them in their own families Pteruthidae and Erpornidae.
Species in taxonomic order
| Biology and health sciences | Passerida | Animals |
261002 | https://en.wikipedia.org/wiki/Traffic%20sign | Traffic sign | Traffic signs or road signs are signs erected at the side of or above roads to give instructions or provide information to road users. The earliest signs were simple wooden or stone milestones. Later, signs with directional arms were introduced, for example the fingerposts in the United Kingdom and their wooden counterparts in Saxony.
With traffic volumes increasing since the 1930s, many countries have adopted pictorial signs or otherwise simplified and standardized their signs to overcome language barriers, and enhance traffic safety. Such pictorial signs use symbols (often silhouettes) in place of words and are usually based on international protocols. Such signs were first developed in Europe, and have been adopted by most countries to varying degrees.
International conventions
International conventions such as Vienna Convention on Road Signs and Signals and Geneva Convention on Road Traffic have helped to achieve a degree of uniformity in traffic signing in various countries. Countries have also unilaterally (to some extent) followed other countries in order to avoid confusion.
Categories
Traffic signs can be grouped into several types. For example, Annexe 1 of the Vienna Convention on Road Signs and Signals (1968), which on 30 June 2004 had 52 signatory countries, defines eight categories of signs:
A. Danger warning signs
B. Priority signs
C. Prohibitory or restrictive signs
D. Mandatory signs
E. Special regulation signs
F. Information, facilities, or service signs
G. Direction, position, or indication signs
H. Additional panels
In the United States, Canada, Australia, and New Zealand signs are categorized as follows:
Regulatory signs
Warning signs
Guide signs
Street name signs
Route marker signs
Expressway signs
Freeway signs
Welcome signs
Informational signs
Recreation and cultural interest signs
Emergency management (civil defense) signs
Temporary traffic control (construction or work zone) signs
School signs
Railroad and light rail signs
Bicycle signs
In the United States, the categories, placement, and graphic standards for traffic signs and pavement markings are legally defined in the Federal Highway Administration's Manual on Uniform Traffic Control Devices as the standard.
A rather informal distinction among the directional signs is the one between advance directional signs, interchange directional signs, and reassurance signs. Advance directional signs appear at a certain distance from the interchange, giving information for each direction. A number of countries do not give information for the road ahead (so-called "pull-through" signs), and only for the directions left and right. Advance directional signs enable drivers to take precautions for the exit (e.g., switch lanes, double check whether this is the correct exit, slow down).
They often do not appear on lesser roads, but are normally posted on expressways and motorways, as drivers would be missing exits without them. While each nation has its own system, the first approach sign for a motorway exit is mostly placed at least from the actual interchange. After that sign, one or two additional advance directional signs typically follow before the actual interchange itself.
History
The earliest road signs were milestones, giving distance or direction; for example, the Romans erected stone columns throughout their empire giving the distance to Rome. According to Strabo, the Mauryas erected signboards at distances of 10 stades to mark their roads. In the Middle Ages, multidirectional signs at intersections became common, giving directions to cities and towns.
In 1686, the first known Traffic Regulation Act in Europe was established by King Peter II of Portugal. This act foresaw the placement of priority signs in the narrowest streets of Lisbon, stating which traffic should back up to give way. One of these signs still exists at Salvador street, in the neighborhood of Alfama.
The first modern road signs erected on a wide scale were designed for riders of high or "ordinary" bicycles in the late 1870s and early 1880s. These machines were fast, silent and their nature made them difficult to control, moreover their riders travelled considerable distances and often preferred to tour on unfamiliar roads. For such riders, cycling organizations began to erect signs that warned of potential hazards ahead (particularly steep hills), rather than merely giving distance or directions to places, thereby contributing the sign type that defines "modern" traffic signs.
The development of automobiles encouraged more complex signage systems using more than just text-based notices. One of the first modern-day road sign systems was devised by the Italian Touring Club in 1895. By 1900, a Congress of the International League of Touring Organizations in Paris was considering proposals for standardization of road signage. In 1903 the British government introduced four "national" signs based on shape, but the basic patterns of most traffic signs were set at the 1908 World Road Congress in Paris. In 1909, nine European governments agreed on the use of four pictorial symbols, indicating "bump", "curve", "intersection", and "grade-level railroad crossing". The intensive work on international road signs that took place between 1926 and 1949 eventually led to the development of the European road sign system. Both Britain and the United States developed their own road signage systems, both of which were adopted or modified by many other nations in their respective spheres of influence. The UK adopted a version of the European road signs in 1964 and, over past decades, North American signage began using some symbols and graphics mixed in with English.
In the U.S., the first road signs were erected by the American Automobile Association (AAA). Starting in 1906, regional AAA clubs began paying for and installing wooden signs to help motorists find their way. In 1914, AAA started a cohesive transcontinental signage project, installing more than 4,000 signs in one stretch between Los Angeles and Kansas City alone.
Over the years, change was gradual. Pre-industrial signs were stone or wood, but with the development of Darby's method of smelting iron using coke, painted cast iron became favoured in the late 18th and 19th centuries. Cast iron continued to be used until the mid-20th century, but it was gradually displaced by aluminium or other materials and processes, such as vitreous enamelled and/or pressed malleable iron, or (later) steel. Since 1945 most signs have been made from sheet aluminium with adhesive plastic coatings; these are normally retroreflective for nighttime and low-light visibility. Before the development of reflective plastics, reflectivity was provided by glass reflectors set into the lettering and symbols.
New generations of traffic signs based on electronic displays can also change their text (or, in some countries, symbols) to provide for "intelligent control" linked to automated traffic sensors or remote manual input. In over 20 countries, real-time Traffic Message Channel incident warnings are conveyed directly to vehicle navigation systems using inaudible signals carried via FM radio, 3G cellular data and satellite broadcasts. Finally, cars can pay tolls and trucks pass safety screening checks using video numberplate scanning, or RFID transponders in windshields linked to antennae over the road, in support of on-board signalling, toll collection, and travel time monitoring.
Yet another "medium" for transferring information ordinarily associated with visible signs is RIAS (Remote Infrared Audible Signage), e.g., "talking signs" for print-handicapped (including blind/low-vision/illiterate) people. These are infra-red transmitters serving the same purpose as the usual graphic signs when received by an appropriate device such as a hand-held receiver or one built into a cell phone.
Finally, on August 5, 1914, the world's first electric traffic signal was put into place at the corner of Euclid Avenue and East 105th Street in Cleveland, Ohio.
Typefaces
Typefaces used on traffic signs vary by location, with some typefaces being designed specifically for the purpose of being used on traffic signs and based on attributes that aid viewing from a distance. A typeface chosen for a traffic sign is selected based on its readability, which is essential for conveying information to drivers quickly and accurately at high speeds and long distances.
Factors such as clear letterforms, lines of copy, appropriate spacing, and simplicity contribute to readability. Increased x-height and counters specifically help with letter distinction and reduced halation, which especially affects aging drivers. In cases of halation, certain letters can blur and look like others, such as a lowercase "e" appearing as an "a", "c", or "o".
Dispute of standard typefaces for North American traffic signs
In 1997, a design team at the T.D. Larson Transportation Institute began testing Clearview, a typeface designed to address the readability and halation issues of the FHWA Standard Alphabet, also known as Highway Gothic, which is the standard typeface for highway signs in the U.S.
The adoption of Clearview for traffic signs over Highway Gothic has been slow since its initial proposal. Country-wide adoption faced resistance from both local governments and the Federal Highway Administration (FHWA), citing concerns about consistency and cost, along with doubts about the studies done on Clearview's improved readability. As stated by the FHWA, "This process (of designing Clearview) did not result in a necessarily better set of letter styles for highway signing, but rather a different set of letter styles with increased letter height and different letter spacing that was not comparable to the Standard Alphabets."
Rather than mandating a national change, the FHWA approved Clearview on an interim basis, under which local governments could submit a request to the FHWA for approval to update their signs with Clearview. In 2016 it rescinded this approval, seeking to limit the confusion and inconsistency that could come from a mix of two typefaces in use. In 2018, it again granted interim approval of Clearview, with Highway Gothic remaining the standard.
Automatic traffic sign recognition
Cars have featured cameras with automatic traffic sign recognition since 2008, beginning with the Opel Insignia. The system mainly recognizes speed limits and no-overtaking zones. It also uses GPS and a database of speed limits, which is useful in the many countries that signpost city speed limits with a city-name sign rather than a numeric speed limit sign.
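As a sketch of how such a system might combine its two information sources, the following Python fragment (all names are hypothetical, not any manufacturer's API) prefers a camera-recognised limit and falls back to the map database, applying an implied urban default where only a city-name boundary is known.

```python
# Hypothetical fusion of camera-read signs with GPS map data.
# Class and field names are invented for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeedLimitEstimate:
    camera_kmh: Optional[int]   # limit read from the last recognised sign
    map_kmh: Optional[int]      # limit from the GPS map database
    inside_city: bool           # map says a city-name boundary was passed

def effective_limit(e: SpeedLimitEstimate, city_default_kmh: int = 50) -> Optional[int]:
    """Prefer an explicitly recognised sign; otherwise fall back to map data.

    The city default models countries where urban limits are implied by a
    city-name sign rather than posted numerically (default value assumed)."""
    if e.camera_kmh is not None:
        return e.camera_kmh
    if e.map_kmh is not None:
        return e.map_kmh
    if e.inside_city:
        return city_default_kmh
    return None

print(effective_limit(SpeedLimitEstimate(None, None, True)))  # -> 50
```

The priority order reflects the situation described above: an explicit sign overrides the map, while the map and the implied city limit cover stretches where no numeric sign is posted.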
Rail traffic
Rail signage often differs greatly between countries and bears little similarity to road signs. Rail traffic is operated by professional drivers whose training is much longer than that required for an ordinary road driving licence. Differences between neighbouring countries nonetheless cause problems for cross-border traffic and create a need for additional driver training.
| Technology | Road infrastructure | null |
261376 | https://en.wikipedia.org/wiki/Tanager | Tanager | The tanagers comprise the bird family Thraupidae, in the order Passeriformes. The family has a Neotropical distribution and is the second-largest family of birds. It represents about 4% of all avian species and 12% of the Neotropical birds.
Traditionally, the family contained around 240 species of mostly brightly colored fruit-eating birds. As more of these birds were studied using modern molecular techniques, it became apparent that the traditional families were not monophyletic. Euphonia and Chlorophonia, which were once considered part of the tanager family, are now treated as members of the Fringillidae, in their own subfamily (Euphoniinae). Likewise, the genera Piranga (which includes the scarlet tanager, summer tanager, and western tanager), Chlorothraupis, and Habia appear to be members of the family Cardinalidae, and have been reassigned to that family by the American Ornithological Society.
Description
Tanagers are small to medium-sized birds. The shortest-bodied species, the white-eared conebill, is long and weighs , barely smaller than the short-billed honeycreeper. The longest, the magpie tanager is and weighs . The heaviest is the white-capped tanager, which weighs and measures about . Both sexes are usually the same size and weight.
Tanagers are often brightly colored, but some species are black and white. Males are typically more brightly colored than females and juveniles. Most tanagers have short, rounded wings. The shape of the bill seems to be linked to the species' foraging habits.
Distribution
Tanagers are restricted to the Western Hemisphere and mainly to the tropics. About 60% of tanagers live in South America, and 30% of these species live in the Andes. Most species are endemic to a relatively small area.
Behavior
Most tanagers live in pairs or in small groups of three to five individuals. These groups may consist simply of parents and their offspring. These birds may also be seen in single-species or mixed flocks. Many tanagers are thought to have dull songs, though some are elaborate.
Diet
Tanagers are omnivorous, and their diets vary by genus. They have been seen eating fruits, seeds, nectar, flower parts, and insects. Many pick insects off branches or from holes in the wood. Other species look for insects on the undersides of leaves. Yet others wait on branches until they see a flying insect and catch it in the air. Many of these particular species inhabit the same areas, but these specializations alleviate competition.
Breeding
The breeding season is March through June in temperate areas and in September through October in South America. Some species are territorial, while others build their nests closer together. Little information is available on tanager breeding behavior. Males show off their brightest feathers to potential mates and rival males. Some species' courtship rituals involve bowing and tail lifting.
Most tanagers build cup nests on branches in trees. Some nests are almost globular. Entrances are usually built on the side of the nest. The nests can be shallow or deep. The species of the tree in which they choose to build their nests and the nests' positions vary among genera. Most species nest in an area hidden by very dense vegetation. No information is yet known regarding the nests of some species.
The clutch size is three to five eggs. The female incubates the eggs and builds the nest, but the male may feed the female while she incubates. Both sexes feed the young. Five species have helpers assist in feeding the young. These helpers are thought to be the previous year's nestlings.
Taxonomy
The family Thraupidae was introduced (as the subfamily Thraupinae) in 1847 by German ornithologist Jean Cabanis. The type genus is Thraupis.
The family Thraupidae is a member of an assemblage of over 800 birds known as the New World, nine-primaried oscines. The traditional pre-molecular classification was largely based on the different feeding specializations. Nectar-feeders were placed in Coerebidae (honeycreepers), large-billed seed-eaters in Cardinalidae (cardinals and grosbeaks), smaller-billed seed-eaters in Emberizidae (New World finches and sparrows), ground-foraging insect-eaters in Icteridae (blackbirds) and fruit-eaters in Thraupidae. This classification was known to be problematic as analyses using other morphological characteristics often produced conflicting phylogenies. Beginning in the last decade of the 20th century, a series of molecular phylogenetic studies led to a complete reorganization of the traditional families. Thraupidae now includes large-billed seed eaters, thin-billed nectar feeders, and foliage gleaners as well as fruit-eaters.
One consequence of redefining the family boundaries is that for many species their common names are no longer congruent with the families in which they are placed. As of July 2020 there are 39 species with "tanager" in the common name that are not placed in Thraupidae. These include the widely distributed scarlet tanager and western tanager, which are both now placed in Cardinalidae. There are also 106 species within Thraupidae that have "finch" in their common name.
A molecular phylogenetic study published in 2014 revealed that many of the traditional genera were not monophyletic. In the resulting reorganization six new genera were introduced, eleven genera were resurrected, and seven genera were abandoned.
As of July 2023 the family contains 386 species which are divided into 15 subfamilies and 105 genera. For a complete list, see the article List of tanager species.
List of genera
Catamblyrhynchinae
The plushcap has no close relatives and is now placed in its own subfamily. It was previously placed either in the subfamily Catamblyrhynchinae within the Emberizidae or in its own family Catamblyrhynchidae.
Charitospizinae
The coal-crested finch is endemic to the grasslands of Brazil and has no close relatives. It is unusual in that both sexes have a crest. It was formerly placed in Emberizidae.
Orchesticinae
Two species with large thick bills. Parkerthraustes was formerly placed in Cardinalidae.
Nemosiinae
Brightly colored, sexually dichromatic birds. Most form single-species flocks.
Emberizoidinae
Grassland dwelling birds that were formerly placed in Emberizidae.
Porphyrospizinae
Yellow-billed birds. The blue finch (Rhopospina caerulescens) was formerly placed in Cardinalidae; the other species were formerly placed in Emberizidae.
Hemithraupinae
These species are sexually dichromatic and many have yellow and black plumage. Except for Heterospingus, they have slender bills.
Dacninae
Sexually dichromatic species—males have blue plumage and females are green.
Saltatorinae
Mainly arboreal with long tails and thick bills. Formerly placed in Cardinalidae.
Coerebinae
This subfamily includes Darwin's finches that are endemic to the Galápagos Islands and Cocos Island. Most of these species were formerly placed in Emberizidae; the exceptions are the bananaquit that was placed in Parulidae and the orangequit that was placed in Thraupidae. These species build domed or covered nests with side entrances. They have evolved a variety of foraging techniques, including nectar-feeding (Coereba, Euneornis), seed-eating (Geospiza, Loxigilla, Tiaris), and insect gleaning (Certhidea).
Tachyphoninae
Most of these are lowland species. Many have ornamental features such as crests, and many have sexually dichromatic plumage.
Sporophilinae
These species were formerly placed in Emberizidae.
Poospizinae
Some of these species were formerly placed in Emberizidae.
Diglossinae
This is a morphologically diverse group that includes seed-eaters (Nesospiza, Sicalis, Catamenia, Haplospiza), arthropod feeders (Conirostrum), a bamboo specialist (Acanthidops), an aphid feeder (Xenodacnis), and boulder field specialists (Idiopsar). Many species live at high altitudes. Conirostrum was previously placed in Parulidae, Diglossa was placed in Thraupidae, and the remaining genera were placed in Emberizidae.
Thraupinae
Typical tanagers.
Genera formerly placed in Thraupidae
Passerellidae – New World sparrows
Chlorospingus – eight species – bush-tanagers
Oreothraupis – tanager finch
Cardinalidae – cardinals
Piranga – nine species – northern tanagers
Habia – five species – ant-tanagers or habias
Chlorothraupis – three species
Amaurospiza – four species
Fringillidae – subfamily Euphoniinae
Euphonia – 27 species
Chlorophonia – five species
Phaenicophilidae – Hispaniolan tanagers
Microligea – green-tailed warbler
Xenoligea – white-winged warbler
Phaenicophilus – two species
Mitrospingidae – Mitrospingid tanagers
Mitrospingus – two species
Orthogonys – olive-green tanager
Lamprospiza – red-billed pied tanager
Nesospingidae
Nesospingus – Puerto Rican tanager
Spindalidae
Spindalis – four species – spindalises
Calyptophilidae
Calyptophilus – two species – chat-tanagers
Rhodinocichlidae
Rhodinocichla – rosy thrush-tanager
| Biology and health sciences | Passerida | Animals |
261456 | https://en.wikipedia.org/wiki/Lime%20%28fruit%29 | Lime (fruit) | A lime is a citrus fruit, typically round and lime green in colour, that contains acidic juice vesicles.
There are several species of citrus trees whose fruits are called limes, including the Key lime (Citrus aurantiifolia), Persian lime, kaffir lime, finger lime, blood lime, and desert lime. Limes are a rich source of vitamin C, are sour, and are often used to accent the flavours of foods and beverages. They are grown year-round. Plants with fruit called "limes" have diverse genetic origins; limes do not form a monophyletic group. The term lime entered English via French and Arabic, ultimately from Persian.
Plants known as "lime"
The difficulty in identifying exactly which species of fruit are called lime in different parts of the English-speaking world (the same problem applies to synonyms in other European languages) is increased by the botanical complexity of the Citrus genus itself, to which the majority of limes belong. Species of this genus hybridise readily; only recently have genetic studies started to shed light on the structure of the genus. The majority of cultivated species are in reality hybrids, produced from the citron (Citrus medica), the mandarin orange (Citrus reticulata), the pomelo (Citrus maxima) and in particular with many lime varieties, the micrantha (Citrus hystrix var. micrantha).
Australian limes (formerly Microcitrus and Eremocitrus)
Australian desert lime (Citrus glauca)
Australian finger lime (Citrus australasica)
Australian lime (Citrus australis)
Blood lime (red finger lime × (sweet orange × mandarin))
Makrut lime (Citrus hystrix), a papeda relative, is one of the three most widely produced limes globally.
Key lime (Citrus × aurantiifolia = Citrus micrantha × Citrus medica) is also one of the three most widely produced limes globally.
Philippine lime (Citrus × microcarpa), a kumquat × mandarin hybrid
Persian lime (Citrus × latifolia), a key lime × lemon hybrid, is the single most widely produced lime globally, with Mexico being the largest producer.
Rangpur lime (Mandarin lime, lemandarin, Citrus limonia), a mandarin orange × citron hybrid
Spanish lime (Melicoccus bijugatus); not a citrus
Sweet lime etc. (Citrus limetta, etc.); several distinct citrus hybrids
Wild lime (Adelia ricinella); not a citrus
Wild lime (Zanthoxylum fagara); not a citrus
Limequat (key lime × kumquat)
The tree species known in Britain as lime trees (Tilia sp.), called linden or basswood in other dialects of English, are broadleaf temperate plants unrelated to the citrus fruits.
History
Most species and hybrids of citrus plants called "limes" have varying origins within tropical Southeast Asia and South Asia. They were spread throughout the world via migration and trade. The makrut lime, in particular, was one of the earliest citrus fruits introduced to other parts of the world by humans. It was spread into Micronesia and Polynesia via the Austronesian expansion (c. 3000–1500 BCE), and later into the Middle East and the Mediterranean region via the spice trade and the incense trade routes from as early as ~1200 BCE.
To prevent scurvy during the 19th century, British sailors were issued a daily allowance of citrus, initially lemon and later lime. The use of citrus was initially a closely guarded military secret, as scurvy was a common scourge of various national navies, and the ability to remain at sea for lengthy periods without contracting the disorder was a huge benefit for the military. British sailors thus acquired the nickname "Limey" because of their use of limes.
Production
In 2022, world production of limes (combined with lemons for reporting) was 21.5 million tonnes, led by India, Mexico, and China as the major producers (table).
Uses
Culinary
Limes have higher contents of sugars and acids than lemons do. Lime juice may be squeezed from fresh limes, or purchased in bottles in both unsweetened and sweetened varieties. Lime juice is used to make limeade, and as an ingredient (typically as sour mix) in many cocktails.
Lime pickles are an integral part of Indian cuisine, especially in South India. In Kerala, the Onam Sadhya usually includes either lemon pickle or lime pickle. Other Indian preparations of limes include sweetened lime pickle, salted pickle, and lime chutney.
In cooking, lime is valued both for the acidity of its juice and the floral aroma of its zest. It is a common ingredient in authentic Mexican, Vietnamese and Thai dishes. Lime soup is a traditional dish from the Mexican state of Yucatan. It is also used for its pickling properties in ceviche. Some guacamole recipes call for lime juice.
The use of dried limes (called black lime or limoo) as a flavouring is typical of Persian cuisine, Iraqi cuisine, as well as in Eastern Arabian cuisine baharat (a spice mixture that is also called kabsa or kebsa).
Key lime gives the character flavouring to the American dessert known as Key lime pie. In Australia, desert lime is used for making marmalade.
Lime is an ingredient in several highball cocktails, often based on gin, such as gin and tonic, the gimlet and the Rickey. Freshly squeezed lime juice is also considered a key ingredient in margaritas, although sometimes lemon juice is substituted. It is also found in many rum cocktails such as the daiquiri, and other tropical drinks.
Lime extracts and lime essential oils are frequently used in perfumes, cleaning products, and aromatherapy.
Nutrition and phytochemicals
Raw limes are 88% water, 10% carbohydrates and less than 1% each of fat and protein (table). Only the vitamin C content, at 35% of the Daily Value (DV) per 100 g serving, is nutritionally significant, with other nutrients present in low DV amounts (table). Lime juice contains about 47 g/L of citric acid, slightly less than lemon juice, nearly twice that of grapefruit juice, and about five times that of orange juice.
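As a rough worked example of the arithmetic implied by these comparisons (a minimal sketch: only the ~47 g/L lime figure comes from the text above, and the grapefruit and orange values are approximations derived from the stated ratios, not measured data):

    # Approximate citric acid concentrations implied by the ratios above.
    # Only the lime figure (~47 g/L) is from the text; the derived values
    # are rough approximations, not measurements.
    lime = 47.0               # g/L, stated above
    grapefruit = lime / 2     # "nearly twice that of grapefruit juice"
    orange = lime / 5         # "about five times that of orange juice"
    print(f"grapefruit ~{grapefruit:.0f} g/L, orange ~{orange:.0f} g/L")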
Lime pulp and peel contain diverse phytochemicals, including polyphenols and terpenes.
Toxicity
Contact with lime peel or lime juice followed by exposure to ultraviolet light may lead to phytophotodermatitis, which is sometimes called margarita photodermatitis or lime disease (not to be confused with Lyme disease). Bartenders handling limes and other citrus fruits while preparing cocktails may develop phytophotodermatitis.
A class of organic chemical compounds called furanocoumarins are reported to cause phytophotodermatitis in humans. Limes contain numerous furanocoumarin compounds, including limettin (also called citropten), bergapten, isopimpinellin, xanthotoxin (also called methoxsalen), and psoralen. Bergapten appears to be the primary furanocoumarin compound responsible for lime-induced phytophotodermatitis.
Lime peel contains higher concentrations of furanocoumarins than lime pulp (by one or two orders of magnitude). Thus lime peels are considerably more phototoxic than lime pulp.
| Biology and health sciences | Sapindales | null |
261463 | https://en.wikipedia.org/wiki/Tilia | Tilia | Tilia is a genus of about 30 species of trees or bushes, native throughout most of the temperate Northern Hemisphere. The tree is known as linden for the European species, and basswood for North American species. In Great Britain and Ireland they are commonly called lime trees, although they are not related to the citrus lime. The genus occurs in Europe and eastern North America, but the greatest species diversity is found in Asia. Under the Cronquist classification system, this genus was placed in the family Tiliaceae, but genetic research summarised by the Angiosperm Phylogeny Group has resulted in the incorporation of this genus, and of most of the previous family, into the Malvaceae.
Tilia is the only known ectomycorrhizal genus in the family Malvaceae. Studies of ectomycorrhizal relations of Tilia species indicate a wide range of fungal symbionts and a preference toward Ascomycota fungal partners.
Description
Tilia species are mostly large, deciduous trees with oblique-cordate (heart-shaped) leaves. As with elms, the exact number of species is uncertain, as many of the species can hybridise readily, both in the wild and in cultivation. They are hermaphroditic, having perfect flowers with both male and female parts, pollinated by insects.
The sturdy trunk of a Tilia stands like a pillar, and the branches divide and subdivide into numerous ramifications on which the twigs are fine and thick. In summer, these are profusely clothed with large leaves, and the result is a dense head of abundant foliage.
The leaves of all Tilia species are heart-shaped, and most are asymmetrical. The tiny, pea-like fruit hangs attached to a ribbon-like, greenish-yellow bract whose apparent purpose is to launch the ripened seed clusters just a little beyond the parent tree. The flowers of the European and American Tilia species are similar, except the American ones bear a petal-like scale among their stamens and the European varieties are devoid of these appendages. All of the Tilia species may be propagated by cuttings and grafting, as well as by seed. They grow rapidly in rich soil, but are subject to the attack of many insects. Tilia is notoriously difficult to propagate from seed unless collected fresh in fall. If allowed to dry, the seeds go into a deep dormancy and take 18 months to germinate.
Taxonomy
Subdivision
Species
This list comprises the most widely accepted species, hybrids, and cultivars.
Tilia americana L. – American basswood, American linden
Tilia amurensis – Amur lime, Amur linden
Tilia caroliniana – Carolina basswood
Tilia chinensis – Chinese linden
Tilia chingiana Hu & W.C.Cheng
Tilia cordata Mill. – Small-leaved lime, little-leaf linden or greenspire linden
Tilia dasystyla Steven
Tilia henryana Szyszyl. – Henry's lime, Henry's linden
Tilia hupehensis – Hubei lime
Tilia insularis
Tilia intonsa
Tilia japonica – Japanese lime, shina (when used as a laminate)
†Tilia johnsoni Wolfe & Wehr Eocene; Washington and British Columbia
Tilia kiusiana
Tilia mandshurica – Manchurian lime
Tilia maximowicziana
Tilia miqueliana
Tilia mongolica Maxim. – Mongolian lime, Mongolian linden
Tilia nasczokinii – Nasczokin's lime, Nasczokin's linden
Tilia nobilis – noble lime
Tilia officinarum
Tilia oliveri – Oliver's lime
Tilia paucicostata
Tilia platyphyllos Scop. – large-leaved lime
Tilia rubra – Red stem lime (syn. T. platyphyllos var. rubra)
Tilia tomentosa Moench – silver lime, silver linden
Tilia tuan Szyszyl.
Hybrids and cultivars
Tilia × euchlora (T. dasystyla × T. cordata)
Tilia × europaea – Common lime (T. cordata × T. platyphyllos; syn. T. × vulgaris)
Tilia × petiolaris (T. tomentosa × T. ?)
Tilia 'Flavescens' – Glenleven linden (T. americana × T. cordata)
Tilia 'Moltkei' (T. americana × T. petiolaris)
Tilia 'Orbicularis' (hybrid, unknown origin)
Tilia 'Spectabilis' (hybrid, unknown origin)
Etymology
The Latin tilia is cognate to Greek πτελέᾱ, ptelea, "elm tree", τιλίαι, tiliai, "black poplar" (Hes.), ultimately from a Proto-Indo-European word *ptel-ei̯ā with a meaning of "broad" (feminine); perhaps "broad-leaved" or similar.
The genus is generally called "lime" or "linden" in Britain and "linden", "lime", or "basswood" in North America.
"Lime" is an altered form of Middle English lind, in the 16th century also line, from Old English feminine lind or linde, Proto-Germanic *lindō (cf. Dutch/German Linde, plural Linden), cognate to Latin lentus "flexible" and Sanskrit latā "liana". Within Germanic languages, English "lithe" and Dutch/German lind for "lenient, yielding" are from the same root.
"Linden" was originally the adjective, "made from linwood or lime-wood" (equivalent to "wooden" or "oaken"); from the late 16th century, "linden" was also used as a noun, probably influenced by translations of German romance, as an adoption of Linden, the plural of Linde in Dutch and German.
Neither the name nor the tree is related to Citrus genus species and hybrids that go by the same name, such as Key limes (Citrus × aurantiifolia). Another common name used in North America is basswood, derived from bast, the name for the inner bark (see Uses, below). Teil is an old name for the lime tree.
Ecology
Aphids are attracted by the rich supply of sap and are in turn often "farmed" by ants for the honeydew they excrete, which the ants collect for their own use; the result can often be a dripping of excess honeydew onto the lower branches and leaves, and anything else below. Cars left under the trees can quickly become coated with a film of this syrup dropped from higher up. The ant/aphid "farming" process does not appear to cause any serious damage to the trees.
Uses
The linden is recommended as an ornamental tree when a mass of foliage or a deep shade is desired. It produces fragrant and nectar-producing flowers and is an important honey plant for beekeepers, giving rise to a pale but richly flavoured monofloral honey. In European and North American herbal medicine, the flowers are also used for herbal teas and tinctures. The flowers are used for herbal tea in the winter in the Balkans. In China, dried Tilia flowers are also used to make tea.
In English landscape gardens, avenues of linden trees were fashionable, especially during the late 17th and early 18th centuries. Many country houses have a surviving "lime avenue" or "lime walk"; the example at Hatfield House was planted between 1700 and 1730. The fashion was derived from the earlier practice of planting lindens in lines as shade trees in Germany, the Netherlands, Belgium and northern France. Most of the trees used in British gardens were cultivars propagated by layering in the Netherlands.
Wood
Linden trees produce soft and easily worked timber, which has very little grain and a density of 560 kg/m3. It was often used by Germanic tribes for constructing shields. It is a popular wood for model building and for intricate carving. Especially in Germany, it was the classic wood for sculpture from the Middle Ages onwards and is the material for the elaborate altarpieces of Veit Stoss, Tilman Riemenschneider, and many others. In England, it was the favoured medium of the sculptor Grinling Gibbons (1648–1721). The wood is used in marionette- and puppet-making and -carving. Having a fine, light grain and being comparatively light in weight, it has been used for centuries for this purpose; despite the availability of modern alternatives, it remains one of the main materials used. In China, it has also been widely used for carving, furniture, interior decoration, and handicrafts.
Ease of working and good acoustic properties also make limewood popular for electric and bass guitar bodies and for wind instruments such as recorders. Percussion manufacturers sometimes use Tilia as a material for drum shells, both to enhance their sound and for their aesthetics.
Linden wood is also the material of choice for window blinds and shutters. Real-wood blinds are often made from this lightweight but strong and stable wood, which is well suited to natural and stained finishes.
In China, the mushroom 冻蘑 ("dongmo") grows well on decomposing Tilia logs in old-growth forest, so Tilia logs are used to cultivate S. edulis, and even black fungus and shiitake mushrooms, with excellent results. "椴木黑木耳" (Tilia-log black fungus) and "椴木香菇" (Tilia-log shiitake) have become terms for this method of cultivation, and "椴木" (Tilia logs) no longer refers exclusively to Tilia wood but also to other woods suitable for cultivating black fungus or shiitake mushrooms.
In Russian, "linden-made" (липовый, lipoviy) is a term for forgery, due to the popularity of the material for making forged seals in the past centuries.
Bark
Known in the trade as basswood, particularly in North America, the tree takes this name from its inner fibrous bark, known as bast. A strong fibre is obtained from the tree by peeling off the bark and soaking it in water for a month, after which the inner fibres can be easily separated. Bast obtained from the inside of the bark of the Tilia japonica tree has been used by the Ainu people of Japan to weave their traditional clothing, the attus. Excavations in Britain have shown that lime tree fibre was preferred for clothing there during the Bronze Age. The Manchu people in the mountains of Northeast China made ropes, baskets, raincoats, large fishing nets, and guide lines for gunpowder from the bast. Similar fibres obtained from other plants are also called bast: see Bast fibre.
Nectar
Tilia is a high-quality wild honey plant. In China, "椴树蜜" (Tilia honey) is produced in the northeast region; white in colour, it is called "white honey" or "snow honey". Heilongjiang is well known throughout the country for producing high-quality Tilia honey, as the province has not only lush Tilia trees but also a rare and excellent bee breed to collect the nectar, the "东北黑蜂" (Northeast Black Bee); Raohe County is the location of the national "东北黑蜂自然保护区" (Northeast Black Bee Nature Reserve), the only nature reserve for bees in Asia. Tilia honey comes mainly from Tilia amurensis and Tilia mandshurica. Together with the southern "longan honey" and "lychee honey", it is counted among "China's three famous honeys"; Tilia honey, "rape honey" and "black acacia honey" are the three most productive honeys in China.
Phytochemicals
The dried flowers are mildly sweet and sticky, and the fruit is somewhat sweet and mucilaginous. Linden flower tea has a pleasing taste, due to the aromatic volatile oil found in the flowers. Phytochemicals in the Tilia flowers include flavonoids and tannins with astringent properties.
The nectar contains a major secondary metabolite with the trivial name tiliaside (1-[4-(1-hydroxy-1-methylethyl)-1,3-cyclohexadiene-1-carboxylate]-6-O-β-D-glucopyranosyl-β-D-glucopyranose), which is transformed in the gut of bumblebees to the aglycone (i.e., the gentiobiose group is cleaved); the aglycone is bioactive against a common and debilitating gut parasite of bumblebees, Crithidia bombi. This naturally occurring compound may help bees manage the burden of disease, one of the major contributors to pollinator decline.
Other uses
A beverage made from dried linden leaves and flowers is brewed and consumed as a folk medicine and relaxant in many Balkan countries, including Serbia and Greece. Usually, the double-flowered species are used to make perfumes. The leaf buds and young leaves are also edible raw.
Tilia species are used as food plants by the larvae of some Lepidoptera; see List of Lepidoptera that feed on Tilia.
In culture
In Europe, some linden trees have reached considerable ages. A coppice of T. cordata in Westonbirt Arboretum in Gloucestershire is estimated to be 2,000 years old. In the courtyard of the Imperial Castle at Nuremberg is a Tilia which, by a tradition recounted in 1900, was planted by the Empress Cunigunde, the wife of Henry II of Germany, circa 1000. The Tilia of Neuenstadt am Kocher in Baden-Württemberg, Germany, was estimated at 1,000 years old when it fell. The Alte Linde tree of Naters, Switzerland, is mentioned in a document of 1357 and described by the writer at that time as already magnam (large). A plaque at its foot mentions that in 1155, a linden tree was already on this spot. The Najevnik linden tree, a 700-year-old T. cordata, is the thickest tree in Slovenia. Next to the 英華殿/Yinghua Temple in the Forbidden City in Beijing, there are two Tilia trees planted by Empress Dowager Li, the biological mother of the Wanli Emperor, about five hundred years ago.
The excellence of the honey of the far-famed Hyblaean Mountains was attributed to the linden trees that covered their slopes and crowned their summits.
Lime fossils have been found in the Tertiary formations of Grinnell Land, Canada, at 82°N latitude, and in Svalbard, Norway. Saporta believed he had found there the common ancestor of the Tilia species of Europe and America.
| Biology and health sciences | Malvales | Plants |
261504 | https://en.wikipedia.org/wiki/Tody | Tody | The todies are a family, Todidae, of tiny Caribbean birds in the order Coraciiformes, which also includes the kingfishers, bee-eaters and rollers. The family has one living genus, Todus, and one genus known from the fossil record, Palaeotodus.
Taxonomy and systematics
The todies were originally placed in the kingfisher genus Alcedo before being placed in the genus Todus in 1760 by Mathurin Jacques Brisson. They have been linked to a large number of potential relatives since then, including nightjars, trogons, barbets, jacamars, puffbirds, kingfishers, motmots and even some passerine species such as broadbills, cotingas and flowerpeckers. The todies were placed in their own order, Todiformes, before being placed in the Coraciiformes.
Genetic analysis of the extant (living) species suggests that they diversified between 6 and 7 million years ago. The fossil record of the family is sparse, but three species of tody have been described from fossils found in North America, Germany and France, showing that the family was once more widespread than it is today. Species of the fossil genus Palaeotodus are larger than living species and may have been closer in size to the tody motmot.
The phylogenetic relationships between the six families that make up the order Coraciiformes have been determined from molecular studies. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Distribution and habitat
The todies are endemic to the islands of the Caribbean. These are small, near-passerine species of the forests of the Greater Antilles: Puerto Rico, Jamaica, and Cuba, with adjacent islands, have one species each, and Hispaniola has two: the broad-billed tody (Todus subulatus) in the lowlands (including Gonâve Island) and the narrow-billed tody (Todus angustirostris) in the highlands.
Description
Todies range in weight from 5 to 7 g and in length from 10 to 11.5 cm. They have colourful plumage, and resemble kingfishers in their general shape. They have green heads, backs and wings, red throats (absent in immature Puerto Rican, broad-billed, and narrow-billed todies) with a white and blue-grey stripe on each side, and yellow undertail coverts; the colour of the rest of the undersides is pale and varies according to species. The irises are pale grey. They have long, flattened bills (as do many flycatching birds) with serrated edges; the upper mandible is black and the lower is red with a little black. The legs, and especially the feet, are small. Todies are highly vocal, except that the Jamaican tody seldom calls in the non-breeding season (August to November); they give simple, unmusical buzzing notes, beeps, and guttural rattles, puffing their throats out with every call. Their wings produce a "strange, whirring rattle", though mostly when courting or defending territory in the Puerto Rican tody.
Behaviour and ecology
Diet
Todies eat small prey such as insects and lizards. Insects from 50 families have been identified in their diet; grasshoppers, crickets, beetles, bugs, butterflies, bees, wasps, and ants form the greater part of it. Spiders and millipedes may also be taken, as is a small amount of fruit (2% of the diet).
Their preferred habitat for foraging is in the forest understory. Todies typically sit on a low, small branch, singly or in pairs, keeping still or stepping or hopping sideways. When they see prey moving on the lower surface of a leaf, they fly a short distance (averaging 2.2 m in the broad-billed tody and 1.0 m in the Puerto Rican tody), diagonally upward to glean it. They may also take prey from the ground, occasionally chasing it with a few hops. Todies are generally sedentary; the longest single flight known for the broad-billed tody is 40 m. Their activity is greatest in the morning when sunny weather follows rain, and in March and September.
Todies are highly territorial but will join mixed-species foraging flocks composed of resident species and migrants from North America, when they pass through their territories.
Breeding
Like most of the Coraciiformes, todies nest in tunnels, which they dig with their beaks and feet in steep banks or rotten tree trunks. The tunnel is 30 cm long in the Cuban and narrow-billed todies, 30 to 60 cm in the broad-billed tody, and ends in a nest chamber, generally not reused. They lay about four round white eggs in the chamber. Both parents incubate but are surprisingly inattentive to the eggs. The young are altricial and stay in the nest until they can fly. Both parents also care for the nestlings, much more attentively; they may feed each chick up to 140 times per day, the highest rate known among birds.
Species list
Todus
Broad-billed tody, Todus subulatus
Cuban tody, Todus multicolor
Jamaican tody, Todus todus
Narrow-billed tody, Todus angustirostris
Puerto Rican tody, Todus mexicanus
†Palaeotodus
†Palaeotodus emryi
†Palaeotodus escampsiensis
†Palaeotodus itardiensis
| Biology and health sciences | Coraciiformes | Animals |
261538 | https://en.wikipedia.org/wiki/Winch | Winch | A winch is a mechanical device that is used to pull in (wind up) or let out (wind out) or otherwise adjust the tension of a rope or wire rope (also called "cable" or "wire cable").
In its simplest form, it consists of a spool (or drum) attached to a hand crank. Traditionally, winches on ships accumulated wire or rope on the drum; those that do not accumulate, and instead pass on the wire or rope, are called capstans. Despite this, sailboat capstans are most often referred to as winches. Winches are the basis of such machines as tow trucks, steam shovels and elevators. More complex designs have gear assemblies and can be powered by electric, hydraulic, pneumatic or internal combustion drives. A winch may include a solenoid brake and/or a mechanical brake or a ratchet and pawl, which prevents the drum from unwinding unless the pawl is retracted. The rope may be stored on the winch. When trimming a line on a sailboat, the crew member turns the winch handle with one hand, while tailing (pulling on the loose tail end) with the other to maintain tension on the turns. Some winches have a "stripper" or cleat to maintain tension; these are known as "self-tailing" winches.
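The basic geared-drum relationship can be sketched numerically. In the Python sketch below, every figure (motor torque, gear ratio, efficiency, motor speed, drum radii) is a hypothetical value chosen for illustration, not a specification of any real winch; it shows why pull is greatest and line speed lowest on a bare drum, with both changing as rope accumulates and the effective radius grows:

    import math

    # Hypothetical geared drum winch (illustrative numbers only):
    # pull = motor torque x gear ratio x efficiency / drum radius;
    # line speed = drum circumference x drum rpm.
    motor_torque = 8.0    # N*m, assumed motor output
    gear_ratio = 150.0    # assumed reduction between motor and drum
    efficiency = 0.7      # assumed gear-train losses
    motor_rpm = 3000.0    # assumed motor speed
    for radius in (0.05, 0.075, 0.10):  # metres: bare drum -> full drum
        pull = motor_torque * gear_ratio * efficiency / radius          # newtons
        speed = 2 * math.pi * radius * (motor_rpm / gear_ratio) / 60.0  # m/s
        print(f"radius {radius:.3f} m: pull ~{pull / 1000:.1f} kN, "
              f"line speed ~{speed:.2f} m/s")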
History
In the Ancient World
The earliest literary reference to a winch can be found in the account of Herodotus of Halicarnassus on the Persian Wars (Histories 7.36), where he describes how wooden winches were used to tighten the cables for a pontoon bridge across the Hellespont in 480 BCE. Winches may have been employed even earlier in Assyria.
By the 4th century BCE, winch and pulley hoists were regarded by Aristotle as common for architectural use (Mech. 18; 853b10-13).
In the 20th Century
The yacht Reliance, American defender of the 1903 America's Cup, was the first racing boat to be fitted with modern winches below decks. The Reliance's competitors relied on muscle power using topside mounted capstans and windlasses, which would soon be replaced in most applications by winches, including on fishing boats, where they are used to bring in the fishing nets.
Other applications
Vehicle recovery
The main feature that legally distinguishes a tow truck from a conventional truck in many jurisdictions is the presence of a winch, which is used either to extract disabled or immobilized vehicles or to load them onto flatbed/tilt-and-load type tow trucks. These winches may be electrically or hydraulically powered from a power take-off, and are wound with a wire cable and equipped with a hook. Snatch blocks may be used to change direction or increase the pulling power, and a variety of specialized hooks may be attached to the main hook, including hooks which attach to specific parts of the car. J-hooks, which look somewhat like blunt meat hooks, are used to hook around axles. Mini-J hooks can be used if there is a tow loop provided, and R and T hooks are designed to hook into slots cut by the manufacturer in the underside of the frame on many cars. Axle straps may also be used when there are few other places to attach.
Off-road vehicles
Off-road vehicles may be equipped with recovery tools such as winches on the front and back bumpers, usually mounted to a winch bar or frame-mounted metal bumper. Less commonly, the winch is mounted on a specialised metal plate ("hidden winch mount") behind the vehicle's stock bumper; this is referred to as a "hidden winch" because the hook and fairlead hide behind a flip-up front number plate and the winch itself is not visible. The winch is used to pull vehicles out of mud, snow, sand, rocks, and water, and to pull vehicles through or over obstacles. The winch uses a cable, made of braided synthetic rope or steel wire, wrapped around a motorized drum, and is controlled electronically, allowing the operator to control the winch speed. Modern vehicles typically use electric winches running off the car's 12V starter or 24V secondary battery; the winch is controlled with a detachable cable, a button inside the car or a wireless remote. Older vehicles may have a PTO winch driven via the car's transmission; a secondary clutch may be used so the vehicle does not need to be moving while winching. Some winches are powered by the pressure generated in the hydraulic steering system. The high-lift jack or come-along is used for manual winching.
Aircraft use
Gliders are often launched using a winch mounted on a trailer or heavy vehicle. This method is widely used at many European gliding clubs as an inexpensive alternative to aerotowing. The engine is usually a gas/petrol, LPG or diesel unit, though hydraulic fluid engines and electric motors are also used. The winch pulls in a long high-tensile steel wire or synthetic fibre cable, attached at the other end to the glider. The cable is released after a short, steep climb.
Search and Rescue helicopters are often equipped with winches to avoid having to get the helicopter dangerously close to obstacles, or into ocean troughs, allowing rescue teams to be lowered and evacuees to be extricated while the helicopter hovers overhead. Helicopter winches are also used for heli-logging and for airlifting oversized cargo, such as vehicles and other aircraft, although the winch in these cases is only used to reduce the hazards to flying with a loose cable hanging below the helicopter.
Stationary balloons, such as the barrage balloons used during the Second World War to discourage marauding aircraft, and the kite balloons used during the First World War for artillery spotting, are usually tethered with a winch, which can be used to lower the balloon, either to relocate it or to bring it down quickly to prevent it being shot down by enemy aircraft. Larger man-carrying kites also often used winches to raise and lower them.
Towed gunnery targets, used to train anti-aircraft gunners, and both fighter pilots and aircraft gunners, are run out behind the target tug aircraft for practice, and winched in for take-off and landing.
Before advances were made in antennas in the 1950s, radio aerials were quite long and needed to be winched out for use and winched back in for landing. Failure to do so could damage the aerial, as happened to Amelia Earhart on one of the legs of her last flight.
Theatre
Winches are frequently used as elements of backstage mechanics to move scenery in large theatrical productions. They are often embedded in the stage floor and used to move large set pieces on and off.
Wakeskate winch
Wakeskate winching is a sport where a person on a waterski or snowboard is propelled across the water by a winch. The winch consists of a gas-powered engine, spool, rope, frame, and sometimes a simple transmission. The person being towed walks (or swims) away from the winch while extending the rope. When the winch is engaged, it pulls the boarder in at speed. The winch may be mounted to a vehicle, set into the ground by stakes, or tied to a tree. The cable may also be run through pulleys mounted offshore so that it pulls the person away from where the winch is located, and multiple pulleys may be used to multiply the force applied by a small but high-revving motor instead of using a transmission, as sketched below.
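A minimal sketch of that ideal block-and-tackle trade-off, with friction ignored and both input figures assumed purely for illustration: each additional supporting line segment multiplies the pull force, and divides the rider's speed, by the same factor.

    # Ideal block-and-tackle: n supporting line segments give n x the force
    # at 1/n the speed. Friction is ignored; both inputs are assumed values.
    engine_pull = 300.0   # N of pull at the winch spool (assumed)
    spool_speed = 12.0    # m/s of line taken onto the spool (assumed)
    for n in (1, 2, 3):
        print(f"{n} segment(s): pull ~{engine_pull * n:.0f} N, "
              f"rider speed ~{spool_speed / n:.1f} m/s")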
Winch types
Lever winch
Lever winches are winches that use self-gripping jaws instead of spools to move rope or wire through the winch. Powered by moving a handle back and forth, they allow one person to move objects several tons in weight.
Snubbing winch
This is a vertical spool with a ratchet mechanism similar to a conventional winch, but with no crank handle or other form of drive. The line is wrapped around the spool and can be tightened or reeled in by pulling the tail line. Once the pull is stopped, the winch takes the load, with little operator tension needed to hold it. It also allows controlled release of the tension by the operator, using the friction of the line around the ratcheted spool. Snubbing winches are used on small sailing boats and dinghies to control sheets and other lines, and in larger applications to supplement and relieve tension on the primary winches.
Air winch
An air winch, sometimes known as an air hoist or air tugger, is an air-powered version of a winch. It is commonly used for the lifting and the suspension of materials. In the oil and gas, construction, and maritime industries, air winches are frequently preferred to electric, diesel, and hydraulic winches because of their durability, versatility, and safety.
| Technology | Mechanisms | null |
261613 | https://en.wikipedia.org/wiki/Stomach%20cancer | Stomach cancer | Stomach cancer, also known as gastric cancer, is a malignant tumor that develops from the lining of the stomach. Most stomach cancers are gastric carcinomas, which can be divided into a number of subtypes, including gastric adenocarcinomas. Lymphomas and mesenchymal tumors may also develop in the stomach. Early symptoms may include heartburn, upper abdominal pain, nausea, and loss of appetite. Later signs and symptoms may include weight loss, yellowing of the skin and whites of the eyes, vomiting, difficulty swallowing, and blood in the stool, among others. The cancer may spread from the stomach to other parts of the body, particularly the liver, lungs, bones, lining of the abdomen, and lymph nodes.
The bacterium Helicobacter pylori accounts for more than 60% of cases of stomach cancer. Certain strains of H. pylori carry greater risks than others. Smoking, dietary factors such as pickled vegetables, and obesity are other risk factors. About 10% of cases run in families, and between 1% and 3% of cases are due to inherited genetic syndromes such as hereditary diffuse gastric cancer. Most of the time, stomach cancer develops in stages over years. Diagnosis is usually by biopsy done during endoscopy, followed by medical imaging to determine whether the disease has spread to other parts of the body. Japan and South Korea, two countries that have high rates of the disease, screen for stomach cancer.
A Mediterranean diet lowers the risk of stomach cancer, as does not smoking. Tentative evidence indicates that treating H. pylori decreases the future risk. If stomach cancer is treated early, it can be cured. Treatments may include some combination of surgery, chemotherapy, radiation therapy, and targeted therapy. For certain subtypes of gastric cancer, cancer immunotherapy is an option as well. If treated late, palliative care may be advised. Some types of lymphoma can be cured by eliminating H. pylori. Outcomes are often poor, with a less than 10% five-year survival rate in the Western world for advanced cases. This is largely because most people with the condition present with advanced disease. In the United States, five-year survival is 31.5%, while in South Korea it is over 65% and Japan over 70%, partly due to screening efforts.
Globally, stomach cancer is the fifth-leading type of cancer and the third-leading cause of death from cancer, making up 7% of cases and 9% of deaths. In 2018, it newly occurred in 1.03 million people and caused 783,000 deaths. Before the 1930s, it was a leading cause of cancer deaths in the Western world; however, rates have sharply declined among younger generations in the West, while they remain high for people living in East Asia. The decline in the West is believed to be due to the decline of salted and pickled food consumption, a result of the development of refrigeration as a method of preserving food. Stomach cancer occurs most commonly in East Asia, followed by Eastern Europe. It occurs twice as often in males as in females.
Signs and symptoms
Stomach cancer is often either asymptomatic (producing no noticeable symptoms) or causes only nonspecific symptoms (which may also be present in other related or unrelated disorders) in its early stages. By the time symptoms are recognized, the cancer has often reached an advanced stage (see below) and may have metastasized (spread to other, perhaps distant, parts of the body), which is one of the main reasons for its relatively poor prognosis. Stomach cancer can cause signs and symptoms including unexplained nausea, vomiting, diarrhoea, constipation, and unexplained weight loss.
Early cancers may be associated with indigestion or a burning sensation (heartburn). However, fewer than one in every 50 people referred for endoscopy due to indigestion has cancer. Abdominal discomfort and loss of appetite can occur.
Gastric cancers that have enlarged and invaded normal tissue can cause weakness, fatigue, bloating of the stomach after meals, abdominal pain in the upper abdomen, nausea and occasional vomiting. Further enlargement may cause weight loss or bleeding with vomiting blood or having blood in the stool, the latter apparent as black discolouration (melena) and sometimes leading to anemia. Dysphagia suggests a tumour in the cardia or extension of the gastric tumour into the esophagus.
These can be symptoms of other problems such as a stomach virus, gastric ulcer, or tropical sprue.
Risk factors
Gastric cancer can occur as a result of many factors. It occurs twice as commonly in males as in females. Estrogen may protect women against the development of this form of cancer.
Infections
Helicobacter pylori infection is an essential risk factor in 65–80% of gastric cancers, but only 2% of people with H. pylori infections develop stomach cancer. The mechanism by which H. pylori induces stomach cancer potentially involves chronic inflammation, the action of H. pylori virulence factors such as CagA, or an interaction between H. pylori infection and germline pathogenic variants in homologous-recombination genes. It was estimated that Epstein–Barr virus is responsible for 84,000 cases per year. AIDS is also associated with elevated risk.
Smoking
Smoking increases the risk of developing gastric cancer significantly, from a 40% increased risk for current smokers to an 82% increased risk for heavy smokers. Gastric cancers due to smoking mostly occur in the upper part of the stomach near the esophagus.
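To make those percentages concrete, the small sketch below converts "percent increased risk" into relative risks and illustrative absolute rates; the baseline incidence is an assumed placeholder, not an epidemiological figure:

    # Convert "X% increased risk" into a relative risk and an illustrative
    # absolute rate. The baseline incidence is an assumed value, not data.
    baseline = 10.0  # cases per 100,000 person-years, assumed for illustration
    for group, pct_increase in (("current smokers", 40), ("heavy smokers", 82)):
        rr = 1 + pct_increase / 100
        print(f"{group}: relative risk {rr:.2f}, "
              f"~{baseline * rr:.1f} per 100,000")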
Alcohol
Some studies show increased risk with alcohol consumption as well.
Diet
Dietary factors are not proven causes, and the association between stomach cancer and various foods and beverages is weak. Some foods including fried foods, smoked foods, salt and salt-rich foods, meat, processed meat, red meat, pickled vegetables, and brackens are associated with a higher risk of stomach cancer.
Fresh fruit and vegetable intake, citrus fruit intake, and antioxidant intake are associated with a lower risk of stomach cancer. A Mediterranean diet is associated with lower rates of stomach cancer, as is regular aspirin use.
Obesity is a physical risk factor that has been found to increase the risk of gastric adenocarcinoma by contributing to the development of gastroesophageal reflux disease (GERD). The exact mechanism by which obesity causes GERD is not completely known. Studies hypothesize that increased dietary fat and pressure on the stomach and the lower esophageal sphincter from excess adipose tissue could play a role, but no statistically significant data have been collected. However, the risk of gastric cardia adenocarcinoma, with GERD present, has been found to increase more than two times for an obese person. There is also a correlation between iodine deficiency and gastric cancer.
Genetics
About 10% of cases run in families, and between 1 and 3% of cases are due to genetic syndromes inherited such as hereditary diffuse gastric cancer.
A genetic risk factor for gastric cancer is a defect of the CDH1 gene known as hereditary diffuse gastric cancer (HDGC). The CDH1 gene, which codes for E-cadherin, lies on chromosome 16. When the gene carries a particular mutation, gastric cancer develops through a mechanism that is not fully understood. The mutation is autosomal dominant, meaning that each child of a carrier has a 50% chance of inheriting it. Diagnosis of hereditary diffuse gastric cancer usually requires at least two diagnosed cases in family members, such as a parent or grandparent, with at least one diagnosed before the age of 50; the diagnosis can also be made if at least three cases occur in the family, in which case age is not considered.
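Because inheritance is autosomal dominant from a single carrier parent, the number of affected children among n follows a binomial distribution with p = 0.5. A minimal illustrative sketch (the family size of three is an arbitrary assumption):

    from math import comb

    # Autosomal dominant inheritance: each child of a carrier independently
    # has a 0.5 chance of inheriting the CDH1 variant.
    p, n = 0.5, 3  # probability per child; hypothetical family of three
    for k in range(n + 1):
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        print(f"P({k} of {n} children inherit the variant) = {prob:.3f}")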
The International Cancer Genome Consortium is leading efforts to identify genomic changes involved in stomach cancer. A very small percentage of diffuse-type gastric cancers (see Histopathology below) arise from an inherited abnormal CDH1 gene. Genetic testing and treatment options are available for families at risk.
Diagnosis
To find the cause of symptoms, the doctor asks about the patient's medical history, does a physical examination, and may order laboratory studies.
The patient may also have one or all of these exams:
Gastroscopic exam is the diagnostic method of choice. This involves insertion of a fibre optic camera into the stomach to visualise it.
Upper GI series (may be called barium roentgenogram)
Computed tomography or CT scanning of the abdomen may reveal gastric cancer. It is more useful to determine invasion into adjacent tissues or the presence of spread to local lymph nodes. Wall thickening of more than 1 cm that is focal, eccentric, and enhancing favours malignancy.
In 2013, Chinese and Israeli scientists reported a successful pilot study of a breathalyzer-style breath test intended to diagnose stomach cancer by analyzing exhaled chemicals without the need for an intrusive endoscopy. A larger-scale clinical trial of this technology was completed in 2014.
Abnormal tissue seen in a gastroscope examination is biopsied by the surgeon or gastroenterologist. This tissue is then sent to a pathologist for histological examination under a microscope to check for the presence of cancerous cells. A biopsy, with subsequent histological analysis, is the only sure way to confirm the presence of cancer cells.
Various gastroscopic modalities have been developed to increase the yield of detected abnormal mucosa, for example by applying a dye that accentuates the cell structure and can identify areas of dysplasia. Endocytoscopy involves ultra-high magnification to visualise cellular structure to better determine areas of dysplasia. Other gastroscopic modalities, such as optical coherence tomography, are being tested for similar applications.
A number of cutaneous conditions are associated with gastric cancer. A condition of darkened hyperplasia of the skin, frequently of the axilla and groin, known as acanthosis nigricans, is associated with intra-abdominal cancers such as gastric cancer. Other cutaneous manifestations of gastric cancer include "tripe palms" (a similar darkening hyperplasia of the skin of the palms) and the Leser-Trélat sign, which is the rapid development of skin lesions known as seborrheic keratoses.
Various blood tests may be done, including a complete blood count to check for anaemia, and a fecal occult blood test to check for blood in the stool.
Histopathology
Gastric adenocarcinoma is a malignant epithelial tumour originating from the glandular epithelium of the gastric mucosa; about 90% of stomach cancers are adenocarcinomas. Histologically, there are two major types of gastric adenocarcinoma (Lauren classification): intestinal type and diffuse type. Adenocarcinomas tend to invade the gastric wall aggressively, infiltrating the muscularis mucosae, the submucosa and then the muscularis propria. Intestinal-type adenocarcinoma cells form irregular tubular structures with pluristratification, multiple lumens and reduced stroma (a "back to back" aspect), and the neighbouring mucosa often shows intestinal metaplasia. Depending on glandular architecture, cellular pleomorphism and mucosecretion, adenocarcinoma may present three degrees of differentiation: well, moderately and poorly differentiated. In diffuse-type adenocarcinoma (mucinous, colloid, linitis plastica or leather-bottle stomach), the tumour cells are discohesive and secrete mucus into the interstitium, producing large pools of mucus/colloid (optically "empty" spaces); it is poorly differentiated. In signet ring cell carcinoma, the mucus remains inside the tumour cell and pushes the nucleus to the periphery, giving rise to signet-ring cells.
Around 5% of gastric cancers are lymphomas. These may include extranodal marginal zone B-cell lymphomas (MALT type) and to a lesser extent diffuse large B-cell lymphomas. MALT type make up about half of stomach lymphomas.
Carcinoid and stromal tumors may occur.
Staging
If cancer cells are found in the tissue sample, the next step is to stage, or find out the extent of, the disease. Various tests determine whether the cancer has spread and, if so, what parts of the body are affected. Because stomach cancer can spread to the liver, pancreas, and other organs near the stomach, as well as to the lungs, the doctor may order a CT scan, a PET scan, an endoscopic ultrasound exam, or other tests to check these areas. Blood tests for tumor markers, such as carcinoembryonic antigen and carbohydrate antigen, may be ordered, as their levels correlate with the extent of metastasis, especially to the liver, and with the cure rate.
Staging may not be complete until after surgery. The surgeon removes nearby lymph nodes and possibly samples of tissue from other areas in the abdomen for examination by a pathologist.
The clinical stages of stomach cancer are:
Stage 0 – Limited to the inner lining of the stomach, it is treatable by endoscopic mucosal resection when found very early (in routine screenings), or otherwise by gastrectomy and lymphadenectomy without need for chemotherapy or radiation.
Stage I – Penetration to the second or third layers of the stomach (stage 1A) or to the second layer and nearby lymph nodes (stage 1B): Stage 1A is treated by surgery, including removal of the omentum. Stage 1B may be treated with chemotherapy (5-fluorouracil) and radiation therapy.
Stage II – Penetration to the second layer and more distant lymph nodes, or the third layer and only nearby lymph nodes, or all four layers but not the lymph nodes, it is treated as for stage I, sometimes with additional neoadjuvant chemotherapy.
Stage III – Penetration to the third layer and more distant lymph nodes, or penetration to the fourth layer and either nearby tissues or nearby or more distant lymph nodes, it is treated as for stage II; a cure is still possible in some cases.
Stage IV – Cancer has spread to nearby tissues and more distant lymph nodes, or has metastasized to other organs. A cure is very rarely possible at this stage. Some other techniques to prolong life or improve symptoms are used, including laser treatment, surgery, and/or stents to keep the digestive tract open, and chemotherapy by drugs such as 5-fluorouracil, cisplatin, epirubicin, etoposide, docetaxel, oxaliplatin, capecitabine, or irinotecan.
The TNM staging system is also used.
In a study of open-access endoscopy in Scotland, 7% of patients were diagnosed in stage I, 17% in stage II, and 28% in stage III. In a Minnesota population, 10% were diagnosed in stage I, 13% in stage II, and 18% in stage III. However, in a high-risk population in the Valdivia Province of southern Chile, only 5% of patients were diagnosed in the first two stages and 10% in stage III.
Prevention
Getting rid of H. pylori in those who are infected decreases the risk of stomach cancer. A 2014 meta-analysis of observational studies found that a diet high in fruits, mushrooms, garlic, soybeans, and green onions was associated with a lower risk of stomach cancer in the Korean population. Low doses of vitamins, especially from a healthy diet, decrease the risk of stomach cancer. A previous review of antioxidant supplementation did not find supporting evidence and possibly worse outcomes. Modern technology is used to promote early diagnosis, e.g. based on serum markers.
Management
Cancer of the stomach is difficult to cure unless it is found at an early stage (before it has begun to spread). Unfortunately, because early stomach cancer causes few symptoms, the disease is usually advanced when the diagnosis is made.
Treatment for stomach cancer may include surgery, chemotherapy, or radiation therapy. New treatment approaches such as immunotherapy or gene therapy and improved ways of using current methods are being studied in clinical trials.
Surgery
Surgery remains the only curative therapy for stomach cancer. A 2016 Cochrane review found low-quality evidence of no difference in short-term mortality between laparoscopic and open gastrectomy (removal of the stomach), and that benefits or harms of laparoscopic gastrectomy cannot be ruled out. Post-operatively, up to 70% of people undergoing total gastrectomy develop complications such as dumping syndrome and reflux esophagitis. Construction of a "pouch", which serves as a "stomach substitute", reduced the incidence of dumping syndrome and reflux esophagitis by 73% and 63% respectively, and led to improvements in quality of life, nutritional outcomes, and body mass index. Proximal gastrectomy (PG) can be considered a viable alternative for upper-third early gastric cancer (EGC). Of the different surgical techniques, endoscopic mucosal resection (EMR) is a treatment for early gastric cancer (tumor only involves the mucosa) that was pioneered in Japan and is available in the United States at some centers. In EMR, the tumor, together with the inner lining of the stomach (mucosa), is removed from the wall of the stomach using an electrical wire loop through the endoscope. The advantage is that it is a much smaller operation than removing the stomach. Endoscopic submucosal dissection is a similar technique pioneered in Japan, used to resect a large area of mucosa in one piece. If the pathologic examination of the resected specimen shows incomplete resection or deep invasion by tumor, the patient would need a formal stomach resection.
Those with metastatic disease at the time of presentation may receive palliative surgery, and while it remains controversial, due to the possibility of complications from the surgery itself and because it may delay chemotherapy, the data so far are mostly positive, with improved survival rates being seen in those treated with this approach.
Chemotherapy
The use of chemotherapy to treat stomach cancer has no firmly established standard of care. Unfortunately, stomach cancer has not been particularly sensitive to these drugs, and chemotherapy, if used, has usually served to palliatively reduce the size of the tumor, relieve symptoms of the disease, and increase survival time. Some drugs used in stomach cancer treatment have included: fluorouracil or its analog capecitabine, BCNU (carmustine), methyl-CCNU (semustine) and doxorubicin (Adriamycin), as well as mitomycin C, and more recently cisplatin and taxotere, often using drugs in various combinations. The relative benefits of these different drugs, alone and in combination, are unclear. Clinical researchers are exploring the benefits of giving chemotherapy before surgery to shrink the tumor, or as adjuvant therapy after surgery to destroy remaining cancer cells.
Targeted therapy
Treatment with the human epidermal growth factor receptor 2 (HER2) inhibitor trastuzumab has been demonstrated to increase overall survival in inoperable locally advanced or metastatic gastric carcinoma over-expressing the HER2/neu gene. HER2 is overexpressed in 13–22% of patients with gastric cancer. Of note, HER2 overexpression in gastric neoplasia is heterogeneous and confined to a minority of tumor cells (fewer than 10% of gastric cancers overexpress HER2 in more than 5% of tumor cells). This heterogeneous expression should therefore be taken into account in HER2 testing, particularly in small samples such as biopsies, which require the evaluation of more than one biopsy sample.
A clinical study reported promising results for a combination therapy using nivolumab and anlotinib in the treatment of advanced gastric adenocarcinoma (GAC) and esophageal squamous cell carcinoma (ESCC), which improves the immune response against cancer while simultaneously slowing tumor progression. The research, conducted by Zhongshan Hospital, Fudan University, and BGI Genomics, was published in Nature Communications in October 2024. The study evaluated the efficacy of combining nivolumab, an immunotherapy that enhances the immune system's ability to attack cancer cells, with anlotinib hydrochloride, a drug that inhibits tumor angiogenesis by blocking signals essential for the growth of new blood vessels.
Radiation
Radiation therapy (also called radiotherapy) may be used to treat stomach cancer, often as an adjuvant to chemotherapy and/or surgery.
Lymphoma
MALT lymphomas are often completely resolved after the underlying H. pylori infection is treated. This results in remission in about 80% of cases.
Prognosis
The prognosis of stomach cancer is generally poor, because the tumor has often metastasized by the time of discovery, and most people with the condition are elderly (median age is between 70 and 75 years) at presentation. The average life expectancy after being diagnosed is around 24 months, and the five-year survival rate for stomach cancer is less than 10%.
Almost 300 genes are related to outcomes in stomach cancer, with both unfavorable genes where high expression is related to poor survival and favorable genes where high expression is associated with longer survival times. Examples of poor prognosis genes include ITGAV, DUSP1 and P2RX7.
Epidemiology
In 2018, stomach cancer was the fifth most frequently diagnosed cancer worldwide, representing 5.7% of all cancer cases, and the third leading cause of death from cancers, being responsible for 8.2% of all cancer deaths. Among men, 683,754 cases were diagnosed, accounting for 7.2% of all cancer cases, and among women, stomach cancer was diagnosed in 349,947 cases, accounting for 4.1% of all cancer cases.
In 2012, stomach cancer was the fifth most-common cancer with 952,000 cases diagnosed. It is more common both in men and in developing countries. In 2012, it represented 8.5% of cancer cases in men, making it the fourth most-common cancer in men. Also in 2012, the number of deaths was 700,000, having decreased slightly from 774,000 in 1990, making it the third-leading cause of cancer-related death (after lung cancer and liver cancer).
Less than 5% of stomach cancers occur in people under 40 years of age; of these, 81.1% occur in people aged 30 to 39 and 18.9% in those aged 20 to 29.
In 2014, stomach cancer resulted in 0.61% of deaths (13,303 cases) in the United States. In China, stomach cancer accounted for 3.56% of all deaths (324,439 cases). The highest rate of stomach cancer was in Mongolia, at 28 cases per 100,000 people.
In the United Kingdom, stomach cancer is the 15th most-common cancer (around 7,100 people were diagnosed with stomach cancer in 2011), and it is the 10th most-common cause of cancer-related deaths (around 4,800 people died in 2012).
Incidence and mortality rates of gastric cancer vary greatly in Africa. The GLOBOCAN system is currently the most widely used method to compare these rates between countries, but African incidence and mortality rates differ among countries, possibly because not all countries have universal access to a registry system. Variation as drastic as estimated rates from 0.3 per 100,000 in Botswana to 20.3 per 100,000 in Mali has been observed. In Uganda, the incidence of gastric cancer has increased from the 1960s measurement of 0.8 per 100,000 to 5.6 per 100,000. Even so, gastric cancer incidence in Africa is relatively low compared with high-incidence countries such as Japan and China. One suspected cause of the variation within Africa and between other countries is differences among strains of the H. pylori bacterium. The common trend is that H. pylori infection increases the risk for gastric cancer, but this does not hold in Africa, a phenomenon named the "African enigma". Although this bacterial species is found in Africa, evidence supports the idea that different strains with mutations in the bacterial genotype may contribute to the difference in cancer development between African countries and others outside the continent. Increasing access to health care and treatment measures, particularly in Uganda, has also been commonly associated with the rising incidence.
Other animals
The stomach is a muscular organ of the gastrointestinal tract that holds food and begins the digestive process by secreting gastric juice. The most common cancers of the stomach are adenocarcinomas, but other histological types have been reported. Signs vary, but may include vomiting (especially if blood is present), weight loss, anemia, and lack of appetite. Bowel movements may be dark and tarry in nature. To determine whether cancer is present in the stomach, special X-rays and/or abdominal ultrasounds may be performed. Gastroscopy, a test using an endoscope to examine the stomach, is a useful diagnostic tool that can also take samples of the suspected mass for histopathological analysis to confirm or rule out cancer. The most definitive method of cancer diagnosis is through open surgical biopsy. Most stomach tumors are malignant with evidence of spread to lymph nodes or liver, making treatment difficult. Except for lymphoma, surgery is the most frequent treatment option for stomach cancers but it is associated with significant risks.
A carcinogenic interaction was demonstrated between bile acids and Helicobacter pylori in a mouse model of gastric cancer.
| Biology and health sciences | Cancer | Health |
261628 | https://en.wikipedia.org/wiki/Cuckoo-roller | Cuckoo-roller | The cuckoo-roller or courol (Leptosomus discolor) is the only bird in the family Leptosomidae, which was previously often placed in the order Coraciiformes but is now placed in its own order Leptosomiformes. The cuckoo-roller is at the root of a group that contains the Trogoniformes, Bucerotiformes, Piciformes, and Coraciiformes. Despite its name, the cuckoo-roller does not share close evolutionary origins with cuckoos or rollers.
It is a medium-large bird, inhabiting forests and woodlands in Madagascar and the Comoro Islands. Three subspecies are described: the nominate L. d. discolor is found in Madagascar and Mayotte Island, L. d. intermedius on Anjouan, and L. d. gracilis of Grand Comoro. Based on its smaller size, differences in the plumage, and minor difference in the voice, the last of these is sometimes considered a separate species, the Comoro cuckoo-roller (L. gracilis).
Description
The cuckoo-roller's total length varies by subspecies; the nominate subspecies is the largest, and L. d. gracilis the smallest. Unlike the true rollers and ground rollers, where the sexes have identical appearance, the cuckoo-roller is sexually dichromatic. Males have a mostly velvety grey chest and head, changing gradually to white on the remaining underparts (the demarcation between grey and white is stronger in L. d. gracilis). The back, tail, and wing-coverts are dark iridescent green with a purplish tinge (especially on the wing-coverts), and the crown and eye-stripe are black. Females are mostly brown, with strongly dark-spotted pale underparts (less spotting in L. d. gracilis). Juveniles are generally reported as resembling a dull female, but at least juveniles of L. d. gracilis are sexually dimorphic, and this also possibly applies to the other subspecies. The bill is stout and the eyes are set far back in the face. The legs and feet are small, and the feet have an unusual structure which has confused many ornithologists, but is now thought to be zygodactylous (two toes forwards, two toes backwards).
Distribution and habitat
The cuckoo-roller occupies a wide variety of habitats, including altered areas. It inhabits forest, including rainforest, littoral forest, deciduous forest, spiny bush-forest, and tree plantations. In the Comoros, the species is found on all the major islands, particularly in forested zones. It can be found from near sea level up to 2000 m.
Behaviour and ecology
The diet of the cuckoo-roller is not well known, but a 1931 expedition found that chameleons and insects, particularly locusts and caterpillars, are important food items. Stomachs have often been found to be lined with caterpillar hairs, and other prey taken include grasshoppers, cicadas, stick insects, and geckos. The principal foraging technique is to perch motionless, watching for prey, then to make a quick sally towards the prey when observed. They also hunt from the air. Prey is caught in the large bill and killed by beating it against a branch.
Very few studies have investigated the breeding habits of the cuckoo-roller. It has been described in the past as a polygamous breeder, but no evidence for this is available. The nest is located in natural cavities in tall trees, well off the ground. No lining is placed inside the cavity; the white eggs are laid directly on the bottom. The usual clutch size is around four eggs. Incubation is performed by the female only, while the male feeds her. The incubation period is about 20 days, after which downy chicks hatch. Chicks remain in the nest for 30 days before fledging.
Status and conservation
The species is not generally hunted and has proven resistant to habitat change that has threatened other native birds. It is assessed as Least Concern by the IUCN. The distribution of the cuckoo-roller is vast, and populations in Madagascar persist in small forest fragments. Areas with abundant populations include broad expanses of forest associated with reserves such as Zahamena, Andringitra, Andohahela, and Marojejy.
Relations with humans
The cuckoo-roller is very tame, and it is generally not disturbed by the inhabitants of Madagascar, many of whom have legends and myths about the species. It is often considered a good omen, as the harbinger of clear weather and (because it is often seen in pairs) as associated with couples and love.
| Biology and health sciences | Basics | Animals |
261690 | https://en.wikipedia.org/wiki/Air%20mass | Air mass | In meteorology, an air mass is a volume of air defined by its temperature and humidity. Air masses cover many hundreds or thousands of square miles, and adapt to the characteristics of the surface below them. They are classified according to latitude and their continental or maritime source regions. Colder air masses are termed polar or arctic, while warmer air masses are deemed tropical. Continental and superior air masses are dry, while maritime and monsoon air masses are moist. Weather fronts separate air masses with different density (temperature or moisture) characteristics. Once an air mass moves away from its source region, underlying vegetation and water bodies can quickly modify its character. Classification schemes tackle an air mass's characteristics, as well as modification.
Classification and notation
The Bergeron classification is the most widely accepted form of air mass classification, though others have produced more refined versions of this scheme over different regions of the globe. Air mass classification involves three letters. The first letter describes its moisture properties – "c" represents continental air masses (dry), and "m" represents maritime air masses (moist). Its source region follows: "T" stands for Tropical, "P" stands for Polar, "A" stands for Arctic or Antarctic, "M" stands for monsoon, "E" stands for Equatorial, and "S" stands for adiabatically drying and warming air formed by significant downward motion in the atmosphere. For instance, an air mass originating over the desert southwest of the United States in summer may be designated "cT". An air mass originating over northern Siberia in winter may be indicated as "cA".
The stability of an air mass may be shown using a third letter, either "k" (air mass colder than the surface below it) or "w" (air mass warmer than the surface below it). An example of this might be a polar air mass blowing over the Gulf Stream, denoted as "cPk". Occasionally, one may also encounter the use of an apostrophe or "degree tick" denoting that an air mass with the same notation as the one it is replacing is colder than the replaced air mass (usually applied to polar air masses). For example, a series of fronts over the Pacific might show an air mass denoted mPk followed by another denoted mPk'.
Another convention utilizing these symbols is the indication of modification or transformation of one type to another. For instance, an Arctic air mass blowing out over the Gulf of Alaska may be shown as "cA-mPk". Yet another convention indicates the layering of air masses in certain situations. For instance, the overrunning of a polar air mass by an air mass from the Gulf of Mexico over the Central United States might be shown with the notation "mT/cP" (sometimes using a horizontal line as in fraction notation).
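To make the notation concrete, the following Python sketch decodes basic Bergeron codes such as "cT", "mPk", or "cA" into plain-language descriptions using the letter meanings given above. It is illustrative only; the dictionary names and the decode function are assumptions, not part of any meteorological software.

# Illustrative decoder for basic Bergeron air mass codes.
MOISTURE = {"c": "continental (dry)", "m": "maritime (moist)"}
SOURCE = {
    "T": "tropical", "P": "polar", "A": "arctic/antarctic",
    "M": "monsoon", "E": "equatorial", "S": "superior (subsiding, adiabatically warmed)",
}
STABILITY = {"k": "colder than the surface below", "w": "warmer than the surface below"}

def decode(code: str) -> str:
    """Decode a code such as 'mPk' into a plain-language description."""
    parts = [MOISTURE[code[0]], SOURCE[code[1]]]
    if len(code) > 2 and code[2] in STABILITY:
        parts.append(STABILITY[code[2]])
    return ", ".join(parts)

print(decode("cT"))   # continental (dry), tropical
print(decode("mPk"))  # maritime (moist), polar, colder than the surface below

Composite notations such as "cA-mPk" (transformation) or "mT/cP" (layering) would need additional handling, mirroring the transformation and layering conventions described above.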
Characteristics
Tropical and equatorial air masses are hot as they develop over lower latitudes. Tropical air masses have lower pressure because hot air rises and cold air sinks. Those that develop over land (continental) are drier and hotter than those that develop over oceans, and travel poleward on the southern periphery of the subtropical ridge. Maritime tropical air masses are sometimes referred to as trade air masses. Maritime tropical air masses that affect the United States originate in the Caribbean Sea, southern Gulf of Mexico, and tropical Atlantic east of Florida through the Bahamas. Monsoon air masses are moist and unstable. Superior air masses are dry, and rarely reach the ground. They normally reside over maritime tropical air masses, forming a warmer and drier layer over the more moderate moist air mass below, forming what is known as a trade wind inversion over the maritime tropical air mass.
Continental Polar air masses (cP) are air masses that are cold and dry due to their continental source region. Continental polar air masses that affect North America form over interior Canada. Continental Tropical air masses (cT) are a type of tropical air produced by the subtropical ridge over large areas of land and typically originate from low-latitude deserts such as the Sahara Desert in northern Africa, which is the major source of these air masses. Other less important sources producing cT air masses are the Arabian Peninsula, the central arid/semi-arid part of Australia and deserts lying in the Southwestern United States. Continental tropical air masses are extremely hot and dry. Arctic, Antarctic, and polar air masses are cold. The qualities of arctic air are developed over ice and snow-covered ground. Arctic air is deeply cold, colder than polar air masses. Arctic air can be shallow in the summer, and rapidly modify as it moves equatorward. Polar air masses develop over higher latitudes over the land or ocean, are very stable, and generally shallower than arctic air. Polar air over the ocean (maritime) loses its stability as it gains moisture over warmer ocean waters.
Movement and fronts
A weather front is a boundary separating two masses of air of different densities, and is the principal cause of meteorological phenomena. In surface weather analyses, fronts are depicted using various colored lines and symbols, depending on the type of front. The air masses separated by a front usually differ in temperature and humidity.
Cold fronts may feature narrow bands of thunderstorms and severe weather, and may on occasion be preceded by squall lines or dry lines. Warm fronts are usually preceded by stratiform precipitation and fog. The weather usually clears quickly after a front's passage. Some fronts produce no precipitation and little cloudiness, although there is invariably a wind shift.
Cold fronts and occluded fronts generally move from west to east, while warm fronts move poleward. Because of the greater density of air in their wake, cold fronts and cold occlusions move faster than warm fronts and warm occlusions. Mountains and warm bodies of water can slow the movement of fronts. When a front becomes stationary, and the density contrast across the frontal boundary vanishes, the front can degenerate into a line which separates regions of differing wind velocity, known as a shearline. This is most common over the open ocean.
Modification
Air masses can be modified in a variety of ways. Surface flux from underlying vegetation, such as forest, acts to moisten the overlying air mass. Heat from underlying warmer waters can significantly modify an air mass over relatively short distances. For example, southwest of extratropical cyclones, curved cyclonic flow bringing cold air across relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized precipitation, since large water bodies such as lakes efficiently store heat, resulting in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.
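As a worked illustration of the threshold quoted above, the short Python sketch below checks whether the water-air temperature contrast exceeds the roughly 13 °C figure. The function name and the example values are assumptions for illustration, not measured data.

# Rule-of-thumb check for lake-effect snow potential, using the
# water-air temperature contrast threshold quoted in the text.
LAKE_EFFECT_THRESHOLD_C = 13.0

def lake_effect_possible(water_temp_c: float, air_temp_c: float) -> bool:
    """True when the water surface is much warmer than the overlying air."""
    return (water_temp_c - air_temp_c) > LAKE_EFFECT_THRESHOLD_C

# Example: 4 degC lake water beneath a -12 degC cold air mass gives a
# 16 degC contrast, so bands are plausible; the same water under 0 degC air does not.
print(lake_effect_possible(4.0, -12.0))  # True
print(lake_effect_possible(4.0, 0.0))    # False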
| Physical sciences | Atmosphere | null |
261787 | https://en.wikipedia.org/wiki/Burmese%20cat | Burmese cat | The Burmese cat (from a Burmese name meaning copper colour) is a breed of domestic cat, originating in Burma, believed to have its roots near the Thai-Burma border and developed in the United States and Britain.
Most modern Burmese are descendants of one female cat called Wong Mau, which was brought from Burma to the United States in 1930 and bred with American Siamese. From there, American and British breeders developed distinctly different Burmese breed standards, which is unusual among pedigreed domestic cats. Most modern cat registries do not formally recognise the two as separate breeds, but those that do refer to the British type as the European Burmese.
Originally, all Burmese cats were dark brown (genetically black), but are now available in a wide variety of colours; formal recognition of these also varies by standard. Both versions of the breed are known for their uniquely social and playful temperament and persistent vocalisation.
History
In 1871, Harrison Weir organised a cat show at the Crystal Palace, London. A pair of Siamese cats on display closely resembled modern American Burmese cats in build, and were thus probably similar to the modern Tonkinese breed. The first attempt to deliberately develop the Burmese in the late 19th century in Britain resulted in what were known as Chocolate Siamese rather than a breed in their own right; this view persisted for many years, encouraging crossbreeding between Burmese and Siamese in an attempt to more closely conform to the Siamese build. The breed thus slowly died out in Britain.
Joseph Cheesman Thompson imported Wong Mau, a dark female cat, into San Francisco in 1930. Thompson considered the cat's build to be sufficiently different from the Siamese to still have potential as a fully separate breed. Wong Mau was bred with Tai Mau, a seal point Siamese, and then bred with her son to produce dark brown kittens that became the foundation of a new, distinctive strain of Burmese. In 1936, the Cat Fanciers' Association (CFA) granted the breed formal recognition. However, due to continued extensive outcrossing with Siamese cats to increase the population, the original type was overwhelmed, and the CFA suspended breed recognition a decade later. Attempts by various American breeders to refine the unique Burmese standard persisted, however, and in 1954, the CFA lifted the suspension permanently. In 1958, the United Burmese Cat Fanciers (UBCF) compiled an American judging standard that has remained essentially unchanged since its adoption.
Meanwhile, in the UK, interest in the breed was reviving. The cats that composed the new British breeding program were of a variety of builds, including some imported from the United States. By 1952, three true generations had been produced in Britain and the breed was recognised by the United Kingdom's Governing Council of the Cat Fancy (GCCF). Since the 1950s, countries in the Commonwealth and Europe started importing British Burmese; as a result, most countries have based their standard on the British model.
Historically, the two versions of the breed were kept strictly distinct genetically. European Burmese (also known as "traditional") were declassed as a breed by the CFA in the 1980s. The GCCF banned the registration of all Burmese imported from the United States in order to preserve the "traditional" bloodlines. Most modern cat registries do not formally recognise these dual standards as representing separate breeds, but those that do refer to the British type as the European Burmese. Recently, The International Cat Association (TICA) and CFA clubs have started using the American breed standard at select shows in Europe.
During the early period of breed development, it became clear that Wong Mau herself was genetically a crossbreed between a Siamese and Burmese type. This early crossbreed type was later developed as a separate breed, known today as the Tonkinese. Burmese cats have also been instrumental in the development of the Bombay and the Burmilla, among others.
Description
Appearance
The two standards differ mainly in head and body shape. The British or traditional ideal tends toward a more slender, long-bodied cat with a wedge-shaped head, large pointed ears, long tapering muzzle and moderately almond-shaped eyes. The legs should likewise be long, with neat oval paws. The tail tapers to medium length. The American (also called "contemporary") Burmese is a noticeably stockier cat, with a much broader head, round eyes and distinctively shorter, flattened muzzle; the ears are wider at the base. Legs and tail should be proportionate to the body, medium-length, and the paws also rounded.
In either case, Burmese are a small to medium size breed, but are nevertheless substantially-built, muscular cats and should feel heavy for their size when held – "a brick wrapped in silk".
Coat and colour
In either standard, the coat should be very short, fine and glossy, with a satin-like finish. Colour is solid and must be uniform over the body, only gradually shading to lighter underparts. Faint colourpoint markings may be visible, but any barring or spotting is considered a serious fault. The eyes are green or gold depending on coat colour.
The breed's original standard colour is a distinctively rich dark brown (genetically black), variously known as sable (USA), brown (UK, Australia) or seal (New Zealand). It is the result of the Burmese gene (cb), part of the albino series. This gene causes a reduction in the amount of pigment produced, converting black into brown and rendering all other colours likewise paler than their usual expression. The action of the gene also produces the modified colourpoint effect, which is more noticeable in young kittens.
The first blue Burmese was born in 1955 in Britain, followed by red, cream, and tortoiseshell over the next decades. Chocolate ("champagne" in the USA) first appeared in the United States. Lilac ("platinum" in the USA), the last major variant to appear, was likewise developed in the USA beginning in 1971. Currently, the British GCCF standard recognises solid brown, chocolate, blue, lilac, red and cream, as well as the tortoiseshell pattern on a base of brown, chocolate, blue or lilac.
In the USA, chocolate ("champagne"), blue, and lilac ("platinum") cats were first formally considered a separate breed, the Malayan, in 1979. This distinction was abolished in 1984, but until 2010, the CFA continued to place the brown ("sable") Burmese into a separate division, bundling all other recognised colours into a "dilute division" and judging them separately. Currently, the CFA standard still recognises the Burmese only in sable, blue, chocolate ("champagne"), and lilac ("platinum").
Other colours have been developed from this initial base set, with varying degrees of popularity and recognition. In 1989 a cinnamon breeding programme was started in the Netherlands; the first fawn kitten was born in 1998. Cinnamon, fawn, caramel, and apricot Burmese have also been developed in New Zealand, as have tortoiseshell variants of all these colours. A new colour mutation ("russet") appeared in New Zealand in 2007. This line has an initially dark pigment in the cats' coats, which fades as they grow, eventually becoming a paler orange colour.
Temperament
Burmese are a notably people-oriented breed, maintaining their kitten-like energy and playfulness into adulthood. They are also said to have a number of overtly puppy-like characteristics, forming strong bonds with their owners and gravitating toward human activity. The cats often learn to play games such as 'fetch' and 'tag'. Veterinarian Joan O. Joshua has written that the "dog-like attachment to the owners" of the Burmese, as with the similarly behaving Abyssinians, causes "greater dependence on human contacts". This stands in contrast to the mere "tolerant acceptance of human company" based around "comforts" that other breeds display. They are persistently vocal, in a manner reminiscent of their Siamese ancestry, yet they have softer, sweeter voices. Burmese are not as independent as other breeds and are not suited to being left alone for extended periods of time.
Genetics
The Burmese gene is also present in some other cat breeds, particularly the established rex breeds, where it can be fully expressed in its homozygous form (cbcb) (referred to as Burmese Colour Restriction or Sepia). The same gene can also be combined with the Siamese gene (cbcs) to produce either darker points or a light-on-dark-brown coat, similar to the Burmese chocolate (champagne in the USA), known as "mink".
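The allele combinations described above can be illustrated with a simple Punnett-square calculation. The following Python sketch is illustrative only (the cross function is not from any genetics library); it counts offspring genotype ratios at this locus and shows that crossing a Burmese (cb/cb) with a Siamese (cs/cs) yields only cb/cs "mink" offspring, as in the Tonkinese.

# Punnett-square counts for the albino-series locus (cb = Burmese, cs = Siamese).
from itertools import product
from collections import Counter

def cross(parent1: tuple, parent2: tuple) -> Counter:
    """Count offspring genotypes (unordered allele pairs) from two parents."""
    offspring = Counter()
    for a, b in product(parent1, parent2):
        offspring[tuple(sorted((a, b)))] += 1
    return offspring

print(cross(("cb", "cb"), ("cs", "cs")))  # all offspring cb/cs ("mink")
print(cross(("cb", "cs"), ("cb", "cs")))  # 1 cb/cb : 2 cb/cs : 1 cs/cs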
The Asian Group cat breed is related to the Burmese; the Asian is physically similar but comes in different patterns and colours. The Singapura is always homozygous for the Burmese gene, combining it with a ticked tabby pattern. Snow Bengals with eye colours other than blue also have the gene.
The lineage of Burmese cats known as "Contemporary Burmese" often hosts a four-amino-acid deletion on the ALX1 gene. Heterozygosity for the mutation results in brachycephaly, while homozygosity results in a profound head malformation known as the Burmese head defect, which is always fatal.
Genetic diversity
A 2008 study conducted at UC Davis by the team led by feline geneticist Dr Leslie Lyons found that the American Burmese has the second-lowest level of genetic diversity (after the Singapura) of all the breeds studied, and concluded that this situation should be addressed. The CFA observes that "breeders are reporting less hearty litters, smaller adults, smaller litters, and immune system problems, all of which point towards inbreeding depression becoming more common." The Burmese breed council currently allows outcrossing using Bombay, Tonkinese and Burmese type cats imported from Southeast Asia to improve genetic diversity. The Fédération Internationale Féline (FIFe) excludes novice show cats from breeding.
Health
A 2016 study in England of veterinary records found the Burmese to have a higher prevalence of diabetes mellitus than other breeds, with 2.27% of Burmese having the condition compared to an overall rate of 0.58%. An Australian study in 2009 found a prevalence of 22.1% compared to an overall rate of 7.4%.
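As a quick arithmetic check on these figures, the prevalence ratios can be computed directly; the values below are taken from the studies cited, and the script itself is purely illustrative.

# Prevalence ratios implied by the diabetes figures quoted above.
uk_burmese, uk_overall = 0.0227, 0.0058  # 2.27% vs 0.58% (2016 England study)
au_burmese, au_overall = 0.221, 0.074    # 22.1% vs 7.4% (2009 Australian study)

print(f"England ratio: {uk_burmese / uk_overall:.1f}x")    # about 3.9x
print(f"Australia ratio: {au_burmese / au_overall:.1f}x")  # about 3.0x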
A study of veterinary records in the UK found an average life expectancy of 14.42 years for the Burmese (with a sample size of 45), the highest in the study and higher than the overall average of 11.74 years.
Certain UK bloodlines suffer from an acute teething disorder in young kittens (FOPS: feline orofacial pain syndrome), in which the eruption of the second teeth causes extreme discomfort and the young cat tears at its face to try to alleviate the pain. It is the eruption of the new teeth in the jaw that causes the problem; these cannot be removed until they have erupted, by which time the problem ceases. Pain relief should be considered to prevent overt self-trauma. Apart from scarring caused by the self-mutilation, the cat seems to recover completely.
The Burmese is predisposed to congenital hypotrichosis.
The Burmese is one of the breeds more commonly affected by GM2 gangliosidosis. An autosomal recessive mutation of the HEXB gene is responsible for the condition in the breed.
The Burmese is the cat breed most often affected by hypokalaemia. An autosomal recessive mutation of the WNK4 gene is responsible for congenital forms of hypokalaemia in the breed.
The Burmese is the cat breed most often affected by feline orofacial pain syndrome.
| Biology and health sciences | Cats | Animals |
261803 | https://en.wikipedia.org/wiki/Ventilator | Ventilator | A ventilator is a type of breathing apparatus, a class of medical technology that provides mechanical ventilation by moving breathable air into and out of the lungs, to deliver breaths to a patient who is physically unable to breathe, or breathing insufficiently. Ventilators may be computerized microprocessor-controlled machines, but patients can also be ventilated with a simple, hand-operated bag valve mask. Ventilators are chiefly used in intensive-care medicine, home care, and emergency medicine (as standalone units) and in anesthesiology (as a component of an anesthesia machine).
Ventilators are sometimes called "respirators", a term commonly used for them in the 1950s (particularly the "Bird respirator"). However, contemporary medical terminology uses the word "respirator" to refer to a face-mask that protects wearers against hazardous airborne substances.
Function
In its simplest form, a modern positive pressure ventilator consists of a compressible air reservoir or turbine, air and oxygen supplies, a set of valves and tubes, and a disposable or reusable "patient circuit". The air reservoir is pneumatically compressed several times a minute to deliver room air, or in most cases an air/oxygen mixture, to the patient. If a turbine is used, the turbine pushes air through the ventilator, with a flow valve adjusting pressure to meet patient-specific parameters. When the pressure is released, the patient exhales passively due to the lungs' elasticity, the exhaled air being released usually through a one-way valve within the patient circuit called the patient manifold.
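The pressure-to-volume behaviour implied by this description can be sketched with the textbook static-compliance approximation: the volume delivered by a pressure-controlled breath is roughly the lung compliance times the pressure difference above PEEP. The minimal Python sketch below rests on that assumption; the function name and example values are illustrative and are not taken from any ventilator's firmware.

# Approximate volume delivered by a pressure-controlled breath,
# using the static-compliance relationship V = C * (Pinsp - PEEP).
def tidal_volume_ml(compliance_ml_per_cmh2o: float,
                    inspiratory_pressure_cmh2o: float,
                    peep_cmh2o: float) -> float:
    """Estimated tidal volume in millilitres."""
    return compliance_ml_per_cmh2o * (inspiratory_pressure_cmh2o - peep_cmh2o)

# Illustrative values: compliance 50 mL/cmH2O, inspiratory pressure 15 cmH2O,
# PEEP 5 cmH2O -> about 500 mL per breath.
print(tidal_volume_ml(50.0, 15.0, 5.0))  # 500.0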
Ventilators may also be equipped with monitoring and alarm systems for patient-related parameters (e.g., pressure, volume, and flow) and ventilator function (e.g., air leakage, power failure, mechanical failure), backup batteries, oxygen tanks, and remote control. The pneumatic system is nowadays often replaced by a computer-controlled turbopump.
Modern ventilators are electronically controlled by a small embedded system to allow exact adaptation of pressure and flow characteristics to an individual patient's needs. Fine-tuned ventilator settings also serve to make ventilation more tolerable and comfortable for the patient. In Canada and the United States, respiratory therapists are responsible for tuning these settings, while biomedical technologists are responsible for the maintenance. In the United Kingdom and Europe the management of the patient's interaction with the ventilator is done by critical care nurses.
The patient circuit usually consists of a set of three durable, yet lightweight plastic tubes, separated by function (e.g. inhaled air, patient pressure, exhaled air). Determined by the type of ventilation needed, the patient-end of the circuit may be either noninvasive or invasive.
Noninvasive methods, such as continuous positive airway pressure (CPAP) and non-invasive ventilation, which are adequate for patients who require a ventilator only while sleeping and resting, mainly employ a nasal mask. Invasive methods require intubation, which for long-term ventilator dependence will normally be a tracheotomy cannula, as this is much more comfortable and practical for long-term care than is larynx or nasal intubation.
Safety-critical system
As failure may result in death, mechanical ventilation systems are classified as safety-critical systems, and precautions must be taken to ensure that they are highly reliable, including their power supply. Ventilatory failure is the inability to sustain a sufficient rate of CO2 elimination to maintain a stable pH without mechanical assistance, muscle fatigue, or intolerable dyspnea. Mechanical ventilators are therefore carefully designed so that no single point of failure can endanger the patient. They may have manual backup mechanisms to enable hand-driven respiration in the absence of power (such as the mechanical ventilator integrated into an anaesthetic machine). They may also have safety valves, which open to atmosphere in the absence of power to act as an anti-suffocation valve for spontaneous breathing of the patient. Some systems are also equipped with compressed-gas tanks, air compressors or backup batteries to provide ventilation in case of power failure or defective gas supplies, and methods to operate or call for help if their mechanisms or software fail. Power failures, such as during a natural disaster, can create a life-threatening emergency for people using ventilators in a home care setting. Battery power may be sufficient for a brief loss of electricity, but longer power outages may require going to a hospital.
History
The history of mechanical ventilation begins with various versions of what was eventually called the iron lung, a form of noninvasive negative-pressure ventilator widely used during the polio epidemics of the twentieth century after the introduction of the "Drinker respirator" in 1928, improvements introduced by John Haven Emerson in 1931, and the Both respirator in 1937. Other forms of noninvasive ventilators, also used widely for polio patients, include Biphasic Cuirass Ventilation, the rocking bed, and rather primitive positive pressure machines.
In 1949, John Haven Emerson developed a mechanical assister for anaesthesia with the cooperation of the anaesthesia department at Harvard University. Mechanical ventilators began to be used increasingly in anaesthesia and intensive care during the 1950s. Their development was stimulated both by the need to treat polio patients and the increasing use of muscle relaxants during anaesthesia. Relaxant drugs paralyse the patient and improve operating conditions for the surgeon but also paralyse the respiratory muscles. In 1953 Bjørn Aage Ibsen set up what became the world's first Medical/Surgical ICU utilizing muscle relaxants and controlled ventilation.
In the United Kingdom, the East Radcliffe and Beaver models were early examples. The former used a Sturmey-Archer bicycle hub gear to provide a range of speeds, and the latter an automotive windscreen wiper motor to drive the bellows used to inflate the lungs. Electric motors were, however, a problem in the operating theatres of that time, as their use caused an explosion hazard in the presence of flammable anaesthetics such as ether and cyclopropane. In 1952, Roger Manley of the Westminster Hospital, London, developed a ventilator which was entirely gas-driven and became the most popular model used in Europe. It was an elegant design, and became a great favourite with European anaesthetists for four decades, prior to the introduction of models controlled by electronics. It was independent of electrical power and caused no explosion hazard. The original Mark I unit was developed to become the Manley Mark II in collaboration with the Blease company, which manufactured many thousands of these units. Its principle of operation was very simple, an incoming gas flow was used to lift a weighted bellows unit, which fell intermittently under gravity, forcing breathing gases into the patient's lungs. The inflation pressure could be varied by sliding the movable weight on top of the bellows. The volume of gas delivered was adjustable using a curved slider, which restricted bellows excursion. Residual pressure after the completion of expiration was also configurable, using a small weighted arm visible to the lower right of the front panel. This was a robust unit and its availability encouraged the introduction of positive pressure ventilation techniques into mainstream European anesthetic practice.
The 1955 release of Forrest Bird's "Bird Universal Medical Respirator" in the United States changed the way mechanical ventilation was performed, with the small green box becoming a familiar piece of medical equipment. The unit was sold as the Bird Mark 7 Respirator and informally called the "Bird". It was a pneumatic device and therefore required no electrical power source to operate.
In 1965, the Army Emergency Respirator was developed in collaboration with the Harry Diamond Laboratories (now part of the U.S. Army Research Laboratory) and Walter Reed Army Institute of Research. Its design incorporated the principle of fluid amplification in order to govern pneumatic functions. Fluid amplification allowed the respirator to be manufactured entirely without moving parts, yet capable of complex resuscitative functions. Elimination of moving parts increased performance reliability and minimized maintenance. The mask is composed of a poly(methyl methacrylate) (commercially known as Lucite) block, about the size of a pack of cards, with machined channels and a cemented or screwed-in cover plate. The reduction of moving parts cut manufacturing costs and increased durability.
The bistable fluid amplifier design allowed the respirator to function as both a respiratory assistor and controller. It could functionally transition between assistor and controller automatically, based on the patient's needs. The dynamic pressure and turbulent jet flow of gas from inhalation to exhalation allowed the respirator to synchronize with the breathing of the patient.
Intensive care environments around the world were revolutionized in 1971 by the introduction of the first SERVO 900 ventilator (Elema-Schönander), constructed by Björn Jonson. It was a small, silent and effective electronic ventilator, with the famous SERVO feedback system controlling what had been set and regulating delivery. For the first time, the machine could deliver the set volume in volume control ventilation.
Microprocessor ventilators
Microprocessor control led to the third generation of intensive care unit (ICU) ventilators, starting with the Dräger EV-A in 1982 in Germany, which allowed monitoring of the patient's breathing curve on an LCD monitor. The Puritan Bennett 7200 followed one year later, and the Bear 1000, SERVO 300 and Hamilton Veolar over the next decade. Microprocessors enable customized gas delivery and monitoring, and mechanisms for gas delivery that are much more responsive to patient needs than previous generations of mechanical ventilators.
Open-source ventilators
An open-source ventilator is a disaster-situation ventilator made using a freely-licensed design, and ideally, freely-available components and parts. Designs, components, and parts may be anywhere from completely reverse-engineered to completely new creations, components may be adaptations of various inexpensive existing products, and special hard-to-find and/or expensive parts may be 3D printed instead of sourced.
During the 2019–2020 COVID-19 pandemic, various kinds of ventilators have been considered. Deaths caused by COVID-19 have occurred when the most severely infected experience acute respiratory distress syndrome, a widespread inflammation in the lungs that impairs the lungs' ability to absorb oxygen and expel carbon dioxide. These patients require a capable ventilator to continue breathing.
Among ventilators that might be brought into use for treating people with COVID-19, there have been many concerns. These include current availability, the challenge of making more and lower cost ventilators, effectiveness, functional design, safety, portability, suitability for infants, assignment to treat other illnesses, and operator training. Deploying the best possible mix of ventilators can save the most lives.
Although not formally open-sourced, the Ventec V+ Pro ventilator was developed in April 2020 as a shared effort between Ventec Life Systems and General Motors, to provide a rapid supply of 30,000 ventilators capable of treating COVID-19 patients.
A major worldwide design effort began during the 2019–2020 coronavirus pandemic after a Hackaday project was started, in order to respond to expected ventilator shortages that could cause higher mortality rates among severely ill patients.
On March 20, 2020, the Irish Health Service began reviewing designs. A prototype is being designed and tested in Colombia.
The Polish company Urbicum reports successful testing of a 3D-printed open-source prototype device called VentilAid. The makers describe it as a last resort device when professional equipment is missing. The design is publicly available. The first Ventilaid prototype requires compressed air to run.
On March 21, 2020, the New England Complex Systems Institute (NECSI) began maintaining a strategic list of open source designs being worked on. The NECSI project considers manufacturing capability, medical safety and need for treating patients in various conditions, speed dealing with legal and political issues, logistics and supply. NECSI is staffed with scientists from Harvard and MIT and others who have an understanding of pandemics, medicine, systems, risk, and data collection.
The University of Minnesota Bakken Medical Device Center initiated a collaboration with various companies to bring a ventilator alternative to the market that works as a one-armed robot and replaces the need for manual ventilation in emergency situations. The Coventor device was developed in a very short time and approved by the FDA on April 15, 2020, only 30 days after conception. The mechanical ventilator is designed for use by trained medical professionals in intensive care units and is easy to operate. It has a compact design and is relatively inexpensive to manufacture and distribute, costing only about 4% as much as a normal ventilator. In addition, the device does not require a pressurized oxygen or air supply, as is normally the case. A first series is manufactured by Boston Scientific. The plans are to be freely available online to the general public without royalties.
COVID-19 pandemic
The COVID-19 pandemic has led to shortages of essential goods and services, from hand sanitizers to masks to beds to ventilators. Countries around the world have experienced shortages of ventilators. Furthermore, fifty-four governments, including many in Europe and Asia, imposed restrictions on medical supply exports in response to the coronavirus pandemic.
The capacities to produce and distribute invasive and non-invasive ventilators vary by country. In the initial phase of the pandemic, China ramped up its production of ventilators, secured large amounts of donations from private firms, and dramatically increased imports of medical devices worldwide. As a result, the country accumulated a reservoir of ventilators in Wuhan throughout the pandemic. Western Europe and the United States, which outrank China in production capacity, suffered a shortage of supplies due to the sudden and scattered outbreaks across the North American and European continents. Finally, Central Asia, Africa, and Latin America, which depend almost entirely on imported ventilators, suffered severe shortages of supplies.
Healthcare policy-makers have faced serious challenges in estimating the number of ventilators needed and used during the pandemic. Because data on ventilators specifically is often unavailable, estimates are sometimes based on the number of available intensive care unit beds, which often contain ventilators.
United States
In 2006, president George W. Bush signed the Pandemic and All-Hazards Preparedness Act, which created the Biomedical Advanced Research and Development Authority (BARDA) within the United States Department of Health and Human Services. In preparation for a possible epidemic of respiratory disease, the newly created office awarded a $6 million contract to Newport Medical Instruments, a small company in California, to make 40,000 ventilators for under $3,000 apiece. In 2011, Newport sent three prototypes to the Centers for Disease Control. In 2012, Covidien, a $12 billion/year medical device manufacturer, which manufactured more expensive competing ventilators, bought Newport for $100 million. Covidien delayed and in 2014 cancelled the contract.
BARDA started over again with a new company, Philips, and in July 2019, the FDA approved the Philips ventilator, and the government ordered 10,000 ventilators for delivery in mid-2020.
On April 23, 2020, NASA reported building, in 37 days, a successful COVID-19 ventilator, named VITAL ("Ventilator Intervention Technology Accessible Locally"). On April 30, NASA reported receiving fast-track approval for emergency use by the United States Food and Drug Administration for the new ventilator. On May 29, NASA reported that eight manufacturers were selected to manufacture the new ventilator.
Canada
On April 7, 2020, Prime Minister Justin Trudeau announced that the Canadian Federal Government would be sourcing thousands of 'Made in Canada' ventilators. A number of organisations responded from across the country. They delivered a large quantity of ventilators to the National Emergency Strategic Stockpile. From west to east, the companies include Canadian Emergency Ventilators Inc, Bayliss Medical Inc, Thornhill Medical, Vexos Inc, and CAE Inc.
| Technology | Devices | null |
261827 | https://en.wikipedia.org/wiki/Cardiopulmonary%20bypass | Cardiopulmonary bypass | Cardiopulmonary bypass (CPB) or heart-lung machine, also called the pump or CPB pump, is a machine that temporarily takes over the function of the heart and lungs during open-heart surgery by maintaining the circulation of blood and oxygen throughout the body. As such it is an extracorporeal device.
CPB is operated by a perfusionist. The machine mechanically circulates and oxygenates blood throughout the patient's body while bypassing the heart and lungs allowing the surgeon to work in a bloodless surgical field.
Uses
CPB is commonly used in operations or surgical procedures involving the heart. The technique allows the surgical team to oxygenate and circulate the patient's blood, thus allowing the surgeon to operate safely on the heart. In many operations, such as coronary artery bypass grafting (CABG), the heart is arrested because of the difficulty of operating on a beating heart.
Operations requiring the opening of the chambers of the heart, for example mitral valve repair or replacement, require the use of CPB. This avoids air entering the systemic circulation (air embolism) and provides a bloodless field to increase visibility for the surgeon. The machine pumps the blood and, using an oxygenator, allows red blood cells to pick up oxygen, as well as allowing carbon dioxide levels to decrease. This mimics the function of the heart and the lungs, respectively.
Hypothermia
CPB can be used for the induction of total body hypothermia, a state in which the body can be maintained for up to 45 minutes without perfusion (blood flow). If blood flow is stopped at normal body temperature, permanent brain damage can occur in three to four minutes — death may follow. Similarly, CPB can be used to rewarm individuals who have hypothermia. This rewarming method of using CPB is successful if the core temperature of the patient is above 16 °C.
Cooled blood
The blood is cooled during CPB before being returned to the body. The cooled blood slows the body's basal metabolic rate, decreasing its demand for oxygen. Cooled blood usually has a higher viscosity, but the various crystalloid or colloidal solutions used to prime the bypass tubing serve to dilute the blood. Maintaining appropriate blood pressure for the organs is a challenge, and it is monitored carefully during the procedure. Hypothermia is also maintained if necessary, with the body temperature kept at a reduced target level.
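The effect of cooling on oxygen demand can be approximated with the van 't Hoff Q10 rule, under which metabolic rate falls by a constant factor for every 10 °C drop in temperature. The Python sketch below is a hedged illustration: the Q10 value of about 2.5 is a general physiological assumption, not a figure from this article.

# Approximate fall in metabolic rate with core temperature (Q10 rule).
def relative_metabolic_rate(temp_c: float, baseline_c: float = 37.0,
                            q10: float = 2.5) -> float:
    """Fraction of baseline metabolic rate at a given core temperature."""
    return q10 ** ((temp_c - baseline_c) / 10.0)

# Example: cooling from 37 degC to 28 degC roughly halves oxygen demand.
print(f"{relative_metabolic_rate(28.0):.2f}")  # about 0.44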
Extracorporeal membrane oxygenation
Extracorporeal membrane oxygenation (ECMO) is a simplified version of the heart-lung machine that includes a centrifugal pump and an oxygenator to temporarily take over the function of the heart and/or the lungs. ECMO is useful for post-cardiac surgery patients with cardiac or pulmonary dysfunction, patients with acute pulmonary failure, massive pulmonary embolisms, lung trauma from infections, and a range of other problems that impair cardiac or pulmonary function.
ECMO gives the heart and/or lungs time to repair and recover, but is only a temporary solution. Patients with terminal conditions, cancer, severe nervous system damage, uncontrolled sepsis, and other conditions may not be candidates for ECMO.
Usage scenarios
CPB is used in scenarios such as:
Coronary artery bypass surgery
Cardiac valve repair and/or replacement (aortic valve, mitral valve, tricuspid valve, pulmonic valve)
Repair of large septal defects (atrial septal defect, ventricular septal defect, atrioventricular septal defect)
Repair and/or palliation of congenital heart defects (Tetralogy of Fallot, transposition of the great vessels)
Transplantation (heart transplantation, lung transplantation, heart–lung transplantation, liver transplantation)
Repair of some large aneurysms (aortic aneurysms, cerebral aneurysms)
Pulmonary thromboendarterectomy
Pulmonary thrombectomy
Isolated limb perfusion
Contraindications and special considerations
There are no absolute contraindications to cardiopulmonary bypass. However, there are several factors that need to be considered by the care team when planning an operation.
Heparin-induced thrombocytopenia and heparin-induced thrombocytopenia and thrombosis are potentially life-threatening conditions associated with the administration of heparin. In both conditions, antibodies against heparin are formed, which cause platelet activation and the formation of blood clots. Because heparin is typically used in CPB, patients who are known to have the antibodies responsible for these conditions require alternative forms of anticoagulation. Bivalirudin is the most studied heparin alternative in people with heparin-induced thrombocytopenia or heparin-induced thrombocytopenia and thrombosis who require CPB.
A small percentage of patients, such as those with an antithrombin III deficiency, may exhibit resistance to heparin. These patients may need additional heparin, fresh frozen plasma, or other blood products such as recombinant antithrombin III to achieve adequate anticoagulation.
A persistent left superior vena cava is a thoracic venous system variation in which the left-sided vena cava fails to involute during normal development. It is the most common variation of the thoracic venous system, occurring in approximately 0.3% of the population. The abnormality is often detected on pre-operative imaging studies, but may also be discovered intra-operatively. A persistent left superior vena cava may make it difficult to achieve proper venous drainage or delivery of retrograde cardioplegia. Management of a persistent left superior vena cava during CPB depends on factors such as the size and drainage site of the vena cava variation.
Cerebral perfusion (blood circulation in the brain) always has to be under consideration when using CPB. Due to the nature of CPB and its impact on circulation, the body's own cerebral autoregulation is affected. The occurrence and prevention of this problem have been studied many times, but it is still not completely understood.
Risks and complications
CPB is not without risk, and there are a number of associated problems. As a consequence, CPB is only used during the several hours a cardiac surgery may take. CPB is known to activate the coagulation cascade and stimulate inflammatory mediators, leading to hemolysis and coagulopathies. This problem worsens as complement proteins build up on the membrane oxygenators. For this reason, most oxygenators come with a manufacturer's recommendation that they be used for a maximum of six hours, although they are sometimes used for up to ten hours, with care being taken to ensure they do not clot off and stop working. For longer periods, a membrane oxygenator is used, which can remain in operation for up to 31 days; in one Taiwanese case, a patient was supported for 16 days, after which the patient received a heart transplant.
The most common complication associated with CPB is a protamine reaction during anticoagulation reversal. There are three types of protamine reactions, which may cause life-threatening hypotension (type I), anaphylaxis (type II), or pulmonary hypertension (type III). Patients with prior exposure to protamine, such as those who have had a previous vasectomy (protamine is contained in sperm) or diabetics (protamine is contained in neutral protamine Hagedorn (NPH) insulin formulations), are at an increased risk of type II protamine reactions due to cross-sensitivity. Because protamine is a fast-acting drug, it is typically given slowly to allow for monitoring of possible reactions. The first step in management of a protamine reaction is to immediately stop the protamine infusion. Corticosteroids are used for all types of protamine reactions. Chlorphenamine is used for type II (anaphylactic) reactions. For type III reactions, heparin is redosed and the patient may need to go back on bypass.
CPB may contribute to immediate cognitive decline. The heart-lung blood circulation system and the connection surgery itself release a variety of debris into the bloodstream, including bits of blood cells, tubing, and plaque. For example, when surgeons clamp and connect the aorta to tubing, resulting emboli may block blood flow and cause mini strokes. Other heart surgery factors related to mental damage may be events of hypoxia, high or low body temperature, abnormal blood pressure, irregular heart rhythms, and fever after surgery.
Components
Cardiopulmonary bypass devices consist of two main functional units: the pump and the oxygenator. These units remove oxygen-depleted blood from a patient's body and replace it with oxygen-rich blood through a series of tubes, or hoses. Additionally, a heat exchanger is used to control body temperature by heating or cooling the blood in the circuit. All components of the circuit are coated internally by heparin or another anticoagulant to prevent clotting within the circuit.
Tubing
The components of the CPB circuit are interconnected by a series of tubes made of silicone rubber or PVC.
Pumps
Centrifugal pump
Many CPB circuits now employ a centrifugal pump for the maintenance and control of blood flow during CPB. By altering the speed of revolution (RPM) of the pump head, blood flow is produced by centrifugal force. This type of pumping action is considered superior to that of the roller pump because it is thought to prevent over-pressurization when lines are clamped or kinked, and to cause less damage to blood products (hemolysis, etc.).
Roller pump
The pump console usually comprises several rotating, motor-driven pumps that peristaltically "massage" the tubing. This action gently propels the blood through the tubing. This is commonly referred to as a roller pump, or peristaltic pump. The pumps are more affordable than their centrifugal counterparts but are susceptible to over-pressurization if the lines become clamped or kinked. They are also more likely to cause a massive air embolism and require constant, close supervision by the perfusionist.
Oxygenator
The oxygenator is designed to add oxygen to infused blood and remove some carbon dioxide from the venous blood.
Heat exchangers
Because hypothermia is frequently used in CPB (to reduce metabolic demands), heat exchangers are implemented to warm and cool blood within the circuit. Heating and cooling is accomplished by passing the line through a warm or ice water bath, and a separate heat exchanger is required for the cardioplegia line.
Cannulae
Multiple cannulae are sewn into the patient's body in a variety of locations, depending on the type of surgery. A venous cannula removes oxygen-depleted venous blood from the patient's body, and an arterial cannula infuses oxygen-rich blood into the arterial system. Cannula size is determined by the patient's size and weight, the anticipated flow rate, and the size of the vessel being cannulated. A cardioplegia cannula delivers a cardioplegia solution to cause the heart to stop beating.
Cardioplegia
Cardioplegia is a fluid solution used to protect the heart during CPB. It is delivered via a cannula to the opening of the coronary arteries (usually by way of the aortic root) and/or to the cardiac veins (by way of the coronary sinus). These delivery methods are referred to as antegrade or retrograde, respectively. Cardioplegia solution protects the heart by arresting (stopping) it, which decreases the heart's metabolic demand. There are multiple types of cardioplegia solutions, but most work by inhibiting fast sodium currents in the heart, preventing conduction of the action potential. Other types of solutions act by inhibiting calcium's actions on myocytes.
Technique
Pre-operative planning
CPB requires significant forethought before surgery. In particular, the cannulation, cooling, and cardio-protective strategies must be coordinated between the surgeon, anesthesiologist, perfusionist, and nursing staff.
Cannulation strategy
The cannulation strategy varies with several operation-specific and patient-specific details. Nonetheless, a surgeon will place a cannula in the right atrium, vena cava, or femoral vein to withdraw blood from the body. The cannula used to return oxygenated blood is usually inserted in the ascending aorta, but it may instead be inserted in the femoral artery, axillary artery, or brachiocephalic artery, depending on the requirements of the surgery. After the cannula is inserted, venous blood is drained from the body into a reservoir. This blood is then filtered, cooled or warmed, and oxygenated before it returns to the body through a mechanical pump.
Intra-operative technique
A CPB circuit must be primed with fluid, and all air expunged from the arterial line and cannula, before connection to the patient. The circuit is primed with a crystalloid solution, and sometimes blood products are also added. Prior to cannulation (typically after opening the pericardium, when central cannulation is used), heparin or another anticoagulant is administered until the activated clotting time is above 480 seconds.
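One practical consequence of the crystalloid prime is hemodilution, which can be estimated with a simple dilution model. A sketch assuming an adult blood volume of roughly 70 mL/kg:

```python
def predicted_hematocrit(weight_kg: float, hct_pre: float,
                         prime_volume_ml: float,
                         blood_volume_ml_per_kg: float = 70) -> float:
    """Estimate hematocrit (%) after the prime mixes with the patient's
    blood, using a simple dilution model and an adult estimate of
    ~70 mL/kg estimated blood volume."""
    ebv = weight_kg * blood_volume_ml_per_kg
    return hct_pre * ebv / (ebv + prime_volume_ml)

# Example: an 80 kg patient with a 40% hematocrit and a 1500 mL prime
# can expect an on-bypass hematocrit of roughly 32%.
print(f"{predicted_hematocrit(80, 40.0, 1500):.0f}%")
```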
The arterial cannulation site is inspected for calcification or other disease. Preoperative imaging or an ultrasound probe may be used to help identify aortic calcifications that could become dislodged and cause an occlusion or stroke. Once the cannulation site has been deemed safe, two concentric, diamond-shaped pursestring sutures are placed in the distal ascending aorta. A stab incision is made with a scalpel within the pursestrings, and the arterial cannula is passed through the incision. It is important that the cannula be passed perpendicular to the aorta to avoid creating an aortic dissection. The pursestring sutures are cinched around the cannula with a tourniquet and secured to the cannula. At this point, the perfusionist advances the arterial line of the CPB circuit and the surgeon connects the arterial line coming from the patient to the arterial line coming from the CPB machine. Care must be taken to ensure no air is in the circuit when the two are connected, or else the patient could develop an air embolism. Other sites for arterial cannulation include the axillary artery, brachiocephalic artery, or femoral artery.
Aside from the differences in location, venous cannulation is performed similarly to arterial cannulation. Since calcification of the venous system is less common, inspection or ultrasound examination of the cannulation sites for calcification is unnecessary. Also, because the venous system is under much less pressure than the arterial system, only a single suture is required to hold the cannula in place. If only a single cannula is to be used (dual-stage cannulation), it is passed through the right atrial appendage, through the tricuspid valve, and into the inferior vena cava. If two cannulae are required (single-stage cannulation), the first is typically passed through the superior vena cava and the second through the inferior vena cava. The femoral vein may also be cannulated in select patients.
If the heart must be stopped for the operation, cardioplegia cannulas are also required. Antegrade cardioplegia (forward-flowing, through the heart's arteries), retrograde cardioplegia (backwards-flowing, through the heart's veins), or both types may be used, depending on the operation and surgeon preference. For antegrade cardioplegia, a small incision is made in the aorta proximal to the arterial cannulation site (between the heart and the arterial cannulation site) and the cannula is placed through it to deliver cardioplegia to the coronary arteries. For retrograde cardioplegia, an incision is made in the right atrium and the cannula is advanced through it into the coronary sinus, the main venous drainage of the heart. The cardioplegia lines are connected to the CPB machine.
At this point, the patient is ready to go on bypass. Blood from the venous cannula(s) enters the CPB machine by gravity, where it is oxygenated and cooled (if necessary) before returning to the body through the arterial cannula. Cardioplegia can now be administered to stop the heart, and a cross-clamp is placed across the aorta between the arterial cannula and cardioplegia cannula to prevent the arterial blood from flowing backwards into the heart. Setting appropriate blood pressure targets to maintain the health and function of organs, including the brain and kidneys, is an important consideration.
Once the patient is ready to come off bypass support, the cross-clamp and cannulas are removed and protamine sulfate is administered to reverse the anticoagulant effects of heparin.
History
The Austrian-German physiologist Maximilian von Frey constructed an early prototype of a heart-lung machine in 1885. This was conducted at Carl Ludwig's Physiological Institute of the University of Leipzig. However, such machines were not feasible before the discovery of heparin in 1916, which prevents blood coagulation.
The Soviet scientist Sergei Brukhonenko developed a heart-lung machine for total body perfusion in 1926 named the Autojektor, which was used in experiments with dogs, some of which were showcased in the 1940 film Experiments in the Revival of Organisms. A team of scientists at the University of Birmingham (including Eric Charles, a chemical engineer) were among the pioneers of this technology.
For four years work was undertaken to improve the machine, and on April 5, 1951, Dr. Clarence Dennis led the team at the University of Minnesota Medical Center that conducted the first human operation involving open cardiotomy with temporary mechanical takeover of both heart and lung functions. The patient did not survive due to an unexpected complex congenital heart defect, but the machine had proved to be workable. One member of the team was Dr. Russell M. Nelson (who later became president of The Church of Jesus Christ of Latter-day Saints), who went on to perform the first successful open heart surgery in Utah in November 1951.
The first successful mechanical support of left ventricular function was performed on July 3, 1952, by Forest Dewey Dodrill using a machine co-developed with General Motors, the Dodrill-GMR. The machine was later used to support the right ventricular function.
The first successful open heart procedure on a human utilizing the heart lung machine was performed by John Gibbon and Frank F. Allbritten Jr. on May 6, 1953, at Thomas Jefferson University Hospital in Philadelphia. Gibbon's machine was further developed into a reliable instrument by a surgical team led by John W. Kirklin at the Mayo Clinic in Rochester, Minnesota in the mid-1950s.
The oxygenator was first conceptualized in the 17th century by Robert Hooke and developed into practical extracorporeal oxygenators by French and German experimental physiologists in the 19th century. Bubble oxygenators have no intervening barrier between blood and oxygen and are called 'direct contact' oxygenators. Membrane oxygenators introduce a gas-permeable membrane between blood and oxygen, which reduces the blood trauma of direct-contact oxygenators. Much work since the 1960s focused on overcoming the gas-exchange handicap of the membrane barrier, leading to the development of high-performance microporous hollow-fibre oxygenators that eventually replaced direct-contact oxygenators in cardiac theatres.
In 1983, Ken Litzie patented a closed emergency heart bypass system which reduced circuit and component complexity. This device improved patient survival after cardiac arrest because it could be rapidly deployed in non-surgical settings.
| Technology | Techniques | null |
261925 | https://en.wikipedia.org/wiki/Health%20care | Health care | Health care, or healthcare, is the improvement of health via the prevention, diagnosis, treatment, amelioration or cure of disease, illness, injury, and other physical and mental impairments in people. Health care is delivered by health professionals and allied health fields. Medicine, dentistry, pharmacy, midwifery, nursing, optometry, audiology, psychology, occupational therapy, physical therapy, athletic training, and other health professions all constitute health care. The term includes work done in providing primary care, secondary care, tertiary care, and public health.
Access to health care may vary across countries, communities, and individuals, influenced by social and economic conditions and health policies. Providing health care services means "the timely use of personal health services to achieve the best possible health outcomes". Factors to consider in terms of health care access include financial limitations (such as insurance coverage), geographical and logistical barriers (such as additional transportation costs and the ability to take paid time off work to use such services), sociocultural expectations, and personal limitations (lack of ability to communicate with health care providers, poor health literacy, low income). Limitations to health care services negatively affect the use of medical services, the efficacy of treatments, and overall outcomes (well-being, mortality rates).
Health systems are the organizations established to meet the health needs of targeted populations. According to the World Health Organization (WHO), a well-functioning health care system requires a financing mechanism, a well-trained and adequately paid workforce, reliable information on which to base decisions and policies, and well-maintained health facilities to deliver quality medicines and technologies.
An efficient health care system can contribute to a significant part of a country's economy, development, and industrialization. Health care is an important determinant in promoting the general physical and mental health and well-being of people around the world. An example of this was the worldwide eradication of smallpox in 1980, declared by the WHO the first disease in human history to be eliminated by deliberate health care interventions.
Delivery
The delivery of modern health care depends on groups of trained professionals and paraprofessionals coming together as interdisciplinary teams. This includes professionals in medicine, psychology, physiotherapy, nursing, dentistry, midwifery and allied health, along with many others such as public health practitioners, community health workers and assistive personnel. These professionals systematically provide personal and population-based preventive, curative and rehabilitative care services.
While the definitions of the various types of health care vary depending on the different cultural, political, organizational, and disciplinary perspectives, there appears to be some consensus that primary care constitutes the first element of a continuing health care process and may also include the provision of secondary and tertiary levels of care. Health care can be defined as either public or private.
Primary care
Primary care refers to the work of health professionals who act as a first point of consultation for all patients within the health care system. The primary care model supports first-contact, accessible, continuous, comprehensive and coordinated person-focused care. Such a professional would usually be a primary care physician, such as a general practitioner or family physician. Another professional would be a licensed independent practitioner such as a physiotherapist, or a non-physician primary care provider such as a physician assistant or nurse practitioner. Depending on the locality and health system organization, the patient may see another health care professional first, such as a pharmacist or nurse. Depending on the nature of the health condition, patients may be referred for secondary or tertiary care.
Primary care is often used as the term for the health care services that play a role in the local community. It can be provided in different settings, such as urgent care centers that provide same-day appointments or services on a walk-in basis.
Primary care involves the widest scope of health care, including all ages of patients, patients of all socioeconomic and geographic origins, patients seeking to maintain optimal health, and patients with all types of acute and chronic physical, mental and social health issues, including multiple chronic diseases. Consequently, a primary care practitioner must possess a wide breadth of knowledge in many areas. Continuity is a key characteristic of primary care, as patients usually prefer to consult the same practitioner for routine check-ups and preventive care, health education, and every time they require an initial consultation about a new health problem. The International Classification of Primary Care (ICPC) is a standardized tool for understanding and analyzing information on interventions in primary care based on the reason for the patient's visit.
Common chronic illnesses usually treated in primary care may include, for example, hypertension, diabetes, asthma, COPD, depression and anxiety, back pain, arthritis or thyroid dysfunction. Primary care also includes many basic maternal and child health care services, such as family planning services and vaccinations. In the United States, the 2013 National Health Interview Survey found that skin disorders (42.7%), osteoarthritis and joint disorders (33.6%), back problems (23.9%), disorders of lipid metabolism (22.4%), and upper respiratory tract disease (22.1%, excluding asthma) were the most common reasons for accessing a physician.
In the United States, primary care physicians have begun to deliver primary care outside of the managed care (insurance-billing) system through direct primary care which is a subset of the more familiar concierge medicine. Physicians in this model bill patients directly for services, either on a pre-paid monthly, quarterly, or annual basis, or bill for each service in the office. Examples of direct primary care practices include Foundation Health in Colorado and Qliance in Washington.
In the context of global population aging, with increasing numbers of older adults at greater risk of chronic non-communicable diseases, rapidly increasing demand for primary care services is expected in both developed and developing countries. The World Health Organization regards the provision of essential primary care as an integral component of an inclusive primary health care strategy.
Secondary care
Secondary care includes acute care: necessary treatment for a short period of time for a brief but serious illness, injury, or other health condition. This care is often found in a hospital emergency department. Secondary care also includes skilled attendance during childbirth, intensive care, and medical imaging services.
The term "secondary care" is sometimes used synonymously with "hospital care". However, many secondary care providers, such as psychiatrists, clinical psychologists, occupational therapists, most dental specialties or physiotherapists, do not necessarily work in hospitals. Some primary care services are delivered within hospitals. Depending on the organization and policies of the national health system, patients may be required to see a primary care provider for a referral before they can access secondary care.
In countries that operate under a mixed market health care system, some physicians limit their practice to secondary care by requiring patients to see a primary care provider first. This restriction may be imposed under the terms of the payment agreements in private or group health insurance plans. In other cases, medical specialists may see patients without a referral, and patients may decide whether self-referral is preferred.
In other countries patient self-referral to a medical specialist for secondary care is rare as prior referral from another physician (either a primary care physician or another specialist) is considered necessary, regardless of whether the funding is from private insurance schemes or national health insurance.
Allied health professionals, such as physical therapists, respiratory therapists, occupational therapists, speech therapists, and dietitians, also generally work in secondary care, accessed through either patient self-referral or through physician referral.
Tertiary care
Tertiary care is specialized consultative health care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital.
Examples of tertiary care services are cancer management, neurosurgery, cardiac surgery, plastic surgery, treatment for severe burns, advanced neonatology services, palliative, and other complex medical and surgical interventions.
Quaternary care
The term quaternary care is sometimes used as an extension of tertiary care in reference to advanced levels of medicine which are highly specialized and not widely accessed. Experimental medicine and some types of uncommon diagnostic or surgical procedures are considered quaternary care. These services are usually only offered in a limited number of regional or national health care centers.
Home and community care
Many types of health care interventions are delivered outside of health facilities. They include many interventions of public health interest, such as food safety surveillance, distribution of condoms and needle-exchange programs for the prevention of transmissible diseases.
They also include the services of professionals in residential and community settings in support of self-care, home care, long-term care, assisted living, treatment for substance use disorders among other types of health and social care services.
Community rehabilitation services can assist with mobility and independence after the loss of limbs or loss of function. This can include prostheses, orthotics, or wheelchairs.
Many countries are dealing with aging populations, so one of the priorities of the health care system is to help seniors live full, independent lives in the comfort of their own homes. There is an entire section of health care geared to providing seniors with help in day-to-day activities at home, such as transportation to and from doctor's appointments, along with many other activities that are essential for their health and well-being. Although family members and care workers cooperate to provide home care for older adults, they may harbor diverging attitudes and values towards their joint efforts. This state of affairs presents a challenge for the design of ICT (information and communication technology) for home care.
Because statistics show that over 80 million Americans have taken time off from their primary employment to care for a loved one, many countries have begun offering programs such as the Consumer Directed Personal Assistant Program to allow family members to take care of their loved ones without giving up their entire income.
With obesity in children rapidly becoming a major concern, health services often set up programs in schools aimed at educating children about nutritional eating habits, making physical education a requirement and teaching young adolescents to have a positive self-image.
Ratings
Health care ratings are ratings or evaluations of health care used to evaluate the process of care and health care structures and/or outcomes of health care services. This information is translated into report cards that are generated by quality organizations, nonprofit, consumer groups and media. This evaluation of quality is based on measures of:
health plan quality
hospital quality
patient experience
physician quality
quality for other health professionals
Related sectors
Health care extends beyond the delivery of services to patients, encompassing many related sectors, and is set within a bigger picture of financing and governance structures.
Health system
A health system, also sometimes referred to as health care system or healthcare system, is the organization of people, institutions, and resources that deliver health care services to populations in need.
Industry
The healthcare industry incorporates several sectors that are dedicated to providing health care services and products. As a basic framework for defining the sector, the United Nations' International Standard Industrial Classification categorizes health care as generally consisting of hospital activities, medical and dental practice activities, and "other human health activities." The last class involves activities of, or under the supervision of, nurses, midwives, physiotherapists, scientific or diagnostic laboratories, pathology clinics, residential health facilities, patient advocates or other allied health professions.
In addition, according to industry and market classifications, such as the Global Industry Classification Standard and the Industry Classification Benchmark, health care includes many categories of medical equipment, instruments and services including biotechnology, diagnostic laboratories and substances, drug manufacturing and delivery.
For example, pharmaceuticals and other medical devices are the leading high technology exports of Europe and the United States. The United States dominates the biopharmaceutical field, accounting for three-quarters of the world's biotechnology revenues.
Research
The quantity and quality of many health care interventions are improved through the results of science, such as advances made through the medical model of health, which focuses on the eradication of illness through diagnosis and effective treatment. Many important advances have been made through health research, biomedical research and pharmaceutical research, which form the basis for evidence-based medicine and evidence-based practice in health care delivery. Health care research frequently engages directly with patients, and as such, questions of whom to engage and how to engage them become important when seeking to include patients actively in studies. While no single best practice exists, the results of a systematic review on patient engagement suggest that research methods for patient selection need to account for both patient availability and willingness to engage.
Health services research can lead to greater efficiency and equitable delivery of health care interventions, as advanced through the social model of health and disability, which emphasizes the societal changes that can be made to make populations healthier. Results from health services research often form the basis of evidence-based policy in health care systems. Health services research is also aided by initiatives in the field of artificial intelligence for the development of systems of health assessment that are clinically useful, timely, sensitive to change, culturally sensitive, low-burden, low-cost, built into standard procedures, and involve the patient.
Financing
There are generally five primary methods of funding health care systems:
General taxation to the state, county or municipality
Social health insurance
Voluntary or private health insurance
Out-of-pocket payments
Donations to health charities
In most countries, there is a mix of all five models, but this varies across countries and over time within countries. Aside from the financing mechanism, an important question is always how much to spend on health care. For the purposes of comparison, this is often expressed as the percentage of GDP spent on health care. In OECD countries, for every extra $1,000 spent on health care, life expectancy falls by 0.4 years. A similar correlation is seen in the analysis carried out each year by Bloomberg. Clearly this kind of analysis is flawed in that life expectancy is only one measure of a health system's performance, but equally, the notion that more funding is better is not supported.
In 2011, the health care industry consumed an average of 9.3 percent of GDP, or US$3,322 (PPP-adjusted) per capita, across the 34 OECD member countries. The US (17.7%, or US$ PPP 8,508), the Netherlands (11.9%, 5,099), France (11.6%, 4,118), Germany (11.3%, 4,495), Canada (11.2%, 5,669), and Switzerland (11%, 5,634) were the top spenders; however, life expectancy of the total population at birth was highest in Switzerland (82.8 years), Japan and Italy (82.7), Spain and Iceland (82.4), France (82.2), and Australia (82.0), while the OECD average exceeded 80 years for the first time in 2011: 80.1 years, a gain of 10 years since 1970. The US (78.7 years) ranked only 26th among the 34 OECD member countries, but had by far the highest costs. All OECD countries had achieved universal (or almost universal) health coverage, except the US and Mexico (see also international comparisons).
In the United States, where around 18% of GDP is spent on health care, the Commonwealth Fund analysis of spending and quality shows a clear correlation between worse quality and higher spending.
In OECD health-spending statistics, "government/compulsory" spending covers government spending and compulsory health insurance, while "voluntary" spending covers voluntary health insurance and private funds such as households' out-of-pocket payments, NGOs, and private corporations; the two components together make up each country's total.
Administration and regulation
The management and administration of health care is vital to the delivery of health care services. In particular, the practice of health professionals and the operation of health care institutions is typically regulated by national or state/provincial authorities through appropriate regulatory bodies for purposes of quality assurance. Most countries have credentialing staff in regulatory boards or health departments who document the certification or licensing of health workers and their work history.
Health information technology
Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making."
Health information technology components:
Electronic health record (EHR) – An EHR contains a patient's comprehensive medical history, and may include records from multiple providers.
Electronic medical record (EMR) – An EMR contains the standard medical and clinical data gathered in a single provider's office.
Health information exchange (HIE) – Health Information Exchange allows health care professionals and patients to appropriately access and securely share a patient's vital medical information electronically.
Medical practice management software (MPM) – MPM software is designed to streamline the day-to-day tasks of operating a medical facility. Also known as practice management software or practice management system (PMS).
Personal health record (PHR) – A PHR is a patient's medical history that is maintained privately, for personal use.
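To make the distinction between these record types concrete, here is a toy sketch of an EHR-style record that aggregates encounters across providers; the types and fields are hypothetical and not drawn from any real interoperability standard such as HL7 FHIR:

```python
from dataclasses import dataclass, field

@dataclass
class Encounter:
    date: str
    provider: str
    diagnosis: str
    notes: str = ""

@dataclass
class ElectronicHealthRecord:
    """Comprehensive history that can aggregate many providers,
    unlike a single-office EMR."""
    patient_id: str
    encounters: list[Encounter] = field(default_factory=list)

    def add_encounter(self, encounter: Encounter) -> None:
        self.encounters.append(encounter)

    def providers(self) -> set[str]:
        # An EHR spans care settings; an EMR would hold one office's data.
        return {e.provider for e in self.encounters}

record = ElectronicHealthRecord("patient-001")
record.add_encounter(Encounter("2023-04-01", "Dr. A (clinic)", "hypertension"))
record.add_encounter(Encounter("2023-06-12", "Dr. B (hospital)", "follow-up"))
print(record.providers())
```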
| Biology and health sciences | General concepts | null |
262019 | https://en.wikipedia.org/wiki/Oilbird | Oilbird | The oilbird (Steatornis caripensis), locally known as the guácharo, is a bird species found in the northern areas of South America including the Caribbean island of Trinidad. It is the only living species in the genus Steatornis, the family Steatornithidae, and the order Steatornithiformes. Nesting in colonies in caves, oilbirds are nocturnal feeders on the fruits of the oil palm and tropical laurels. They are the only nocturnal flying fruit-eating birds in the world (the kākāpō, also nocturnal, is flightless). They forage at night, with specially adapted eyesight. However, they navigate by echolocation in the same way as bats, one of the few birds to do so. They produce a high-pitched clicking sound of around 2 kHz that is audible to humans.
Taxonomy and etymology
Oilbirds are related to the nightjars and have sometimes been placed with these in the order Caprimulgiformes. However, the nightjars and their relatives are insectivores while the oilbird is a specialist fructivore, and it is sufficiently distinctive to be placed in a family (Steatornithidae) and suborder (Steatornithes) of its own. Some research indicates that it should even be considered a distinct order (Steatornithiformes).
The specific name caripensis means 'of Caripe', and the generic name Steatornis means 'fat bird', in reference to the fatness of the chicks. The oilbird is called a guácharo or tayo in Spanish, both terms being of indigenous origin. In Trinidad it was sometimes called the diablotin (French for 'little devil'), presumably referring to its loud cries, which have been likened to those of tortured men. The common name oilbird comes from the fact that in the past chicks were captured and boiled down in order to make oil.
The fossil record of the family suggests that they were once more widely distributed around the globe. The first fossil oilbird was described by Storrs Olson in 1987 from a fossil found in the Green River Formation in Wyoming. The species, Prefica nivea, was probably not adapted to hovering flight or living in caves, unlike the oilbird. Some of the same families and genera of plants the present day oilbird feeds on have been found in the Green River Formation, suggesting that prehistoric species may have eaten the same fruit and spread the same seeds. Another species from the Upper Eocene has been discovered in France.
Description
This is a large, slim bird with a flattened, powerfully hooked beak surrounded by deep chestnut rictal bristles. The chicks can weigh considerably more than the adults before they fly, as their parents feed them a good deal of fruit. The feathers of the oilbird are soft like those of many nightbirds, but not as soft as those of owls or nightjars, as they do not need to be silent like predatory species. The oilbird is mainly reddish-brown with white spots on the nape and wings. Lower parts are cinnamon-buff with white diamond-shaped spots edged in black; these spots start small towards the throat and get larger towards the back. The stiff tail feathers are a rich brown spotted with white on either side.
The feet are small and almost useless, other than for clinging to vertical surfaces. The long wings have evolved to make it capable of hovering and twisting flight, which enables it to navigate through restricted areas of its caves. For example, the wings have deep wingtip slotting, like New World vultures, to reduce the stalling speed, and the wings have a low aspect ratio and low wing-loading, all to make the oilbird capable of flying at low speeds.
The eyes of oilbirds are highly adapted to nocturnal foraging. The eyes are small, but the pupils are relatively large, allowing the highest light-gathering capacity of any bird (an f-number of 1.07). The retina is dominated by rod cells (1,000,000 rods per mm², the highest density of any vertebrate eye), which are organised in layers, an arrangement unique among birds but shared by deep-sea fish. Oilbirds have low numbers of cone cells, and the whole arrangement allows them to capture more light in low-light conditions but probably gives them poor vision in daylight.
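As a rough illustration of what that f-number implies, light gathering scales with the inverse square of the f-number N = f/D (focal length over pupil diameter); assuming, purely for comparison, a dark-adapted human eye at about f/2.1 (a commonly quoted approximation):

```latex
% Retinal illuminance scales inversely with the square of the f-number:
N = \frac{f}{D}, \qquad \text{retinal illuminance} \propto \frac{1}{N^{2}}, \qquad
\left(\frac{2.1}{1.07}\right)^{2} \approx 3.9
```

That is, on this rough comparison, the oilbird's eye gathers roughly four times as much light per unit of retinal area.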
Although they have specially adapted vision to forage by sight, they are among the few birds known to supplement sight by echolocation in sufficiently poor light conditions, using a series of sharp audible clicks for this purpose. The only other birds known to do this are some species of swift.
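The geometry of click-based ranging is simple, whatever the underlying neural processing: an echo's round-trip delay gives the distance to an obstacle. A minimal sketch of that arithmetic, using the standard speed of sound in air rather than any model of oilbird perception:

```python
SPEED_OF_SOUND_M_PER_S = 343  # in air at ~20 degrees C

def echo_distance_m(delay_s: float) -> float:
    """Distance to an obstacle from a click's round-trip echo delay."""
    return SPEED_OF_SOUND_M_PER_S * delay_s / 2

# Example: an echo returning 20 ms after the click puts the cave wall
# at roughly 3.4 m.
print(f"{echo_distance_m(0.020):.1f} m")
```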
In addition to clicks used for echolocation, oilbirds also produce a variety of harsh screams while in their caves. Entering a cave with a light especially provokes these raucous calls; they also may be heard as the birds prepare to emerge from a cave at dusk.
Distribution and habitat
The oilbird ranges from Guyana and the island of Trinidad to Venezuela, Colombia, Ecuador, Peru, Bolivia and Brazil, occurring from sea level up into the mountains. The species has highly specific habitat requirements, needing both caves for breeding and frequent roosting, and forest containing fruiting trees. Where suitable caves are absent, oilbirds will roost and breed in narrow gorges and grottos with suitable rock shelves.
One such colony in Ecuador held a population of a hundred birds in a canyon with ledges protected by vegetation. Some smaller caves and gorges are used only for roosting. While it was once thought that oilbirds always or nearly always roosted in caves, canyons or gullies, researchers placing GPS trackers on non-breeding birds found that they regularly roost in trees in the forest as well as in caves.
It is a seasonal migrant across some of its range, moving from its breeding caves in search of fruit trees. It has occurred as a rare vagrant to Costa Rica, Panama and Aruba. The Guácharo Cave (Oilbird Cave), in the mountainous Caripe district of northern Monagas, Venezuela, is where Alexander von Humboldt first studied the species.
Behaviour
Oilbirds are nocturnal. During the day the birds rest on cave ledges and leave at night to find fruit outside the cave. It was once thought that oilbirds only roosted in caves, and indeed never saw daylight, but studies using GPS/acceleration loggers found that non-breeding birds only roosted in caves or other rock shelters one night in three, the other nights roosting in trees.
The scientists responsible for the discovery also found that birds roosting in caves were highly active through the night, whereas birds roosting in the forest were far less active. They hypothesised that each environment carried costs; birds roosting in the forest were more vulnerable to predators and birds roosting in caves expended considerable energy competing with rivals and defending nesting and roosting ledges.
Breeding
Oilbirds are colonial cave nesters. The nest is a heap of droppings, usually above water—either a stream or the sea—on which 2–4 glossy white eggs are laid; these soon become stained brown. The eggs are rounded, with a distinctly pointed smaller end. The squabs become very fat before fledging, weighing around a third more than the adult birds.
Status and conservation
The Guácharo Cave was Venezuela's first national monument and is the centerpiece of a national park; according to some estimates there may be 15,000 or more birds living there. Colombia also has a national park named after its "Cueva de los Guácharos", near the southern border with Ecuador. Oilbirds have been reported in various other places along the Andean mountain chain, including near Ecuador's Cueva de los Tayos and in Brazil: they are known to dwell as far south as the Carrasco National Park in Bolivia. Dunston Cave, at the Asa Wright Nature Centre in Trinidad, is home to about 200 nesting pairs. The species is classified as 'Least Concern' by the IUCN red list as of October 2016, despite a decreasing population.
| Biology and health sciences | Caprimulgiformes and related | Animals |
262028 | https://en.wikipedia.org/wiki/Potoo | Potoo | Potoos (family Nyctibiidae) are a group of birds related to the nightjars and frogmouths. They are sometimes called poor-me-ones, after their haunting calls. The family Nyctibiidae was formerly included with the nightjars in the order Caprimulgiformes but is now placed in a separate order, Nyctibiiformes. There are seven species in two genera in tropical Central and South America. Fossil evidence indicates that they also inhabited Europe during the Paleogene.
Potoos are nocturnal insectivores that lack the bristles around the mouth found in the true nightjars. They hunt from a perch like a shrike or flycatcher. During the day they perch upright on tree stumps, camouflaged to look like part of the stump. The single spotted egg is laid directly on the top of a stump.
In Argentina, they are known as kakuy or cacuy from Quechua meaning 'to remain'. In Bolivia they are called guajojo, for the sound of their call. In Brazil and Paraguay, they are called urutau from Guaraní guyra 'bird' and tau 'ghost'.
Evolution and taxonomy
The potoos today are exclusively found in the Americas, but they apparently had a much more widespread distribution in the past. Fossil remains of potoos dating from the Eocene have been found in Germany. A complete skeleton of the genus Paraprefica has been found in Messel, Germany. It had skull and leg features similar to those of modern potoos, suggesting that it may be an early close relative of the modern potoos. Because the only fossils other than these ancient ones that have been found are recent ones of extinct species, it is unknown if the family once had a global distribution which has contracted, or if the distribution of the family was originally restricted to Europe and has shifted to the Americas.
A 1996 study of the mitochondrial DNA of the potoos supported the monophyly of the family although it did not support the previous assumption that it was closely related to the oilbirds. The study also found a great deal of genetic divergence between the species, suggesting that these species are themselves very old. The level of divergence is the highest of any genus of birds, being more typical of the divergence between genera or even families. The northern potoo was for a long time considered to be the same species as the common potoo, but the two species have now been separated on the basis of their calls. In spite of this there is no morphological way to separate the two species.
The family Nyctibiidae was introduced (as Nyctibie) in 1853 by the French naturalists Jean-Charles Chenu and Œillet des Murs. Prior to this, its species were classified in the Caprimulgidae.
Species
The family Nyctibiidae contains seven species in two genera:
Family Nyctibiidae Chenu & Des Murs, 1851
Subfamily Nyctibiinae Chenu & Des Murs, 1851
Genus Phyllaemulor Costa, Whitney, Braun, White, Silveira & Cleere 2018
Rufous potoo, Phyllaemulor bracteatus (Gould 1846)
Genus Nyctibius Vieillot 1816
Great potoo, Nyctibius grandis (Gmelin 1789)
Long-tailed potoo, Nyctibius aethereus (zu Wied-Neuwied 1820)
Northern potoo, Nyctibius jamaicensis (Gmelin 1789)
Common potoo or lesser potoo, Nyctibius griseus (Gmelin 1789)
Andean potoo, Nyctibius maculosus Ridgway 1912
White-winged potoo, Nyctibius leucopterus (zu Wied-Neuwied 1821)
Prior to 2018, Nyctibius was considered the only extant genus within the Nyctibiidae; however, a study that year found a deep divergence between the rufous potoo and all other species in the genus, leading it to be described in the new genus Phyllaemulor and expanding the number of genera within the family. This was followed by the International Ornithological Congress in 2022.
In addition, the fossil genus Paraprefica, the only member of the extinct subfamily Parapreficinae, is known from the Eocene of Germany (the Messel pit), marking the earliest fossil evidence of potoos. The fossil genus Euronyctibius, from the Oligocene of France, was formerly considered a potoo, but analysis supports it instead being a close relative of the oilbird (family Steatornithidae).
Description
The potoos are a highly conservative family in appearance, with all the species closely resembling one another; species accounts in ornithological literature remark on their unusual appearance. They resemble upright-sitting nightjars, a closely related family (Caprimulgidae), and also resemble the frogmouths of Australasia, which are stockier and have much heavier bills. They have proportionally large heads for their body size and long wings and tails. The large head is dominated by a massive broad bill and enormous eyes. In the treatment of the family in the Handbook of the Birds of the World, Cohn-Haft describes the potoos as "little more than a flying mouth and eyes". The bill, while large and broad, is also short, barely projecting past the face. It is delicate, but has a unique "tooth" on the cutting edge of the upper mandible that may assist in foraging. Unlike the closely related nightjars, the potoos lack rictal bristles around the mouth. The legs and feet are weak and used only for perching.
The eyes are large, even larger than those of nightjars. As in many species of nocturnal birds, they reflect the light of flashlights. Their eyes, which could be conspicuous to potential predators during the day, have unusual slits in the lids, which allow potoos to sense movement even when their eyes are closed. Their plumage is cryptic, helping them blend into the branches on which they spend their days.
Distribution and habitat
The potoos have a Neotropical distribution. They range from Mexico to Argentina, with the greatest diversity occurring in the Amazon Basin, which holds five species. They are found in every Central and South American country. They also occur on three Caribbean islands: Jamaica, Hispaniola and Tobago. The potoos are generally highly sedentary, although there are occasional reports of vagrants, particularly species that have traveled on ships. All species occur in humid forests, although a few species also occur in drier forests.
Behavior
The potoos are highly nocturnal and generally do not fly during the day. They spend the day perched on branches with the eyes half closed. With their cryptic plumage they resemble stumps, and should they detect potential danger they adopt a "freeze" position which even more closely resembles a broken branch. The transition between perching and the freeze position is gradual and hardly perceptible to the observer.
The English zoologist Hugh Cott, describing Nyctibius griseus as "this wonderful bird", writes that it "habitually selects the top of an upright stump as a receptacle for its egg, which usually occupies a small hollow just, and only just, large enough to contain it ... the stump selected had thrown up a new leader just below the point of fracture ... and the birds sat facing this in such a way that when viewed from behind they came into line and blended with the grey stem."
Food and feeding
Potoos feed at dusk and at night on flying insects. Their typical foraging technique is to perch on a branch and occasionally fly out in the manner of a flycatcher in order to snatch a passing insect. They occasionally fly to vegetation to glean an insect off it before returning to their perch, but they do not attempt to obtain prey from the ground. Beetles form a large part of their diet, but they also take moths, grasshoppers and termites. One northern potoo was found with a small bird in its stomach as well. Having caught an insect, potoos swallow it whole without beating or crushing it.
Breeding
Potoos are monogamous breeders and both parents share responsibilities for incubating the egg and raising the chick. The family does not construct a nest of any kind, instead laying the single egg on a depression in a branch or at the top of a rotten stump. The egg is white with purple-brown spots. One parent, often the male, incubates the egg during the day, then the duties are shared during the night. Changeovers to relieve incubating parents and feed chicks are infrequent to minimise attention to the nest, as potoos are entirely reliant on camouflage to protect themselves and their nesting site from predators. The chick hatches about one month after laying and the nestling phase is two months, a considerable length of time for a landbird. The plumage of nestling potoos is white and once they are too large to hide under their parents they adopt the same freeze position as their parents, resembling clumps of fungus.
Defense
The behaviors described above suggest that the common potoo adopts different defensive strategies to suit its circumstances. A lone potoo, or a brooding adult with a potential predator close to the nest, attempts to avoid detection by remaining motionless and relying on camouflage. If this is ineffective, the potoo breaks cover and attempts to intimidate the predator by opening its beak and eyes wide while vocalizing, or it simply flies out of reach. Nocturnal predators rely less on vision for locating prey; therefore, a different strategy may be required at night.
| Biology and health sciences | Caprimulgiformes and related | Animals |
262032 | https://en.wikipedia.org/wiki/Frogmouth | Frogmouth | The frogmouths (Podargidae) are a group of nocturnal birds related to owlet-nightjars, swifts, and hummingbirds. Species in the group are distributed in the Indomalayan and Australasian realms.
Biology
They are named for their large flattened hooked bill and huge frog-like gape, which they use to capture insects. The three Podargus species are large frogmouths restricted to Australia and New Guinea that have massive flat broad bills. They are known to take larger prey, such as small vertebrates (frogs, mice, etc.), which are sometimes beaten against a stone before swallowing. The ten Batrachostomus frogmouths are found in tropical Asia. They have smaller, more rounded bills and are predominantly insectivorous. Both Podargus and Batrachostomus have bristles around the base of the bill, and Batrachostomus has other, longer bristles which may exist to protect the eyes from insect prey. In April 2007, a new species of frogmouth was described from the Solomon Islands and placed in a newly established genus, Rigidipenna.
Their flight is weak. They rest horizontally on branches during the day, camouflaged by their cryptic plumage. Through convergent evolution as night hunters, they resemble owls, with large front-facing eyes.
Up to three white eggs are laid in the fork of a branch, and are incubated by the female at night and the male in the day.
Taxonomy
DNA-DNA hybridisation studies had suggested that the two frogmouth groups may not be as closely related as previously thought, and that the Asian species may be separable as a new family, the Batrachostomidae. Although frogmouths were formerly included in the order Caprimulgiformes, a 2019 study estimated the divergence between Podargus and Batrachostomus at between 30 and 50 million years ago, and found that the frogmouths form a clade well separated from the nightjars and are a sister group of the swifts, hummingbirds, and owlet-nightjars. The name Podargiformes, proposed in 1918 by Gregory Mathews, was reinstated for the clade.
Species
Genus Podargus
Tawny frogmouth, Podargus strigoides
Marbled frogmouth, Podargus ocellatus
Papuan frogmouth, Podargus papuensis
Genus Batrachostomus
Large frogmouth, Batrachostomus auritus
Dulit frogmouth, Batrachostomus harterti
Philippine frogmouth, Batrachostomus septimus
Gould's frogmouth, Batrachostomus stellatus
Sri Lanka frogmouth, Batrachostomus moniliger
Hodgson's frogmouth, Batrachostomus hodgsoni
Sumatran frogmouth, Batrachostomus poliolophus
Javan frogmouth, Batrachostomus javensis
Blyth's frogmouth, Batrachostomus affinis
Sunda frogmouth, Batrachostomus cornutus
Palawan frogmouth, Batrachostomus chaseni
Bornean frogmouth, Batrachostomus mixtus
Genus Rigidipenna
Solomons frogmouth, Rigidipenna inexpectata
In culture
In a journal article published in April 2021, researchers Katja Thömmes and Gregor Hayn-Leichsenring from the Experimental Aesthetics group at the University Hospital Jena, Germany, found the frogmouth to be the most "instagrammable" bird species. Using an algorithm to analyze the aesthetic appeal of more than 27,000 bird photographs on Instagram, they found that photos depicting frogmouths received the highest number of likes relative to the posts' exposure to users. The journal article was picked up by several news outlets, including The New York Times and The Guardian.
| Biology and health sciences | Caprimulgiformes and related | Animals |
262084 | https://en.wikipedia.org/wiki/Container%20ship | Container ship | A container ship (also called boxship or spelled containership) is a cargo ship that carries all of its load in truck-size intermodal containers, in a technique called containerization. Container ships are a common means of commercial intermodal freight transport and now carry most seagoing non-bulk cargo.
Container ship capacity is measured in twenty-foot equivalent units (TEU). Typical loads are a mix of 20-foot (1-TEU) and 40-foot (2-TEU) ISO-standard containers, with the latter predominant.
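The TEU arithmetic is straightforward; the sketch below additionally assumes the common convention of counting a 45-foot unit as 2.25 TEU, which is not universal:

```python
# TEU equivalents by container length in feet: a 20-foot box counts as
# 1 TEU and a 40-foot box as 2; 2.25 for a 45-footer is an assumed,
# commonly used convention rather than a fixed rule.
TEU_PER_LENGTH_FT = {20: 1.0, 40: 2.0, 45: 2.25}

def total_teu(counts_by_length: dict[int, int]) -> float:
    """Sum the TEU value of a mixed load of containers."""
    return sum(TEU_PER_LENGTH_FT[length] * n
               for length, n in counts_by_length.items())

# Example: 1,000 twenty-foot and 4,000 forty-foot containers = 9,000 TEU.
print(total_teu({20: 1000, 40: 4000}))
```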
Today, about 90% of non-bulk cargo worldwide is transported by container ships, the largest of which, from 2023 onward, can carry over 24,000 TEU.
History
There are two main types of dry cargo: bulk cargo and break bulk cargo. Bulk cargoes, like grain or coal, are transported unpackaged in the hull of the ship, generally in large volume. Break-bulk cargoes, in contrast, are transported in packages, and are generally manufactured goods.
Before the advent of containerization in the 1950s, break-bulk items required manual loading, lashing, unlashing and unloading from the ship one piece at a time. Grouping cargo into containers made this stevedoring process far more efficient: a large unit of cargo is moved at once, and each container is secured to the ship in a standardized way. Containerization has increased the efficiency of moving traditional break-bulk cargoes significantly, reducing shipping time by 84% and costs by 35%. In 2001, more than 90% of world trade in non-bulk goods was transported in ISO containers. In 2009, almost one quarter of the world's dry cargo was shipped by container, an estimated 125 million TEU or 1.19 billion tonnes worth of cargo.
The first ships designed to carry standardized load units were used in the late 18th century in England. In 1766 James Brindley designed the box boat "Starvationer" with 10 wooden containers, to transport coal from Worsley Delph to Manchester via the Bridgewater Canal. Before the Second World War, the first container ships were used to carry the baggage of the luxury passenger train from London to Paris (Southern Railway's Golden Arrow / La Flèche d'Or). These containers were loaded in London or Paris and carried to the ports of Dover or Calais on flat cars. In February 1931, the first container ship in the world was launched: the Autocarrier, owned by the Southern Railway, with 21 slots for the Southern Railway's containers.
The earliest container ships after the Second World War were conversions of surplus T2 oil tankers. In 1951, the first purpose-built container vessels began operating in Denmark, and between Seattle and Alaska. By 1955, Malcom McLean had built his company, McLean Trucking, into one of the United States' biggest trucking fleets. That year, he purchased the small Pan-Atlantic Steamship Company from Waterman Steamship and adapted its ships to carry cargo in large uniform metal containers. On April 26, 1956, the first of these rebuilt container vessels, Ideal X, a converted T2 tanker, left Port Newark in New Jersey carrying 58 metal containers bound for Houston, Texas, and a new revolution in modern shipping resulted.
In the 1950s, a new standardized steel Intermodal container based on specifications from the United States Department of Defense began to revolutionize freight transportation.
The White Pass & Yukon Route railway acquired the world's first purpose built container ship, the Clifford J. Rogers, built in 1955, and introduced containers to its railway in 1956.
MV Kooringa was the world's first fully cellular, purpose-built container ship. It was built by the Australian company Associated Steamships, a partnership formed by the 1964 merger of the Adelaide Steamship Company with McIlwraith, McEacharn & Co, and was commissioned in May 1964.
Container ships were designed to accommodate intermodal transport of goods and eliminated requirements for the individual hatches, holds and other dividers of traditional cargo ships. The hull of a typical container ship, like an airport hangar or a huge warehouse, is divided into individual holding cells using vertical guide rails. The ship's cells are designed to hold cargo containers, which are typically constructed of steel, though sometimes of aluminum, fiberglass or plywood, and designed for intermodal transfers between ship and train, truck or semi-trailer. Shipping containers are categorized by type, size and function.
Today, about 90% of non-bulk cargo worldwide is carried in containers aboard about 50,000 container ships. Modern container ships can carry over 24,000 TEU. The largest container ships measure about 400 m in length and carry loads equal to the cargo-carrying capacity of sixteen to seventeen pre-World War II freighter ships.
Architecture
There are several key points in the design of modern container ships. The hull, similar to that of bulk carriers and general cargo ships, is built around a strong keel. Into this frame is set one or more below-deck cargo holds, numerous tanks, and the engine room. The holds are topped by hatch covers, onto which more containers can be stacked. Many container ships have cargo cranes installed on them, and some have specialized systems for securing containers on board.
The hull of a modern cargo ship is a complex arrangement of steel plates and strengthening beams. Resembling ribs, and fastened at right angles to the keel, are the ship's frames. The ship's main deck, the metal platework that covers the top of the hull framework, is supported by beams that are attached to the tops of the frames and run the full breadth of the ship. The beams not only support the deck, but along with the deck, frames, and transverse bulkheads, strengthen and reinforce the shell. Another feature of recent hulls is a set of double-bottom tanks, which provide a second watertight shell that runs most of the length of a ship. The double-bottoms generally hold liquids such as fuel oil, ballast water or fresh water.
A ship's engine room houses its main engines and auxiliary machinery such as the fresh water and sewage systems, electrical generators, fire pumps, and air conditioners. In most new ships, the engine room is located in the aft portion.
Size categories
Container ships are divided into seven major size categories: small feeder, feeder, feedermax, Panamax, Post-Panamax, Neopanamax and ultra-large. As of December 2012, there were 161 container ships in the VLCS class (Very Large Container Ships, more than 10,000 TEU), and 51 ports in the world could accommodate them.
The size of a Panamax vessel is limited by the original Panama Canal's lock chambers, which can accommodate ships with a beam of up to 32.31 m, a length overall of up to 294.13 m, and a draft of up to 12.04 m. The Post-Panamax category has historically been used to describe ships with a moulded breadth over 32.31 m; however, the Panama Canal expansion project has caused some changes in terminology. The Neopanamax category is based on the maximum vessel size able to transit a new third set of locks, which opened in June 2016. The third set of locks were built to accommodate a container ship with a length overall of 366 m, a maximum beam (width) of 49 m, and a tropical fresh-water draft of 15.2 m. Such a vessel, called Neopanamax class, is wide enough to carry 19 columns of containers, can have a total capacity of approximately 12,000 TEU and is comparable in size to a capesize bulk carrier or a Suezmax tanker.
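These limits lend themselves to a simple fit check. A sketch using the Panamax figures above and the published dimensions of the third set of locks; real transit rules add further restrictions (air draft, seasonal draft limits) not modeled here:

```python
def fits_panamax(beam_m: float, loa_m: float, draft_m: float) -> bool:
    """True if the vessel fits the original Panama Canal lock chambers."""
    return beam_m <= 32.31 and loa_m <= 294.13 and draft_m <= 12.04

def fits_neopanamax(beam_m: float, loa_m: float, draft_m: float) -> bool:
    """True if the vessel fits the third set of locks (opened 2016)."""
    return beam_m <= 49.0 and loa_m <= 366.0 and draft_m <= 15.2

# Example: a 48.2 m beam, 366 m long ship drawing 15.2 m is Neopanamax,
# but far too large for the original locks.
print(fits_panamax(48.2, 366, 15.2), fits_neopanamax(48.2, 366, 15.2))
```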
Container ships under 3,000 TEU are generally called feeder ships or feeders. They are small ships that typically operate between smaller container ports. Some feeders collect their cargo from small ports, drop it off at large ports for transshipment on larger ships, and distribute containers from the large port to smaller regional ports. This size of vessel is the most likely to carry cargo cranes on board.
Cargo cranes
A major characteristic of a container ship is whether it has cranes installed for handling its cargo. Those that have cargo cranes are called geared and those that do not are called ungeared or gearless. The earliest purpose-built container ships in the 1970s were all gearless. Since then, the percentage of geared newbuilds has fluctuated widely, but has been decreasing overall, with only 7.5% of the container ship capacity in 2009 being equipped with cranes.
While geared container ships are more flexible in that they can visit ports that are not equipped with pierside container cranes, they suffer from several drawbacks. To begin with, geared ships will cost more to purchase than a gearless ship. Geared ships also incur greater recurring expenses, such as maintenance and fuel costs. The United Nations Council on Trade and Development characterizes geared ships as a "niche market only appropriate for those ports where low cargo volumes do not justify investment in port cranes or where the public sector does not have the financial resources for such investment".
Instead of rotary cranes, some geared ships have gantry cranes installed. These cranes, specialized for container work, can roll forward and aft on rails. In addition to the extra capital expense and maintenance costs, these cranes generally load and discharge containers much more slowly than their shoreside counterparts.
The introduction and improvement of shoreside container cranes have been a key to the success of the container ship. The first crane that was specifically designed for container work was built at California's Port of Alameda in 1959. By the 1980s, shoreside gantry cranes were capable of moving containers on a 3-minute cycle, or up to 400 tons per hour. In March 2010, at Port Klang in Malaysia, a new world record was set when 734 container moves were made in a single hour. The record was achieved using 9 cranes to simultaneously load and unload a ship with a capacity of 9,600 TEU.
Vessels in the 1,500–2,499 TEU range are the most likely size class to have cranes, with more than 60% of this category being geared ships. Slightly less than a third of the very smallest ships (from 100–499 TEU) are geared, and almost no ships with a capacity of over 4,000 TEU are geared.
Cargo holds
Efficiency has always been key in the design of container ships. While containers may be carried on conventional break-bulk ships, cargo holds for dedicated container ships are specially constructed to speed loading and unloading, and to efficiently keep containers secure while at sea. A key aspect of container ship specialization is the design of the hatches, the openings from the main deck to the cargo holds. The hatch openings stretch the entire breadth of the cargo holds, and are surrounded by a raised steel structure known as the hatch coaming. On top of the hatch coamings are the hatch covers. Until the 1950s, hatches were typically secured with wooden boards and tarpaulins held down with battens. Today, some hatch covers can be solid metal plates that are lifted on and off the ship by cranes, while others are articulated mechanisms that are opened and closed using powerful hydraulic rams.
Another key component of dedicated container-ship design is the use of cell guides. Cell guides are strong vertical structures constructed of metal installed into a ship's cargo holds. These structures guide containers into well-defined rows during loading and provide some support for containers against the ship's rolling at sea. So fundamental to container ship design are cell guides that organizations such as the United Nations Conference on Trade and Development use their presence to distinguish dedicated container ships from general break-bulk cargo ships.
A system of three dimensions is used in cargo plans to describe the position of a container aboard the ship. The first coordinate is the bay, which starts at the front of the ship and increases aft. The second coordinate is the row. Rows on the starboard side are given odd numbers and those on the port side are given even numbers. The rows nearest the centerline are given low numbers, and the numbers increase for slots further from the centerline. The third coordinate is the tier, with the first tier at the bottom of the cargo holds, the second tier on top of that, and so forth.
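As an illustrative sketch of this coordinate scheme (the helper name and numbering function are hypothetical, not an industry-standard API):

```python
# Hypothetical encoding of a container slot as (bay, row, tier),
# following the conventions described above.
def row_number(position_from_centerline, side):
    """Starboard rows get odd numbers, port rows even; numbers grow outward.

    position_from_centerline: 1 for the row nearest the centerline,
    2 for the next one out, and so on.
    """
    if side == "starboard":
        return 2 * position_from_centerline - 1  # 1, 3, 5, ...
    elif side == "port":
        return 2 * position_from_centerline      # 2, 4, 6, ...
    raise ValueError("side must be 'starboard' or 'port'")

# Bay 12 (counting from the bow), 3rd row out on the port side,
# 2nd tier up from the bottom of the hold:
slot = (12, row_number(3, "port"), 2)
print(slot)  # (12, 6, 2)
```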
Container ships typically take 20-foot and 40-foot containers. Some ships can take 45-footers above deck. A few ships (APL since 2007, Carrier53 since 2022) can carry 53-foot containers. 40-foot containers are the primary container size, making up about 90% of all container shipping; since container shipping moves 90% of the world's freight, over 80% of the world's freight (0.90 × 0.90 ≈ 0.81) moves via 40-foot containers.
Lashing systems
Numerous systems are used to secure containers aboard ships, depending on factors such as the type of ship, the type of container, and the location of the container. Stowage inside the holds of fully cellular (FC) ships is simplest, typically using simple metal forms called container guides, locating cones, and anti-rack spacers to lock the containers together. Above decks, without the extra support of the cell guides, more complicated equipment is used. Three types of systems are currently in wide use: lashing systems, locking systems, and buttress systems. Lashing systems secure containers to the ship using devices made from wire rope, rigid rods, or chains, together with devices to tension the lashings, such as turnbuckles. The effectiveness of lashings is increased by securing containers to each other, either with simple metal forms (such as stacking cones) or with more complicated devices such as twist-lock stackers. A typical twist-lock is inserted into the casting hole of one container and rotated to hold it in place, then another container is lowered on top of it. The two containers are locked together by twisting the device's handle. A typical twist-lock is constructed of forged steel and ductile iron and has a shear strength of 48 tonnes.
The buttress system, used on some large container ships, uses a system of large towers attached to the ship at both ends of each cargo hold. As the ship is loaded, a rigid, removable stacking frame is added, structurally securing each tier of containers together.
Bridge
Container ships have typically had a single bridge and accommodation unit towards the rear, but to reconcile demand for larger container capacity with SOLAS visibility requirements, several new designs have been developed. Some large container ships are being developed with the bridge further forward, separate from the exhaust stack. Some smaller container ships working in European ports and rivers have liftable wheelhouses, which can be lowered to pass under low bridges.
Fleet characteristics
Container ships made up 13.3% of the world's fleet in terms of deadweight tonnage, and the world's total container ship deadweight tonnage grew substantially between 1980 and 2010. The combined deadweight tonnage of container ships and general cargo ships, which also often carry containers, represents 21.8% of the world's fleet.
The average age of container ships worldwide was 10.6 years, making them the youngest general vessel type, followed by bulk carriers at 16.6 years, oil tankers at 17 years, general cargo ships at 24.6 years, and others at 25.3 years.
Most of the world's carrying capacity in fully cellular container ships is in the liner service, where ships trade on scheduled routes. As of January 2010, the top 20 liner companies controlled 67.5% of the world's fully cellular container capacity, with 2,673 vessels of an average capacity of 3,774 TEU. The remaining 6,862 fully cellular ships have an average capacity of 709 TEU each.
The vast majority of the capacity of fully cellular container ships used in the liner trade is owned by German shipowners, with approximately 75% owned by Hamburg brokers. It is common practice for the large container lines to supplement their own ships with chartered-in vessels; for example, in 2009, 48.9% of the tonnage of the top 20 liner companies was chartered-in in this manner.
Flag states
International law requires that every merchant ship be registered in a country, called its flag state. A ship's flag state exercises regulatory control over the vessel and is required to inspect it regularly, certify the ship's equipment and crew, and issue safety and pollution prevention documents. The United States Bureau of Transportation Statistics counted 2,837 container ships worldwide. Panama was the world's largest flag state for container ships, with 541 of the vessels in its registry. Six other flag states had more than 100 registered container ships: Liberia (415), Germany (248), Singapore (177), Cyprus (139), the Marshall Islands (118), and the United Kingdom (104). The Panamanian, Liberian, and Marshallese flags are open registries and considered by the International Transport Workers' Federation to be flags of convenience. By way of comparison, traditional maritime nations such as the United States and Japan had only 75 and 11 registered container ships, respectively.
Vessel purchases
In recent years, oversupply of container ship capacity has caused prices for new and used ships to fall. From 2008 to 2009, new container ship prices dropped by 19–33%, while prices for 10-year-old container ships dropped by 47–69%. In March 2010, the average price for a geared 500-TEU container ship was $10 million, while gearless ships of 6,500 and 12,000 TEU averaged prices of $74 million and $105 million respectively. At the same time, secondhand prices for 10-year-old geared container ships of 500-, 2,500-, and 3,500-TEU capacity averaged prices of $4 million, $15 million, and $18 million respectively.
In 2009, 11,669,000 gross tons of newly built container ships were delivered. Over 85% of this new capacity was built in the Republic of Korea, China, and Japan, with Korea accounting for over 57% of the world's total alone. New container ships accounted for 15% of the total new tonnage that year, behind bulk carriers at 28.9% and oil tankers at 22.6%.
Scrapping
Most ships are removed from the fleet through a process known as scrapping. Scrapping is rare for ships under 18 years old and common for those over 40 years old. Ship-owners and buyers negotiate scrap prices based on factors such as the ship's empty weight (called light ton displacement or LTD) and prices in the scrap metal market. Scrapping rates are volatile: the price per light ton displacement swung from a high of $650 per LTD in mid-2008 to $200 per LTD in early 2009, before building back to $400 per LTD in March 2010. Over 96% of the world's scrapping activity takes place in China, India, Bangladesh, and Pakistan.
The global economic downturn of 2008–2009 resulted in more ships than usual being sold for scrap. In 2009, 364,300 TEU worth of container ship capacity was scrapped, up from 99,900 TEU in 2008. Container ships accounted for 22.6% of the total gross tonnage of ships scrapped that year. Despite the surge, the capacity removed from the fleet only accounted for 3% of the world's container ship capacity. The average age of container ships scrapped in 2009 was 27.0 years.
Largest ships
Economies of scale have dictated an upward trend in the size of container ships in order to reduce expenses. However, there are certain limitations to the size of container ships. Primarily, these are the availability of sufficiently large main engines and the availability of a sufficient number of ports and terminals prepared and equipped to handle ultra-large container ships. Furthermore, the permissible maximum ship dimensions in some of the world's main waterways could present an upper limit in terms of vessel growth. This primarily concerns the Suez Canal and the Singapore Strait.
In 2008 the South Korean shipbuilder STX announced plans to construct a container ship that, if built, would become the largest seagoing vessel in the world.
Since even very large container ships are vessels with relatively low draft compared to large tankers and bulk carriers, there is still considerable room for vessel growth. Compared to today's largest container ships, Maersk Line's Emma Mærsk-type series, a next-generation ultra-large container ship would be only moderately larger in terms of exterior dimensions. According to a 2011 estimate, such a ship would have an estimated deadweight of circa 220,000 tons. While such a vessel might be near the upper limit for a Suez Canal passage, the so-called Malaccamax concept (for the Straits of Malacca) does not apply to container ships, since the Malacca and Singapore Straits' draft limit is still above that of any conceivable container ship design. In 2011, Maersk announced plans to build a new "Triple E" family of container ships with a capacity of 18,000 TEU, with an emphasis on lower fuel consumption.
In the present market situation, main engines will not be as much of a limiting factor for vessel growth either. The steadily rising expense of fuel oil in the early 2010s prompted most container lines to adopt a slower, more economical voyage speed of about 21 knots, compared to earlier top speeds of 25 or more knots. Consequently, newly built container ships can be fitted with smaller main engines, and engine types fitted to today's largest ships are sufficiently powerful to propel even larger future vessels. Maersk Line, the world's largest container shipping line, nevertheless opted for twin engines (two smaller engines driving two separate propellers) when ordering a series of ten 18,000 TEU vessels from Daewoo Shipbuilding in February 2011. The ships were delivered between 2013 and 2014. In 2016, some experts believed that the largest container ships of the day were at the optimum size and could not economically be larger, as port facilities would be too expensive, port handling too time-consuming, the number of suitable ports too low, and insurance costs too high.
In March 2017 the first ship with an official capacity over 20,000 TEUs was christened at Samsung Heavy Industries. MOL Triumph has a capacity of 20,150 TEUs. Samsung Heavy Industries was expected to deliver several ships of over 20,000 TEUs in 2017, and has orders for at least ten vessels in that size range for OOCL and MOL.
The world's largest container ship, MSC Irina, was delivered on March 9, 2023 by builder Yangzi Xinfu Shipbuilding to the Mediterranean Shipping Company (MSC), with a capacity of 24,346 TEU. Measuring 399.99 metres in length and 61.3 metres in beam, the ship is one of four ordered from the builder in 2020, and exceeded MSC's 24,116 TEU MSC Tessa, which had been delivered that same day by the China State Shipbuilding Corporation (CSSC). In April, MSC Irina's sister ship MSC Loreto, with an equal capacity of 24,346 TEU, was received by MSC.
On June 2, 2023, Ocean Network Express took delivery of the ONE Innovation, which has a capacity of 24,136 TEU. ONE Innovation is one of six new Megamax vessels ordered by Ocean Network Express in December 2020, to be built by a consortium of Imabari Shipbuilding and Japan Marine United.
Freight market
The act of hiring a ship to carry cargo is called chartering. Outside special bulk cargo markets, ships are hired by three types of charter agreements: the voyage charter, the time charter, and the bareboat charter. In a voyage charter, the charterer rents the vessel from the loading port to the discharge port. In a time charter, the vessel is hired for a set period of time, to perform voyages as the charterer directs. In a bareboat charter, the charterer acts as the ship's operator and manager, taking on responsibilities such as providing the crew and maintaining the vessel. The completed chartering contract is known as a charter party.
The United Nations Conference on Trade and Development (UNCTAD) tracks two aspects of container shipping prices in its 2010 Review of Maritime Transport. The first is a chartering price, specifically the price to time-charter a one-TEU slot for 14 tonnes of cargo on a container ship. The other is the freight rate, the comprehensive cost to deliver one TEU worth of cargo on a given route. As a result of the late-2000s recession, both indicators showed sharp drops during 2008–2009 and have shown signs of stabilization since 2010.
UNCTAD uses the Hamburg Shipbrokers' Association (formally the Vereinigung Hamburger Schiffsmakler und Schiffsagenten e. V., or VHSS for short) as its main industry source for container ship freight prices. The VHSS maintains several indices of container ship charter prices. The oldest, which dates back to 1998, is called the Hamburg Index. This index considers time-charters on fully cellular container ships controlled by Hamburg brokers. It is limited to charters of 3 months or more, and is presented as the average daily cost in U.S. dollars for a one-TEU slot with a weight of 14 tonnes. The Hamburg Index data is divided into ten categories based primarily on vessel carrying capacity. Two additional categories exist for small vessels of under 500 TEU that carry their own cargo cranes. In 2007, the VHSS started another index, the New ConTex, which tracks similar data obtained from an international group of shipbrokers.
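As a toy illustration of how such a per-slot figure translates into a vessel-level daily cost (the index value and vessel size below are made up for the example):

```python
# Hypothetical: convert a Hamburg-Index-style rate (USD per day for a
# 14-tonne one-TEU slot) into an approximate daily charter cost for a ship.
def daily_charter_cost(rate_usd_per_teu_day, vessel_capacity_teu):
    """Approximate daily cost of time-chartering the whole vessel."""
    return rate_usd_per_teu_day * vessel_capacity_teu

# Example: a 2,500 TEU ship at an assumed index value of $10.00 per TEU-day.
print(daily_charter_cost(10.00, 2500))  # 25000.0 (USD per day)
```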
The Hamburg Index shows some clear trends in recent chartering markets. First, rates generally increased from 2000 to 2005. From 2005 to 2008, rates slowly decreased, and in mid-2008 began a "dramatic decline" of approximately 75%, which lasted until rates stabilized in April 2009. Rates ranged from $2.70 to $35.40 per TEU-day in this period, with prices generally lower on larger ships. The most resilient vessels in this period were those from 200 to 300 TEU, a fact that the United Nations Conference on Trade and Development attributes to a lack of competition in this sector. Overall, in 2010, these rates rebounded somewhat, but remained at approximately half of their 2008 values. As of 2011, the index shows signs of recovery for container shipping, which, combined with increases in global capacity, indicates a positive outlook for the sector in the near future.
UNCTAD also tracks container freight rates. Freight rates are expressed as the total price in U.S. dollars for a shipper to transport one TEU worth of cargo along a given route. Data is given for the three main container liner routes: U.S.-Asia, U.S.-Europe, and Europe-Asia. Prices are typically different between the two legs of a voyage, for example the Asia-U.S. rates have been significantly higher than the return U.S.-Asia rates in recent years. Generally, from the fourth quarter of 2008 through the third quarter of 2009, both the volume of container cargo and freight rates have dropped sharply. In 2009, the freight rates on the U.S.–Europe route were sturdiest, while the Asia-U.S. route fell the most.
Liner companies responded to their overcapacity in several ways. For example, in early 2009, some container lines dropped their freight rates to zero on the Asia-Europe route, charging shippers only a surcharge to cover operating costs. They decreased their overcapacity by lowering the ships' speed (a strategy called "slow steaming") and by laying up ships. Slow steaming increased the length of the Europe-Asia routes to a record high of over 40 days. Another strategy used by some companies was to manipulate the market by publishing notices of rate increases in the press, and when "a notice had been issued by one carrier, other carriers followed suit".
The Trans-Siberian Railroad (TSR) has recently become a more viable alternative to container ships on the Asia-Europe route. This railroad can typically deliver containers in 1/3 to 1/2 of the time of a sea voyage, and in late 2009 announced a 20% reduction in its container shipping rates. With its 2009 rate schedule, the TSR will transport a forty-foot container to Poland from Yokohama for $2,820, or from Pusan for $2,154.
Shipping industry alliances
In an effort to control costs and maximize capacity utilization on ever-larger ships, vessel sharing agreements, co-operative agreements, and slot-exchanges have become a growing feature of the maritime container shipping industry. As of March 2015, 16 of the world's largest container shipping lines had consolidated their routes and services accounting for 95 percent of container cargo volumes moving in the dominant east-west trade routes. Carriers remain operationally independent, as they are forbidden by antitrust regulators in multiple jurisdictions from colluding on freight rates or capacity. Similarities can be drawn with airline alliances.
In July 2016 the European Commission reported that it had raised concerns with 14 container shipping carriers regarding their practice of announcing General Rate Increases (GRIs) in a coordinated manner, which potentially conflicted with EU and EEA rules on concerted practices that could distort competition (Article 101 of the Treaty on the Functioning of the European Union). The shipping companies announced a series of commitments aiming to address the Commission's concerns, which the Commission for its part accepted as "legally binding" for the period from 2016 to 2019. General Rate Increases continue to be published in the industry either annually or semi-annually.
Container ports
Container traffic through a port is often tracked in terms of twenty-foot equivalent units (TEU) of throughput. In 2019, the Port of Shanghai was the world's busiest container port, with 43,303,000 TEU handled.
That year, seven of the busiest ten container ports were in the People's Republic of China, with Shanghai in 1st place, Ningbo 3rd, Shenzhen 4th, Guangzhou 5th, Qingdao 7th, Hong Kong 8th and Tianjin 9th.
Rounding out the top ten ports were Singapore at 2nd, Busan in South Korea at 6th and Rotterdam in the Netherlands in the 10th position.
In total, the busiest twenty container ports handled 220,905,805 TEU in 2009, almost half of the world's total estimated container traffic that year of 465,597,537 TEU.
Losses and safety problems
It has been estimated that container ships lose between 2,000 and 10,000 containers at sea each year, costing $370 million. A survey covering the six years 2008 through 2013 estimated average losses of individual containers overboard at 546 per year, and average total losses, including catastrophic events such as vessel sinkings or groundings, at 1,679 per year. A later survey conducted by the World Shipping Council (WSC) covering 2008–2019 found an average of 1,382 shipping containers lost at sea per year. However, in the three-year period from 2017–2019, that number was nearly halved, down to an average of 779 containers lost annually. Most go overboard on the open sea during storms, but there are some examples of whole ships being lost with their cargo. One major shipping accident occurred in 2013, when the MOL Comfort sank with 4,293 containers on board in the Indian Ocean. When containers are dropped, they immediately become an environmental threat, termed "marine debris". Once in the ocean, they fill with water and sink if the contents cannot hold air; rough waters can smash a container, sinking it quickly.
As container ships get larger and stacking becomes higher, the threat of containers toppling into the sea during a storm increases. This results from a phenomenon called "parametric rolling", in which a ship can roll 30–40 degrees in rough seas, creating a powerful torque on a 10-high stack of containers that can easily snap the lashings and locks of the stack, resulting in losses into the sea.
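A back-of-the-envelope sketch of why such roll angles threaten the fittings described earlier (all numbers are illustrative assumptions, not design values):

```python
import math

# Illustrative: static transverse load at the bottom of a container stack
# when the ship heels, compared with the 48-tonne twist-lock shear strength
# quoted above. Real lashing analysis also includes dynamic accelerations,
# wind, and lashing geometry.
G = 9.81                      # m/s^2
stack_height = 10             # containers in the stack
mass_per_container = 20_000   # kg, assumed average loaded weight
roll_angle_deg = 35           # within the 30-40 degree range cited above

stack_mass = stack_height * mass_per_container
transverse_force_n = stack_mass * G * math.sin(math.radians(roll_angle_deg))
transverse_force_tonnes = transverse_force_n / (G * 1000)

print(f"~{transverse_force_tonnes:.0f} t transverse load vs 48 t twist-lock shear")
# ~115 t transverse load vs 48 t twist-lock shear
```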
Rocket engine
A rocket engine is a reaction engine, producing thrust in accordance with Newton's third law by ejecting reaction mass rearward, usually a high-speed jet of high-temperature gas produced by the combustion of rocket propellants stored inside the rocket. However, non-combusting forms such as cold gas thrusters and nuclear thermal rockets also exist. Rocket vehicles carry their own oxidiser, unlike most combustion engines, so rocket engines can be used in a vacuum, and they can achieve great speed, beyond escape velocity. Vehicles commonly propelled by rocket engines include missiles, artillery shells, ballistic missiles and rockets of any size, from tiny fireworks to man-sized weapons to huge spaceships.
Compared to other types of jet engine, rocket engines are the lightest and have the highest thrust, but are the least propellant-efficient (they have the lowest specific impulse). The ideal exhaust is hydrogen, the lightest of all elements, but chemical rockets produce a mix of heavier species, reducing the exhaust velocity.
Terminology
Here, "rocket" is used as an abbreviation for "rocket engine".
Thermal rockets use an inert propellant, heated by electricity (electrothermal propulsion) or a nuclear reactor (nuclear thermal rocket).
Chemical rockets are powered by exothermic reduction-oxidation chemical reactions of the propellant:
Solid-fuel rockets (or solid-propellant rockets or motors) are chemical rockets which use propellant in a solid state.
Liquid-propellant rockets use one or more propellants in a liquid state fed from tanks.
Hybrid rockets use a solid propellant in the combustion chamber, to which a second liquid or gas oxidiser or propellant is added to permit combustion.
Monopropellant rockets use a single propellant decomposed by a catalyst. The most common monopropellants are hydrazine and hydrogen peroxide.
Principle of operation
Rocket engines produce thrust by the expulsion of an exhaust fluid that has been accelerated to high speed through a propelling nozzle. The fluid is usually a gas created by high-pressure combustion of solid or liquid propellants, consisting of fuel and oxidiser components, within a combustion chamber. As the gases expand through the nozzle, they are accelerated to very high (supersonic) speed, and the reaction to this pushes the engine in the opposite direction. Combustion is most frequently used for practical rockets, as the laws of thermodynamics (specifically Carnot's theorem) dictate that high temperatures and pressures are desirable for the best thermal efficiency. Nuclear thermal rockets are capable of higher efficiencies, but currently have environmental problems which preclude their routine use in the Earth's atmosphere and cislunar space.
For model rocketry, an available alternative to combustion is the water rocket pressurized by compressed air, carbon dioxide, nitrogen, or any other readily available, inert gas.
Propellant
Rocket propellant is mass that is stored, usually in some form of tank, or within the combustion chamber itself, prior to being ejected from a rocket engine in the form of a fluid jet to produce thrust.
Chemical rocket propellants are the most commonly used. These undergo exothermic chemical reactions producing a hot gas jet for propulsion. Alternatively, a chemically inert reaction mass can be heated by a high-energy power source through a heat exchanger in lieu of a combustion chamber.
Solid rocket propellants are prepared in a mixture of fuel and oxidising components called grain, and the propellant storage casing effectively becomes the combustion chamber.
Injection
Liquid-fuelled rockets force separate fuel and oxidiser components into the combustion chamber, where they mix and burn. Hybrid rocket engines use a combination of solid and liquid or gaseous propellants. Both liquid and hybrid rockets use injectors to introduce the propellant into the chamber. These are often an array of simple jets – holes through which the propellant escapes under pressure; but sometimes may be more complex spray nozzles. When two or more propellants are injected, the jets usually deliberately cause the propellants to collide as this breaks up the flow into smaller droplets that burn more easily.
Combustion chamber
For chemical rockets the combustion chamber is typically cylindrical, and flame holders, used to hold a part of the combustion in a slower-flowing portion of the combustion chamber, are not needed. The dimensions of the cylinder are such that the propellant is able to combust thoroughly; different rocket propellants require different combustion chamber sizes for this to occur.
This leads to a number called $L^*$, the characteristic length:

$$L^* = \frac{V_c}{A_t}$$

where:
$V_c$ is the volume of the chamber,
$A_t$ is the area of the throat of the nozzle.

$L^*$ is typically in the range of 0.64–1.5 m (25–60 in).
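A minimal numeric sketch of this relation (the chamber and throat dimensions are made-up example values, not taken from any particular engine):

```python
import math

# Characteristic length L* = chamber volume / throat area.
chamber_volume_m3 = 0.04          # V_c, assumed
throat_diameter_m = 0.20          # assumed
throat_area_m2 = math.pi * (throat_diameter_m / 2) ** 2  # A_t

l_star = chamber_volume_m3 / throat_area_m2
print(f"L* = {l_star:.2f} m")     # L* = 1.27 m
```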
The temperatures and pressures typically reached in a rocket combustion chamber in order to achieve practical thermal efficiency are extreme compared to a non-afterburning airbreathing jet engine. No atmospheric nitrogen is present to dilute and cool the combustion, so the propellant mixture can reach true stoichiometric ratios. This, in combination with the high pressures, means that the rate of heat conduction through the walls is very high.
In order for fuel and oxidiser to flow into the chamber, the pressure of the propellants entering the combustion chamber must exceed the pressure inside the combustion chamber itself. This may be accomplished by a variety of design approaches, including turbopumps or, in simpler engines, via sufficient tank pressure to advance fluid flow. Tank pressure may be maintained by several means, including a high-pressure helium pressurization system common to many large rocket engines or, in some newer rocket systems, by a bleed-off of high-pressure gas from the engine cycle to autogenously pressurize the propellant tanks. For example, the self-pressurization gas system of the SpaceX Starship is a critical part of SpaceX's strategy to reduce launch vehicle fluids from five in their legacy Falcon 9 vehicle family to just two in Starship, eliminating not only the helium tank pressurant but also all hypergolic propellants and the nitrogen for cold-gas reaction-control thrusters.
Nozzle
The hot gas produced in the combustion chamber is permitted to escape through an opening (the "throat"), and then through a diverging expansion section. When sufficient pressure is provided to the nozzle (about 2.5–3 times ambient pressure), the nozzle chokes and a supersonic jet is formed, dramatically accelerating the gas, converting most of the thermal energy into kinetic energy. Exhaust speeds vary, depending on the expansion ratio the nozzle is designed for, but exhaust speeds as high as ten times the speed of sound in air at sea level are not uncommon. About half of the rocket engine's thrust comes from the unbalanced pressures inside the combustion chamber, and the rest comes from the pressures acting against the inside of the nozzle (see diagram). As the gas expands (adiabatically) the pressure against the nozzle's walls forces the rocket engine in one direction while accelerating the gas in the other.
The most commonly used nozzle is the de Laval nozzle, a fixed geometry nozzle with a high expansion-ratio. The large bell- or cone-shaped nozzle extension beyond the throat gives the rocket engine its characteristic shape.
The exit static pressure of the exhaust jet depends on the chamber pressure and the ratio of exit to throat area of the nozzle. As exit pressure varies from the ambient (atmospheric) pressure, a choked nozzle is said to be
under-expanded (exit pressure greater than ambient),
perfectly expanded (exit pressure equals ambient),
over-expanded (exit pressure less than ambient; shock diamonds form outside the nozzle), or
grossly over-expanded (a shock wave forms inside the nozzle extension).
In practice, perfect expansion is only achievable with a variable–exit-area nozzle (since ambient pressure decreases as altitude increases), and is not possible above a certain altitude as ambient pressure approaches zero. If the nozzle is not perfectly expanded, then loss of efficiency occurs. Grossly over-expanded nozzles lose less efficiency, but can cause mechanical problems with the nozzle. Fixed-area nozzles become progressively more under-expanded as they gain altitude. Almost all de Laval nozzles will be momentarily grossly over-expanded during startup in an atmosphere.
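A small sketch classifying a nozzle's expansion state from the exit and ambient pressures, following the definitions listed above (the "grossly over-expanded" cutoff here is an assumed illustrative value, since the true onset of internal flow separation depends on the nozzle):

```python
def expansion_state(p_exit, p_ambient, gross_ratio=0.4):
    """Classify nozzle expansion per the definitions above.

    gross_ratio is an assumed illustrative threshold: below roughly
    p_exit < gross_ratio * p_ambient, flow separation inside the
    nozzle extension becomes likely.
    """
    if p_ambient == 0:
        return "under-expanded (vacuum: exit pressure always exceeds ambient)"
    if p_exit > p_ambient:
        return "under-expanded"
    if p_exit == p_ambient:
        return "perfectly expanded"
    if p_exit < gross_ratio * p_ambient:
        return "grossly over-expanded"
    return "over-expanded"

# A fixed nozzle designed for altitude, fired at sea level (pressures in kPa):
print(expansion_state(p_exit=40.0, p_ambient=101.325))  # grossly over-expanded
```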
Nozzle efficiency is affected by operation in the atmosphere because atmospheric pressure changes with altitude; but due to the supersonic speeds of the gas exiting from a rocket engine, the pressure of the jet may be either below or above ambient, and equilibrium between the two is not reached at all altitudes (see diagram).
Back pressure and optimal expansion
For optimal performance, the pressure of the gas at the end of the nozzle should just equal the ambient pressure: if the exhaust's pressure is lower than the ambient pressure, then the vehicle will be slowed by the difference in pressure between the top of the engine and the exit; on the other hand, if the exhaust's pressure is higher, then exhaust pressure that could have been converted into thrust is not converted, and energy is wasted.
To maintain this ideal of equality between the exhaust's exit pressure and the ambient pressure, the diameter of the nozzle would need to increase with altitude, giving the pressure a longer nozzle to act on (and reducing the exit pressure and temperature). This increase is difficult to arrange in a lightweight fashion, although is routinely done with other forms of jet engines. In rocketry a lightweight compromise nozzle is generally used and some reduction in atmospheric performance occurs when used at other than the 'design altitude' or when throttled. To improve on this, various exotic nozzle designs such as the plug nozzle, stepped nozzles, the expanding nozzle and the aerospike have been proposed, each providing some way to adapt to changing ambient air pressure and each allowing the gas to expand further against the nozzle, giving extra thrust at higher altitudes.
When exhausting into a sufficiently low ambient pressure (vacuum) several issues arise. One is the sheer weight of the nozzle—beyond a certain point, for a particular vehicle, the extra weight of the nozzle outweighs any performance gained. Secondly, as the exhaust gases adiabatically expand within the nozzle they cool, and eventually some of the chemicals can freeze, producing 'snow' within the jet. This causes instabilities in the jet and must be avoided.
On a de Laval nozzle, exhaust gas flow detachment will occur in a grossly over-expanded nozzle. As the detachment point will not be uniform around the axis of the engine, a side force may be imparted to the engine. This side force may change over time and result in control problems with the launch vehicle.
Advanced altitude-compensating designs, such as the aerospike or plug nozzle, attempt to minimize performance losses by adjusting to varying expansion ratio caused by changing altitude.
Propellant efficiency
For a rocket engine to be propellant efficient, it is important that the maximum possible pressures be created on the walls of the chamber and nozzle by a given amount of propellant, as this is the source of the thrust. This can be achieved by all of:
heating the propellant to as high a temperature as possible (using a high energy fuel, containing hydrogen and carbon and sometimes metals such as aluminium, or even using nuclear energy)
using a low specific density gas (as hydrogen rich as possible)
using propellants which are, or decompose to, simple molecules with few degrees of freedom to maximise translational velocity
Since all of these things minimise the mass of the propellant used, and since pressure is proportional to the mass of propellant present to be accelerated as it pushes on the engine, and since from Newton's third law the pressure that acts on the engine also reciprocally acts on the propellant, it turns out that for any given engine, the speed that the propellant leaves the chamber is unaffected by the chamber pressure (although the thrust is proportional). However, speed is significantly affected by all three of the above factors and the exhaust speed is an excellent measure of the engine propellant efficiency. This is termed exhaust velocity, and after allowance is made for factors that can reduce it, the effective exhaust velocity is one of the most important parameters of a rocket engine (although weight, cost, ease of manufacture etc. are usually also very important).
For aerodynamic reasons the flow goes sonic ("chokes") at the narrowest part of the nozzle, the 'throat'. Since the speed of sound in gases increases with the square root of temperature, the use of hot exhaust gas greatly improves performance. By comparison, at room temperature the speed of sound in air is about 340 m/s while the speed of sound in the hot gas of a rocket engine can be over 1700 m/s; much of this performance is due to the higher temperature, but additionally rocket propellants are chosen to be of low molecular mass, and this also gives a higher velocity compared to air.
Expansion in the rocket nozzle then further multiplies the speed, typically between 1.5 and 2 times, giving a highly collimated hypersonic exhaust jet. The speed increase of a rocket nozzle is mostly determined by its area expansion ratio—the ratio of the area of the exit to the area of the throat, but detailed properties of the gas are also important. Larger ratio nozzles are more massive but are able to extract more heat from the combustion gases, increasing the exhaust velocity.
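The square-root temperature dependence and the effect of molecular mass can be made concrete with the ideal-gas speed of sound, $a = \sqrt{\gamma R T / M}$ (the combustion-gas values below are assumed for illustration):

```python
import math

R_UNIVERSAL = 8.314  # J/(mol*K)

def speed_of_sound(gamma, molar_mass_kg_per_mol, temperature_k):
    """a = sqrt(gamma * R * T / M) for an ideal gas."""
    return math.sqrt(gamma * R_UNIVERSAL * temperature_k / molar_mass_kg_per_mol)

# Room-temperature air: gamma ~ 1.4, M ~ 0.029 kg/mol
print(f"air:        {speed_of_sound(1.4, 0.029, 293):.0f} m/s")   # ~343 m/s

# Hot, low-molecular-mass rocket combustion gas (assumed values):
# gamma ~ 1.2, M ~ 0.011 kg/mol, T ~ 3200 K
print(f"rocket gas: {speed_of_sound(1.2, 0.011, 3200):.0f} m/s")  # ~1704 m/s
```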
Thrust vectoring
Vehicles typically require the overall thrust to change direction over the length of the burn. A number of different ways to achieve this have been flown:
The entire engine is mounted on a hinge or gimbal and any propellant feeds reach the engine via low pressure flexible pipes or rotary couplings.
Just the combustion chamber and nozzle is gimballed, the pumps are fixed, and high pressure feeds attach to the engine.
Multiple engines (often canted at slight angles) are deployed but throttled to give the overall vector that is required, giving only a very small penalty.
High-temperature vanes protrude into the exhaust and can be tilted to deflect the jet.
Overall performance
Rocket technology can combine very high thrust (meganewtons), very high exhaust speeds (around 10 times the speed of sound in air at sea level) and very high thrust/weight ratios (>100) simultaneously as well as being able to operate outside the atmosphere, and while permitting the use of low pressure and hence lightweight tanks and structure.
Rockets can be further optimised to even more extreme performance along one or more of these axes at the expense of the others.
Specific impulse
The most important metric for the efficiency of a rocket engine is impulse per unit of propellant, this is called specific impulse (usually written ). This is either measured as a speed (the effective exhaust velocity in metres/second or ft/s) or as a time (seconds). For example, if an engine producing 100 pounds of thrust runs for 320 seconds and burns 100 pounds of propellant, then the specific impulse is 320 seconds. The higher the specific impulse, the less propellant is required to provide the desired impulse.
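The worked example above can be checked directly; the sketch below involves only unit conversions, no new data:

```python
G0 = 9.80665  # m/s^2, standard gravity

def specific_impulse_s(thrust_n, burn_time_s, propellant_mass_kg):
    """Isp in seconds = total impulse / propellant weight."""
    total_impulse = thrust_n * burn_time_s
    return total_impulse / (propellant_mass_kg * G0)

LBF_TO_N = 4.44822
LB_TO_KG = 0.453592

# 100 lbf of thrust for 320 s, burning 100 lb of propellant:
isp = specific_impulse_s(100 * LBF_TO_N, 320, 100 * LB_TO_KG)
print(f"Isp = {isp:.0f} s")         # Isp = 320 s
print(f"v_e = {isp * G0:.0f} m/s")  # equivalent effective exhaust velocity
```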
The specific impulse that can be achieved is primarily a function of the propellant mix (which ultimately limits the specific impulse), but practical limits on chamber pressures and nozzle expansion ratios reduce the performance that can be achieved.
Net thrust
Below is an approximate equation for calculating the net thrust of a rocket engine:

$$F_n = \dot{m}\,v_e + A_e\,(p_e - p_{amb})$$

where:
$\dot{m}$ is the exhaust mass flow rate,
$v_e$ is the jet velocity at the nozzle exit plane,
$A_e$ is the flow area at the nozzle exit plane,
$p_e$ is the static pressure at the nozzle exit plane,
$p_{amb}$ is the ambient (atmospheric) pressure.
Since, unlike a jet engine, a conventional rocket motor lacks an air intake, there is no 'ram drag' to deduct from the gross thrust. Consequently, the net thrust of a rocket motor is equal to the gross thrust (apart from static back pressure).
The $\dot{m}\,v_e$ term represents the momentum thrust, which remains constant at a given throttle setting, whereas the $A_e\,(p_e - p_{amb})$ term represents the pressure thrust. At full throttle, the net thrust of a rocket motor improves slightly with increasing altitude, because as atmospheric pressure decreases with altitude, the pressure thrust term increases. At the surface of the Earth the pressure thrust may be reduced by up to 30%, depending on the engine design. This reduction drops roughly exponentially to zero with increasing altitude.
Maximum efficiency for a rocket engine is achieved by maximising the momentum contribution of the equation without incurring penalties from over-expanding the exhaust. This occurs when $p_e = p_{amb}$. Since ambient pressure changes with altitude, most rocket engines spend very little time operating at peak efficiency.
Since specific impulse is force divided by the rate of mass flow, this equation means that the specific impulse varies with altitude.
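A numeric sketch of this altitude dependence, using the net-thrust equation above (the mass flow, exit velocity, exit area, and exit pressure are illustrative assumptions, not data for a real engine):

```python
def net_thrust(mdot, v_e, a_e, p_e, p_amb):
    """F_n = mdot * v_e + A_e * (p_e - p_amb), per the equation above."""
    return mdot * v_e + a_e * (p_e - p_amb)

# Assumed, illustrative engine: 250 kg/s at 3000 m/s, 1.0 m^2 exit area,
# 40 kPa exit static pressure.
mdot, v_e, a_e, p_e = 250.0, 3000.0, 1.0, 40_000.0

sea_level = net_thrust(mdot, v_e, a_e, p_e, p_amb=101_325.0)
vacuum = net_thrust(mdot, v_e, a_e, p_e, p_amb=0.0)

print(f"sea level: {sea_level/1000:.0f} kN")  # 689 kN
print(f"vacuum:    {vacuum/1000:.0f} kN")     # 790 kN
# Specific impulse scales with thrust here, so Isp also rises with altitude.
```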
Vacuum specific impulse, Isp
Due to the specific impulse varying with pressure, a quantity that is easy to compare and calculate with is useful. Because rockets choke at the throat, and because the supersonic exhaust prevents external pressure influences travelling upstream, it turns out that the pressure at the exit is ideally exactly proportional to the propellant flow $\dot{m}$, provided the mixture ratios and combustion efficiencies are maintained. It is thus quite usual to rearrange the above equation slightly:

$$F_n = \dot{m}\,v_e + A_e\,p_e - A_e\,p_{amb}$$

and so define the vacuum Isp to be:

$$v_{e,\mathrm{vac}} = v_e + \frac{A_e\,p_e}{\dot{m}}$$

where:
$v_{e,\mathrm{vac}}$ is the effective exhaust velocity in a vacuum.

And hence:

$$F_n = \dot{m}\,v_{e,\mathrm{vac}} - A_e\,p_{amb}$$
Throttling
Rockets can be throttled by controlling the propellant combustion rate (usually measured in kg/s or lb/s). In liquid and hybrid rockets, the propellant flow entering the chamber is controlled using valves, in solid rockets it is controlled by changing the area of propellant that is burning and this can be designed into the propellant grain (and hence cannot be controlled in real-time).
Rockets can usually be throttled down to an exit pressure of about one-third of ambient pressure (often limited by flow separation in nozzles) and up to a maximum limit determined only by the mechanical strength of the engine.
In practice, the degree to which rockets can be throttled varies greatly, but most rockets can be throttled by a factor of 2 without great difficulty; the typical limitation is combustion stability, as for example, injectors need a minimum pressure to avoid triggering damaging oscillations (chugging or combustion instabilities); but injectors can be optimised and tested for wider ranges.
For example, some more recent liquid-propellant engine designs that have been optimised for greater throttling capability (BE-3, Raptor) can be throttled to as low as 18–20 per cent of rated thrust.
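A toy sketch of the flow-separation throttle floor described above, taking exit pressure as proportional to propellant flow per the vacuum-Isp discussion (the design exit pressure is an assumed value):

```python
def min_throttle_fraction(p_exit_design, p_ambient, separation_ratio=1/3):
    """Lowest throttle before p_exit drops below ~1/3 of ambient.

    Assumes exit pressure scales linearly with propellant flow, as
    discussed above. All inputs in the same pressure units.
    """
    p_exit_floor = separation_ratio * p_ambient
    return p_exit_floor / p_exit_design

# Assumed sea-level engine with a 90 kPa design exit pressure:
print(f"{min_throttle_fraction(90.0, 101.325):.0%}")  # 38%
```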
Solid rockets can be throttled by using shaped grains that will vary their surface area over the course of the burn.
Energy efficiency
Rocket engine nozzles are surprisingly efficient heat engines for generating a high speed jet, as a consequence of the high combustion temperature and high compression ratio. Rocket nozzles give an excellent approximation to adiabatic expansion which is a reversible process, and hence they give efficiencies which are very close to that of the Carnot cycle. Given the temperatures reached, over 60% efficiency can be achieved with chemical rockets.
For a vehicle employing a rocket engine the energetic efficiency is very good if the vehicle speed approaches or somewhat exceeds the exhaust velocity (relative to launch); but at low speeds the energy efficiency goes to 0% at zero speed (as with all jet propulsion). See Rocket energy efficiency for more details.
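The vehicle-speed dependence can be made concrete with the standard ideal propulsive-efficiency relation for a rocket, $\eta_p = \frac{2\,(v/v_e)}{1 + (v/v_e)^2}$ (a textbook idealisation, not a figure from this article):

```python
def propulsive_efficiency(vehicle_speed, exhaust_velocity):
    """Ideal rocket propulsive efficiency: peaks at 1.0 when v == v_e."""
    r = vehicle_speed / exhaust_velocity
    return 2 * r / (1 + r * r)

v_e = 3000.0  # m/s, assumed effective exhaust velocity
for v in (0.0, 1500.0, 3000.0, 6000.0):
    print(f"v = {v:5.0f} m/s -> eta = {propulsive_efficiency(v, v_e):.0%}")
# v =     0 m/s -> eta = 0%
# v =  1500 m/s -> eta = 80%
# v =  3000 m/s -> eta = 100%
# v =  6000 m/s -> eta = 80%
```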
Thrust-to-weight ratio
Rockets, of all the jet engines, indeed of essentially all engines, have the highest thrust-to-weight ratio. This is especially true for liquid-fuelled rocket engines.
This high performance is due to the small volume of pressure vessels that make up the engine—the pumps, pipes and combustion chambers involved. The lack of inlet duct and the use of dense liquid propellant allows the pressurisation system to be small and lightweight, whereas duct engines have to deal with air which has around three orders of magnitude lower density.
Of the liquid fuels used, density is lowest for liquid hydrogen. Although hydrogen/oxygen burning has the highest specific impulse of any in-use chemical rocket, hydrogen's very low density (about one-fourteenth that of water) requires larger and heavier turbopumps and pipework, which decreases the engine's thrust-to-weight ratio (for example the RS-25) compared to those that do not use hydrogen (NK-33).
Mechanical issues
Rocket combustion chambers are normally operated at fairly high pressure, typically 10–200 bar (1–20 MPa; 150–3,000 psi). When operated within significant atmospheric pressure, higher combustion chamber pressures give better performance by permitting a larger and more efficient nozzle to be fitted without it being grossly over-expanded.
However, these high pressures cause the outermost part of the chamber to be under very large hoop stresses – rocket engines are pressure vessels.
Worse, due to the high temperatures created in rocket engines the materials used tend to have a significantly lowered working tensile strength.
In addition, significant temperature gradients are set up in the walls of the chamber and nozzle; these cause differential expansion of the inner liner, creating internal stresses.
Hard starts
A hard start refers to an over-pressure condition during start of a rocket engine at ignition. In the worst cases, this takes the form of an unconfined explosion, resulting in the damage or destruction of the engine.
Rocket fuels, hypergolic or otherwise, must be introduced into the combustion chamber at the correct rate in order to have a controlled rate of production of hot gas. A "hard start" indicates that the quantity of combustible propellant that entered the combustion chamber prior to ignition was too large. The result is an excessive spike of pressure, possibly leading to structural failure or explosion.
Avoiding hard starts involves careful timing of the ignition relative to valve timing or varying the mixture ratio so as to limit the maximum pressure that can occur or simply ensuring an adequate ignition source is present well prior to propellant entering the chamber.
Explosions from hard starts usually cannot happen with purely gaseous propellants, since the amount of the gas present in the chamber is limited by the injector area relative to the throat area, and for practical designs, propellant mass escapes too quickly to be an issue.
A famous example of a hard start was the explosion of Wernher von Braun's "1W" engine during a demonstration to General Walter Dornberger on December 21, 1932. Delayed ignition allowed the chamber to fill with alcohol and liquid oxygen, which exploded violently. Shrapnel was embedded in the walls, but nobody was hit.
Acoustic issues
The extreme vibration and acoustic environment inside a rocket motor commonly result in peak stresses well above mean values, especially in the presence of organ pipe-like resonances and gas turbulence.
Combustion instabilities
The combustion may display undesired instabilities of sudden or periodic nature. The pressure in the injection chamber may increase until the propellant flow through the injector plate decreases; a moment later the pressure drops and the flow increases, injecting more propellant into the combustion chamber, which burns a moment later and again increases the chamber pressure, repeating the cycle. This may lead to high-amplitude pressure oscillations, often in the ultrasonic range, which may damage the motor. Oscillations of ±200 psi at 25 kHz were the cause of failures of early versions of the Titan II missile second-stage engines. The other failure mode is a deflagration-to-detonation transition: the supersonic pressure wave formed in the combustion chamber may destroy the engine.
Combustion instability was also a problem during Atlas development. The Rocketdyne engines used in the Atlas family were found to suffer from this effect in several static firing tests, and three missile launches exploded on the pad due to rough combustion in the booster engines. In most cases, it occurred while attempting to start the engines with a "dry start" method whereby the igniter mechanism would be activated prior to propellant injection. During the process of man-rating Atlas for Project Mercury, solving combustion instability was a high priority, and the final two Mercury flights sported an upgraded propulsion system with baffled injectors and a hypergolic igniter.
The problem affecting Atlas vehicles was mainly the so-called "racetrack" phenomenon, where burning propellant would swirl around in a circle at faster and faster speeds, eventually producing vibration strong enough to rupture the engine, leading to complete destruction of the rocket. It was eventually solved by adding several baffles around the injector face to break up swirling propellant.
More significantly, combustion instability was a problem with the Saturn F-1 engines. Some of the early units tested exploded during static firing, which led to the addition of injector baffles.
In the Soviet space program, combustion instability also proved a problem on some rocket engines, including the RD-107 engine used in the R-7 family and the RD-216 used in the R-14 family, and several failures of these vehicles occurred before the problem was solved. Soviet engineering and manufacturing processes never satisfactorily resolved combustion instability in larger RP-1/LOX engines, so the RD-171 engine used to power the Zenit family still used four smaller thrust chambers fed by a common engine mechanism.
The combustion instabilities can be provoked by remains of cleaning solvents in the engine (e.g. the first attempted launch of a Titan II in 1962), reflected shock wave, initial instability after ignition, explosion near the nozzle that reflects into the combustion chamber, and many more factors. In stable engine designs the oscillations are quickly suppressed; in unstable designs they persist for prolonged periods. Oscillation suppressors are commonly used.
Three different types of combustion instabilities occur:
Chugging
A low-frequency oscillation in chamber pressure, below 200 hertz. It is usually caused by pressure variations in feed lines due to variations in the acceleration of the vehicle, such as when rocket engines are building up thrust, are shut down, or are being throttled.
Chugging can cause a worsening feedback loop, as cyclic variation in thrust causes longitudinal vibrations to travel up the rocket, causing the fuel lines to vibrate, which in turn do not deliver propellant smoothly into the engines. This phenomenon is known as "pogo oscillations" or "pogo", named after the pogo stick.
In the worst case, this may result in damage to the payload or vehicle. Chugging can be minimised by several methods, such as installing energy-absorbing devices on feed lines. Chugging may cause screeching.
Buzzing
An intermediate-frequency oscillation in chamber pressure, between 200 and 1000 hertz. It is usually caused by insufficient pressure drop across the injectors, and is generally more annoying than damaging.
Buzzing is known to have adverse effects on engine performance and reliability, primarily because it causes material fatigue. In extreme cases combustion can end up being forced backwards through the injectors; this can cause explosions with monopropellants. Buzzing may cause screeching.
Screeching
A high-frequency oscillation in chamber pressure, above 1000 hertz, sometimes called screaming or squealing. It is the most immediately damaging, and the hardest to control. It is due to acoustics within the combustion chamber that often couple to the chemical combustion processes that are the primary drivers of the energy release, and can lead to unstable resonant "screeching" that commonly leads to catastrophic failure due to thinning of the insulating thermal boundary layer. Acoustic oscillations can be excited by thermal processes, such as the flow of hot air through a pipe or combustion in a chamber. Specifically, standing acoustic waves inside a chamber can be intensified if combustion occurs more intensely in regions where the pressure of the acoustic wave is maximal.
Such effects are very difficult to predict analytically during the design process, and have usually been addressed by expensive, time-consuming and extensive testing, combined with trial and error remedial correction measures.
Screeching is often dealt with by detailed changes to injectors, changes in the propellant chemistry, vaporising the propellant before injection or use of Helmholtz dampers within the combustion chambers to change the resonant modes of the chamber.
Testing for the possibility of screeching is sometimes done by exploding small explosive charges outside the combustion chamber, with a tube set tangentially to the combustion chamber near the injectors, to determine the engine's impulse response and then evaluating the time response of the chamber pressure; a fast recovery indicates a stable system.
Exhaust noise
For all but the very smallest sizes, rocket exhaust is generally very noisy compared to other engines. As the hypersonic exhaust mixes with the ambient air, shock waves are formed. The Space Shuttle generated over 200 dB(A) of noise around its base. To reduce this, and the risk of payload damage or injury to the crew atop the stack, the mobile launcher platform was fitted with a sound suppression system that sprayed large quantities of water around the base of the rocket in 41 seconds at launch time. Using this system kept sound levels within the payload bay to 142 dB.
The sound intensity from the shock waves generated depends on the size of the rocket and on the exhaust velocity. Such shock waves seem to account for the characteristic crackling and popping sounds produced by large rocket engines when heard live. These noise peaks typically overload microphones and audio electronics, and so are generally weakened or entirely absent in recorded or broadcast audio reproductions. For large rockets at close range, the acoustic effects could actually kill.
More worryingly for space agencies, such sound levels can also damage the launch structure, or worse, be reflected back at the comparatively delicate rocket above. This is why so much water is typically used at launches. The water spray changes the acoustic qualities of the air and reduces or deflects the sound energy away from the rocket.
Generally speaking, noise is most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the jet, as well as reflecting off the ground. Also, when the vehicle is moving slowly, little of the chemical energy input to the engine can go into increasing the kinetic energy of the rocket (since the useful power $P$ transmitted to the vehicle is $P = F\,V$ for thrust $F$ and speed $V$). Then the largest portion of the energy is dissipated in the exhaust's interaction with the ambient air, producing noise. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the jet, and by deflecting the jet at an angle.
Rocket engine development
United States
The development of the US rocket engine industry has been shaped by a complex web of relationships between government agencies, private companies, research institutions, and other stakeholders.
Since the establishment of the first liquid-propellant rocket engine company (Reaction Motors, Inc.) in 1941 and the first government laboratory (GALCIT) devoted to the subject, the US liquid-propellant rocket engine (LPRE) industry has undergone significant changes. At least 14 US companies have been involved in the design, development, manufacture, testing, and flight support operations of various types of rocket engines from 1940 to 2000. In contrast to other countries like Russia, China, or India, where only government or pseudogovernment organisations engage in this business, the US government relies heavily on private industry. These commercial companies are essential to the continued viability of the United States and its form of governance, as they compete with one another to provide cutting-edge rocket engines that meet the needs of the government, the military, and the private sector. In the United States the company that develops the LPRE usually is awarded the production contract.
Generally, the need or demand for a new rocket engine comes from government agencies such as NASA or the Department of Defense. Once the need is identified, government agencies may issue requests for proposals (RFPs) to solicit proposals from private companies and research institutions. Private companies and research institutions, in turn, may invest in research and development (R&D) activities to develop new rocket engine technologies that meet the needs and specifications outlined in the RFPs.
Alongside private companies, universities, independent research institutes and government laboratories also play a critical role in the research and development of rocket engines.
Universities provide graduate and undergraduate education to train qualified technical personnel, and their research programs often contribute to the advancement of rocket engine technologies. More than 25 universities in the US have taught or are currently teaching courses related to Liquid Propellant Rocket Engines (LPREs), and their graduate and undergraduate education programs are considered one of their most important contributions. Universities such as Princeton University, Cornell University, Purdue University, Pennsylvania State University, University of Alabama, the Navy's Post-Graduate School, or the California Institute of Technology have conducted excellent R&D work on topics related to the rocket engine industry. One of the earliest examples of the contribution of universities to the rocket engine industry is the work of the GALCIT in 1941. They demonstrated the first jet-assisted takeoff (JATO) rockets to the Army, leading to the establishment of the Jet Propulsion Laboratory.
However, the transfer of knowledge from research professors and their projects to the rocket engine industry has been a mixed experience. While some notable professors and research projects have positively influenced industry practice and the understanding of LPREs, the connection between university research and commercial companies has been inconsistent and weak. Universities were not always aware of the industry's specific needs, and engineers and designers in industry had limited knowledge of university research, so many university research programs remained unknown to industry decision-makers. Furthermore, in recent decades some university research projects, while interesting to professors, were of little use to the industry because of a lack of communication or relevance to its needs.
Government laboratories, including the Rocket Propulsion Laboratory (now part of the Air Force Research Laboratory), the Arnold Engineering Development Center, NASA Marshall Space Flight Center, the Jet Propulsion Laboratory, Stennis Space Center, White Sands Proving Ground, and NASA John H. Glenn Research Center, have played crucial roles in the development of LPREs. They have conducted unbiased testing, guided work at US and some non-US contractors, performed research and development, and provided essential facilities, including hover test stands and simulated-altitude test facilities. Initially, private companies or foundations financed smaller test facilities, but since the 1950s the U.S. government has funded larger test facilities at government laboratories. This approach reduced costs for the government, which did not have to build similar facilities at contractors' plants, but increased complexity and expense for contractors. Nonetheless, government laboratories have solidified their significance and contributed to LPRE advancements.
LPRE programs in the United States have been subject to several cancellations, even after millions of dollars were spent on development. For example, the M-1 LOX/LH2 LPRE, the Titan I, and the RS-2200 aerospike, as well as several JATO units and large uncooled thrust chambers, were cancelled. These cancellations were not related to the performance of the specific LPRE or to any issues with it; rather, they resulted from the cancellation of the vehicle programs the engine was intended for, or from budget cuts imposed by the government.
USSR
Russia and the former Soviet Union have been, and remain, the world's foremost nation in developing and building rocket engines. From 1950 to 1998, their organisations developed, built, and put into operation a larger number and a larger variety of liquid-propellant rocket engine (LPRE) designs than any other country: approximately 500 different LPREs were developed before 2003, compared with slightly more than 300 in the United States over the same period. The Soviets also had the most rocket-propelled flight vehicles, with more liquid-propellant ballistic missiles, and more space launch vehicles derived or converted from decommissioned ballistic missiles, than any other nation. By the end of 1998, the Russians (earlier, the Soviet Union) had successfully launched 2,573 satellites with LPREs, almost 65% of the world total of 3,973. All of these flights were made possible by the timely development of suitable high-performance, reliable LPREs.
Institutions and actors
Unlike many other countries, where the development and production of rocket engines were consolidated within a single organisation, the Soviet Union took a different approach: it established numerous specialised design bureaus (DBs) that competed for development contracts. These design bureaus, or "konstruktorskoye buro" (KB) in Russian, were state-run organisations primarily responsible for the research, development, and prototyping of advanced technologies, usually related to military hardware such as turbojet engines, aircraft components, missiles, or space launch vehicles.
Design Bureaus which specialised in rocket engines often possessed the necessary personnel, facilities, and equipment to conduct laboratory tests, flow tests, and ground testing of experimental rocket engines. Some even had specialised facilities for testing very large engines, conducting static firings of engines installed in vehicle stages, or simulating altitude conditions during engine tests. In certain cases, engine testing, certification and quality control were outsourced to other organisations and locations with more suitable test facilities. Many DBs also had housing complexes, gymnasiums, and medical facilities intended to support the needs of their employees and their families.
The Soviet Union's LPRE development effort saw significant growth during the 1960s and reached its peak in the 1970s. This era coincided with the Cold War between the Soviet Union and the United States, characterised by intense competition in spaceflight achievements. Between 14 and 17 Design Bureaus and research institutes were actively involved in developing LPREs during this period. These organisations received relatively steady support and funding due to high military and spaceflight priorities, which facilitated the continuous development of new engine concepts and manufacturing methods.
Once a mission with a new vehicle (missile or spacecraft) was established, it was passed to a design bureau whose role was to oversee the development of the entire rocket. If none of the previously developed rocket engines met the needs of the mission, a new engine with specific requirements would be contracted to another DB specialising in LPRE development (each DB often had expertise in specific types of LPREs, with different applications, propellants, or engine sizes). The development or design study of a rocket engine was therefore always aimed at a specific application with set requirements.
Contracts for new rocket engines were awarded either to a single design bureau or to several bureaus in parallel, which sometimes led to fierce competition between DBs.
When only one DB was picked for a development, it was often the result of the relationship between the chief designer of the vehicle or system and the chief designer of a rocket-engine DB. If the vehicle's chief designer was happy with previous work done by a certain design bureau, continued reliance on that bureau for that class of engines was not unusual. For example, all but one of the LPREs for submarine-launched missiles were developed by the same design bureau for the same vehicle prime contractor.
When two parallel engine development programs were supported in order to select the superior one for a specific application, several qualified rocket engine models were never used. This luxury of choice was not commonly available in other nations. The use of design bureaus, however, also led to certain issues, including program cancellations and duplication: some major programs were cancelled, resulting in the disposal or storage of previously developed engines.
One notable example of duplication and cancellation was the development of engines for the R-9A ballistic missile: two sets of engines were supported, but ultimately only one set was selected, leaving several perfectly functional engines unused. Similarly, for the ambitious heavy N-1 space launch vehicle intended for lunar and planetary missions, the Soviet Union developed and put into production at least two engines for each of the six stages, and also developed alternate engines for a more advanced N-1 vehicle. However, the program suffered multiple flight failures, and after the United States' successful Moon landing it was cancelled, leaving the Soviet Union with a surplus of newly qualified engines without a clear purpose.
These examples demonstrate the complex dynamics and challenges faced by the Soviet Union in managing the development and production of rocket engines through Design Bureaus.
Accidents
The development of rocket engines in the Soviet Union was marked by significant achievements, but it also carried ethical considerations due to numerous accidents and fatalities. From a Science and Technology Studies point of view, the ethical implications of these incidents shed light on the complex relationship between technology, human factors, and the prioritisation of scientific advancement over safety.
The Soviet Union encountered a series of tragic accidents and mishaps in the development and operation of rocket engines. Notably, the USSR holds the unfortunate distinction of having experienced more injuries and deaths resulting from liquid propellant rocket engine (LPRE) accidents than any other country. These incidents brought into question the ethical considerations surrounding the development, testing, and operational use of rocket engines.
One of the most notable disasters occurred in 1960, when an R-16 ballistic missile suffered a catastrophic accident on the launchpad at the Tyuratam launch facility. The second-stage engine suddenly ignited, causing the fully loaded missile to disintegrate and its mixed hypergolic propellants (nitric acid with additives and UDMH, unsymmetrical dimethylhydrazine) to ignite and explode. The accident killed 124 engineers and military personnel, including Marshal M. I. Nedelin, then a deputy minister of defence.
While the immediate cause of the 1960 accident was attributed to a lack of protective circuits in the missile control unit, the ethical considerations surrounding LPRE accidents in the USSR extend beyond specific technical failures. The secrecy surrounding these accidents, which remained undisclosed for approximately three decades, raises concerns about transparency, accountability, and the protection of human life.
The decision to keep fatal LPRE accidents hidden from the public eye reflects a broader ethical dilemma. The Soviet government, driven by the pursuit of scientific and technological superiority during the Cold War, sought to maintain an image of invincibility and conceal the failures that accompanied their advancements. This prioritisation of national prestige over the well-being and safety of workers raises questions about the ethical responsibility of the state and the organisations involved.
Testing
Rocket engines are usually statically tested at a test facility before being put into production. For high altitude engines, either a shorter nozzle must be used, or the rocket must be tested in a large vacuum chamber.
Safety
Rocket vehicles have a reputation for unreliability and danger, especially for catastrophic failures. Contrary to this reputation, carefully designed rockets can be made arbitrarily reliable, and rockets in military use have proven dependable. One of the main non-military uses of rockets, however, is for orbital launch, where the premium has typically been placed on minimum weight, and it is difficult to achieve high reliability and low weight simultaneously. In addition, if the number of flights launched is low, there is a very high chance of a design, operations, or manufacturing error causing destruction of the vehicle.
Saturn family (1961–1975)
The Rocketdyne H-1 engine, used in a cluster of eight in the first stage of the Saturn I and Saturn IB launch vehicles, had no catastrophic failures in 152 engine-flights. The Pratt and Whitney RL10 engine, used in a cluster of six in the Saturn I second stage, had no catastrophic failures in 36 engine-flights. The Rocketdyne F-1 engine, used in a cluster of five in the first stage of the Saturn V, had no failures in 65 engine-flights. The Rocketdyne J-2 engine, used in a cluster of five in the Saturn V second stage, and singly in the Saturn IB second stage and Saturn V third stage, had no catastrophic failures in 86 engine-flights.
Space Shuttle (1981–2011)
The Space Shuttle Solid Rocket Booster, used in pairs, caused one notable catastrophic failure in 270 engine-flights.
The RS-25, used in a cluster of three, flew in 46 refurbished engine units, which made a total of 405 engine-flights with no catastrophic in-flight failures. A single in-flight RS-25 engine shutdown occurred during Challenger's STS-51-F mission, but had no effect on mission objectives or duration.
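As a sanity check on these totals, engine-flights are simply the cluster size multiplied by the number of flights using that cluster. A small Python sketch; the flight counts below are back-calculated from the totals quoted above, not independent data.

```python
# Engine-flights = engines per vehicle * flights flown with that cluster.
# Flight counts are inferred from the totals given in the text.

def engine_flights(engines_per_vehicle: int, flights: int) -> int:
    return engines_per_vehicle * flights

print(engine_flights(8, 19))    # H-1, Saturn I/IB first stage  -> 152
print(engine_flights(5, 13))    # F-1, Saturn V first stage     -> 65
print(engine_flights(2, 135))   # SRB pairs over 135 missions   -> 270
print(engine_flights(3, 135))   # RS-25 cluster of three        -> 405
```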
Cooling
For efficiency reasons, higher temperatures are desirable, but materials lose their strength if the temperature becomes too high. Rockets run with combustion temperatures that can reach .
Most other jet engines have gas turbines in the hot exhaust. Due to their larger surface area, they are harder to cool and hence there is a need to run the combustion processes at much lower temperatures, losing efficiency. In addition, duct engines use air as an oxidant, which contains 78% largely unreactive nitrogen, which dilutes the reaction and lowers the temperatures. Rockets have none of these inherent combustion temperature limiters.
The temperatures reached by combustion in rocket engines often substantially exceed the melting points of the nozzle and combustion chamber materials (copper, for example, melts at about 1,360 K). Most construction materials will also combust if exposed to high-temperature oxidiser, which leads to a number of design challenges. The nozzle and combustion chamber walls must not be allowed to combust, melt, or vaporize (a failure sometimes facetiously termed an "engine-rich exhaust").
Rockets that use common construction materials such as aluminium, steel, nickel or copper alloys must employ cooling systems to limit the temperatures that engine structures experience. Regenerative cooling, where the propellant is passed through tubes around the combustion chamber or nozzle, and other techniques, such as film cooling, are employed to give longer nozzle and chamber life. These techniques ensure that a gaseous thermal boundary layer touching the material is kept below the temperature which would cause the material to catastrophically fail.
Material exceptions that can sustain rocket combustion temperatures to a certain degree are carbon–carbon materials and rhenium, although both are subject to oxidation under certain conditions. Other refractory materials, such as alumina, molybdenum, tantalum, or tungsten, have been tried but were abandoned due to various issues.
Materials technology, combined with the engine design, is a limiting factor in chemical rockets.
In rockets, the heat fluxes that can pass through the wall are among the highest in engineering, generally in the range of 0.8–80 MW/m² (0.5–50 BTU/(in²·s)). The strongest heat fluxes are found at the throat, which often sees twice the flux of the associated chamber and nozzle, owing to the combination of high gas speeds (which produce a very thin boundary layer) and temperatures that, although lower than in the chamber, are still high. (See above for temperatures in the nozzle.)
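The two unit systems quoted above can be cross-checked directly. A small sketch; the conversion factors are exact definitions, and the rounding is mine.

```python
# Convert BTU/(in^2*s) to MW/m^2: 1 BTU = 1055.06 J, 1 in^2 = 6.4516e-4 m^2.
BTU_TO_J = 1055.06
IN2_TO_M2 = 0.00064516

def btu_in2_s_to_mw_m2(q: float) -> float:
    return q * BTU_TO_J / IN2_TO_M2 / 1e6

for q in (0.5, 50.0):
    print(f"{q:4.1f} BTU/(in^2*s) ~ {btu_in2_s_to_mw_m2(q):4.1f} MW/m^2")
# -> about 0.8 and 82 MW/m^2, matching the quoted 0.8-80 MW/m^2 range
```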
In rockets the coolant methods include:
Ablative: The combustion chamber's inside walls are lined with a material that absorbs heat and is carried away with the exhaust as it vaporizes.
Radiative cooling: The engine is made of one or several refractory materials, which absorb the heat flux until the outer thrust chamber wall glows red- or white-hot, radiating the heat away.
Dump cooling: A cryogenic propellant, usually hydrogen, is passed around the nozzle and dumped. This cooling method has various issues, such as wasting propellant. It is only used rarely.
Regenerative cooling: The fuel (and possibly, the oxidiser) of a liquid rocket engine is routed around the nozzle before being injected into the combustion chamber or preburner. This is the most widely applied method of rocket engine cooling.
Film cooling: The engine is designed with rows of orifices lining the inside wall, through which additional propellant is injected, cooling the chamber wall as it evaporates. This method is often used where heat fluxes are especially high, usually in combination with regenerative cooling. A more efficient subtype is transpiration cooling, in which propellant passes through a porous inner combustion chamber wall and transpires; so far this method has not been used, owing to various practical issues with the concept.
Rocket engines may also use several cooling methods. Examples:
Regeneratively and film cooled combustion chamber and nozzle: V-2 Rocket Engine
Regeneratively cooled combustion chamber with a film cooled nozzle extension: Rocketdyne F-1 Engine
Regeneratively cooled combustion chamber with an ablatively cooled nozzle extension: The LR-91 rocket engine
Ablatively and film cooled combustion chamber with a radiatively cooled nozzle extension: Lunar module descent engine (LMDE), Service propulsion system engine (SPS)
Radiatively and film cooled combustion chamber with a radiatively cooled nozzle extension: R-4D storable propellant thrusters
In all cases, another effect that aids in cooling the rocket engine chamber wall is a thin layer of combustion gases (a boundary layer) that is notably cooler than the combustion temperature. Disruption of the boundary layer may occur during cooling failures or combustion instabilities, and wall failure typically occurs soon after.
With regenerative cooling a second boundary layer is found in the coolant channels around the chamber. This boundary layer thickness needs to be as small as possible, since the boundary layer acts as an insulator between the wall and the coolant. This may be achieved by making the coolant velocity in the channels as high as possible.
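One common way to quantify why coolant velocity matters is the standard Dittus–Boelter correlation for turbulent channel flow. This is not from the article itself; the correlation is a textbook assumption, and the fluid properties below are made up to loosely resemble a kerosene-type coolant.

```python
# Dittus-Boelter: Nu = 0.023 * Re**0.8 * Pr**0.4, so the convective
# coefficient h scales roughly as velocity**0.8, thinning the
# insulating boundary layer. All property values are assumptions.

def h_dittus_boelter(velocity, diameter, rho, mu, k, cp):
    re = rho * velocity * diameter / mu      # Reynolds number
    pr = cp * mu / k                         # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4           # Nusselt number
    return nu * k / diameter                 # W/(m^2*K)

props = dict(diameter=0.003, rho=800.0, mu=1.0e-3, k=0.12, cp=2000.0)
for v in (5.0, 20.0, 50.0):  # m/s, coolant velocity in the channel
    print(f"v = {v:4.1f} m/s -> h ~ {h_dittus_boelter(v, **props):8,.0f} W/(m^2*K)")
```

Since Re scales linearly with velocity, h grows roughly as velocity to the 0.8 power, which is why designers push coolant speed in the channels as high as possible.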
Liquid-fuelled engines are often run fuel-rich, which lowers combustion temperatures. This reduces heat loads on the engine and allows lower cost materials and a simplified cooling system. This can also increase performance by lowering the average molecular weight of the exhaust and increasing the efficiency with which combustion heat is converted to kinetic exhaust energy.
Chemistry
Rocket propellants require a high energy per unit mass (specific energy), which must be balanced against the tendency of highly energetic propellants to spontaneously explode. Assuming that the chemical potential energy of the propellants can be safely stored, the combustion process results in a great deal of heat being released. A significant fraction of this heat is transferred to kinetic energy in the engine nozzle, propelling the rocket forward in combination with the mass of combustion products released.
Ideally all the reaction energy would appear as kinetic energy of the exhaust gases, since exhaust velocity is the single most important performance parameter of an engine. However, real exhaust species are molecules, which typically have translational, vibrational, and rotational modes with which to dissipate energy. Of these, only translational motion does useful work on the vehicle, and while energy does transfer between modes, this process occurs on a timescale far in excess of the time required for the exhaust to leave the nozzle.
The more chemical bonds an exhaust molecule has, the more rotational and vibrational modes it will have. Consequently, it is generally desirable for the exhaust species to be as simple as possible, with a diatomic molecule composed of light, abundant atoms such as H2 being ideal in practical terms. However, in the case of a chemical rocket, hydrogen is a reactant and reducing agent, not a product. An oxidizing agent, most typically oxygen or an oxygen-rich species, must be introduced into the combustion process, adding mass and chemical bonds to the exhaust species.
An additional advantage of light molecules is that they may be accelerated to high velocity at temperatures that can be contained by currently available materials - the high gas temperatures in rocket engines pose serious problems for the engineering of survivable motors.
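The "light molecules" argument can be sketched with the ideal-nozzle exhaust-velocity formula, in which ve scales as the square root of Tc/M. The formula is a standard result assumed here; the chamber temperature, gamma, and molecular weights below are illustrative, not values from the article.

```python
# Ideal-nozzle exhaust velocity: ve scales as sqrt(T_c / M), so low
# molecular weight buys velocity at temperatures materials can survive.
import math

R0 = 8.314  # J/(mol*K), universal gas constant

def ideal_exhaust_velocity(gamma, T_c, M, pressure_ratio):
    """M in kg/mol; pressure_ratio = p_exit / p_chamber."""
    term = 1 - pressure_ratio ** ((gamma - 1) / gamma)
    return math.sqrt(2 * gamma / (gamma - 1) * R0 * T_c / M * term)

# Compare a light exhaust (mostly H2O + H2, M ~ 13 g/mol) with a
# heavier one (M ~ 24 g/mol) at the same chamber temperature:
for M in (0.013, 0.024):
    ve = ideal_exhaust_velocity(gamma=1.2, T_c=3200.0, M=M, pressure_ratio=0.01)
    print(f"M = {M*1000:4.1f} g/mol -> ve ~ {ve:,.0f} m/s")
```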
Liquid hydrogen (LH2) and oxygen (LOX, or LO2), are the most effective propellants in terms of exhaust velocity that have been widely used to date, though a few exotic combinations involving boron or liquid ozone are potentially somewhat better in theory if various practical problems could be solved.
When computing the specific reaction energy of a given propellant combination, the entire mass of the propellants (both fuel and oxidiser) must be included. The exception is in the case of air-breathing engines, which use atmospheric oxygen and consequently have to carry less mass for a given energy output. Fuels for car or turbojet engines have a much better effective energy output per unit mass of propellant that must be carried, but are similar per unit mass of fuel.
Computer programs that predict the performance of propellants in rocket engines are available.
Ignition
With liquid and hybrid rockets, immediate ignition of the propellants as they first enter the combustion chamber is essential.
With liquid propellants (but not gaseous), failure to ignite within milliseconds usually causes too much liquid propellant to be inside the chamber, and if/when ignition occurs the amount of hot gas created can exceed the maximum design pressure of the chamber, causing a catastrophic failure of the pressure vessel. This is sometimes called a hard start or a rapid unscheduled disassembly (RUD).
Ignition can be achieved by a number of different methods; a pyrotechnic charge can be used, a plasma torch can be used, or electric spark ignition may be employed. Some fuel/oxidiser combinations ignite on contact (hypergolic), and non-hypergolic fuels can be "chemically ignited" by priming the fuel lines with hypergolic propellants (popular in Russian engines).
Gaseous propellants generally do not cause hard starts: in such rockets the total injector area is smaller than the throat area, so the chamber pressure stays near ambient before ignition, and high pressures cannot form even if the entire chamber is full of flammable gas at ignition.
Solid propellants are usually ignited with one-shot pyrotechnic devices and combustion usually proceeds through total consumption of the propellants.
Once ignited, rocket chambers are self-sustaining and igniters are not needed; indeed, chambers often spontaneously reignite if restarted after being shut down for a few seconds. Unless designed for re-ignition, however, many rockets cannot be restarted once cooled without at least minor maintenance, such as replacement of the pyrotechnic igniter, or even refueling of the propellants.
Jet physics
Rocket jets vary depending on the engine, its design altitude, the current altitude, the thrust, and other factors.
Carbon-rich exhausts from kerosene-based fuels such as RP-1 are often orange in colour due to the black-body radiation of the unburnt particles, in addition to the blue Swan bands. Peroxide oxidiser-based rockets and hydrogen rocket jets contain largely steam and are nearly invisible to the naked eye but shine brightly in the ultraviolet and infrared ranges. Jets from solid-propellant rockets can be highly visible, as the propellant frequently contains metals such as elemental aluminium which burns with an orange-white flame and adds energy to the combustion process. Rocket engines which burn liquid hydrogen and oxygen will exhibit a nearly transparent exhaust, due to it being mostly superheated steam (water vapour), plus some unburned hydrogen.
The nozzle is usually over-expanded at sea level, and the exhaust can exhibit visible shock diamonds through a schlieren effect caused by the incandescence of the exhaust gas.
The shape of the jet varies for a fixed-area nozzle as the expansion ratio varies with altitude: at high altitude all rockets are grossly under-expanded, and a quite small percentage of exhaust gases actually end up expanding forwards.
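Over- and under-expansion follow from the pressure term in the standard thrust equation, F = m_dot * ve + (pe - pa) * Ae. A hedged sketch with assumed numbers for a fixed nozzle:

```python
# With a fixed exit area, the same nozzle is over-expanded at sea level
# (exit pressure p_e below ambient p_a) and under-expanded in vacuum
# (p_e above p_a = 0). All numbers below are illustrative assumptions.

m_dot = 250.0   # kg/s, assumed propellant mass flow
ve = 3000.0     # m/s, assumed exhaust velocity at the exit plane
p_e = 40e3      # Pa, assumed exit-plane pressure of the fixed nozzle
A_e = 4.0       # m^2, assumed nozzle exit area

for label, p_a in (("sea level", 101.3e3), ("vacuum", 0.0)):
    thrust = m_dot * ve + (p_e - p_a) * A_e
    print(f"{label:9s}: F ~ {thrust/1e3:5.0f} kN")
```

The negative pressure term at sea level is the over-expansion loss; in vacuum the same term adds thrust but the gas is under-expanded and some expansion is wasted sideways and forwards.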
Types of rocket engines
Physically powered
Chemically powered
Electrically powered
Thermal
Preheated
Solar thermal
The solar thermal rocket would make use of solar power to directly heat reaction mass, and therefore does not require an electrical generator as most other forms of solar-powered propulsion do. A solar thermal rocket only has to carry the means of capturing solar energy, such as concentrators and mirrors. The heated propellant is fed through a conventional rocket nozzle to produce thrust. The engine thrust is directly related to the surface area of the solar collector and to the local intensity of the solar radiation and inversely proportional to the Isp.
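The stated proportionalities can be made explicit under a simple energy-balance assumption (all of the collected solar power ends up as jet power, F * ve / 2); every number below is illustrative.

```python
# Assuming collected power P = eta * I * A all becomes jet power
# P_jet = F * ve / 2, thrust is F = 2 * eta * I * A / (g0 * Isp):
# proportional to collector area and intensity, inverse in Isp.

g0 = 9.80665  # m/s^2, standard gravity

def solar_thermal_thrust(eta, intensity, area, isp):
    return 2 * eta * intensity * area / (g0 * isp)

# 100 m^2 concentrator at 1361 W/m^2 (near-Earth solar flux), 50% efficient:
for isp in (400.0, 800.0):  # s, assumed specific impulse of the heated gas
    f = solar_thermal_thrust(eta=0.5, intensity=1361.0, area=100.0, isp=isp)
    print(f"Isp = {isp:5.0f} s -> F ~ {f:.1f} N")
```

Doubling the collector area doubles the thrust, while doubling the Isp halves it, as the paragraph above states.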
Beamed thermal
Nuclear thermal
Nuclear
Nuclear propulsion includes a wide variety of propulsion methods that use some form of nuclear reaction as their primary power source. Various types of nuclear propulsion have been proposed, and some of them tested, for spacecraft applications.
History of rocket engines
According to the writings of the Roman author Aulus Gellius, the earliest known example of jet propulsion was in c. 400 BC, when a Greek Pythagorean named Archytas propelled a wooden bird along wires using steam. However, it was not powerful enough to take off under its own thrust.
The aeolipile described in the first century BC, often known as Hero's engine, consisted of a pair of steam rocket nozzles mounted on a bearing. It was created almost two millennia before the Industrial Revolution but the principles behind it were not well understood, and it was not developed into a practical power source.
The availability of black powder to propel projectiles was a precursor to the development of the first solid rocket. Ninth-century Chinese Taoist alchemists discovered black powder while searching for the elixir of life; this accidental discovery led to fire arrows, which were the first rocket engines to leave the ground.
It is stated that "the reactive forces of incendiaries were probably not applied to the propulsion of projectiles prior to the 13th century". A turning point in rocket technology emerged with a short manuscript entitled Liber Ignium ad Comburendos Hostes (abbreviated as The Book of Fires). The manuscript is composed of recipes for creating incendiary weapons from the mid-eighth to the end of the thirteenth centuries, two of which are rockets. The first recipe calls for one part of colophonium and sulfur added to six parts of saltpeter (potassium nitrate) dissolved in laurel oil, then inserted into hollow wood and lit to "fly away suddenly to whatever place you wish and burn up everything". The second recipe combines one pound of sulfur, two pounds of charcoal, and six pounds of saltpeter, all finely powdered on a marble slab. This powder mixture is packed firmly into a long and narrow case. The introduction of saltpeter into pyrotechnic mixtures marked the shift from hurled Greek fire to self-propelled rocketry.
Articles and books on the subject of rocketry appeared increasingly from the fifteenth through seventeenth centuries. In the sixteenth century, German military engineer Conrad Haas (1509–1576) wrote a manuscript which introduced the construction of multi-staged rockets.
Rocket engines were also put to use by Tippu Sultan, the king of Mysore. These usually consisted of a tube of soft hammered iron about long and diameter, closed at one end, packed with black powder propellant and strapped to a shaft of bamboo about long. A rocket carrying about one pound of powder could travel almost . These 'rockets', fitted with swords, would travel several meters in the air before coming down with the sword edges facing the enemy. They were used very effectively against the British Empire.
Modern rocketry
Slow development of this technology continued up to the later 19th century, when the Russian Konstantin Tsiolkovsky first wrote about liquid-fuelled rocket engines. He was the first to develop the Tsiolkovsky rocket equation, though it was not widely published for some years.
The modern solid- and liquid-fuelled engines became realities early in the 20th century, thanks to the American physicist Robert Goddard. Goddard was the first to use a De Laval nozzle on a solid-propellant (gunpowder) rocket engine, doubling the thrust and increasing the efficiency by a factor of about twenty-five. This was the birth of the modern rocket engine. He calculated from his independently derived rocket equation that a reasonably sized rocket, using solid fuel, could place a one-pound payload on the Moon.
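The rocket equation referred to here is, in its standard modern form, delta_v = ve * ln(m0/m1). A quick illustrative calculation with assumed numbers, not Goddard's actual figures:

```python
# Tsiolkovsky rocket equation: ideal velocity change for exhaust
# velocity ve and mass ratio m0/m1 (initial over final mass).
import math

def delta_v(ve: float, m0: float, m1: float) -> float:
    return ve * math.log(m0 / m1)

# An assumed solid rocket with ve = 2,000 m/s and a mass ratio of 5:
print(f"delta-v ~ {delta_v(2000.0, m0=5.0, m1=1.0):,.0f} m/s")
```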
The era of liquid-fuel rocket engines
Goddard began to use liquid propellants in 1921, and in 1926 became the first to launch a liquid-fuelled rocket. Goddard pioneered the use of the De Laval nozzle, lightweight propellant tanks, small light turbopumps, thrust vectoring, the smoothly-throttled liquid fuel engine, regenerative cooling, and curtain cooling.
During the late 1930s, German scientists, such as Wernher von Braun and Hellmuth Walter, investigated installing liquid-fuelled rockets in military aircraft (Heinkel He 112, He 111, He 176 and Messerschmitt Me 163).
The turbopump was first employed by German scientists in World War II. Until then, cooling the nozzle had been problematic, and the A4 ballistic missile used dilute alcohol for fuel, which reduced the combustion temperature sufficiently.
Staged combustion (Замкнутая схема) was first proposed by Alexey Isaev in 1949. The first staged combustion engine was the S1.5400 used in the Soviet planetary rocket, designed by Melnikov, a former assistant to Isaev. About the same time (1959), Nikolai Kuznetsov began work on the closed cycle engine NK-9 for Korolev's orbital ICBM, GR-1. Kuznetsov later evolved that design into the NK-15 and NK-33 engines for the unsuccessful Lunar N1 rocket.
In the West, the first laboratory staged-combustion test engine was built in Germany in 1963, by Ludwig Boelkow.
Liquid hydrogen engines were first successfully developed in America: the RL10 engine first flew in 1962. Its successor, the Rocketdyne J-2, was used in the Apollo program's Saturn V rocket to send humans to the Moon. The high specific impulse and low density of liquid hydrogen lowered the upper-stage mass, and thus the overall size and cost of the vehicle.
The record for most engines on one rocket flight is 44, set by NASA in 2016 on a Black Brant.
| Technology | Basics_10 | null |
262205 | https://en.wikipedia.org/wiki/Psophia | Psophia | Psophia is a genus of birds restricted to the humid forests of the Amazon and Guiana Shield in South America. It is the only genus in the family Psophiidae. Birds in the genus are commonly known as trumpeters, due to the trumpeting or cackling threat call of the males. The three species resemble slightly taller, longer-legged chickens in size and appearance; they measure long and weigh . They are rotund birds with long, flexible necks and legs, downward-curving bills and a “hunched” appearance. Their heads are small, but their eyes are relatively large, making them look inquisitive and "good-natured". The plumage is soft, resembling fur or velvet on the head and neck. It is mostly black, with purple, green, or bronze iridescence, particularly on the wing coverts and the lower neck. In the best-known taxa, the secondary and tertial flight feathers are white, grey, or greenish-black and hairlike, falling over the lower back, which is the same colour. These colours give the three generally accepted species their names.
Taxonomy and systematics
The genus Psophia was introduced in 1758 by the Swedish naturalist Carl Linnaeus, in the tenth edition of his Systema Naturae, as containing a single species, the grey-winged trumpeter (Psophia crepitans). The genus name is from the Ancient Greek psophos meaning "noise".
The genus' taxonomy is far from settled; anywhere from three to six species (with varying numbers of subspecies) are recognized by different taxonomic systems.
The International Ornithological Committee's treatment is the most conservative. They recognize three species, two of which have three subspecies:
Grey-winged trumpeter, Psophia crepitans
P. c. crepitans
P. c. napensis
P. c. ochroptera
Pale-winged trumpeter, Psophia leucoptera
Dark-winged trumpeter, Psophia viridis
P. v. viridis
P. v. dextralis
P. v. obscura
The Clements taxonomy splits P. v. dextralis and adds English names to the subspecies:
Gray-winged trumpeter, Psophia crepitans
P. c. crepitans (gray-winged)
P. c. napensis (Napo)
P. c. ochroptera (ochre-winged)
Pale-winged trumpeter, Psophia leucoptera
Dark-winged trumpeter, Psophia viridis
P. v. viridis (green-backed)
P. v. dextralis (dusky-backed)
P. v. interjecta (Xingu)
P. v. obscura (black-backed)
BirdLife International's Handbook of the Birds of the World (HBW) recognizes six species:
Grey-winged trumpeter, Psophia crepitans
P. c. crepitans
P. c. napensis
Ochre-winged trumpeter, Psophia ochroptera
White-winged trumpeter, Psophia leucoptera
Green-winged trumpeter, Psophia viridis
Olive-winged trumpeter, Psophia dextralis
P. d. dextralis
P. d. interjecta
Black-winged trumpeter, Psophia obscura
Traditionally, only three species of trumpeters have been recognised. A 2008 review of the morphology of the dark-winged trumpeter recommended that it be divided into three species. A 2010 review of the phylogeny and biogeography of all members of the family suggested a total of eight species: two in the grey-winged trumpeter complex, two in the pale-winged trumpeter complex, and four in the dark-winged trumpeter complex.
Behaviour and ecology
Trumpeters fly weakly but run fast; they can easily outrun dogs. They are also capable of swimming across rivers. They spend most of the day in noisy flocks, sometimes numbering more than 100, on the forest floor. They feed on fallen fruit (particularly fruit knocked down by monkeys) and also eat small numbers of arthropods, including ants and flies, and even some reptiles and amphibians. At night they fly with difficulty into trees to roost above the ground.
Trumpeters nest in a hole in a tree or in the crown of a palm tree. They lay 2 to 5 eggs with rough, white shells, averaging about . In the pale-winged trumpeter and the grey-winged trumpeter, groups of adults care for a single clutch.
Relationship with humans
Trumpeters are often used as "guard dogs" because they call loudly when alarmed, become tame easily, and are believed to be adept at killing snakes. One source states their skill at hunting snakes as fact, and the nineteenth-century botanist Richard Spruce gave an account of the friendliness and snake-killing prowess of a tame grey-winged trumpeter; for these reasons, Spruce recommended that England import trumpeters to India. Another source, however, describes this prowess only as "reputed".
| Biology and health sciences | Gruiformes | Animals |
262216 | https://en.wikipedia.org/wiki/Limpkin | Limpkin | The limpkin (Aramus guarauna), also called carrao, courlan, and crying bird, is a large wading bird related to rails and cranes, and the only extant species in the family Aramidae. It is found mostly in wetlands in warm parts of the Americas, from Florida to northern Argentina, but has been spotted as far north as Wisconsin and Southern Ontario. It feeds on molluscs, with the diet dominated by apple snails of the genus Pomacea. Its name derives from its seeming limp when it walks.
Taxonomy and systematics
The limpkin is placed in the family Aramidae, which is in turn placed within the crane and rail order Gruiformes. The limpkin had been suggested to be close to the ibis and spoonbill family Threskiornithidae, based upon shared bird lice. The Sibley–Ahlquist taxonomy of birds, based upon DNA–DNA hybridization, suggested that the limpkin's closest relatives were the finfoots (Heliornithidae), and Sibley and Monroe even placed the species in that family in 1990. Later studies found little support for this relationship; more recent DNA studies have instead confirmed a close relationship with the cranes in particular, with the limpkin remaining a family close to the cranes and the two together being sister taxa to the trumpeters.
Although the limpkin is the only extant species in the family today, several fossils of extinct Aramidae are known from across the Americas. The earliest known species, Aramus paludigrus, is dated to the middle Miocene, while the oldest supposed members of the family, Aminornis and Loncornis, have been found in early Oligocene deposits in Argentina, although whether these are indeed related is not certain; in fact, Loncornis seems to be a misidentified mammal bone. Another Oligocene fossil from Europe, Parvigrus pohli (family Parvigruidae), has been described as a mosaic of the features shared by the limpkins and the cranes. It shares many morphological features with the cranes and limpkins, but was much smaller than either group and more rail-like in its proportions. In the paper describing the fossil, Gerald Mayr suggested that it was similar to the stem species of the grues (the cranes and limpkins), and that the limpkins evolved their massively long bills as a result of specialising in feeding on snails, while the cranes evolved into long-legged forms to walk and probe on open grasslands.
Subspecies
Between 1856 and 1934, the limpkin was treated as two species, one in South America (Aramus guarauna) and the other found in Central America, the Caribbean, and Florida (Aramus pictus). Today, it is treated as a single species with four subspecies. Along with the nominate subspecies A. g. guarauna, the subspecies A. g. dolosus, A. g. elucus (both J. L. Peters, 1925), and A. g. pictus (F. A. A. Meyer, 1794) are recognized. The differences between the subspecies relate to slight variations in size and plumage.
Aramus guarauna guarauna - South America (except the arid west coast, the Andes and extreme south)
Aramus guarauna pictus - Florida, Georgia, The Bahamas, Cuba and Jamaica
Aramus guarauna elucus - Hispaniola and (formerly) Puerto Rico
Aramus guarauna dolosus - Southwestern Mexico to Panama
Description
The limpkin is a somewhat large bird, long, with a wingspan of . Body mass ranges from , averaging . The males are slightly larger than the females, but there is no difference in plumage. Its plumage is drab, dark brown with an olive luster above. The feathers of the head, neck, wing coverts, and much of the back and underparts (except the rear) are marked with white, making the body look streaked and the head and neck light gray. It has long, dark-gray legs and a long neck. Its bill is long, heavy, downcurved, and yellowish with a darker tip. The bill is slightly open near, but not at, the tip, giving it a tweezers-like action for removing snails from their shells, and in many individuals the tip curves slightly to the right, like the apple snails' shells. The white markings are slightly less conspicuous in first-year birds. Its wings are broad and rounded and its tail is short. It is often confused with the immature American white ibis.
This bird is easier to hear than see. Its common vocalization is a loud wild wail or scream with some rattling quality, represented as "kwEEEeeer or klAAAar." This call is most often given at night and at dawn and dusk. Other calls include "wooden clicking", clucks, and in alarm, a "piercing bihk, bihk...".
Distribution and habitat
The limpkin occurs from peninsular Florida (and the Okefenokee Swamp in southern Georgia) and southern Mexico through the Caribbean and Central America to northern Argentina. In South America, it occurs widely east of the Andes; west of them its range extends only to the Equator.
It inhabits freshwater marshes and swamps, often with tall reeds, as well as mangroves. In the Caribbean, it also inhabits dry brushland. In Mexico and northern Central America, it occurs at altitudes up to . In Florida, the distribution of apple snails is the best predictor of where limpkins can be found.
The limpkin undertakes some localized migrations, although the extent of these is not fully understood. In some northern parts of the range, females (and a few males) leave the breeding areas at the end of summer and return at the end of winter. In Brazil, birds breeding in some seasonal marshes leave during the dry season and return with the rains. Birds may also migrate between Florida and Cuba, as several limpkins have been reported on the Florida Keys and Dry Tortugas, but these records may also represent vagrants or post-breeding dispersal. One study in Florida using wing tags found limpkins dispersed up to away from the breeding site. This tendency may explain vagrant limpkins seen in other parts of the United States and at sea near the Bahamas.
Behavior and ecology
Limpkins are active during the day, but also forage at night. Where they are not persecuted, they are also very tame and approachable. Even so, they are usually found near cover. They are not aggressive for the most part, being unconcerned by other species and rarely fighting with members of their own species.
Because of their long toes, they can stand on floating water plants. They also swim well, both as adults and as newly hatched chicks, but seldom do so. They fly strongly, the neck projecting forward and the legs backward, the wings beating shallowly and stiffly, with a jerky upstroke, above the horizontal most of the time.
Feeding
Limpkins forage primarily in shallow water and on floating vegetation such as water hyacinth and water lettuce. When wading, they seldom go deeper than half the body and are never submerged up to the back. They walk slowly with a gait described as "slightly undulating" and "giving the impression of lameness or limping", "high-stepping", or "strolling", looking for food if the water is clear or probing with the bill. They do not associate with other birds in mixed-species feeding flocks, as some other wading birds do, but may forage in small groups with others of their species.
The diet of the limpkin is dominated by apple snails (Ampullariidae) of the genus Pomacea, and the availability of this one mollusc has a significant effect on the bird's local distribution. Freshwater mussels, including Anodonta cowperiana, Villosa vibex, Elliptio strigosus, E. jayensis, and Uniomerus obesus, as well as other kinds of snails, are secondary food sources. Less important prey items are insects, frogs, lizards, crustaceans (such as crayfish), and worms, as well as seeds; these may be important in periods of drought or flooding, when birds are pushed into less-than-optimal foraging areas. At one site in Florida, moon snails and mussels were the most important prey items. Two studies, both in Florida, have examined the percentage composition of the limpkin's diet; one, based on stomach contents, found 70% Pomacea apple snails, 3% Campeloma, and 27% unidentified molluscs, probably Pomacea.
When a limpkin finds an apple snail, it carries it to land or very shallow water and places it in mud, the opening facing up. It deftly removes the operculum or "lid" and extracts the snail, seldom breaking the shell. The extraction takes 10 to 20 seconds. The orange-yellow yolk gland of female snails is usually shaken loose and not eaten. It often leaves piles of empty shells at favored spots.
Reproduction and breeding
Males have exclusive territories, which can vary in size from . In large, uniform swamps, nesting territories can often be clumped together, in the form of large colonies. These are vigorously defended, with males flying to the territory edges to challenge intruders and passing limpkins being chased out of the territory. Territorial displays between males at boundaries include ritualized charging and wing-flapping. Females may also participate in territorial defense, but usually only against other females or juveniles. Territories may be maintained year-round or abandoned temporarily during the nonbreeding season, usually due to lack of food.
Limpkins may be either monogamous, with females joining a male's territory, or serially polyandrous, with two or more females joining a male. With the monogamous pairs, banding studies have shown that a small number of pairs reform the following year (four out of 18 pairs).
Nests may be built in a wide variety of places – on the ground, in dense floating vegetation, in bushes, or at any height in trees. They are bulky structures of rushes, sticks, or other materials. Nest building is undertaken by the male initially, which constructs the nest in his territory prior to pair-bond formation. Unpaired females visit a number of territories before settling on a male with which to breed. Males may initially challenge and fight off prospective mates, and may not accept first-year females as mates. Pair-bond formation may take a few weeks. Courtship feeding is part of the bonding process, where males catch and process a snail and then feed it to the female.
The clutch consists of three to eight eggs, with five to seven being typical and averaging 5.5, which measure . The egg color is highly variable. Their background color ranges from gray-white through buff to deep olive, and they are marked with light-brown and sometimes purplish-gray blotches and speckles. The eggs are laid daily until the clutch is complete, and incubation is usually delayed until the clutch is completed. Both parents incubate the eggs during the day, but only the female incubates at night. The shift length is variable, but the male incubates for longer during the day. The male remains territorial during incubation, and leaves the clutch to chase off intruders; if this happens, the female returns quickly to the eggs. The incubation period is about 27 days, and all the eggs hatch within 24 hours of each other.
The young hatch covered with down, capable of walking, running, and swimming. They follow their parents to a platform of aquatic vegetation, where they are brooded. They are fed by both parents; they reach adult size at 7 weeks and leave their parents at about 16 weeks.
Ecology
Limpkins are reported to be attacked and eaten by American alligators. Adults with serious foot and leg injuries have also been reported, suggesting they may have been attacked by turtles while standing on floating vegetation. Their nests are apparently preyed upon by snakes, raccoons, crows, and muskrats. In times of drought, foraging adults may be victims of kleptoparasitism by snail kites, and attempted theft of apple snails caught by limpkins has also been observed in boat-tailed grackles.
Limpkins in Florida were examined for parasites, which included trematodes, nematodes, and biting lice. Two biting lice species were found, Laemobothrion cubense and Rallicola funebris. The trematode Prionosoma serratum was found in the intestines of some birds; this species may enter the bird after first infecting apple snails (this has been shown to be the route of infection for a closely related trematode to infect snail kites). Nematodes Amidostomum acutum and Strongyloides spp. are also ingested and live in the gut.
Relationship with humans
Many of the limpkin's names across its range are onomatopoeic and reflect the bird's call; for example, carau in Argentina, carrao in Venezuela, and guareáo in Cuba. The species also has a range of common names that refer to its call, for example lamenting bird, or to its supposed gait, crippled bird. The limpkin does not feature much in folklore, although in the Amazon people believe that when the limpkin starts to call, the river will not rise any more. Its call has been used for jungle sound effects in Tarzan films and for the hippogriff in the film Harry Potter and the Prisoner of Azkaban.
| Biology and health sciences | Gruiformes | Animals |
262252 | https://en.wikipedia.org/wiki/Pyrolysis | Pyrolysis | Pyrolysis is the process of thermal decomposition of materials at elevated temperatures, often in an inert atmosphere without access to oxygen.
Etymology
The word pyrolysis is coined from the Greek-derived elements pyro- (from Ancient Greek πῦρ : pûr - "fire, heat, fever") and lysis (λύσις : lúsis - "separation, loosening").
Applications
Pyrolysis is most commonly used in the treatment of organic materials. It is one of the processes involved in the charring of wood or pyrolysis of biomass. In general, pyrolysis of organic substances produces volatile products and leaves char, a carbon-rich solid residue. Extreme pyrolysis, which leaves mostly carbon as the residue, is called carbonization. Pyrolysis is considered one of the steps in the processes of gasification or combustion. Laypeople often confuse pyrolysis gas with syngas. Pyrolysis gas has a high percentage of heavy tar fractions, which condense at relatively high temperatures, preventing its direct use in gas burners and internal combustion engines, unlike syngas.
The process is used heavily in the chemical industry, for example, to produce ethylene, many forms of carbon, and other chemicals from petroleum, coal, and even wood, or to produce coke from coal. It is used also in the conversion of natural gas (primarily methane) into hydrogen gas and solid carbon char, recently introduced on an industrial scale. Aspirational applications of pyrolysis would convert biomass into syngas and biochar, waste plastics back into usable oil, or waste into safely disposable substances.
Terminology
Pyrolysis is one of the various types of chemical degradation processes that occur at higher temperatures (above the boiling point of water or other solvents). It differs from other processes like combustion and hydrolysis in that it usually does not involve the addition of other reagents such as oxygen (O2, in combustion) or water (in hydrolysis). Pyrolysis produces solids (char), condensable liquids (light and heavy oils and tar), and non-condensable gases.
Pyrolysis is different from gasification. In the chemical process industry, pyrolysis refers to a partial thermal degradation of carbonaceous materials that takes place in an inert (oxygen-free) atmosphere and produces gases, liquids, and solids. Pyrolysis can be extended to full gasification, which produces mainly gaseous output, often with the addition of, for example, steam to gasify residual carbonaceous solids; see steam reforming.
Types
Specific types of pyrolysis include:
Carbonization, the complete pyrolysis of organic matter, which usually leaves a solid residue that consists mostly of elemental carbon.
Methane pyrolysis, the direct conversion of methane to hydrogen fuel and separable solid carbon, sometimes using molten metal catalysts.
Hydrous pyrolysis, in the presence of superheated water or steam, producing hydrogen and substantial atmospheric carbon dioxide.
Dry distillation, as in the original production of sulfuric acid from sulfates.
Destructive distillation, as in the manufacture of charcoal, coke and activated carbon.
Charcoal burning, the production of charcoal.
Tar production by destructive distillation of wood in tar kilns.
Caramelization of sugars.
High-temperature cooking processes such as roasting, frying, toasting, and grilling.
Cracking of heavier hydrocarbons into lighter ones, as in oil refining.
Thermal depolymerization, which breaks down plastics and other polymers into monomers and oligomers.
Ceramization involving the formation of polymer derived ceramics from preceramic polymers under an inert atmosphere.
Catagenesis, the natural conversion of buried organic matter to fossil fuels.
Flash vacuum pyrolysis, used in organic synthesis.
Other pyrolysis types come from a different classification that focuses on the pyrolysis operating conditions and heating system used, which have an impact on the yield of the pyrolysis products.
History
Pyrolysis has been used for turning wood into charcoal since ancient times. The ancient Egyptians used the liquid fraction obtained from the pyrolysis of cedar wood in their embalming process.
The dry distillation of wood remained the major source of methanol into the early 20th century.
Pyrolysis was instrumental in the discovery of many chemical substances, such as phosphorus from ammonium sodium hydrogen phosphate in concentrated urine, oxygen from mercuric oxide, and various nitrates.
General processes and mechanisms
Pyrolysis generally consists in heating the material above its decomposition temperature, breaking chemical bonds in its molecules. The fragments usually become smaller molecules, but may combine to produce residues with larger molecular mass, even amorphous covalent solids.
In many settings, some amounts of oxygen, water, or other substances may be present, so that combustion, hydrolysis, or other chemical processes may occur besides pyrolysis proper. Sometimes those chemicals are added intentionally, as in the burning of firewood, in the traditional manufacture of charcoal, and in the steam cracking of crude oil.
Conversely, the starting material may be heated in a vacuum or in an inert atmosphere to avoid chemical side reactions (such as combustion or hydrolysis). Pyrolysis in a vacuum also lowers the boiling point of the byproducts, improving their recovery.
When organic matter is heated at increasing temperatures in open containers, the following processes generally occur, in successive or overlapping stages:
Below about 100 °C, volatiles, including some water, evaporate. Heat-sensitive substances, such as vitamin C and proteins, may already partially change or decompose at this stage.
At about 100 °C or slightly higher, any remaining water that is merely absorbed in the material is driven off. This process consumes a lot of energy, so the temperature may stop rising until all such water has evaporated. Water trapped in the crystal structure of hydrates may come off at somewhat higher temperatures.
Some solid substances, like fats, waxes, and sugars, may melt and separate.
Between 100 and 500 °C, many common organic molecules break down. Most sugars start decomposing at 160–180 °C. Cellulose, a major component of wood, paper, and cotton fabrics, decomposes at about 350 °C. Lignin, another major wood component, starts decomposing at about 350 °C but continues releasing volatile products up to 500 °C. The decomposition products usually include water, carbon monoxide, and/or carbon dioxide, as well as a large number of organic compounds. Gases and volatile products leave the sample, and some of them may condense again as smoke. Generally, this process also absorbs energy. Some volatiles may ignite and burn, creating a visible flame. The non-volatile residues typically become richer in carbon and form large disordered molecules, with colors ranging between brown and black. At this point the matter is said to have been "charred" or "carbonized".
At 200–300 °C, if oxygen has not been excluded, the carbonaceous residue may start to burn in a highly exothermic reaction, often with little or no visible flame. Once carbon combustion starts, the temperature rises spontaneously, turning the residue into a glowing ember and releasing carbon dioxide and/or monoxide. At this stage, some of the nitrogen still remaining in the residue may be oxidized into nitrogen oxides such as NO and NO2. Sulfur and other elements like chlorine and arsenic may be oxidized and volatilized at this stage.
Once combustion of the carbonaceous residue is complete, a powdery or solid mineral residue (ash) is often left behind, consisting of inorganic oxidized materials of high melting point. Some of the ash may have been carried off during combustion, entrained in the gases as fly ash or particulate emissions. Metals present in the original matter usually remain in the ash as oxides or carbonates, such as potash. Phosphorus, from materials such as bone, phospholipids, and nucleic acids, usually remains as phosphates.
Safety challenges
Because pyrolysis takes place at high temperatures which exceed the autoignition temperature of the produced gases, an explosion risk exists if oxygen is present. Pyrolysis systems therefore require careful temperature control, which can be accomplished with an open-source pyrolysis controller. Pyrolysis also produces various toxic gases, mainly carbon monoxide. The greatest risk of fire, explosion, and release of toxic gases comes when the system is starting up, shutting down, operating intermittently, or during operational upsets.
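As a sketch of what such temperature control involves, a minimal PID loop of the kind a pyrolysis controller might run is shown below. The setpoint, gains, and hardware I/O functions are all hypothetical; real controllers would also need the safety interlocks for startup and shutdown described next.

```python
# Minimal PID temperature-control sketch. read_temperature and
# set_heater_power are hypothetical stand-ins for real hardware I/O.
import time

SETPOINT_C = 500.0          # assumed target pyrolysis temperature, degrees C
KP, KI, KD = 2.0, 0.1, 0.5  # assumed controller gains

def pid_loop(read_temperature, set_heater_power, dt=1.0):
    integral, prev_error = 0.0, 0.0
    while True:
        error = SETPOINT_C - read_temperature()
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        # Clamp output to the heater's 0-100% duty-cycle range.
        output = max(0.0, min(100.0, KP*error + KI*integral + KD*derivative))
        set_heater_power(output)
        time.sleep(dt)
```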
Inert gas purging is essential to manage inherent explosion risks. The procedure is not trivial and failure to keep oxygen out has led to accidents.
Occurrence and uses
Clandestine chemistry
Conversion of CBD to THC can be brought about by pyrolysis.
Cooking
Pyrolysis has many applications in food preparation. Caramelization is the pyrolysis of sugars in food (often after the sugars have been produced by the breakdown of polysaccharides); the food browns and its flavor changes. The distinctive flavors are used in many dishes; for instance, caramelized onion is used in French onion soup. The temperatures needed for caramelization lie above the boiling point of water, and frying oil can easily rise above that point. Putting a lid on the frying pan keeps the water in, and some of it re-condenses, keeping the temperature too low for browning for a longer time.
Pyrolysis of food can also be undesirable, as in the charring of burnt food (at temperatures too low for the oxidative combustion of carbon to produce flames and burn the food to ash).
Coke, carbon, charcoals, and chars
Carbon and carbon-rich materials have desirable properties but are nonvolatile, even at high temperatures. Consequently, pyrolysis is used to produce many kinds of carbon; these can be used for fuel, as reagents in steelmaking (coke), and as structural materials.
Charcoal is a less smoky fuel than unpyrolyzed wood. Some cities ban, or used to ban, wood fires; when residents use only charcoal (and similarly treated rock coal, called coke), air pollution is significantly reduced. In cities where people do not generally cook or heat with fires, this is not needed. In the mid-20th century, "smokeless" legislation in Europe required cleaner-burning techniques, such as coke fuel and smoke-burning incinerators, as an effective measure to reduce air pollution.
The coke-making or "coking" process consists of heating the material in "coking ovens" to very high temperatures so that the molecules are broken down into lighter volatile substances, which leave the vessel, and a porous but hard residue that is mostly carbon and inorganic ash. The amount of volatiles varies with the source material, but is typically 25–30% of it by weight. High-temperature pyrolysis is used on an industrial scale to convert coal into coke. This is useful in metallurgy, where the higher temperatures are necessary for many processes, such as steelmaking. Volatile by-products of this process are also often useful, including benzene and pyridine. Coke can also be produced from the solid residue left from petroleum refining.
The original vascular structure of the wood and the pores created by escaping gases combine to produce a light and porous material. By starting with a dense wood-like material, such as nutshells or peach stones, one obtains a form of charcoal with particularly fine pores (and hence a much larger pore surface area), called activated carbon, which is used as an adsorbent for a wide range of chemical substances.
Biochar is the residue of incomplete organic pyrolysis, e.g., from cooking fires. It is a key component of the terra preta soils associated with ancient indigenous communities of the Amazon basin. Terra preta is much sought by local farmers for its superior fertility and capacity to promote and retain an enhanced suite of beneficial microbiota, compared to the typical red soil of the region. Efforts are underway to recreate these soils through biochar, the solid residue of pyrolysis of various materials, mostly organic waste.
Carbon fibers are filaments of carbon that can be used to make very strong yarns and textiles. Carbon fiber items are often produced by spinning and weaving the desired item from fibers of a suitable polymer, and then pyrolyzing the material at high temperature. The first carbon fibers were made from rayon, but polyacrylonitrile has become the most common starting material. For their first workable electric lamps, Joseph Wilson Swan and Thomas Edison used carbon filaments made by pyrolysis of cotton yarns and bamboo splinters, respectively.
Pyrolysis is the reaction used to coat a preformed substrate with a layer of pyrolytic carbon. This is typically done in a fluidized bed reactor heated to high temperature. Pyrolytic carbon coatings are used in many applications, including artificial heart valves.
Liquid and gaseous biofuels
Pyrolysis is the basis of several methods for producing fuel from biomass, i.e. lignocellulosic biomass. Crops studied as biomass feedstock for pyrolysis include native North American prairie grasses such as switchgrass and bred versions of other grasses such as Miscanthus giganteus. Other sources of organic matter as feedstock for pyrolysis include greenwaste, sawdust, waste wood, leaves, vegetables, nut shells, straw, cotton trash, rice hulls, and orange peels. Animal waste, including poultry litter, dairy manure, and potentially other manures, is also under evaluation. Some industrial byproducts are also suitable feedstocks, including paper sludge, distillers grain, and sewage sludge.
In the biomass components, the pyrolysis of hemicellulose happens between 210 and 310 °C. The pyrolysis of cellulose starts from 300 to 315 °C and ends at 360–380 °C, with a peak at 342–354 °C. Lignin starts to decompose at about 200 °C and continues until 1000 °C.
Synthetic diesel fuel by pyrolysis of organic materials is not yet economically competitive. Higher efficiency is sometimes achieved by flash pyrolysis, in which finely divided feedstock is quickly heated and held at reaction temperature for less than two seconds.
Syngas is usually produced by pyrolysis.
The low quality of oils produced through pyrolysis can be improved by physical and chemical processes, which might drive up production costs, but may make sense economically as circumstances change.
There is also the possibility of integrating with other processes such as mechanical biological treatment and anaerobic digestion. Fast pyrolysis is also investigated for biomass conversion. Fuel bio-oil can also be produced by hydrous pyrolysis.
Methane pyrolysis for hydrogen
Methane pyrolysis is an industrial process for producing "turquoise" hydrogen from methane by removing solid carbon from natural gas. This one-step process produces hydrogen in high volume at lower cost than steam reforming with carbon sequestration, releases no greenhouse gas, and requires no deep-well injection of carbon dioxide. Only water is released when the hydrogen is subsequently used as a fuel, whether for fuel-cell electric heavy-truck transportation, gas-turbine electric power generation, or industrial processes such as the production of ammonia fertilizer and cement. The process operates at around 1065 °C and allows the carbon to be removed easily as a solid byproduct. This industrial-quality solid carbon can then be sold or landfilled rather than released into the atmosphere, avoiding greenhouse-gas (GHG) emissions and groundwater pollution from a landfill. In 2015, Monolith Materials built a pilot plant in Redwood City, California, to study scaling up methane pyrolysis using renewable power in the process. The successful pilot led to a larger commercial-scale demonstration plant in Hallam, Nebraska, in 2016. As of 2020, this plant is operational and can produce around 14 metric tons of hydrogen per day. In 2021, the US Department of Energy backed Monolith Materials' plans for a major expansion with a $1 billion loan guarantee; the funding will help build a plant capable of producing 164 metric tons of hydrogen per day by 2024. Pilots with gas utilities and biogas plants are underway with companies such as Modern Hydrogen. Volume production is also being evaluated in the BASF "methane pyrolysis at scale" pilot plant, by the chemical engineering team at the University of California, Santa Barbara, and in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA). The electric power consumed for process heat is only about one-seventh of that consumed by the water-electrolysis route to hydrogen.
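The overall chemistry is a single endothermic decomposition of methane into solid carbon and hydrogen; the enthalpy given below is roughly the standard value for this reaction:

CH4(g) → C(s) + 2 H2(g)    ΔH° ≈ +75 kJ/mol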
The Australian company Hazer Group was founded in 2010 to commercialise technology originally developed at the University of Western Australia (UWA), and was listed on the ASX in December 2015. It is completing a commercial demonstration project to produce renewable hydrogen and graphite from wastewater, using iron ore as a process catalyst. The Commercial Demonstration Plant project is an Australian first and is expected to produce around 100 tonnes of fuel-grade hydrogen and 380 tonnes of graphite each year starting in 2023. Commissioning was originally targeted for the first quarter of 2022, but in December 2021 the company announced that a delay in fabricating the plant's reactor would push commissioning past that date. Hazer has since signed a collaboration agreement with Engie for a facility in France (May 2023), a memorandum of understanding with Chubu Electric and Chiyoda in Japan (April 2023), and an agreement with Suncor Energy and FortisBC to develop a 2,500-tonne-per-annum Burrard-Hazer hydrogen production plant in Canada (April 2022).
The American company C-Zero's technology converts natural gas into hydrogen and solid carbon. The hydrogen provides clean, low-cost energy on demand, while the carbon can be permanently sequestered. C-Zero announced in June 2022 that it closed a $34 million financing round led by SK Gas, a subsidiary of South Korea's second-largest conglomerate, the SK Group. SK Gas was joined by two other new investors, Engie New Ventures and Trafigura, one of the world's largest physical commodities trading companies, in addition to participation from existing investors including Breakthrough Energy Ventures, Eni Next, Mitsubishi Heavy Industries, and AP Ventures. Funding was for C-Zero's first pilot plant, which was expected to be online in Q1 2023. The plant may be capable of producing up to 400 kg of hydrogen per day from natural gas with no CO2 emissions.
One of the world's largest chemical companies, BASF, has been researching methane pyrolysis for hydrogen production for more than 10 years.
Ethylene
Pyrolysis is used to produce ethylene, the chemical compound produced on the largest scale industrially (>110 million tons/year in 2005). In this process, hydrocarbons from petroleum are heated to high temperature in the presence of steam; this is called steam cracking. The resulting ethylene is used to make antifreeze (ethylene glycol), PVC (via vinyl chloride), and many other polymers, such as polyethylene and polystyrene.
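As a representative example, the simplest steam-cracking feed, ethane, dehydrogenates to ethylene:

C2H6 → C2H4 + H2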
Semiconductors
The process of metalorganic vapour-phase epitaxy (MOVPE, also called MOCVD) entails pyrolysis of volatile organometallic compounds to give semiconductors, hard coatings, and other applicable materials. The reactions entail thermal degradation of precursors, with deposition of the inorganic component and release of the hydrocarbons as gaseous waste. Since it is an atom-by-atom deposition, the atoms organize themselves into crystals to form the bulk semiconductor. Raw polycrystalline silicon is produced by the chemical vapor deposition of silane gas:

SiH4 → Si + 2 H2
Gallium arsenide, another semiconductor, forms upon co-pyrolysis of trimethylgallium and arsine.
Waste management
Pyrolysis can also be used to treat municipal solid waste and plastic waste. The main advantage is the reduction in volume of the waste. In principle, pyrolysis will regenerate the monomers (precursors) to the polymers that are treated, but in practice the process is neither a clean nor an economically competitive source of monomers.
In tire waste management, tire pyrolysis is a well-developed technology.
Other products from car tire pyrolysis include steel wires, carbon black and bitumen. The area faces legislative, economic, and marketing obstacles. Oil derived from tire rubber pyrolysis has a high sulfur content, which gives it high potential as a pollutant; consequently it should be desulfurized.
Alkaline pyrolysis of sewage sludge at a low temperature of 500 °C can enhance H2 production with in-situ carbon capture. The use of NaOH (sodium hydroxide) has the potential to produce an H2-rich gas that can be used for fuel cells directly.
In early November 2021, the U.S. State of Georgia announced a joint effort with Igneo Technologies to build an $85 million large electronics recycling plant in the Port of Savannah. The project will focus on lower-value, plastics-heavy devices in the waste stream, using multiple shredders and furnaces based on pyrolysis technology.
One-stepwise and two-stepwise pyrolysis of tobacco waste
Pyrolysis has also been used to try to mitigate tobacco waste. In one study, tobacco waste was separated into two categories: tobacco leaf waste (TLW), defined as waste from cigarettes, and tobacco stick waste (TSW), defined as waste from electronic cigarettes. Both TLW and TSW were dried at 80 °C for 24 hours, stored in a desiccator, and ground so that the contents were uniform. Tobacco waste (TW) also contains inorganic (metal) content, which was determined using an inductively coupled plasma-optical emission spectrometer (ICP-OES). Thermogravimetric analysis was used to thermally degrade four samples (TLW, TSW, glycerol, and guar gum) under specific dynamic temperature conditions. About one gram each of TLW and TSW was used in the pyrolysis tests, which were run under CO2 and N2 atmospheres (each at a flow rate of 100 mL min−1) inside a tubular reactor built from quartz tubing, with external heating provided by a tubular furnace. The pyrogenic products were classified into three phases: biochar, the solid residue produced by the reactor at 650 °C; liquid hydrocarbons, collected in a cold solvent trap and sorted by chromatography; and gaseous pyrolysates, analyzed using an online micro-GC unit.
Two different types of experiments were conducted: one-stepwise pyrolysis and two-stepwise pyrolysis. One-stepwise pyrolysis used a constant heating rate (10 °C min−1) from 30 to 720 °C. In two-stepwise pyrolysis, the pyrolysates from the first step were passed through a second heating zone controlled isothermally at 650 °C. The two-stepwise tests focused primarily on how well CO2 affects carbon redistribution when heat is added through the second heating zone.
The thermolytic behaviors of TLW and TSW in the CO2 and N2 environments were noted first. For both TLW and TSW, the behaviors were identical at or below 660 °C in the two environments. Differences emerge as temperatures rise above 660 °C, where the residual mass percentages decrease significantly in the CO2 environment compared with the N2 environment. This observation is likely due to the Boudouard reaction, in which spontaneous gasification of carbon occurs when temperatures exceed 710 °C; that the effect was seen at lower temperatures is most likely due to the catalytic capabilities of inorganics in TLW. Further investigation by ICP-OES measurement found that a fifth of the residual mass percentage was Ca species. CaCO3 is used in cigarette papers and filter material, suggesting that degradation of CaCO3 leaves CO2 reacting with CaO in a dynamic equilibrium; this would explain the mass decay between 660 °C and 710 °C. Differential thermogram (DTG) peaks for TLW were then compared with those of TSW. TLW had four distinctive peaks, at 87, 195, 265, and 306 °C, whereas TSW had two major drop-offs at 200 and 306 °C with one spike in between; the four peaks indicate that TLW contains more diverse types of additives than TSW. The residual mass percentages of TLW and TSW were further compared: the residual mass of TSW was less than that of TLW in both the CO2 and N2 environments, suggesting that TSW has higher quantities of additives than TLW.
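For reference, the Boudouard reaction mentioned above is the temperature-dependent equilibrium between carbon and its two oxides; above roughly 700 °C the equilibrium shifts toward carbon monoxide, which is why the carbonaceous residue gasifies at high temperature:

C(s) + CO2(g) ⇌ 2 CO(g)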
The one-stepwise pyrolysis experiment showed different results for the CO2 and N2 environments. Five notable gases evolved during the process: hydrogen, methane, ethane, carbon dioxide, and ethylene, all produced as the thermolytic rate of TLW began to be retarded at or above 500 °C. The thermolytic rate begins at the same temperature in both environments, but the concentrations of hydrogen, ethane, ethylene, and methane produced are higher in the N2 environment than in the CO2 environment. The concentration of CO in the CO2 environment becomes significantly greater as temperatures rise past 600 °C, owing to CO2 being liberated from CaCO3 in TLW; this large increase in CO dilutes the other gases, which is why their concentrations are lower in the CO2 environment. Pyrolysis is the redistribution of the carbon in carbon substrates into the three pyrogenic products, and the CO2 environment is more effective because the reduction of CO2 into CO allows pyrolysates to be oxidized to form CO. In conclusion, the CO2 environment allows a higher yield of gases relative to oil and biochar. When the same process is carried out for TSW, the trends are almost identical, so the same explanations apply to the pyrolysis of TSW.
Harmful chemicals were reduced in the CO2 environment because CO formation reduced the tar. One-stepwise pyrolysis was not very effective at activating CO2 for carbon rearrangement, as shown by the high quantities of liquid pyrolysates (tar). Two-stepwise pyrolysis in the CO2 environment produced greater concentrations of gases thanks to the second heating zone, held at a consistent 650 °C isothermally: more reactions between CO2 and gaseous pyrolysates over a longer residence time meant that CO2 could further convert pyrolysates into CO. The results showed that two-stepwise pyrolysis was an effective way to decrease tar content and increase gas concentration by about 10 wt.%, to 64.20 wt.% for TLW and 73.71 wt.% for TSW.
Thermal cleaning
Pyrolysis is also used for thermal cleaning, an industrial application to remove organic substances such as polymers, plastics and coatings from parts, products or production components like extruder screws, spinnerets and static mixers. During the thermal cleaning process, at elevated temperatures, organic material is converted by pyrolysis and oxidation into volatile organic compounds, hydrocarbons and carbonized gas. Inorganic elements remain.
Several types of thermal cleaning systems use pyrolysis:
Molten Salt Baths are among the oldest thermal cleaning systems; cleaning with a molten salt bath is very fast but carries the risk of dangerous splatters and other hazards connected with the use of salt baths, such as explosions or highly toxic hydrogen cyanide gas.
Fluidized Bed Systems use sand or aluminium oxide as the heating medium; these systems also clean very fast, and the medium does not melt or boil, nor emit any vapors or odors; the cleaning process takes one to two hours.
Vacuum Ovens use pyrolysis in a vacuum, avoiding uncontrolled combustion inside the cleaning chamber; the cleaning process takes 8 to 30 hours.
Burn-Off Ovens, also known as Heat-Cleaning Ovens, are gas-fired and used in the painting, coatings, electric motors and plastics industries for removing organics from heavy and large metal parts.
Fine chemical synthesis
Pyrolysis is used in the production of chemical compounds, mainly, but not only, in the research laboratory.
The area of boron-hydride clusters started with the study of the pyrolysis of diborane (B2H6) at ca. 200 °C. Products include the clusters pentaborane and decaborane. These pyrolyses involve not only cracking (to give H2), but also recondensation.
Nanoparticles, zirconia, and other oxides can be synthesized by feeding precursor solutions through an ultrasonic nozzle, in a process called ultrasonic spray pyrolysis (USP).
Other uses and occurrences
Pyrolysis is used to turn organic materials into carbon for the purpose of carbon-14 dating.
Pyrolysis liquids from slow pyrolysis of bark and hemp have been tested for antifungal activity against wood-decaying fungi, showing potential as substitutes for current wood preservatives, though further tests are still required. However, their ecotoxicity is highly variable: while some are less toxic than current wood preservatives, other pyrolysis liquids have shown high ecotoxicity, which may cause detrimental effects in the environment.
Pyrolysis of tobacco, paper, and additives, in cigarettes and other products, generates many volatile products (including nicotine, carbon monoxide, and tar) that are responsible for the aroma and negative health effects of smoking. Similar considerations apply to the smoking of marijuana and the burning of incense products and mosquito coils.
Pyrolysis occurs during the incineration of trash, potentially generating volatiles that are toxic or contribute to air pollution if not completely burned.
Laboratory or industrial equipment sometimes gets fouled by carbonaceous residues that result from coking, the pyrolysis of organic products that come into contact with hot surfaces.
PAHs generation
Polycyclic aromatic hydrocarbons (PAHs) can be generated from the pyrolysis of different solid waste fractions, such as hemicellulose, cellulose, lignin, pectin, starch, polyethylene (PE), polystyrene (PS), polyvinyl chloride (PVC), and polyethylene terephthalate (PET). PS, PVC, and lignin generate significant amounts of PAHs. Naphthalene is the most abundant of the PAHs generated.
When the temperature is increased from 500 to 900 °C, the yields of most PAHs increase. With increasing temperature, the percentage of light PAHs decreases and the percentage of heavy PAHs increases.
Study tools
Thermogravimetric analysis
Thermogravimetric analysis (TGA) is one of the most common techniques used to investigate pyrolysis free of heat- and mass-transfer limitations. The results can be used to determine mass-loss kinetics. Activation energies can be calculated using the Kissinger method or the peak-analysis least-squares method (PA-LSM).
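As an illustration of the Kissinger method, the minimal Python sketch below fits ln(β/Tp²) against 1/Tp, whose slope is −Ea/R; the heating rates and peak temperatures are made-up placeholder values, not measured data.

import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical TGA runs: heating rates (K/min) and DTG peak temperatures
beta = np.array([5.0, 10.0, 20.0, 40.0])          # heating rates
Tp = np.array([335.0, 345.0, 356.0, 368.0]) + 273.15  # peak temperatures, K

# Kissinger plot: ln(beta / Tp^2) versus 1/Tp is linear with slope -Ea/R
slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
Ea = -slope * R  # apparent activation energy, J/mol
print(f"Ea is approximately {Ea / 1000:.0f} kJ/mol")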
TGA can couple with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry. As the temperature increases, the volatiles generated from pyrolysis can be measured.
Macro-TGA
In TGA, the sample is loaded before the temperature is raised, and the heating rate is low (less than 100 °C min−1). Macro-TGA can use gram-scale samples to investigate the effects of pyrolysis with mass and heat transfer.
Pyrolysis–gas chromatography–mass spectrometry
Pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS) is an important laboratory procedure for determining the structure of compounds.
Machine learning
In recent years, machine learning has attracted significant research interest in predicting yields, optimizing parameters, and monitoring pyrolytic processes.
| Physical sciences | Other reactions | Chemistry |
262401 | https://en.wikipedia.org/wiki/Translation%20%28biology%29 | Translation (biology) | In biology, translation is the process in living cells in which proteins are produced using RNA molecules as templates. The generated protein is a sequence of amino acids. This sequence is determined by the sequence of nucleotides in the RNA. The nucleotides are considered three at a time. Each such triplet results in the addition of one specific amino acid to the protein being generated. The matching from nucleotide triplet to amino acid is called the genetic code. Translation is performed by ribosomes, large complexes of functional RNA and proteins. The entire process is called gene expression.
In translation, messenger RNA (mRNA) is decoded in a ribosome, outside the nucleus, to produce a specific amino acid chain, or polypeptide. The polypeptide later folds into an active protein and performs its functions in the cell. The polypeptide can also start folding during protein synthesis. The ribosome facilitates decoding by inducing the binding of complementary transfer RNA (tRNA) anticodon sequences to mRNA codons. The tRNAs carry specific amino acids that are chained together into a polypeptide as the mRNA passes through and is "read" by the ribosome.
Translation proceeds in three phases:
Initiation: The ribosome assembles around the target mRNA. The first tRNA is attached at the start codon.
Elongation: The most recent tRNA validated by the small ribosomal subunit (accommodation) transfers the amino acid it carries to the large ribosomal subunit, which binds it to one of the previously admitted tRNAs (transpeptidation). The ribosome then moves to the next mRNA codon to continue the process (translocation), creating an amino acid chain.
Termination: When a stop codon is reached, the ribosome releases the polypeptide. The ribosomal complex remains intact and moves on to the next mRNA to be translated.
In prokaryotes (bacteria and archaea), translation occurs in the cytosol, where the large and small subunits of the ribosome bind to the mRNA. In eukaryotes, translation occurs in the cytoplasm or across the membrane of the endoplasmic reticulum through a process called co-translational translocation. In co-translational translocation, the entire ribosome/mRNA complex binds to the outer membrane of the rough endoplasmic reticulum (ER), and the new protein is synthesized and released into the ER; the newly created polypeptide can be stored inside the ER for future vesicle transport and secretion outside the cell, or immediately secreted.
Many types of transcribed RNA, such as tRNA, ribosomal RNA, and small nuclear RNA, do not undergo a translation into proteins.
Several antibiotics act by inhibiting translation. These include anisomycin, cycloheximide, chloramphenicol, tetracycline, streptomycin, erythromycin, and puromycin. Prokaryotic ribosomes have a different structure from that of eukaryotic ribosomes, and thus antibiotics can specifically target bacterial infections without any harm to a eukaryotic host's cells.
Basic mechanisms
The basic process of protein production is the addition of one amino acid at a time to the end of a protein. This operation is performed by a ribosome. A ribosome is made up of two subunits, a small subunit, and a large subunit. These subunits come together before the translation of mRNA into a protein to provide a location for translation to be carried out and a polypeptide to be produced. The choice of amino acid type to add is determined by a messenger RNA (mRNA) molecule. Each amino acid added is matched to a three-nucleotide subsequence of the mRNA. For each such triplet possible, the corresponding amino acid is accepted. The successive amino acids added to the chain are matched to successive nucleotide triplets in the mRNA. In this way, the sequence of nucleotides in the template mRNA chain determines the sequence of amino acids in the generated amino acid chain.
The addition of an amino acid occurs at the C-terminus of the peptide; thus, translation is said to be amine-to-carboxyl directed.
The mRNA carries genetic information encoded as a ribonucleotide sequence from the chromosomes to the ribosomes. The ribonucleotides are "read" by translational machinery in a sequence of nucleotide triplets called codons. Each of those triplets codes for a specific amino acid.
The ribosome molecules translate this code to a specific sequence of amino acids. The ribosome is a multisubunit structure containing ribosomal RNA (rRNA) and proteins. It is the "factory" where amino acids are assembled into proteins.
Transfer RNAs (tRNAs) are small noncoding RNA chains (74–93 nucleotides) that transport amino acids to the ribosome. The repertoire of tRNA genes varies widely between species, with some bacteria having between 20 and 30 genes while complex eukaryotes could have thousands. tRNAs have a site for amino acid attachment, and a site called an anticodon. The anticodon is an RNA triplet complementary to the mRNA triplet that codes for their cargo amino acid.
Aminoacyl tRNA synthetases (enzymes) catalyze the bonding between specific tRNAs and the amino acids that their anticodon sequences call for. The product of this reaction is an aminoacyl-tRNA. The amino acid is joined by its carboxyl group to the 3' OH of the tRNA by an ester bond. When the tRNA has an amino acid linked to it, the tRNA is termed "charged". In bacteria, this aminoacyl-tRNA is carried to the ribosome by EF-Tu, where mRNA codons are matched through complementary base pairing to specific tRNA anticodons. Aminoacyl-tRNA synthetases that mispair tRNAs with the wrong amino acids can produce mischarged aminoacyl-tRNAs, which can result in inappropriate amino acids at the respective position in the protein. This "mistranslation" of the genetic code naturally occurs at low levels in most organisms, but certain cellular environments cause an increase in permissive mRNA decoding, sometimes to the benefit of the cell.
The ribosome has two binding sites for tRNA. They are the aminoacyl site (abbreviated A), and the peptidyl site/exit site (abbreviated P/E). Concerning the mRNA, the three sites are oriented 5' to 3' E-P-A, because ribosomes move toward the 3' end of mRNA. The A-site binds the incoming tRNA with the complementary codon on the mRNA. The P/E-site holds the tRNA with the growing polypeptide chain. When an aminoacyl-tRNA initially binds to its corresponding codon on the mRNA, it is in the A site. Then, a peptide bond forms between the amino acid of the tRNA in the A site and the amino acid of the charged tRNA in the P/E site. The growing polypeptide chain is transferred to the tRNA in the A site. Translocation occurs, moving the tRNA to the P/E site, now without an amino acid; the tRNA that was in the A site, now charged with the polypeptide chain, is moved to the P/E site and the uncharged tRNA leaves, and another aminoacyl-tRNA enters the A site to repeat the process.
After the new amino acid is added to the chain, and after the tRNA is released out of the ribosome and into the cytosol, the energy provided by the hydrolysis of a GTP bound to the translocase EF-G (in bacteria) and a/eEF-2 (in eukaryotes and archaea) moves the ribosome down one codon towards the 3' end. The energy required for translation of proteins is significant. For a protein containing n amino acids, the number of high-energy phosphate bonds required to translate it is 4n-1. The rate of translation varies; it is significantly higher in prokaryotic cells (up to 17–21 amino acid residues per second) than in eukaryotic cells (up to 6–9 amino acid residues per second).
Initiation and termination of translation
Initiation involves the small subunit of the ribosome binding to the 5' end of mRNA with the help of initiation factors (IF). In bacteria and a minority of archaea, initiation of protein synthesis involves the recognition of a purine-rich initiation sequence on the mRNA called the Shine–Dalgarno sequence. The Shine–Dalgarno sequence binds to a complementary pyrimidine-rich sequence on the 3' end of the 16S rRNA part of the 30S ribosomal subunit. The binding of these complementary sequences ensures that the 30S ribosomal subunit is bound to the mRNA and is aligned such that the initiation codon is placed in the 30S portion of the P-site. Once the mRNA and 30S subunit are properly bound, an initiation factor brings the initiator tRNA–amino acid complex, f-Met-tRNA, to the 30S P site. The initiation phase is completed once a 50S subunit joins the 30S subunit, forming an active 70S ribosome. Termination of the polypeptide occurs when the A site of the ribosome is occupied by a stop codon (UAA, UAG, or UGA) on the mRNA, creating the primary structure of a protein. tRNA usually cannot recognize or bind to stop codons. Instead, the stop codon induces the binding of a release factor protein (RF1 & RF2) that prompts the disassembly of the entire ribosome/mRNA complex by the hydrolysis of the polypeptide chain from the peptidyl transferase center of the ribosome. Drugs or special sequence motifs on the mRNA can change the ribosomal structure so that near-cognate tRNAs are bound to the stop codon instead of the release factors. In such cases of 'translational readthrough', translation continues until the ribosome encounters the next stop codon.
Errors in translation
Even though the ribosomes are usually considered accurate and processive machines, the translation process is subject to errors that can lead either to the synthesis of erroneous proteins or to the premature abandonment of translation, either because a tRNA couples to a wrong codon or because a tRNA is coupled to the wrong amino acid. The rate of error in synthesizing proteins has been estimated to be between 1 in 10⁵ and 1 in 10³ misincorporated amino acids, depending on the experimental conditions. The rate of premature translation abandonment, instead, has been estimated to be of the order of magnitude of 10⁻⁴ events per translated codon.
Regulation
The process of translation is highly regulated in both eukaryotic and prokaryotic organisms. Regulation of translation can impact the global rate of protein synthesis which is closely coupled to the metabolic and proliferative state of a cell.
To study this process, scientists have used a wide variety of methods, such as structural biology, analytical chemistry (mass-spectrometry based), imaging of reporter mRNA translation (in which the translation of an mRNA is linked to an output, such as luminescence or fluorescence), and next-generation sequencing based methods. Other methods, such as the toeprinting assay, can also be used to determine the location of ribosomes on a particular mRNA in vitro, and the footprints of other proteins regulating translation.
In particular, ribosome profiling, which is a powerful method, enables researchers to take a snapshot of all the proteins being translated at a given time, showing which parts of the mRNA are being translated into proteins by ribosomes at a given time. This method is useful because it looks at all the mRNAs instead of using reporters that would typically look at one specific mRNA at a time. Ribosome profiling provides valuable insights into translation dynamics, revealing the complex interplay between gene sequence, mRNA structure, and translation regulation. For example, research utilizing this method has revealed that genetic differences and their subsequent expression as mRNAs can also impact translation rate in an RNA-specific manner.
Expanding on this concept, a more recent development is single-cell ribosome profiling, a technique that allows us to study the translation process at the resolution of individual cells. This is particularly significant as cells, even those of the same type, can exhibit considerable variability in their protein synthesis. Single-cell ribosome profiling has the potential to shed light on the heterogeneous nature of cells, leading to a more nuanced understanding of how translation regulation can impact cell behavior, metabolic state, and responsiveness to various stimuli or conditions.
Clinical significance
Translational control is critical for the development and survival of cancer. Cancer cells must frequently regulate the translation phase of gene expression, though it is not fully understood why translation is targeted over steps like transcription. While cancer cells often have genetically altered translation factors, it is much more common for cancer cells to modify the levels of existing translation factors. Several major oncogenic signaling pathways, including the RAS–MAPK, PI3K/AKT/mTOR, MYC, and WNT–β-catenin pathways, ultimately reprogram the genome via translation. Cancer cells also control translation to adapt to cellular stress. During stress, the cell translates mRNAs that can mitigate the stress and promote survival. An example of this is the expression of AMPK in various cancers; its activation triggers a cascade that can ultimately allow the cancer to escape apoptosis (programmed cell death) triggered by nutrition deprivation. Future cancer therapies may involve disrupting the translation machinery of the cell to counter the downstream effects of cancer.
Mathematical modeling of translation
The transcription-translation process description, mentioning only the most basic "elementary" processes, consists of:
production of mRNA molecules (including splicing),
initiation of these molecules with the help of initiation factors (e.g., initiation can include the circularization step, though it is not universally required),
initiation of translation, recruiting the small ribosomal subunit,
assembly of full ribosomes,
elongation, (i.e. movement of ribosomes along mRNA with production of protein),
termination of translation,
degradation of mRNA molecules,
degradation of proteins.
The process of building a protein amino acid by amino acid during translation has long been the subject of a variety of physical models, starting from the first detailed kinetic models and later models taking into account stochastic aspects of translation and using computer simulations. Many chemical kinetics-based models of protein synthesis have been developed and analyzed in the last four decades. Beyond chemical kinetics, various modeling formalisms such as the Totally Asymmetric Simple Exclusion Process (TASEP), probabilistic Boolean networks, Petri nets and max-plus algebra have been applied to model the detailed kinetics of protein synthesis or some of its stages. A basic model of protein synthesis that takes into account all eight 'elementary' processes has been developed, following the paradigm that "useful models are simple and extendable". The simplest model M0 is represented by the reaction kinetic mechanism (Figure M0). It was generalised to include 40S, 60S and initiation factor (IF) binding (Figure M1'), and extended further to include the effect of microRNA on protein synthesis. Most models in this hierarchy can be solved analytically. These solutions were used to extract 'kinetic signatures' of different specific mechanisms of synthesis regulation.
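To make the TASEP formalism concrete, here is a minimal Python sketch of ribosome traffic on an mRNA lattice; the lattice length and rates are arbitrary illustrative values, and each ribosome is simplified to occupy a single codon (real ribosomes cover around ten).

import random

L = 100          # number of codons (lattice sites)
alpha = 0.3      # initiation (entry) rate
beta_exit = 0.8  # termination (exit) rate
steps = 200000

lattice = [0] * L  # 1 = site occupied by a ribosome
completed = 0

for _ in range(steps):
    i = random.randrange(-1, L)  # -1 encodes an initiation attempt
    if i == -1:
        if random.random() < alpha and lattice[0] == 0:
            lattice[0] = 1  # a ribosome loads at the start codon
    elif i == L - 1:
        if lattice[i] and random.random() < beta_exit:
            lattice[i] = 0  # the ribosome terminates, releasing a protein
            completed += 1
    elif lattice[i] and lattice[i + 1] == 0:
        lattice[i], lattice[i + 1] = 0, 1  # translocate one codon forward

print(f"proteins completed: {completed}, ribosome density: {sum(lattice) / L:.2f}")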
Genetic code
It is also possible to translate either by hand (for short sequences) or by computer (after first programming one appropriately, see section below); this allows biologists and chemists to draw out the chemical structure of the encoded protein on paper.
First, convert each template DNA base to its RNA complement (note that the complement of A is now U), as shown below. Note that the template strand of the DNA is the one the RNA is polymerized against; the other DNA strand would be the same as the RNA, but with thymine instead of uracil.
DNA -> RNA
A -> U
T -> A
C -> G
G -> C
A=T-> A=U
Then split the RNA into triplets (groups of three bases). Note that there are 3 translation "windows", or reading frames, depending on where you start reading the code.
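As a small illustration of these two steps in code, the Python sketch below (using a made-up template sequence) complements a template DNA strand into mRNA and then lists the codons of each of the three reading frames:

template_dna = "TACACCTTGGCGACGACT"  # hypothetical template strand
complement = {"A": "U", "T": "A", "C": "G", "G": "C"}
mrna = "".join(complement[base] for base in template_dna)  # "AUGUGGAACCGCUGCUGA"

for frame in range(3):  # the three possible reading frames
    codons = [mrna[i:i + 3] for i in range(frame, len(mrna) - 2, 3)]
    print(f"frame {frame}: {codons}")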
Finally, use the table at Genetic code to translate the above into a structural formula as used in chemistry.
This will give the primary structure of the protein. However, proteins tend to fold, depending in part on hydrophilic and hydrophobic segments along the chain. Secondary structure can often still be guessed at, but the proper tertiary structure is often very hard to determine.
Whereas other aspects such as the 3D structure, called tertiary structure, of protein can only be predicted using sophisticated algorithms, the amino acid sequence, called primary structure, can be determined solely from the nucleic acid sequence with the aid of a translation table.
This approach may not give the correct amino acid composition of the protein, in particular if unconventional amino acids such as selenocysteine are incorporated into the protein, which is coded for by a conventional stop codon in combination with a downstream hairpin (SElenoCysteine Insertion Sequence, or SECIS).
There are many computer programs capable of translating a DNA/RNA sequence into a protein sequence. Normally this is performed using the Standard Genetic Code; however, few programs can handle all the "special" cases, such as the use of alternative initiation codons, which are biologically significant. For instance, the rare alternative start codon CTG codes for methionine when used as a start codon, and for leucine in all other positions.
Example: Condensed translation table for the Standard Genetic Code (from the NCBI Taxonomy webpage).
AAs = FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = ---M---------------M---------------M----------------------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
The "Starts" row indicate three start codons, UUG, CUG, and the very common AUG. It also indicates the first amino acid residue when interpreted as a start: in this case it is all methionine.
Translation tables
Even when working with ordinary eukaryotic sequences such as the Yeast genome, it is often desired to be able to use alternative translation tables—namely for translation of the mitochondrial genes. Currently the following translation tables are defined by the NCBI Taxonomy Group for the translation of the sequences in GenBank:
The standard code
The vertebrate mitochondrial code
The yeast mitochondrial code
The mold, protozoan, and coelenterate mitochondrial code and the mycoplasma/spiroplasma code
The invertebrate mitochondrial code
The ciliate, dasycladacean and hexamita nuclear code
The kinetoplast code
The echinoderm and flatworm mitochondrial code
The euplotid nuclear code
The bacterial, archaeal and plant plastid code
The alternative yeast nuclear code
The ascidian mitochondrial code
The alternative flatworm mitochondrial code
The Blepharisma nuclear code
The chlorophycean mitochondrial code
The trematode mitochondrial code
The Scenedesmus obliquus mitochondrial code
The Thraustochytrium mitochondrial code
The Pterobranchia mitochondrial code
The candidate division SR1 and gracilibacteria code
The Pachysolen tannophilus nuclear code
The karyorelict nuclear code
The Condylostoma nuclear code
The Mesodinium nuclear code
The peritrich nuclear code
The Blastocrithidia nuclear code
The Cephalodiscidae mitochondrial code
| Biology and health sciences | Cell processes | null |
262572 | https://en.wikipedia.org/wiki/Ventricle%20%28heart%29 | Ventricle (heart) | A ventricle is one of two large chambers located toward the bottom of the heart that collect and expel blood towards the peripheral beds within the body and lungs. The blood pumped by a ventricle is supplied by an atrium, an adjacent chamber in the upper heart that is smaller than a ventricle. Interventricular means between the ventricles (for example the interventricular septum), while intraventricular means within one ventricle (for example an intraventricular block).
In a four-chambered heart, such as that in humans, there are two ventricles that operate in a double circulatory system: the right ventricle pumps blood into the pulmonary circulation to the lungs, and the left ventricle pumps blood into the systemic circulation through the aorta.
Structure
Ventricles have thicker walls than atria and generate higher blood pressures. The physiological load on the ventricles, which must pump blood throughout the body and lungs, is much greater than that on the atria, which need only generate the pressure to fill the ventricles. Further, the left ventricle has thicker walls than the right because it needs to pump blood to most of the body while the right ventricle supplies only the lungs.
On the inner walls of the ventricles are irregular muscular columns called trabeculae carneae, which cover all of the inner ventricular surfaces except that of the conus arteriosus, in the right ventricle. There are three types of these muscles. The third type, the papillary muscles, give origin at their apices to the chordae tendineae which attach to the cusps of the tricuspid valve and to the mitral valve.
The mass of the left ventricle, as estimated by magnetic resonance imaging, averages 143 g ± 38.4 g, with a range of 87–224 g.
The right ventricle is equal in size to the left ventricle and contains roughly 85 millilitres (3 imp fl oz; 3 US fl oz) in the adult. Its upper front surface is rounded and convex, and forms much of the sternocostal surface of the heart. Its under surface is flattened, forming part of the diaphragmatic surface of the heart that rests upon the diaphragm.
Its posterior wall is formed by the ventricular septum, which bulges into the right ventricle, so that a transverse section of the cavity presents a semilunar outline. Its upper and left angle forms a conical pouch, the conus arteriosus, from which the pulmonary artery arises. A tendinous band, called the tendon of the conus arteriosus, extends upward from the right atrioventricular fibrous ring and connects the posterior surface of the conus arteriosus to the aorta.
Shape
The left ventricle is longer and more conical in shape than the right, and on transverse section its concavity presents an oval or nearly circular outline. It forms a small part of the sternocostal surface and a considerable part of the diaphragmatic surface of the heart; it also forms the apex of the heart. The left ventricle is thicker and more muscular than the right ventricle because it pumps blood at a higher pressure.
The right ventricle is triangular in shape and extends from the tricuspid valve in the right atrium to near the apex of the heart. Its wall is thickest at the apex and thins towards its base at the atrium. When viewed in cross section, however, the right ventricle appears crescent shaped. The right ventricle is made of two components: the sinus and the conus. The sinus is the inflow portion, carrying blood away from the tricuspid valve. Three muscular bands separate the right ventricle: the parietal band, the septal band, and the moderator band. The moderator band connects from the base of the anterior papillary muscle to the ventricular septum.
Development
By young adulthood, the walls of the left ventricle have thickened to three to six times the thickness of the right ventricle. This reflects the typically five-times-greater pressure workload this chamber performs while accepting blood returning from the pulmonary veins at ~80 mmHg pressure (equivalent to around 11 kPa) and pushing it forward to the typical ~120 mmHg pressure (around 16.3 kPa) in the aorta during each heartbeat. (The pressures stated are resting values, relative to the surrounding atmospheric pressure, which is the typical "0" reference used in medicine.)
Function
During systole, the ventricles contract, pumping blood through the body. During diastole, the ventricles relax and fill with blood again.
The left ventricle receives oxygenated blood from the left atrium via the mitral valve and pumps it through the aorta via the aortic valve, into the systemic circulation. The left ventricular muscle must relax and contract quickly and be able to increase or lower its pumping capacity under the control of the nervous system. In the diastolic phase, it has to relax very quickly after each contraction so as to quickly fill with the oxygenated blood flowing from the pulmonary veins. Likewise in the systolic phase, the left ventricle must contract rapidly and forcibly to pump this blood into the aorta, overcoming the much higher aortic pressure. The extra pressure exerted is also needed to stretch the aorta and other arteries to accommodate the increase in blood volume.
The right ventricle receives deoxygenated blood from the right atrium via the tricuspid valve and pumps it into the pulmonary artery via the pulmonary valve, into the pulmonary circulation.
Pumping volume
The typical healthy adult heart pumping volume is ~5 liters/min, resting. Maximum capacity pumping volume extends from ~25 liters/min for non-athletes to as high as ~45 liters/min for Olympic level athletes.
Volumes
In cardiology, the performance of the ventricles is measured with several volumetric parameters, including end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV) and ejection fraction (Ef).
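These parameters are related by two simple identities, SV = EDV − ESV and Ef = SV / EDV; a minimal Python sketch with illustrative textbook-range volumes (not patient data):

edv = 120.0  # end-diastolic volume, mL (illustrative)
esv = 50.0   # end-systolic volume, mL (illustrative)

sv = edv - esv         # stroke volume, mL
ef = sv / edv * 100.0  # ejection fraction, %

print(f"SV = {sv:.0f} mL, EF = {ef:.0f}%")  # SV = 70 mL, EF = 58%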
Pressures
Ventricular pressure is a measure of blood pressure within the ventricles of the heart.
Left
During most of the cardiac cycle, ventricular pressure is less than the pressure in the aorta, but during systole the ventricular pressure rises rapidly until the two pressures become equal; at that point the aortic valve opens and blood is pumped to the body.
Elevated left ventricular end-diastolic pressure has been described as a risk factor in cardiac surgery.
Noninvasive approximations have been described.
An elevated pressure difference between the aortic pressure and the left ventricular pressure may be indicative of aortic stenosis.
Right
Right ventricular pressure demonstrates a different pressure-volume loop than left ventricular pressure.
Dimensions
The heart and its performance are also commonly measured in terms of dimensions, which in this case means one-dimensional distances, usually measured in millimeters. This is not as informative as volumes, but dimensions may be much easier to estimate (e.g., by M-mode echocardiography, or by sonomicrometry, which is mostly used in animal model research). Optimally, the plane in which the distance is measured is specified, e.g. the dimension of the longitudinal plane.
Fractional shortening (FS) is the fraction of any diastolic dimension that is lost in systole. When referring to endocardial luminal distances, it is EDD minus ESD, divided by EDD (multiplied by 100 when expressed as a percentage). Normal values may differ somewhat depending on which anatomical plane is used to measure the distances. The normal range is 25–45%; 20–25% is mildly reduced, 15–20% moderately reduced, and <15% severely reduced. Midwall fractional shortening may also be used to measure diastolic/systolic changes for inter-ventricular septal dimensions and posterior wall dimensions. However, both endocardial and midwall fractional shortening are dependent on myocardial wall thickness, and thereby on long-axis function. By comparison, a measure of short-axis function termed epicardial volume change (EVC) is independent of myocardial wall thickness and represents isolated short-axis function.
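A short Python sketch of the endocardial definition and the severity bands just quoted; the EDD/ESD values are illustrative, not measurements:

def fractional_shortening(edd_mm: float, esd_mm: float) -> float:
    """Percent of the diastolic dimension lost in systole."""
    return (edd_mm - esd_mm) / edd_mm * 100.0

def classify_fs(fs_percent: float) -> str:
    # Severity bands as quoted in the text above
    if fs_percent >= 25.0:
        return "normal (25-45%)"
    if fs_percent >= 20.0:
        return "mild (20-25%)"
    if fs_percent >= 15.0:
        return "moderate (15-20%)"
    return "severe (<15%)"

fs = fractional_shortening(48.0, 32.0)  # illustrative measurements, mm
print(f"FS = {fs:.1f}% -> {classify_fs(fs)}")  # FS = 33.3% -> normal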
Clinical significance
An arrhythmia is an irregular heartbeat that can occur in the ventricles or atria. Normally the heartbeat is initiated in the SA node of the atrium but initiation can also occur in the Purkinje fibres of the ventricles, giving rise to premature ventricular contractions, also called ventricular extra beats. When these beats become grouped the condition is known as ventricular tachycardia.
Another form of arrhythmia is that of the ventricular escape beat. This can happen as a compensatory mechanism when there is a problem in the conduction system from the SA node.
The most severe form of arrhythmia is ventricular fibrillation which is the most common cause of cardiac arrest and subsequent sudden death.
Ventricular septal defect
Atrioventricular septal defect
| Biology and health sciences | Circulatory system | Biology |
262577 | https://en.wikipedia.org/wiki/Stealth%20technology | Stealth technology | Stealth technology, also termed low observable technology (LO technology), is a sub-discipline of military tactics and passive and active electronic countermeasures. The term covers a range of methods used to make personnel, aircraft, ships, submarines, missiles, satellites, and ground vehicles less visible (ideally invisible) to radar, infrared, sonar and other detection methods. It corresponds to military camouflage for these parts of the electromagnetic spectrum (i.e., multi-spectral camouflage).
Development of modern stealth technologies in the United States began in 1958, after earlier attempts to prevent radar tracking of its U-2 spy planes by the Soviet Union during the Cold War had been unsuccessful. Designers turned to developing a specific shape for planes that tended to reduce detection by redirecting electromagnetic radiation waves from radars. Radiation-absorbent material was also tested and made to reduce or block radar signals that reflect off the surfaces of aircraft. Such changes to shape and surface composition comprise stealth technology as currently used on the Northrop Grumman B-2 Spirit "Stealth Bomber".
The concept of stealth is to operate or hide while giving enemy forces no indication of the presence of friendly forces. This concept was first explored through camouflage, making an object's appearance blend into the visual background. As the potency of detection and interception technologies (radar, infrared search and tracking, surface-to-air missiles, etc.) has increased, so too has the extent to which the design and operation of military personnel and vehicles have been affected in response. Some military uniforms are treated with chemicals to reduce their infrared signature. A modern stealth vehicle is designed from the outset to have a chosen spectral signature. The degree of stealth embodied in a given design is chosen according to the projected threats of detection.
History
Camouflage to aid or avoid predation predates humanity, and hunters have been using vegetation to conceal themselves perhaps for as long as people have been hunting. The earliest application of camouflage in warfare is impossible to ascertain. Methods for visual concealment in war were documented by Sun Tzu in his book The Art of War in the 5th century BC, and by Frontinus in his work Strategemata in the 1st century AD.
In England, irregular units of gamekeepers in the 17th century were the first to adopt drab colours (common in 16th century Irish units) as a form of camouflage, following examples from the continent.
During World War I, the Germans experimented with the use of Cellon (Cellulose acetate), a transparent covering material, in an attempt to reduce the visibility of military aircraft. Single examples of the Fokker E.III Eindecker fighter monoplane, the Albatros C.I two-seat observation biplane, and the Linke-Hofmann R.I prototype heavy bomber were covered with Cellon. However, sunlight glinting from the material made the aircraft even more visible. Cellon was also found to degrade quickly from both sunlight and in-flight temperature changes, so the effort to make transparent aircraft ceased.
In 1916, the British modified a small SS class airship for the purpose of night-time reconnaissance over German lines on the Western Front. Fitted with a silenced engine and a black gas bag, the craft was both invisible and inaudible from the ground but several night-time flights over German-held territory produced little useful intelligence and the idea was dropped.
Diffused lighting camouflage, a shipborne form of counter-illumination camouflage, was trialled by the Royal Canadian Navy from 1941 to 1943. The concept was followed up for aircraft by the Americans and the British: in 1945, a Grumman Avenger fitted with Yehudi lights was able to approach close to a ship before being sighted. This ability was rendered obsolete by radar.
Chaff was invented in Britain and Germany early in World War II as a means to hide aircraft from radar. In effect, chaff acted upon radio waves much as a smoke screen acted upon visible light.
The U-boat U-480 may have been the first stealth submarine. It featured an anechoic tile rubber coating, one layer of which contained circular air pockets to defeat ASDIC sonar. Radar-absorbent paints and materials of rubber and semiconductor composites (codenames: Sumpf, Schornsteinfeger) were used by the Kriegsmarine on submarines in World War II. Tests showed they were effective in reducing radar signatures at both short (centimetres) and long (1.5 metre) wavelengths.
In 1956, the CIA began attempts to reduce the radar cross-section (RCS) of the U-2 spyplane. Three systems were developed: Trapeze, a series of wires and ferrite beads around the planform of the aircraft; a covering material with PCB circuitry embedded in it; and radar-absorbent paint. These were deployed in the field on the so-called dirty birds, but results were disappointing: the weight and drag increases were not worth any reduction in detection rates. More successful was applying camouflage paint to the originally bare-metal aircraft; a deep blue was found to be most effective. The weight of this paint cost 250 ft in maximum altitude, but made the aircraft harder for interceptors to see.
In 1958, the U.S. Central Intelligence Agency requested funding for a reconnaissance aircraft to replace the existing U-2 spy planes, and Lockheed secured contractual rights to produce it. "Kelly" Johnson and his team at Lockheed's Skunk Works were assigned to produce the A-12 (or OXCART), which avoided radar detection by operating at extremely high altitude and speed. Various plane shapes designed to reduce radar detection were developed in earlier prototypes, named A-1 to A-11. The A-12 included a number of stealthy features, including special fuel to reduce the signature of the exhaust plume, canted vertical stabilizers, the use of composite materials in key locations, and an overall finish in radar-absorbent paint.
In 1960, the USAF reduced the radar cross-section of a Ryan Q-2C Firebee drone. This was achieved through specially designed screens over the air intake, radiation-absorbent material on the fuselage, and radar-absorbent paint.
The United States Army issued a specification in 1968 which called for an observation aircraft that would be acoustically undetectable from the ground when flying at altitude at night. This resulted in the Lockheed YO-3A Quiet Star, which operated in South Vietnam from late June 1970 to September 1971.
During the 1970s, the U.S. Department of Defense launched project Lockheed Have Blue, with the aim of developing a stealth fighter. There was fierce bidding between Lockheed and Northrop to secure the multibillion-dollar contract. Lockheed incorporated into its bid a 1962 text by the Soviet physicist Pyotr Ufimtsev, Method of Edge Waves in the Physical Theory of Diffraction (Soviet Radio, Moscow, 1962). In 1971, the book was translated into English under the same title by the U.S. Air Force Foreign Technology Division. The theory played a critical role in the design of the American stealth aircraft, the F-117 and the B-2. Equations outlined in the work quantified how a plane's shape would affect its detectability by radar, termed radar cross-section (RCS). At the time, the Soviet Union did not have the supercomputer capacity to solve these equations for actual designs. Lockheed applied the theory in computer simulation to design a novel shape it called the "Hopeless Diamond", a wordplay on the Hope Diamond, securing contractual rights to produce the F-117 Nighthawk starting in 1975. In 1977, Lockheed produced two 60%-scale models under the Have Blue contract. The Have Blue program was a stealth-technology demonstrator that lasted from 1976 to 1979. The Northrop Grumman Tacit Blue also played a part in the development of composite materials, curvilinear surfaces, low observables, fly-by-wire, and other stealth-technology innovations. The success of Have Blue led the Air Force to create the Senior Trend program, which developed the F-117.
In the early 21st century, the proliferation of stealth technology began outside of the United States. Both Russia and China tested their stealth aircraft in 2010. Russia manufactured 10 flyable prototypes of the Su-57, while China produced two stealth aircraft, Chengdu J-20 and Shenyang FC-31. In 2017, China became the second country in the world to field an operational stealth aircraft, challenging the United States and its Asian allies.
Principles
Stealth technology (or LO for low observability) is not a single technology. It is a set of technologies, used in combination, that can greatly reduce the distances at which a person or vehicle can be detected; above all radar cross-section reductions, but also acoustic, thermal, and other aspects.
Radar cross-section (RCS) reductions
Almost since the invention of radar, various methods have been tried to minimize detection. Rapid development of radar during World War II led to equally rapid development of numerous counter-radar measures during the period; a notable example was the use of chaff. Modern methods include radar jamming and deception.
The term stealth, in reference to reduced-radar-signature aircraft, became popular during the late 1980s, when the Lockheed Martin F-117 stealth fighter became widely known. The first large-scale (and public) use of the F-117 was during the Gulf War in 1991. However, F-117A stealth fighters were first used in combat during Operation Just Cause, the United States invasion of Panama in 1989. Stealth aircraft are often designed to have radar cross sections that are orders of magnitude smaller than those of conventional aircraft. Under the radar range equation, all else being equal, detection range is proportional to the fourth root of RCS; thus, reducing detection range by a factor of 10 requires a reduction of RCS by a factor of 10,000.
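A minimal numerical sketch of this fourth-root relationship; the constant k is an arbitrary stand-in for transmitter power, antenna gain, wavelength, and receiver sensitivity, and the RCS values are purely illustrative:

```python
# Fourth-root scaling of detection range with radar cross-section (RCS).
# k bundles the radar's own parameters and is arbitrary here.

def detection_range(rcs_m2: float, k: float = 100.0) -> float:
    """Relative detection range, proportional to RCS ** 0.25."""
    return k * rcs_m2 ** 0.25

conventional = detection_range(10.0)   # illustrative conventional aircraft, 10 m^2
stealthy = detection_range(0.001)      # illustrative low-observable design, 0.001 m^2

# A 10,000x reduction in RCS yields only a 10x reduction in detection range:
print(f"detection range ratio: {conventional / stealthy:.1f}")  # 10.0
```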
Vehicle shape
Aircraft
The possibility of designing aircraft in such a manner as to reduce their radar cross-section was recognized in the late 1930s, when the first radar tracking systems were employed, and it has been known since at least the 1960s that aircraft shape makes a significant difference in detectability. The Avro Vulcan, a British bomber of the 1960s, had a remarkably small appearance on radar despite its large size, and occasionally disappeared from radar screens entirely. It is now known that it had a fortuitously stealthy shape apart from the vertical element of the tail. Despite being designed before a low radar cross-section (RCS) and other stealth factors were ever a consideration, a Royal Aircraft Establishment technical note of 1957 stated that of all the aircraft so far studied, the Vulcan appeared by far the simplest radar echoing object, due to its shape: only one or two components contributing significantly to the echo at any aspect (one of them being the vertical stabilizer, which is especially relevant for side aspect RCS), compared with three or more on most other types. While writing about radar systems, authors Simon Kingsley and Shaun Quegan singled out the Vulcan's shape as acting to reduce the RCS. In contrast, the Tupolev 95 Russian long-range bomber (NATO reporting name 'Bear') was conspicuous on radar. It is now known that propellers and jet turbine blades produce a bright radar image; the Bear has four pairs of large diameter contra-rotating propellers.
Another important factor is internal construction. Some stealth aircraft have skin that is radar transparent or absorbing, behind which are structures termed reentrant triangles. Radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. This method was first used on the Blackbird series: A-12, YF-12A, Lockheed SR-71 Blackbird.
The most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral (two plates) or a trihedral (three orthogonal plates). This configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. Stealth aircraft such as the F-117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. A more radical method is to omit the tail, as in the B-2 Spirit. The B-2's clean, low-drag flying wing configuration gives it exceptional range and reduces its radar profile. The flying wing design most closely resembles a so-called infinite flat plate (as vertical control surfaces dramatically increase RCS), the perfect stealth shape, as it would have no angles to reflect back radar waves.
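A short sketch of why orthogonal plates act as a retroreflector: specular reflection flips the component of a ray along each plate's normal, so two perpendicular plates (the two-dimensional dihedral case) send a ray straight back the way it came. The vectors are illustrative:

```python
# Dihedral corner reflection in two dimensions: reflecting off two
# perpendicular plates reverses the incoming ray, returning energy
# toward the radar that emitted it.
import numpy as np

def reflect(ray: np.ndarray, unit_normal: np.ndarray) -> np.ndarray:
    """Specular reflection of a direction vector off a plane."""
    return ray - 2 * np.dot(ray, unit_normal) * unit_normal

incoming = np.array([0.6, -0.8])  # illustrative incoming ray direction
after_vertical = reflect(incoming, np.array([1.0, 0.0]))    # vertical plate
after_horizontal = reflect(after_vertical, np.array([0.0, 1.0]))  # horizontal plate

print(after_horizontal)  # [-0.6  0.8], exactly opposite the incoming ray
```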
In addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. A stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. Any stealthy vehicle becomes un-stealthy when a door or hatch opens.
Parallel alignment of edges or even surfaces is also often used in stealth designs. The technique involves using a small number of edge orientations in the shape of the structure. For example, on the F-22A Raptor, the leading edges of the wing and the tail planes are set at the same angle. Other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. The effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. The effect is sometimes called "glitter" after the very brief signal seen when the reflected beam passes across a detector. It can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system.
Stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. The YF-23 has such serrations on the exhaust ports. This is another example of the parallel alignment of features, this time on the external airframe.
The shaping requirements detracted greatly from the F-117's aerodynamic properties. It is inherently unstable, and cannot be flown without a fly-by-wire control system.
Similarly, coating the cockpit canopy with a thin film transparent conductor (vapor-deposited gold or indium tin oxide) helps to reduce the aircraft's radar profile, because radar waves would normally enter the cockpit, reflect off objects (the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. The coating is thin enough that it has no adverse effect on pilot vision.
Ships
Ships have also adopted similar methods. Though the earlier Arleigh Burke-class destroyer incorporated some signature-reduction features, the Norwegian Skjold-class corvette was the first coastal defence vessel, and the French La Fayette-class frigate the first ocean-going stealth ship, to enter service. Other examples are the Dutch De Zeven Provinciën-class frigates, the Taiwanese Tuo Chiang stealth corvette, the German Sachsen-class frigates, the Swedish Visby-class corvette, the USS San Antonio amphibious transport dock, and most modern warship designs.
Materials
Non-metallic airframe
Dielectric composite materials are more transparent to radar, whereas electrically conductive materials such as metals and carbon fibers reflect electromagnetic energy incident on the material's surface. Composites may also contain ferrites to optimize the dielectric and magnetic properties of a material for its application.
Radar-absorbent material
Radiation-absorbent material (RAM), often applied as paint, is used especially on the edges of metal surfaces. While the material and thickness of RAM coatings can vary, the way they work is the same: they absorb radiated energy from a ground- or air-based radar station into the coating and convert it to heat rather than reflecting it back. Current technologies include dielectric composites and metal fibers containing ferrite isotopes. Ceramic composite coatings are a newer class of material system that can withstand higher temperatures, with better sand-erosion and thermal resistance. One paint scheme deposits pyramid-like colonies on the reflecting surfaces, with the gaps filled with ferrite-based RAM; the pyramidal structure deflects the incident radar energy into the maze of RAM. Another commonly used material is iron ball paint, which contains microscopic iron spheres that resonate in tune with incoming radio waves and dissipate most of their energy as heat, leaving little to reflect back to detectors. Frequency-selective surfaces (FSS) are planar periodic structures that behave like filters for electromagnetic energy; the surfaces considered here are composed of conducting patch elements pasted on a ferrite layer, and are used for filtration and microwave absorption.
Radar stealth countermeasures and limits
Low-frequency radar
Shaping offers far fewer stealth advantages against low-frequency radar. If the radar wavelength is roughly twice the size of the target, a half-wave resonance effect can still generate a significant return. However, low-frequency radar is limited by lack of available frequencies (many are heavily used by other systems), by lack of accuracy of the diffraction-limited systems given their long wavelengths, and by the radar's size, making it difficult to transport. A long-wave radar may detect a target and roughly locate it, but not provide enough information to identify it, target it with weapons, or even to guide a fighter to it.
Multiple emitters
Stealth aircraft attempt to minimize all radar reflections, but are specifically designed to avoid reflecting radar waves back in the direction they came from (since in most cases a radar emitter and receiver are in the same location). They are less able to minimize radar reflections in other directions. Thus, detection can be better achieved if emitters are in different locations from receivers. One emitter separate from one receiver is termed bistatic radar; one or more emitters separate from more than one receiver is termed multistatic radar. Proposals exist to use reflections from emitters such as civilian radio transmitters, including cellular telephone radio towers.
Moore's law
Under Moore's law, the processing power behind radar systems rises over time. This will eventually erode the ability of physical stealth to hide vehicles.
Ship wakes and spray
Synthetic aperture sidescan radars can be used to detect the location and heading of ships from their wake patterns, which are detectable from orbit. When a ship moves through a seaway, it throws up a cloud of spray that can be detected by radar.
Acoustics
Acoustic stealth plays a primary role for submarines and ground vehicles. Submarines use extensive rubber mountings to isolate, damp, and avoid mechanical noises that can reveal locations to underwater passive sonar arrays.
Early stealth observation aircraft used slow-turning propellers to avoid being heard by enemy troops below. Stealth aircraft that stay subsonic can avoid being tracked by sonic boom. The presence of supersonic and jet-powered stealth aircraft such as the SR-71 Blackbird indicates that acoustic signature is not always a major driver in aircraft design, as the Blackbird relied more on its very high speed and altitude.
One method to reduce helicopter rotor noise is modulated blade spacing. Standard rotor blades are evenly spaced, and produce greater noise at a given frequency and its harmonics. Using varied spacing between the blades spreads the noise or acoustic signature of the rotor over a greater range of frequencies.
Visibility
The simplest technology is visual camouflage; the use of paint or other materials to color and break up the lines of a vehicle or person.
Most stealth aircraft use matte paint and dark colors, and operate only at night. Lately, interest in daylight stealth (especially by the USAF) has emphasized the use of gray paint in disruptive schemes, and it is assumed that Yehudi lights could be used in the future to hide the airframe (against the background of the sky, including at night, aircraft of any colour appear dark) or as a sort of active camouflage. The original B-2 design had wing tanks for a contrail-inhibiting chemical, alleged by some to be chlorofluorosulfonic acid, but this was replaced in the final design with a contrail sensor that alerts the pilot when to change altitude; mission planning also considers altitudes where the probability of contrail formation is minimized.
In space, mirrored surfaces can be employed to reflect views of empty space toward known or suspected observers; this approach is compatible with several radar stealth schemes. Careful control of the orientation of the satellite relative to the observers is essential, and mistakes can lead to detectability enhancement rather than the desired reduction.
Infrared
An exhaust plume contributes a significant infrared signature. One means to reduce the IR signature is to have a non-circular tail pipe (a slit shape) to minimize the exhaust cross-sectional area and maximize the mixing of hot exhaust with cool ambient air (see the Lockheed F-117 Nighthawk, the rectangular nozzles on the Lockheed Martin F-22, and the serrated nozzle flaps on the Lockheed Martin F-35). Often, cool air is deliberately injected into the exhaust flow to boost this process (see the Ryan AQM-91 Firefly and Northrop B-2 Spirit). The Stefan–Boltzmann law shows how this results in less energy (thermal radiation in the infrared spectrum) being released, and thus a reduced heat signature. In some aircraft, the jet exhaust is vented above the wing surface to shield it from observers below, as in the Lockheed F-117 Nighthawk and the unstealthy Fairchild Republic A-10 Thunderbolt II. To achieve infrared stealth, the exhaust gas is cooled to temperatures where the brightest wavelengths it radiates are absorbed by atmospheric carbon dioxide and water vapor, greatly reducing the infrared visibility of the exhaust plume. Another way to reduce the exhaust temperature is to circulate coolant fluids such as fuel inside the exhaust pipe, where the fuel tanks serve as heat sinks cooled by the flow of air along the wings.
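A brief sketch of the fourth-power temperature scaling behind this; the law itself is standard physics, while the exhaust temperatures used here are purely illustrative:

```python
# Stefan-Boltzmann law: power radiated per unit area scales as T^4,
# so even modest exhaust cooling sharply reduces infrared emission.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temp_kelvin: float, emissivity: float = 1.0) -> float:
    """Power radiated per unit area of a grey-body surface, in W/m^2."""
    return emissivity * SIGMA * temp_kelvin ** 4

hot = radiant_exitance(900.0)   # illustrative uncooled exhaust, 900 K
cool = radiant_exitance(600.0)  # illustrative mixed/cooled exhaust, 600 K

print(f"cooling 900 K -> 600 K cuts emission by {hot / cool:.1f}x")  # (900/600)^4 = 5.1x
```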
Ground combat includes the use of both active and passive infrared sensors. Thus, the United States Marine Corps (USMC) ground combat uniform requirements document specifies infrared reflective quality standards.
Reducing radio frequency (RF) emissions
In addition to reducing infrared and acoustic emissions, a stealth vehicle must avoid radiating any other detectable energy, such as from onboard radars, communications systems, or RF leakage from electronics enclosures. The F-117 uses passive infrared and low light level television sensor systems to aim its weapons and the F-22 Raptor has an advanced LPI radar which can illuminate enemy aircraft without triggering a radar warning receiver response.
Measuring
The size of a target's image on radar is measured by the radar cross section (RCS), often represented by the symbol σ and expressed in square meters. This does not equal geometric area. A perfectly conducting sphere of projected cross-sectional area 1 m² (i.e. a diameter of 1.13 m) will have an RCS of 1 m². Note that for radar wavelengths much less than the diameter of the sphere, RCS is independent of frequency. Conversely, a square flat plate of area 1 m² will have an RCS of σ = 4πA²/λ² (where A is the plate area and λ the wavelength), or 13,982 m² at 10 GHz, if the radar is perpendicular to the flat surface. At off-normal incidence angles, energy is reflected away from the receiver, reducing the RCS. Modern stealth aircraft are said to have an RCS comparable with small birds or large insects, though this varies widely depending on the aircraft and the radar.
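A small sketch reproducing the flat-plate figure quoted above; the formula and physical constants are standard, and the 10 GHz frequency follows the text:

```python
# Peak RCS of a flat plate at normal incidence: sigma = 4*pi*A^2 / lambda^2.
import math

C = 299_792_458.0  # speed of light, m/s

def flat_plate_rcs(area_m2: float, freq_hz: float) -> float:
    """Peak radar cross-section of a flat plate, in m^2."""
    wavelength = C / freq_hz
    return 4 * math.pi * area_m2 ** 2 / wavelength ** 2

print(f"{flat_plate_rcs(1.0, 10e9):,.0f} m^2")  # 13,982 m^2, matching the text
```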
If RCS were directly related to the target's cross-sectional area, the only way to reduce it would be to make the physical profile smaller. Instead, by reflecting much of the radiation away or by absorbing it, the target achieves a smaller radar cross-section.
Tactics
Stealthy strike aircraft such as the Lockheed F-117 Nighthawk are usually used against heavily defended enemy sites such as command and control centers or surface-to-air missile (SAM) batteries. Enemy radar will cover the airspace around these sites with overlapping coverage, making undetected entry by conventional aircraft nearly impossible. Stealthy aircraft can also be detected, but only at short ranges around the radars; for a stealthy aircraft there are substantial gaps in the radar coverage. Thus a stealthy aircraft flying an appropriate route can remain undetected by radar. Even if a stealth aircraft is detected, fire-control radars operating in the C, X and Ku bands cannot paint (for missile guidance) low observable (LO) jets except at very close ranges. Many ground-based radars exploit Doppler filtering to improve sensitivity to objects having a radial velocity component relative to the radar. Mission planners use their knowledge of enemy radar locations and the RCS pattern of the aircraft to design a flight path that minimizes radial speed while presenting the lowest-RCS aspects of the aircraft to the threat radar. To be able to fly these "safe" routes, it is necessary to understand an enemy's radar coverage (see electronic intelligence). Airborne or mobile radar systems such as airborne early warning and control (AEW&C, AWACS) can complicate tactical strategy for stealth operation.
Research
After the invention of electromagnetic metasurfaces, the conventional means of reducing RCS have improved significantly. As mentioned earlier, the main objective in purpose shaping is to redirect scattered waves away from the backscattered direction, which is usually the source; however, this usually compromises aerodynamic performance. One feasible solution, which has been explored extensively in recent years, is to use metasurfaces, which can redirect scattered waves without altering the geometry of a target. Such metasurfaces can primarily be classified into two categories: (i) checkerboard metasurfaces, and (ii) gradient-index metasurfaces. Similarly, negative-index metamaterials are artificial structures whose refractive index has a negative value for some frequency range, such as in the microwave, infrared, or possibly optical bands. These offer another way to reduce detectability, and may provide electromagnetic near-invisibility at designed wavelengths.
Plasma stealth is a proposed phenomenon that uses ionized gas, termed a plasma, to reduce the RCS of vehicles. Interactions between electromagnetic radiation and ionized gas have been studied extensively for many purposes, including concealing vehicles from radar. Various methods might form a layer or cloud of plasma around a vehicle to deflect or absorb radar, ranging from simpler electrostatic and radio frequency (RF) discharges to more complex laser discharges, but these may be difficult to implement in practice.
Several technology research and development efforts exist to integrate the functions of aircraft flight control systems, such as ailerons, elevators, elevons, flaps, and flaperons, into the wings themselves, performing the same aerodynamic purpose but with the advantages of lower RCS for stealth through simpler geometries and lower complexity (mechanically simpler, with fewer or no moving parts or surfaces and less maintenance), as well as lower mass, cost (up to 50% less), drag (up to 15% less during use), and inertia (for faster, stronger control response to change vehicle orientation and so reduce detection). Two promising approaches are flexible wings and fluidics.
In flexible wings, much or all of a wing surface can change shape in flight to deflect air flow. Adaptive compliant wings are a military and commercial effort. The X-53 Active Aeroelastic Wing was a US Air Force, Boeing, and NASA effort.
In fluidics, fluid injection into airflows is being researched for use in aircraft to control direction, in two ways: circulation control and thrust vectoring. In both, larger, more complex mechanical parts are replaced by smaller, simpler, lower-mass fluidic systems, in which larger forces in fluids are diverted by smaller jets or flows of fluid intermittently, to change the direction of vehicles. Mechanical control surfaces that must move account for an important part of an aircraft's radar cross-section, and omitting them can reduce radar returns. At least two countries are known to be researching fluidic control. In Britain, BAE Systems has tested two fluidically controlled unmanned aircraft, one starting in 2010 named Demon, and another starting in 2017 named MAGMA, with the University of Manchester. In the United States, the Defense Advanced Research Projects Agency (DARPA) program named Control of Revolutionary Aircraft with Novel Effectors (CRANE) seeks "... to design, build, and flight test a novel X-plane that incorporates active flow control (AFC) as a primary design consideration. ... In 2023, the aircraft received its official designation as X-65." In January 2024, construction began at Boeing subsidiary Aurora Flight Sciences. According to DARPA, the Aurora X-65 could be completed and unveiled as soon as early 2025, with the first flight occurring in summer 2025.
In circulation control, near the trailing edges of wings, aircraft flight control systems are replaced by slots which emit fluid flows.
List of stealth aircraft
F-117 Nighthawk
B-2 Spirit
F-22 Raptor
F-35 Lightning II
J-20
Su-57
B-21 Raider
FC-31
Su-75 Checkmate
List of reduced-signature ships
Navy ships worldwide have incorporated signature-reduction features, mostly for the purpose of reducing anti-ship missile detection range and enhancing countermeasure effectiveness rather than actual detection avoidance. Such ships include:
Bhumibol Adulyadej-class frigate
Independence-class littoral combat ship
Kamorta-class corvette
Kolkata-class destroyer
Klewang-class fast attack craft
Nilgiri-class frigate (2019)
La Fayette-class frigate
Visby-class corvette
Skjold-class corvette
Tuo Chiang-class stealth corvette
Sachsen-class frigate
Shivalik-class frigate
Talwar-class frigate
Type 055 destroyer
Visakhapatnam-class destroyer
Zumwalt-class destroyer
List of stealth helicopters
Boeing–Sikorsky RAH-66 Comanche
Hughes 500P
| Technology | Military technology: General | null |
262601 | https://en.wikipedia.org/wiki/Thorax | Thorax | The thorax (plural: thoraces or thoraxes) or chest is a part of the anatomy of mammals and other tetrapod animals located between the neck and the abdomen.
In insects, crustaceans, and the extinct trilobites, the thorax is one of the three main divisions of the body, each in turn composed of multiple segments.
The human thorax includes the thoracic cavity and the thoracic wall. It contains organs including the heart, lungs, and thymus gland, as well as muscles and various other internal structures. Many diseases may affect the chest, and one of the most common symptoms is chest pain.
Etymology
The word thorax comes from the Greek θώραξ thṓrax "breastplate, cuirass, corslet", via Latin.
Humans
Structure
In humans and other hominids, the thorax is the chest region of the body between the neck and the abdomen, along with its internal organs and other contents. It is mostly protected and supported by the rib cage, spine, and shoulder girdle.
Contents
The contents of the thorax include the heart and lungs (and the thymus gland); the major and minor pectoral muscles, trapezius muscles, and neck muscles; and internal structures such as the diaphragm, the esophagus, the trachea, and a part of the sternum known as the xiphoid process. Arteries and veins are also contained (the aorta, superior vena cava, inferior vena cava, and the pulmonary artery), as are bones (the shoulder socket containing the upper part of the humerus, the scapula, sternum, thoracic portion of the spine, collarbone, and the rib cage with its floating ribs).
External structures are the skin and nipples.
Chest
In the human body, the region of the thorax between the neck and diaphragm in the front of the body is called the chest. The corresponding area in an animal can also be referred to as the chest.
The shape of the chest does not correspond to that part of the thoracic skeleton that encloses the heart and lungs. All the breadth of the shoulders is due to the shoulder girdle, and contains the axillae and the heads of the humeri. In the middle line the suprasternal notch is seen above, while about three fingers' breadth below it a transverse ridge can be felt, which is known as the sternal angle and this marks the junction between the manubrium and body of the sternum. Level with this line the second ribs join the sternum, and when these are found the lower ribs can often be counted. At the lower part of the sternum, where the seventh or last true ribs join it, the ensiform cartilage begins, and above this there is often a depression known as the pit of the stomach.
Bones
The bones of the thorax, collectively called the "thoracic skeleton", form a component of the axial skeleton.
It consists of the ribs and sternum. The ribs of the thorax are numbered in ascending order from 1 to 12. Ribs 11 and 12 are known as floating ribs because they have no anterior attachment point, in particular no cartilage attached to the sternum as ribs 1 through 7 have. Ribs 8 through 10 are termed false ribs, as their costal cartilage articulates with the costal cartilage of the rib above. The thoracic bones also have the main function of protecting the heart, lungs, and major blood vessels in the thoracic area, such as the aorta.
Landmarks
The anatomy of the chest can also be described through the use of anatomical landmarks. The nipple in the male is situated in front of the fourth rib or a little below; vertically it lies a little external to a line drawn down from the middle of the clavicle; in the female it is not so constant. A little below it the lower limit of the great pectoral muscle is seen running upward and outward to the axilla; in the female this is obscured by the breast, which extends from the second to the sixth rib vertically and from the edge of the sternum to the mid-axillary line laterally. The female nipple is surrounded for half an inch by a more or less pigmented disc, the areola. The apex of a normal heart is in the fifth left intercostal space, three and a half inches from the mid-line.
Clinical significance
Different types of diseases or conditions that affect the chest include pleurisy, flail chest, atelectasis, and the most common condition, chest pain. These conditions can be hereditary or caused by birth defects or trauma. Any condition that lowers the ability to either breathe deeply or to cough is considered a chest disease or condition.
Injury
Injury to the chest (also referred to as chest trauma, thoracic injury, or thoracic trauma) accounts for a substantial fraction of all deaths due to trauma in the United States.
The major pathophysiologies encountered in blunt chest trauma involve derangements in the flow of air, blood, or both in combination. Sepsis due to leakage of alimentary tract contents, as in esophageal perforations, also must be considered. Blunt trauma commonly results in chest wall injuries (e.g., rib fractures). The pain associated with these injuries can make breathing difficult, and this may compromise ventilation. Direct lung injuries, such as pulmonary contusions, are frequently associated with major chest trauma and may impair ventilation by a similar mechanism.
Pain
Chest pain can be the result of multiple issues, including respiratory problems, digestive issues, and musculoskeletal complications. Cardiac issues can trigger the pain as well. Not all chest pain is associated with the heart, but it should not be taken lightly either. Symptoms can differ depending on the cause of the pain. While cardiac issues cause feelings of sudden pressure in the chest or a crushing pain in the back, neck, and arms, pain felt due to noncardiac issues gives a burning feeling along the digestive tract or pain when deep breaths are attempted. Different people experience pain from the same condition differently, and only a patient truly knows whether the symptoms are mild or serious.
Chest pain may be a symptom of myocardial infarction ('heart attack'). If this condition is present, discomfort will be felt in the chest, similar to a heavy weight placed on the body. Sweating, shortness of breath, lightheadedness, and irregular heartbeat may also be experienced. If a heart attack occurs, the bulk of the damage is caused during the first six hours, so getting the proper treatment as quickly as possible is important. Some people, especially those who are elderly or have diabetes, may not have typical chest pain but may have many of the other symptoms of a heart attack. It is important that these patients and their caregivers have a good understanding of heart attack symptoms.
Non-cardiac causes
As with heart attacks, not all chest pain is caused by conditions involving the heart. Chest wall pain can be experienced after an increase in activity; persons who add exercise to their daily routine generally feel this type of pain at the beginning, and it is important to monitor the pain to ensure that it is not a sign of something more serious. Pain can also be experienced by persons who have an upper respiratory infection, which is often accompanied by a fever and cough. Shingles is another viral infection that can cause symptoms of chest or rib pain before a rash develops. Injuries to the rib cage or sternum are also a common cause of chest pain, generally felt when deep breaths are taken or during a cough.
Atelectasis
Another non-cardiac cause of chest pain is atelectasis, a condition that occurs when a portion of the lung collapses and becomes airless. It develops when bronchial tubes are blocked, and causes patients to feel shortness of breath. The most common cause of atelectasis is blockage of a bronchus, one of the airways extending from the windpipe, which traps air. The blockage may be caused by something inside the bronchus, such as a plug of mucus, a tumour, or an inhaled foreign object such as a coin, piece of food, or a toy. It is also possible for something outside the bronchus to cause the blockage.
Pneumothorax
Pneumothorax is a condition in which air or gas builds up in the pleural space. It can occur without a known cause or as the result of a lung disease or acute lung injury. The size of the pneumothorax changes as air or gas builds up, so a medical procedure can release the pressure with a needle. If left untreated, the buildup can interrupt blood flow and cause a drop in blood pressure, a state known as tension pneumothorax. It is possible for smaller cases to clear up on their own. Symptoms of this condition are often felt on only one side of the lung, or as shortness of breath.
Tetrapods
In mammals, the thorax is the region of the body formed by the sternum, the thoracic vertebrae, and the ribs. It extends from the neck to the diaphragm, and does not include the upper limbs. The heart and the lungs reside in the thoracic cavity, as well as many blood vessels. The inner organs are protected by the rib cage and the sternum. Thoracic vertebrae are also distinguished in birds, but not in reptiles.
Arthropods
In insects, crustaceans, and the extinct trilobites, the thorax is one of the three main divisions of the creature's body, each of which is in turn composed of multiple segments. It is the area where the wings and legs attach in insects, or an area of multiple articulating plates in trilobites. In most insects, the thorax itself is composed of three segments: the prothorax, the mesothorax, and the metathorax. In extant insects, the prothorax never has wings, though legs are always present in adults; wings (when present) are restricted to the mesothorax and typically also the metathorax, though the wings may be reduced or modified on either or both segments. In the apocritan Hymenoptera, the first abdominal segment is fused to the metathorax, where it forms a structure known as the propodeum. Accordingly, in these insects, the functional thorax is composed of four segments, and is therefore typically called the mesosoma to distinguish it from the "thorax" of other insects.
Each thoracic segment in an insect is further subdivided into various parts, the most significant of which are the dorsal portion (the notum), the lateral portion (the pleuron; one on each side), and the ventral portion (the sternum). In some insects, each of these parts is composed of one to several independent exoskeletal plates with membrane between them (called sclerites), though in many cases the sclerites are fused to various degrees.
| Biology and health sciences | Animal: General | null |
262608 | https://en.wikipedia.org/wiki/Jacamar | Jacamar | The jacamars are a family, Galbulidae, of birds from tropical South and Central America, extending up to Mexico. The family contains five genera and 18 species. The family is closely related to the puffbirds, another Neotropical family, and the two families are often separated into their own order, Galbuliformes, separate from the Piciformes. They are principally birds of low-altitude woodlands and forests, and particularly of forest edge and canopy.
Taxonomy
The placement of the combined puffbird and jacamar lineage was in question, with some bone and muscle features suggesting they may be more closely related to the Coraciiformes. However, analysis of nuclear DNA in a 2003 study placed them as sister group to the rest of the Piciformes, also showing that the groups had developed zygodactyl feet before separating. Per Ericson and colleagues, analysing genomic DNA, confirmed that puffbirds and jacamars were sister groups and confirmed their placement in the Piciformes.
The phylogenetic relationship between the jacamars and the eight other families in the order Piciformes has been shown in a cladogram (not reproduced here), with the number of species in each family taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Description
The jacamars are small to medium-sized perching birds. They are elegant, glossy birds with long bills and tails. In appearance and behaviour they resemble the Old World bee-eaters, as most aerial insectivores tend to have short, wide bills rather than long, thin ones. The legs are short and weak, and the feet are zygodactylic (two forward-pointing toes, two backward-pointing). Their plumage is often bright and highly iridescent, although it is quite dull in a few species. There are minor differences in plumage based on sex, males often having a white patch on the breast.
Behaviour
Diet and feeding
Jacamars are insectivores, taking a variety of insect prey (many specialize on butterflies and moths) by hawking in the air. Birds sit in favoured perches and sally towards the prey when it is close enough. Only the great jacamar varies from the rest of the family, taking prey by gleaning and occasionally taking small lizards and spiders.
Breeding
The breeding systems of jacamars have not been studied in depth. They are thought to generally be monogamous, although a few species are thought to engage in cooperative breeding sometimes, with several adults sharing duties. The family nests in holes either in the soil or in arboreal termite mounds. Ground-nesting species usually nest in the banks of rivers (or, more recently, roads), although if these are not available they will nest in the soil held by the roots of fallen trees. Bank-nesting jacamars can sometimes be loosely colonial. Clutch sizes are between one and four eggs, and usually more than one. Both parents participate in incubation. Little is known about the incubation times of most species, but it lasts between 19 and 26 days in the rufous-tailed jacamar. Chicks are born with down feathers, unique among the piciformes.
| Biology and health sciences | Piciformes | Animals |
262608 | https://en.wikipedia.org/wiki/Mesite | Mesite | The mesites (Mesitornithidae) are a family of birds that are part of a clade (Columbimorphae) that includes the Columbiformes and Pterocliformes. They are smallish flightless or near-flightless birds endemic to Madagascar. All species in the family are listed as vulnerable.
Description
The mesites are forest and scrubland birds that feed on insects and seeds; brown and white-breasted mesites forage on the ground, gleaning insects from underneath leaves as well as low vegetation. The subdesert mesite uses its long bill to probe in the soil. Other birds, such as drongos and flycatchers, will follow mesites to catch any insects they flush out or miss. Mesites are vocal birds, with calls similar to a passerine song, used for territorial defence. Two or three white eggs are laid in a stick-built nest located in a bush or on a low branch. The Mesitornis species are monogamous while Monias benschi is polygamous and, unlike the other two, shows significant sexual dichromatism.
Systematics
There are two genera, Mesitornis (2 species) and Monias (subdesert mesite).
Historically, mesites' phylogenetic relationships were not very clear; they have been allied with the Gruiformes, Turniciformes and Columbiformes.
Some phylogenomic studies support the Pterocliformes (sandgrouse) as the sister group of the mesites, while others place this clade with another clade composed of the Columbiformes and Cuculiformes (cuckoos).
| Biology and health sciences | Columbimorphae | Animals |
262663 | https://en.wikipedia.org/wiki/Tampon | Tampon | A tampon is a menstrual product designed to absorb blood and vaginal secretions by insertion into the vagina during menstruation. Unlike a pad, it is placed internally, inside of the vaginal canal. Once inserted correctly, a tampon is held in place by the vagina and expands as it soaks up menstrual blood.
As tampons also absorb the vagina's natural lubrication and bacteria in addition to menstrual blood, they can increase the risk of toxic shock syndrome by changing the normal pH of the vagina and increasing the risk of infections from the bacterium Staphylococcus aureus. TSS is a rare but life-threatening infection that requires immediate medical attention.
The majority of tampons sold are made of blends of rayon and cotton, along with synthetic fibers. Some tampons are made out of organic cotton. Tampons are available in several absorbency ratings.
Several countries regulate tampons as medical devices. In the United States, they are considered to be a Class II medical device by the Food and Drug Administration (FDA). They are sometimes used for hemostasis in surgery.
Design and packaging
Tampon design varies between companies and across product lines in order to offer a variety of applicators, materials and absorbencies. There are two main categories of tampons based on the way of insertion – digital tampons inserted by finger, and applicator tampons. Tampon applicators may be made of plastic or cardboard, and are similar in design to a syringe. The applicator consists of two tubes, an "outer", or barrel, and "inner", or plunger. The outer tube has a smooth surface to aid insertion and sometimes comes with a rounded end that is petaled.
Differences exist in the way tampons expand when in use: applicator tampons generally expand axially (increase in length), while digital tampons will expand radially (increase in diameter). Most tampons have a cord or string for removal. The majority of tampons sold are made of rayon, or a blend of rayon and cotton. Organic cotton tampons are marketed as 100% cotton, but they may have plastic covering the cotton core. Tampons may also come in scented or unscented varieties.
Absorbency ratings
In the US
Tampons are available in several absorbency ratings, which are consistent across manufacturers in the U.S. These differ in the amount of cotton in each product and are measured based on the amount of fluid they are able to absorb. The absorbency ranges required by the U.S. Food and Drug Administration (FDA) for manufacturer labeling run from "light" (6 g of fluid and under) and "regular" (6–9 g) through "super" (9–12 g) and "super plus" (12–15 g) to "ultra" (15–18 g).
In Europe
Absorbency ratings outside the US may be different. The majority of non-US manufacturers use the absorbency ratings and Code of Practice recommended by EDANA (European Disposables and Nonwovens Association).
In the UK
In the UK, the Absorbent Hygiene Product Manufacturers Association (AHPMA) has written a Tampon Code of Practice which companies can follow on a voluntary basis. According to this code, UK manufacturers should follow the (European) EDANA code (see above).
Testing
A piece of test equipment referred to as a Syngyna (short for synthetic vagina) is usually used to test absorbency. The machine uses a condom into which the tampon is inserted, and synthetic menstrual fluid is fed into the test chamber.
A novel way of testing was developed by feminist medical experts after the toxic shock syndrome (TSS) crisis, and used blood – rather than the industry standard blue saline – as a test material.
Labeling
The FDA requires the manufacturer to perform absorbency testing to determine the absorbency rating, using the Syngyna method or other methods approved by the FDA. The manufacturer is also required to include on the package label the absorbency rating and a comparison to other absorbency ratings, to help consumers choose the right product and avoid complications of TSS. In addition, the following statement on the association between tampons and TSS is required by the FDA to appear on the package label: "Attention: Tampons are associated with Toxic Shock Syndrome (TSS). TSS is a rare but serious disease that may cause death. Read and save the enclosed information."
Such guidelines for package labeling are more lenient when it comes to tampons bought from vending machines. For example, tampons sold in vending machines are not required by the FDA to include labeling such as absorbency ratings or information about TSS.
Costs
The average person who menstruates uses approximately 11,400 tampons in their lifetime, assuming exclusive use of tampons. Tampon prices have risen due to inflation and supply chain challenges. Currently, a box of tampons typically costs between $7 and $12 USD and contains 16 to 40 tampons, depending on the brand and size. This means users might spend between $63 and $108 annually on tampons alone, assuming the need for around 9 boxes per year. This corresponds to an average cost of approximately $0.22–$0.75 per tampon, reflecting price increases of up to 33% since the pandemic.
Activists call the problem some women have when not being able to afford menstrual products "period poverty". The fact that, in certain U.S. states, sales tax applies to menstrual products is referred to as a "tampon tax". As of 2024, 23 states exempt these products, while others impose taxes up to 7%. Local taxes can also apply, adding further costs. Some states such as Texas recently abolished this tax. Some states provide free tampons and pads in public schools and prisons, helping to alleviate period poverty.
Health aspects
Toxic shock syndrome
Menstrual toxic shock syndrome (mTSS) is a life-threatening disease most commonly caused by infection of superantigen-producing Staphylococcus aureus. The superantigen toxin secreted in S. aureus infections is TSS Toxin-1, or TSST-1. Incidence ranges from 0.03 to 0.50 cases per 100,000 people, with an overall mortality around 8%. mTSS signs and symptoms include fever (greater than or equal to 38.9 °C), rash, desquamation, hypotension (systolic blood pressure less than 90 mmHg), and multi-system organ involvement with at least three systems, such as gastrointestinal complications (vomiting), central nervous system (CNS) effects (disorientation), and myalgia.
Toxic shock syndrome was named by James K. Todd in 1978. Philip M. Tierno Jr., Director of Clinical Microbiology and Immunology at the NYU Langone Medical Center, helped determine that tampons were behind toxic shock syndrome (TSS) cases in the early 1980s. Tierno blames the introduction of higher-absorbency tampons made with rayon in 1978, as well as the relatively recent decision by manufacturers to recommend that tampons can be worn overnight, for the surge in cases of TSS. However, a later meta-analysis found that the material composition of tampons is not directly correlated to the incidence of toxic shock syndrome, whereas oxygen and carbon dioxide content of menstrual fluid uptake is associated more strongly.
In 1982, a liability case, Kehm v. Procter & Gamble, took place, in which the family of Patricia Kehm sued Procter & Gamble over her death from TSS while using Rely brand tampons. It was the first successful suit against the company; Procter & Gamble paid $300,000 in compensatory damages to the Kehm family. The case can be credited with the increase in regulations and safety-protocol testing behind current FDA requirements.
Some risk factors identified for developing TSS include recent labor and delivery, tampon use, recent staphylococcus infection, recent surgery, and foreign objects inside the body.
The FDA suggests the following guidelines for decreasing the risk of contracting TSS when using tampons:
Choose the lowest absorbency needed for one's flow (test of absorbency is approved by FDA)
Follow package directions and guidelines for insertion and tampon usage (located on box's label)
Change the tampon at least every 4 to 8 hours
Alternate usage between tampons and pads
Increase awareness of the warning signs of Toxic Shock Syndrome and other tampon-associated health risks (and remove the tampon as soon as a risk factor is noticed)
The FDA also advises those with a history of TSS not to use tampons and instead turn to other feminine hygiene products to control menstrual flow. Other menstrual hygiene products available include pads, menstrual cups, menstrual discs, and reusable period underwear.
Cases of tampon-connected TSS are very rare in the United Kingdom and United States. A controversial study by Tierno found that all-cotton tampons were less likely than rayon tampons to produce the conditions in which TSS can grow. This was done using a direct comparison of 20 brands of tampons, including conventional cotton/rayon tampons and 100% organic cotton tampons. In a series of studies conducted after this initial claim, it was shown that all tampons (regardless of composition) are similar in their effect on TSS and that tampons made with rayon do not have an increased incidence of TSS. Instead, tampons should be selected based on minimum absorbency rating necessary to absorb flow corresponding to the individual.
Sea sponges are also marketed as menstrual hygiene products. A 1980 study by the University of Iowa found that commercially sold sea sponges contained harmful materials like sand and bacteria.
Studies have shown non-significantly higher mean levels of mercury in tampon users compared to non-tampon users. No evidence showed an association between tampon use and inflammation biomarkers.
Other considerations
Bleached products
According to the Women's Environmental Network research briefing on menstrual products made from wood pulp: "The basic ingredient for menstrual pads is wood pulp, which begins life as a brown coloured product. Various 'purification' processes can be used to bleach it white. Measurable levels of dioxin have been found near paper pulping mills, where chlorine has been used to bleach the wood pulp. Dioxin is one of the most persistent and toxic chemicals, and can cause reproductive disorders, damage to the immune system and cancer (26). There are no safe levels and it builds up in our fat tissue and in our environment."
Marine pollution
In the UK, the Marine Conservation Society has researched the prevalence and problem of plastic tampon applicators found on beaches.
Disposal and flushing
Disposal of tampons, especially flushing (which manufacturers warn against), may lead to clogged drains and waste management problems.
Tampon-drug interactions
There are multiple cases in which the use of tampons may need medical advice from a healthcare professional. For example, as part of the National Institutes of Health, the U.S. National Library of Medicine and its branch MedlinePlus advise against using tampons while being treated with any of several medications taken by the vaginal route such as vaginal suppositories and creams, as tampons may decrease the absorbance of these drugs by the body. Example of these medications include clindamycin, terconazole, miconazole, clotrimazole, when used as a vaginal cream or vaginal suppository, as well as butoconazole vaginal cream.
Increased risk for infections
According to the American Society for Blood and Marrow Transplantation (ASBMT), tampons may be responsible for an increased risk of infection due to the erosions they cause in the tissue of the cervix and vagina, leaving the skin prone to infection. Thus, the ASBMT advises hematopoietic stem cell transplantation recipients against using tampons while undergoing therapy.
Other uses
Clinical use
Tampons are currently being used and tested to restore and/or maintain the normal microbiota of the vagina to treat bacterial vaginosis. Some of these are available to the public but come with disclaimers. The efficacy of the use of these probiotic tampons has not been established.
Tampons have also been used in cases of tooth extraction to reduce post-extraction bleeding.
Tampons are currently being investigated as a possible use to detect endometrial cancer. Endometrial cancer does not currently have effective cancer screening methods if an individual is not showing symptoms. Tampons not only absorb menstrual blood, but also vaginal fluids. The vaginal fluids absorbed in the tampons would also contain the cancerous DNA, and possibly contain precancerous material, allowing for earlier detection of endometrial cancer. Clinical trials are currently being conducted to evaluate the use of tampons as a screening method for early detection of endometrial cancer.
Environment and waste
Appropriate disposal of used tampons is still lacking in many countries. Because of the lack of menstrual waste management practices in some countries, many sanitary pads and other menstrual products are disposed of in domestic garbage bins and eventually become part of the solid waste stream.
The issue that underlies the governance and implementation of menstrual waste management is how a country categorizes menstrual waste. This waste could be considered common household waste; hazardous household waste (which would need to be segregated from routine household waste); biomedical waste, given the amount of blood it contains; or plastic waste, given the plastic content of many commercial disposable pads (in some cases only the outer casing of the tampon or pad).
Ecological impact varies according to disposal method (whether a tampon is flushed down the toilet or placed in a garbage bin – the latter is the recommended option). Factors such as tampon composition will likewise impact sewage treatment plants or waste processing. The average use of tampons in menstruation may add up to approximately 11,400 tampons in someone's lifetime (if they use only tampons rather than other products). Tampons are made of cotton, rayon, polyester, polyethylene, polypropylene, and fiber finishes. Aside from the cotton, rayon and fiber finishes, these materials are not biodegradable. Organic cotton tampons are biodegradable, but must be composted to ensure they break down in a reasonable amount of time. Rayon was found to be more biodegradable than cotton.
Environmentally friendly alternatives to using tampons are the menstrual cup, reusable sanitary pads, menstrual sponges, reusable tampons, and reusable absorbent underwear.
The Royal Institute of Technology in Stockholm carried out a life-cycle assessment (LCA) comparison of the environmental impact of tampons and sanitary pads. They found that the main environmental impact of the products was in fact caused by the processing of raw materials, particularly LDPE (low density polyethylene) – or the plastics used in the backing of pads and tampon applicators, and cellulose production. As production of these plastics requires a lot of energy and creates long-lasting waste, the main impact from the life cycle of these products is fossil fuel use, though the waste produced is significant in its own right.
Menstrual material is often disposed of according to the type of product, and even based on cultural beliefs, without regard to the location or proper techniques of disposal. In some areas of the world, menstrual waste is disposed of in pit latrines, as burning and burial are difficult where private space is limited.
History
Women have used tampons during menstruation for thousands of years. In her book Everything You Must Know About Tampons (1981), Nancy Friedman writes: "[T]here is evidence of tampon use throughout history in a multitude of cultures. The oldest printed medical document, Ebers Papyrus, refers to the use of soft papyrus tampons by Egyptian women in the fifteenth century B.C. Roman women used wool tampons. Women in ancient Japan fashioned tampons out of paper, held them in place with a bandage, and changed them 10 to 12 times a day. Traditional Hawaiian women used the furry part of a native fern called hapu'u; and grasses, mosses and other plants are still used by women in parts of Asia and Africa." R. G. Mayne defined a tampon in 1860 as "a less inelegant term for the plug, whether made up of portions of rag, sponge, or a silk handkerchief, where plugging the vagina is had recourse to in cases of hemorrhage."
Earle Haas patented the first modern tampon, Tampax, with the tube-within-a-tube applicator. Gertrude Schulte Tenderich (née Voss) bought the patent rights to the Tampax trademark and in 1933 became the product's seller, manufacturer, and spokesperson. Tenderich hired women to manufacture the item, hired two sales associates to market the product to drugstores in Colorado and Wyoming, and engaged nurses to give public lectures on the benefits of the creation; she was also instrumental in inducing newspapers to run advertisements.
In 1945, Tampax presented a number of studies to prove the safety of tampons. A 1965 study by the Rock Reproductive Clinic stated that the use of tampons "has no physiological or clinical undesired side effects".
During her study of female anatomy, German gynecologist Judith Esser-Mittag developed a digital-style tampon, which was made to be inserted without an applicator. In the late 1940s, Carl Hahn and Heinz Mittag worked on the mass production of this tampon. Hahn sold his company to Johnson & Johnson in 1974.
In 1992, Congress found an internal FDA memo about the presence of dioxin, a known carcinogen, in tampons. Dioxin is one of the toxic chemicals produced when wood pulp is bleached with chlorine. Congressional hearings were held, and tampon manufacturers assured Congress that the trace levels of dioxin in tampons were well below the level the EPA considered harmful; the EPA, however, has stated there is no acceptable level of dioxin. Following this, major commercial tampon brands began switching from dioxin-producing chlorine gas bleaching methods to either "elemental chlorine-free" or "totally chlorine-free" bleaching processes.
In the United States, the Tampon Safety and Research Act was introduced to Congress in 1997 in an attempt to create transparency between tampon manufacturers and consumers. The bill would mandate the conduct or support of research on the extent to which additives in feminine hygiene products pose any risks to the health of women or to the children of women who use those products during or before the pregnancies involved. Although yet to be passed, the bill has been continually reintroduced, most recently in 2019 as the Robin Danielson Feminine Hygiene Product Safety Act. Data would also be required from manufacturers regarding the presence of dioxins, synthetic fibers, chlorine, and other components (including contaminants and substances used as fragrances, colorants, dyes, and preservatives) in their feminine hygiene products.
Society and culture
Tampon tax
"Tampon tax" refers to tampons' lack of tax exempt status that is often in place for other basic need products. Several political statements have been made in regards to tampon use. In 2000, a 10% goods and services tax (GST) was introduced in Australia. While lubricant, condoms, incontinence pads and numerous medical items were regarded as essential and exempt from the tax, tampons continue to be charged GST. Prior to the introduction of GST, several states also applied a luxury tax to tampons at a higher rate than GST. Specific petitions such as "Axe the Tampon Tax" have been created to oppose this tax, and the tax was removed in 2019.
In the UK, tampons are subject to a zero rate of value added tax (VAT), as opposed to the standard rate of 20% applied to the vast majority of products sold in the country. The UK was previously bound by the EU VAT directive, which required a minimum of 5% VAT on sanitary products. Since 1 January 2021, VAT applied to menstrual sanitary products has been 0%.
In Canada, the federal government has removed the goods and services tax (GST) and harmonized sales tax (HST) from tampons and other menstrual hygiene products as of 1 July 2015.
In the US, access to menstrual products such as pads and tampons, and the taxes added to these products, have been controversial topics, especially for people with low income. Laws exempting such taxes differ vastly from state to state. The American Civil Liberties Union (ACLU) has published a report discussing these laws and listing the different guidelines followed by institutions such as schools, shelters, and prisons when providing menstrual goods.
The ACLU report also discusses the case of Kimberly Haven, a former prisoner who had a hysterectomy after experiencing toxic shock syndrome (TSS) caused by using handmade tampons made from toilet paper in prison. Her testimony supported a Maryland bill intended to increase access to menstrual products for imprisoned women.
Etymology
Historically, the word "tampon" originated from the medieval French word "tampion", meaning a piece of cloth to stop a hole, a stamp, plug, or stopper.
Virginity
A misconception, sometimes described as sexist, holds that using a tampon takes away a person's virginity. This belief arises because some cultures regard virginity as indicated by an intact hymen and assume that inserting a tampon breaks it; the belief is not rooted in medical science. The hymen, a thin membrane partially covering the vaginal opening, varies greatly in thickness, elasticity, and shape from person to person. It almost never blocks the entire vaginal opening, stretches and thins naturally over time, and can stretch or break during non-sexual everyday activities such as exercise; conversely, it may not break even after penetrative sexual intercourse, as it is able to stretch. Therefore, the presence or condition of the hymen is not an indicator of virginity.
Medical professionals have pointed out that misconceptions about the hymen lead to medically unfounded and harmful practices such as virginity testing and hymenoplasty.
In popular culture
In Stephen King's novel Carrie, the title character is bullied for menstruating and is bombarded with tampons and pads by her peers.
In 1985, Tampon Applicator Creative Klubs International (TACKI) was established to develop creative uses for discarded, non-biodegradable, plastic feminine hygiene products, commonly referred to as "beach whistles". TACKI President Jay Critchley launched his corporation in order to develop a global folk art movement and cottage industry, promote awareness of these throwaway objects washed up on beaches worldwide from faulty sewage systems, create the world's largest collection of discarded plastic tampon applicators, and ban their manufacture and sale through legislative action. The project and artwork were carried out during numerous site-specific performances and installations.
Intersectionality
Gender inclusion
Tampons are traditionally marketed as products for women, reinforcing the idea that menstruation only affects cisgender women, that is, women who were assigned female at birth and identify with that label. This framing marginalizes transgender men, nonbinary individuals, and genderqueer people who menstruate, making their experiences with menstruation largely invisible in public discourse, marketing, and product design. Addressing gender inclusion in the context of tampons involves examining societal stigmas, access challenges, and evolving efforts to create more inclusive spaces for all people who menstruate. In recent years, academic discussion of periods has shifted terminology to be more inclusive, beginning to use the term "menstruators" instead of "women."
Additionally, public restrooms often reinforce a binary understanding of gender. Men’s restrooms rarely, if ever, provide menstrual product dispensers, leaving many queer people without access to tampons when needed. Transgender and nonbinary individuals who enter the women's restroom to obtain period products, or who use the restrooms that align with their gender identity, may face safety concerns or harassment.
Some menstrual product companies, such as Aunt Flow and Thinx, have started using inclusive language like “menstrual products” instead of “feminine hygiene products.” These efforts aim to normalize menstruation for all individuals who experience it, regardless of gender. In marketing, efforts to redesign tampon packaging to be more gender-neutral help make these products less alienating for trans and nonbinary users. Removing pink or floral designs, for example, makes them more approachable.
Socioeconomic disparities
Access to tampons is shaped by significant socioeconomic disparities, with systemic barriers disproportionately affecting individuals from low-income backgrounds. These disparities show up in various ways, including affordability challenges, stigmatization, and inadequate infrastructure. Addressing these issues requires recognizing how economic inequality intersects with social factors to restrict access to menstrual products like tampons.
A major issue is period poverty, which refers to the lack of access to menstrual products, hygiene facilities, and education due to financial constraints. It is a prevalent issue in both developing and developed countries, where many people cannot afford tampons and other menstrual products. In many low-income communities, individuals often miss school, work, or social activities due to a lack of menstrual products. This contributes to cycles of poverty, as menstruation becomes a barrier to education and economic opportunities. Globally, an estimated 500 million people face period poverty. In the United States, studies have shown that one in five students has struggled to afford menstrual products.
Many schools, shelters, and prisons fail to provide tampons for free or in sufficient quantities. Low-income students, in particular, may resort to unsafe alternatives like paper towels or rags, which can lead to health risks such as infections. Homeless menstruators face unique challenges, as they often lack both the financial means to purchase tampons and access to clean facilities for changing them. Nonprofits like The Homeless Period Project work to distribute tampons to these populations, but systemic support is still lacking.
Racial inequities
Racial inequities in access to menstrual products are shaped by a complex interplay of systemic racism, socioeconomic disparities, cultural stigmas, and healthcare inequality. These inequities disproportionately affect menstruators from marginalized backgrounds, limiting their access to essential products.
Menstruators from racialized communities are more likely to live in poverty due to historical economic inequities, making the purchase of tampons and other menstrual products a financial burden. Compounding this, in some predominantly Black and Brown communities menstrual products may be sold at higher prices due to fewer retail options and the "poverty tax," whereby essential goods cost more in underserved areas.
Cultural stigmas around menstruation can be particularly pronounced in certain racial and ethnic communities, where periods may be considered taboo or inappropriate to discuss openly. This silence can discourage menstruators from seeking tampons or advocating for their needs. In some cultures, tampons are viewed with suspicion. They are linked to myths about virginity or seen as inappropriate for younger menstruators. These beliefs can limit the willingness or ability of individuals in certain racial groups to access tampons.
Additionally, schools in marginalized racial communities often lack comprehensive menstrual education programs. This lack of education can leave menstruators unaware of the variety of menstrual products available, or unsure how to use them safely. On top of that, racial biases in the education system may contribute to a lack of attention to the menstrual health needs of students from different racial groups.
There are many other aspects of racial inequity in menstrual product education and accessibility. People of color are less likely to have access to adequate gynecological care, and more likely to face discrimination in healthcare settings regardless of their health issue. In addition, the diverse needs of people from different ethnic backgrounds have historically been neglected by marketing companies. When advertising does attempt to include people of color, it often fails to address the unique cultural stigmas, challenges, or values that shape their experiences with menstruation.
Tampons and disability
Individuals with conditions that affect fine motor skills or hand strength (e.g., arthritis, cerebral palsy, or multiple sclerosis) may find it difficult to unwrap, insert, or remove tampons. The small size of tampons and their applicators can be a significant barrier. Most tampons are designed with the assumption of able-bodied users, lacking features like ergonomic grips, adaptive applicators, or designs that accommodate reduced hand mobility. Public restrooms, especially those not compliant with accessibility standards, may not provide sufficient space or the necessary support structures (e.g., grab bars) for disabled individuals to manage tampon insertion or removal. People with visual impairments may have difficulty identifying tampon sizes, brands, or instructions due to the lack of braille on packaging or tactile features on the products themselves. Menstruators with hearing impairments may miss important product usage information if it is provided only in audio formats or poorly captioned content.
People with intellectual or developmental disabilities may require additional support to learn how to use tampons safely and effectively. Complex instructions, such as proper insertion angles and removal timing, can be challenging to navigate without guidance. Caregivers or support workers may need to assist disabled menstruators with tampon usage. However, this raises concerns about autonomy, privacy, and dignity, as menstruation is a deeply personal experience.
Certain disabilities or chronic illnesses (e.g., endometriosis, pelvic floor disorders, or interstitial cystitis) may make tampon use uncomfortable or painful. These conditions can limit access to tampons as a viable option for menstrual care. Some disabled individuals have heightened sensitivity to tampon materials (e.g., rayon, chlorine bleach, or fragrances), which can increase discomfort or lead to allergic reactions.
Disabled menstruators often face compounded stigma, as society tends to marginalize both disabled individuals and discussions around menstruation. This can lead to isolation and discomfort in seeking out appropriate menstrual products.
Activism and advocacy
Advocacy around tampons intersects with broader movements for menstrual equity. These movements aim to dismantle the societal, economic, and cultural barriers that perpetuate inequality. Activism plays a critical role in shaping policy and ensuring that tampons are accessible and affordable for all menstruators. Many advocacy groups campaign to remove sales tax on tampons and other menstrual products, arguing that these items are essential healthcare products rather than luxury goods. Success stories include states and countries that have already abolished the tax, setting a precedent for global change.
Campaigns also work to normalize conversations about menstruation, dismantling stigma that has historically silenced discussions around menstrual products. Destigmatization efforts include public education campaigns, art installations, and social media movements aimed at reframing menstruation as a natural aspect of human health. Advocates are pushing for the portrayal of menstruation in mainstream media, emphasizing diverse menstruators’ experiences and highlighting the role of tampons in empowering individuals to manage their periods effectively.
Organizations like Period: The Menstrual Movement and Menstrual Equity for All are working to raise awareness about the need for inclusivity in menstruation-related discussions. Advocating for free menstrual products in schools, prisons, and public spaces, particularly in areas serving predominantly racialized populations, can help bridge gaps in access. Schools and healthcare providers must offer menstrual care that reflects the cultural challenges faced by diverse racial groups, ensuring that tampons are accessible to all.
| Biology and health sciences | Hygiene products | Health |
262712 | https://en.wikipedia.org/wiki/Slash-and-burn | Slash-and-burn | Slash-and-burn agriculture is a farming method that involves the cutting and burning of plants in a forest or woodland to create a field called a swidden. The method begins by cutting down the trees and woody plants in an area. The downed vegetation, or "slash", is then left to dry, usually right before the rainiest part of the year. Then, the biomass is burned, resulting in a nutrient-rich layer of ash which makes the soil fertile, as well as temporarily eliminating weed and pest species. After about three to five years, the plot's productivity decreases due to depletion of nutrients along with weed and pest invasion, causing the farmers to abandon the field and move to a new area. The time it takes for a swidden to recover depends on the location and can range from as little as five years to more than twenty years, after which the plot can be slashed and burned again, repeating the cycle. In Bangladesh and India, the practice is known as jhum or jhoom.
Slash-and-burn is a type of shifting cultivation, an agricultural system in which farmers routinely move from one cultivable area to another. A rough estimate is that 250 million people worldwide use slash-and-burn. Slash-and-burn causes temporary deforestation. Ashes from the burnt trees help farmers by providing nutrients for the soil. At low human population densities this approach is very sustainable, but the technique does not scale to large populations.
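A rough back-of-the-envelope calculation illustrates why the technique does not scale; the four-year cropping period and twenty-year fallow below are assumed figures chosen from the ranges given above, not measurements from any particular region.

```latex
% Illustrative land requirement under shifting cultivation.
% Assumptions: each plot is cropped for 4 years, then lies fallow for 20 years.
\frac{\text{total land needed}}{\text{land cropped at any one time}}
  = \frac{t_{\mathrm{crop}} + t_{\mathrm{fallow}}}{t_{\mathrm{crop}}}
  = \frac{4 + 20}{4} = 6
```

Under these assumptions a community must control roughly six times the land it actually cultivates at any moment, which is why rising population density, by forcing shorter fallows, eventually destroys the forest.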
A similar term is assarting, which is the clearing of forests, usually (but not always) for the purpose of agriculture. Assarting does not include burning.
History
Historically, slash-and-burn cultivation has been practiced throughout much of the world. Fire was already used by hunter-gatherers before the invention of agriculture, and still is in present times. Clearings created by the fire were made for many reasons, such as to provide new growth for game animals and to promote certain kinds of edible plants.
During the Neolithic Revolution, groups of hunter-gatherers domesticated various plants and animals, permitting them to settle down and practice agriculture, which provided more nutrition per hectare than hunting and gathering. Some groups could easily plant their crops in open fields along river valleys, but others had forests covering their land. Thus, since Neolithic times, slash-and-burn agriculture has been widely used to clear land to make it suitable for crops and livestock.
Large groups wandering in the woodlands were once a common form of society in European prehistory. An extended family would burn and cultivate its swidden plots, sow one or more crops, and then move on to the next plot.
Technique
Slash-and-burn fields are typically used and owned by a family until the soil is exhausted. At this point the ownership rights are abandoned, the family clears a new field, and trees and shrubs are permitted to grow on the former field. After a few decades, another family or clan may then use the land and claim usufructuary rights. In such a system there is typically no market in farmland, so land is not bought or sold on the open market and land rights are traditional.
In slash-and-burn agriculture, forests are typically cut months before a dry season. The "slash" is permitted to dry and then burned in the following dry season. The resulting ash fertilizes the soil and the burned field is then planted at the beginning of the next rainy season with crops such as rice, maize, cassava, or other staples. This work was once done using simple tools such as machetes, axes, hoes and shovels.
Benefits and drawbacks
This system of agriculture provides millions of people with food and income. It has been ecologically sustainable for thousands of years. Because the leached soils in many tropical regions, such as the Amazon, are nutritionally extremely poor, slash-and-burn is one of the few types of agriculture which can be practiced in these areas. Slash-and-burn farmers typically plant a variety of crops, instead of a monoculture, and contribute to higher biodiversity by creating mosaic habitats. The general ecosystem is not harmed in traditional slash-and-burn, aside from a small temporary patch.
This technique is particularly unsuitable for the production of cash crops. Slash-and-burn requires a huge amount of land, or equivalently a low density of people. When slash-and-burn is practiced in the same area too often, because the human population density has increased to an unsustainable level, the forest is eventually destroyed.
Regionally
Asia
Tribal groups in the northeastern Indian states of Tripura, Arunachal Pradesh, Meghalaya, Mizoram and Nagaland and the Bangladeshi districts of Rangamati, Khagrachari, Bandarban and Sylhet refer to slash-and-burn agriculture as podu, jhum or jhoom cultivation. The system involves clearing land, by fire or clear-felling, for economically important crops such as upland rice, vegetables or fruits. After a few cycles, the land's fertility declines and a new area is chosen. Jhum cultivation is most often practiced on the slopes of thickly-forested hills. Cultivators cut the treetops to allow sunlight to reach the land, burning the trees and grasses for fresh soil. Although it is believed that this helps fertilize the land, it can leave it vulnerable to erosion. Holes are made for the seeds of crops such as sticky rice, maize, eggplant and cucumber. After considering jhum's effects, the government of Mizoram has introduced a policy to end the method in the state.
Vietnam is home to a diverse range of ethnic groups. Some of these groups form primarily rural communities that reside far from the larger cities of the country, living on swidden fields where slash-and-burn agriculture is still in regular use as a part of everyday life.
Americas
Some American civilizations, such as the Maya, have used slash-and-burn cultivation since ancient times. American Indians in the United States also used fire in agriculture and hunting. In the Amazon, many peoples such as the Yanomami also rely on the slash-and-burn method because of the Amazon's poor soil quality.
Northern Europe
Slash-and-burn techniques were used in agricultural systems in northeastern Sweden. In Sweden, the practice is known as svedjebruk.
Telkkämäki Nature Reserve in Kaavi, Finland, is an open-air museum where slash-and-burn agriculture is demonstrated. Farm visitors can see how people farmed when slash-and-burn was the norm in the Northern Savonian region of eastern Finland beginning in the 15th century. Areas of the reserve are burnt each year.
Svedjebruk is a Swedish and Norwegian term for slash-and-burn agriculture, derived from the Old Norse word sviða, which means "to burn". This practice originated in Russia in the region of Novgorod and was widespread in Finland and Eastern Sweden during the Medieval period. It spread to western Sweden in the 16th century when Finnish settlers were encouraged to migrate there by King Gustav Vasa to help clear the dense forests. Later, when the Finns were persecuted by the local Swedes, the farming method was spread by refugees to eastern Norway, more specifically to the eastern part of Solør, in the area bordering Sweden known as Finnskogen ("the Finnish woods").
The practice also spread to New Sweden in North America. Reinforced by the use of fire in agriculture and hunting by American Indians, it became an important part of pioneering in America.
Description of process
Svedjebruk involved stripping a ring of bark completely around the trunk of coniferous trees such as pine or spruce, or felling them, allowing them to dry, setting fire to the dried forest, and growing crops in the fertile ash-covered soil. The resulting ash was highly fertile, but only for a short period. The clearing was initially planted with rye as soon as the ash had fully settled and sufficiently cooled. When the rain came, it packed the ash around the rye. The rye germinated and grew prolifically, with anywhere from 25 to 100 stalks (or straws), each with multiple grains.
Only two tools were required, the axe and the sickle. The axe cut the trees to start the cycle. When the rye had ripened, it was harvested with a sickle, which could reach among the rocks and stumps where a scythe would have been ineffective.
In the second and third year the field would be sown with turnips or cabbages. It then might be grazed for several years before being allowed to return to woodland.
Svedjebruk culture
Svedjebruk required felling new forest and burning a new area every year. It was necessary to allow the former fields to regrow with forest for 10–30 years before repeating the cycle. As a result, the dwellings were often many kilometers from the fields. Furthermore, since the process was manpower-intensive, extended families tended to work together and live in compact communities.
The farming approach required a large area. When forest was plentiful, the Finns were very prosperous. As the population grew and restrictions were placed on the forest which could be burned, it became increasingly difficult. By 1710, during the conflict with Sweden, Norwegian authorities considered expelling the Finns from the border area because of their suspect loyalties, but did not do so because it was judged that they were too poor to survive eviction.
Research
This type of agriculture is discouraged by many development and environmentalist organisations; the main alternatives promoted are a switch to more intensive, permanent farming methods, or a shift from farming to higher-paying work in other industries. Other organisations promote helping farmers achieve higher productivity by introducing new techniques.
Not allowing the slashed vegetation to burn completely and ploughing the resultant charcoal into the soil (slash-and-char) has been proposed as a way to boost yields.
Promoters of a project from the early 2000s claimed that slash-and-burn cultivation could be reduced if farmers grew black pepper crops, turmeric, beans, corn, cacao, rambutan, and citrus between Inga trees, which they termed 'Inga alley cropping'.
A method called 'slash-and-cover' has been proposed for improving yields in a type of traditional assarting cultivation used to grow common beans in Central America: leguminous shrubs are additionally planted to act as a fallow crop once the soil is exhausted and a new patch of forest is ready to be cleared.
| Technology | Agriculture_2 | null |
262734 | https://en.wikipedia.org/wiki/Hoatzin | Hoatzin | The hoatzin or hoactzin (Opisthocomus hoazin) is a species of tropical bird found in swamps, riparian forests, and mangroves of the Amazon and the Orinoco basins in South America. It is the only extant species in the genus Opisthocomus, which is in turn the only extant genus in the family Opisthocomidae, under the order Opisthocomiformes. Despite intense debate among specialists, the taxonomic position of this family is still far from clear.
The hoatzin is notable for its chicks having primitive claws on two of their wing digits; the species also is unique in possessing a digestive system capable of fermentation and the effective breaking-down of plant matter, a trait more commonly known from herbivorous ungulate-ruminant mammals and some primates. This bird is also the national bird of Guyana, where the local name for this bird is Canje pheasant.
Description
The hoatzin is pheasant-sized, with a total length of approximately 65 cm (26 in), and a long neck and small head. It has an unfeathered blue face with maroon eyes, and its head is topped by a spiky, rufous crest. The long, sooty-brown tail is bronze-green tipped with a broad whitish or buff band at the end. The upper parts are dark, sooty brown-edged buff on the wing coverts, and streaked buff on the mantle and nape. The underparts are buff, while the crissum (the undertail coverts surrounding the cloaca), primaries, underwing coverts, and flanks are rich rufous-chestnut, but this is mainly visible when the hoatzin opens its wings.
It is a noisy bird, and makes a variety of hoarse calls, including groans, croaks, hisses, and grunts. These calls are often associated with body movements, such as wing spreading.
Young wing claws
Hoatzin chicks have two claws on each wing. Immediately after hatching, they can use these claws, and their oversized feet, to scramble around the tree branches without falling into the water. When predators such as the great black hawk attack a hoatzin nesting colony, the adults fly noisily about, trying to divert the predator's attention, while the chicks move away from the nest and hide among the thickets. If discovered, however, they drop into the water and swim under the surface to escape, then later use their clawed wings to climb back to the safety of the nest. This has inevitably led to comparisons to the fossil bird Archaeopteryx, but the characteristic is rather an autapomorphy, possibly caused by an atavism toward the dinosaurian finger claws, whose developmental genetics ("blueprint") presumably is still in the avian genome. Since Archaeopteryx had three functional claws on each wing, some earlier systematists speculated that the hoatzin was descended from it, because nestling hoatzins have two functional claws on each wing. Modern researchers, however, hypothesize that the young hoatzin's claws are of more recent origin, and may be a secondary adaptation from its frequent need to leave the nest and climb about in dense vines and trees well before it can fly. A similar trait is seen in turacos, whose nestlings use claws on their wings to climb in trees.
Taxonomy, systematics, and evolution
The generic name Opisthocomus comes from the Ancient Greek ópisthokomos, derived from ópisthe (ópisthen before a consonant), "behind", and kómē, "hair", altogether meaning "long hair behind", referring to its large crest.
The hoatzin was originally described in 1776 by German zoologist Statius Müller. Much debate has occurred about the hoatzin's relationships with other birds. Because of its distinctness, it has been given its own family, the Opisthocomidae, and its own suborder, the Opisthocomi. At various times, it has been allied with such taxa as the tinamous, the Galliformes (gamebirds), the rails, the bustards, seriemas, sandgrouse, doves, turacos and other Cuculiformes, and mousebirds. A whole genome sequencing study published in 2014 places the hoatzin as the sister taxon of a clade composed of Gruiformes (cranes) and Charadriiformes (plovers). Another genomic study in 2024 instead places it as the sister group to the Phaethoquornithes (containing numerous aquatic bird orders). The combined group was found to be sister to the Mirandornithes (flamingos and grebes).
In 2015, genetic research indicated that the hoatzin is the last surviving member of a bird line that branched off in its own direction 64 million years ago, shortly after the extinction event that killed the nonavian dinosaurs. Another genetic study from 2024 instead suggested a Late Cretaceous origin (around 70 million years ago), but found that this early divergence is shared with a majority of extant bird orders, making it no more primitive than them.
Fossil record
With respect to other material evidence, an undisputed fossil record of a close hoatzin relative is specimen UCMP 42823, a single cranium backside. It is of Miocene origin and was recovered in the upper Magdalena River Valley, Colombia, in the well-known fauna of La Venta. This has been placed into a distinct, less derived genus, Hoazinoides, but clearly would be placed into the same family as the extant species. It markedly differs in that the cranium of the living hoatzin is characteristic, being much domed, rounded, and shortened, and that these autapomorphies were less pronounced in the Miocene bird. Miller discussed these findings in the light of the supposed affiliation of the hoatzins and the Galliformes, which was the favored hypothesis at that time but had been controversial almost since its inception. He cautioned, however, "that Hoazinoides by no means establishes a phyletic junction point with other galliforms" for obvious reasons, as we know today. Anything other than the primary findings of Miller is not to be expected in any case, as by the time of Hoazinoides, essentially all modern bird families are either known or believed to have been present and distinct. Going further back in time, the Late Eocene or Early Oligocene (some 34 Mya) Filholornis from France has also been considered "proof" of a link between the hoatzin and the gamebirds. The fragmentary fossil Onychopteryx from the Eocene of Argentina and the quite complete, but no less enigmatic Early-Middle Eocene (Ypresian-Lutetian, some 48 Mya) Foro panarium are sometimes used to argue for a hoatzin-cuculiform (including turacos) link. As demonstrated above, though, this must be considered highly speculative, if not as badly off the mark as the relationship with the Cracidae discussed by Miller.
The earliest record of the order Opisthocomiformes is Protoazin parisiensis, from the latest Eocene (about 34 Mya) of Romainville, France. The holotype and only known specimen is NMB PG.70, consisting of partial coracoid, partial scapula, and partial pedal phalanx. According to the phylogenetic analysis performed by the authors, Namibiavis, although later, is more basal than Protoazin. Opisthocomiforms seem to have been much more widespread in the past, with the present South American distribution being only a relic. By the Early to Middle Miocene, they were probably extinct in Europe already, as formations dated to this time and representing fluvial or lacustrine palaeoenvironments, in which the hoatzin thrives today, have yielded dozens of bird specimens, but no opisthocomiforms. A possible explanation to account for the extinction of Protoazin between the Late Eocene and the Early Miocene in Europe, and of Namibiavis after the Middle Miocene of sub-Saharan Africa, is the arrival of arboreal carnivorans—predation which could have had a devastating effect on the local opisthocomiforms, if they were similarly poor flyers and had similarly vulnerable nesting strategies as today's hoatzins. Felids and viverrids first arrived in Europe from Asia after the Turgai Sea closed, marking the boundary between the Eocene and the Oligocene. None of these predators, and, for that matter, no placental predator at all was present in South America before the Great American Interchange 3 Mya; this absence could explain the survival of the hoatzin there. In addition to being the earliest fossil record of an opisthocomiform, Protoazin was also the earliest find of one (1912), but it was forgotten for more than a century, being described only in 2014.
Hoazinavis is an extinct genus of early opisthocomiforms from Late Oligocene and Early Miocene (about 24–22 Mya) deposits of Brazil. It was collected in 2008 from the Tremembé Formation of São Paulo, Brazil. It was first named by Gerald Mayr, Herculano Alvarenga and Cécile Mourer-Chauviré in 2011 and the type species is Hoazinavis lacustris.
Namibiavis is another extinct genus of early opisthocomiforms from early Middle Miocene (around 16 Mya) deposits of Namibia. It was collected from Arrisdrift, southern Namibia. It was first named by Cécile Mourer-Chauviré in 2003 and the type species is Namibiavis senutae.
Behavior
Feeding and habits
The hoatzin is a folivore—it eats the leaves (and to a lesser degree, the fruits and flowers) of the plants that grow in its marshy and riverine habitat. It clambers around along the branches in its search for food. The hoatzin uses a leathery “bump” on the bottom of its crop to help balance its weight on the branches. The species was once thought to eat the leaves of only arums and mangroves, but the species is now known to consume the leaves of more than 50 botanical species. One study, undertaken in Venezuela, found that the hoatzin's diet was 82% leaves, 10% flowers, and 8% fruit. Any feeding on insects or other animal matter is purely opportunistic or accidental.
One of this species' many peculiarities is its unique digestive system, which contains specialized bacteria in the front part of the gut that break down and ferment the foliar material the bird consumes (much as cattle and other ruminants do). This process is more efficient than what has been measured in many other bird species, with up to 70% of the plant fiber being digested. Unlike ruminants, however, which possess a rumen (a specialized, chambered stomach for bacterial fermentation), the hoatzin has an unusually large crop that is folded into two chambers, with a large, multi-chambered lower esophagus.
Serrations on the beak help cut leaves into smaller pieces before they are swallowed. Because they lack the teeth of mammals, hoatzins do not regurgitate their food or chew the cud; instead, a combination of muscular pressure and abrasion by a “cornified” lining of the crop serves as an equivalent to remastication, allowing fermentation and trituration to occur at the same site. The fermented foliage produces methane, which the bird expels through burping. Its stomach chamber and gizzard are much smaller than in other birds. Its crop is so large as to displace the flight muscles and keel of the sternum, much to the detriment of its flight capacity. The crop is supported by a thickened skin callus on the tip of the sternum, which helps the bird support the crop on a branch during rest and while digesting its food. A hoatzin's meal takes up to 45 hours to pass through its body. With a body weight as low as about 700 g (1.5 lb), the adult hoatzin is the smallest known animal with foregut fermentation (the lower limit for mammals is about 3 kg or 6.6 lb).
Because of aromatic compounds in the leaves they consume, and the bacterial fermentation required to digest them, the birds have a disagreeable, manure-like odor and are hunted by humans for food only in times of dire need; because of this odor, local people also call the bird the "stinkbird". Much of the hoatzin’s diet, including various types of Monstera, Philodendron and other aroids, contains a high concentration of calcium oxalate crystals, which, even in small amounts, can be greatly uncomfortable (and even dangerous) for humans to consume.
Breeding
Hoatzins are seasonal breeders, breeding during the rainy season, the exact timing of which varies across their range. Hoatzins are gregarious and nest in small colonies, laying two or three eggs in a stick nest in a tree hanging over water in seasonally flooded forests. The chicks are fed on regurgitated fermented food.
Relationship with humans
In Brazil, indigenous peoples sometimes collect the eggs for food, and the adults are occasionally hunted, but it is generally rare to consume mature birds, as hoatzin meat is reputed to have a bad taste. Its preferred habitats of forests and inland wetlands are threatened by Amazonian deforestation. The hoatzin is believed to remain fairly common in a large part of its range, but its population is likely decreasing due to habitat loss. The hoatzin is the national bird of Guyana.
| Biology and health sciences | Cuculiformes | null |
262861 | https://en.wikipedia.org/wiki/Zeroth%20law%20of%20thermodynamics | Zeroth law of thermodynamics | The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized.
The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other.
Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time.
Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent".
The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system. A quantity that is the same for two systems, if they can be placed in thermal equilibrium with each other, is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers.
Equivalence relation
A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law.
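As an illustrative sketch only (not drawn from the thermodynamic literature), the partitioning and tagging described above can be modeled with a disjoint-set (union-find) structure; the class and method names below are hypothetical:

```python
# Sketch: treating "is in thermal equilibrium with" as an equivalence
# relation partitions systems into disjoint subsets (a union-find structure).
class EquilibriumClasses:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Path-halving find: returns the representative "tag" of x's subset.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def observe_equilibrium(self, a, b):
        # Recording that a and b are in equilibrium merges their subsets.
        self.parent[self._find(a)] = self._find(b)

    def same_temperature(self, a, b):
        # Two systems share a tag iff they belong to the same subset.
        return self._find(a) == self._find(b)

eq = EquilibriumClasses()
eq.observe_equilibrium("A", "C")  # A is in equilibrium with C
eq.observe_equilibrium("B", "C")  # B is in equilibrium with C
print(eq.same_temperature("A", "B"))  # True: the zeroth law in action
```

Recording only the two observations A–C and B–C is enough to place A and B in the same subset, mirroring the statement of the zeroth law; the representative element plays the role of the "tag" discussed above.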
If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: If two systems A and B are each in thermal equilibrium with a third system C, then A and B are in thermal equilibrium with each other.
This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is often expressed as a right-Euclidean statement: If a system C is in thermal equilibrium with each of two systems A and B, then A and B are in thermal equilibrium with each other.
One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: if A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is a transitive relation: if A is in thermal equilibrium with B, and B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
A reflexive, transitive relation does not guarantee an equivalence relationship. For the above statement to be true, both reflexivity and symmetry must be implicitly assumed.
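The following short derivation, a sketch assuming a right-Euclidean reading of the relation (a R b and a R c together imply b R c), spells out why reflexivity plus the Euclidean property yields symmetry and transitivity:

```latex
% Let R be reflexive (a R a for all a) and right-Euclidean:
% (a R b) and (a R c) together imply (b R c).
%
% Symmetry: take c = a in the Euclidean property.
(a \mathbin{R} b) \wedge (a \mathbin{R} a) \;\Longrightarrow\; (b \mathbin{R} a)
%
% Transitivity: given a R b and b R c, symmetry (above) gives b R a;
% then apply the Euclidean property to b R a and b R c.
(b \mathbin{R} a) \wedge (b \mathbin{R} c) \;\Longrightarrow\; (a \mathbin{R} c)
```

Reflexivity is used essentially in the symmetry step, which is why, as stated above, it must be assumed alongside the Euclidean property.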
It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, the systems are in thermal equilibrium if the thermometer gives the same reading for each of them; if such systems are then thermally connected, no subsequent change in the state of either one occurs. If the readings differ, then thermally connecting the two systems causes a change in the states of both. The zeroth law provides no information regarding this final reading.
Foundation of temperature
Nowadays, there are two nearly separate concepts of temperature, the thermodynamic concept, and that of the kinetic theory of gases and other materials.
The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant . The present article is about the thermodynamic concept, not about the kinetic theory concept.
The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature.
In the space of thermodynamic parameters, zones of constant temperature form a surface that provides a natural ordering of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described by the three thermodynamic parameters P, V and N, it is a two-dimensional surface.
For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then P1V1/N1 = P2V2/N2, where Pi is the pressure in the ith system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas.
The surface PV/N = constant defines surfaces of equal thermodynamic temperature, and one may label these surfaces by defining T so that PV/N = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers".
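A minimal numerical sketch of the ideal gas thermometer idea, in Python; the function name and the particular numbers are illustrative assumptions, with R taken as the molar gas constant in SI units:

```python
# Minimal sketch of an ideal gas thermometer: two gas systems in mutual
# equilibrium across a diathermal wall share the same value of PV/(NR),
# which serves as their common empirical temperature label.

R = 8.314  # molar gas constant, J/(mol*K)

def temperature(pressure_pa: float, volume_m3: float, moles: float) -> float:
    """Empirical temperature tag T = PV/(NR) for an ideal gas sample."""
    return pressure_pa * volume_m3 / (moles * R)

# Two samples with different P, V, N but the same PV/N carry the same tag,
# i.e. they would show no net heat flow if thermally connected.
t1 = temperature(101_325.0, 0.0248, 1.0)   # ~1 mol at atmospheric pressure
t2 = temperature(202_650.0, 0.0248, 2.0)   # doubled P and N: same PV/N
print(f"T1 = {t1:.1f} K, T2 = {t2:.1f} K")  # both ~302.2 K
```

Both samples carry the same tag PV/(NR), so by the zeroth law they would remain unchanged if connected by a diathermal wall.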
In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.
Dependence on the existence of walls permeable only to heat
In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".
It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes
It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities.
It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers. Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.
History
Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent". This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.
According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava.
They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate:
They do not themselves here use the phrase "zeroth law of thermodynamics".
There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics.
Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows:
They then proposed that
The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.
| Physical sciences | Thermodynamics | Physics |
262925 | https://en.wikipedia.org/wiki/Ergot | Ergot | Ergot or ergot fungi refers to a group of fungi of the genus Claviceps.
The most prominent member of this group is Claviceps purpurea ("rye ergot fungus"). This fungus grows on rye and related plants, and produces alkaloids that can cause ergotism in humans and other mammals who consume grains contaminated with its fruiting structure (called ergot sclerotium).
Claviceps includes about 50 known species, mostly in the tropical regions. Economically significant species include C. purpurea (parasitic on grasses and cereals), C. fusiformis (on pearl millet, buffel grass), C. paspali (on dallis grass), C. africana (on sorghum) and C. lutea (on paspalum). C. purpurea most commonly affects outcrossing species such as rye (its most common host), as well as triticale, wheat and barley. It affects oats only rarely.
C. purpurea has at least three races or varieties, which differ in their host specificity:
G1 – land grasses of open meadows and fields;
G2 – grasses from moist, forest and mountain habitats;
G3 (C. purpurea var. spartinae) – salt marsh grasses (Spartina, Distichlis).
Life cycle
An ergot kernel, called a sclerotium, develops when a spore of fungal species of the genus Claviceps infects a floret of flowering grass or cereal. The infection process mimics a pollen grain growing into an ovary during fertilization. Infection requires that the fungal spore have access to the stigma; consequently, plants infected by Claviceps are mainly outcrossing species with open flowers, such as rye (Secale cereale) and ryegrasses (genus Lolium). The proliferating fungal mycelium then destroys the plant ovary and connects with the vascular bundle originally intended for seed nutrition. The first stage of ergot infection manifests itself as a white soft tissue (known as sphacelia) producing sugary honeydew, which often drops out of the infected grass florets. This honeydew contains millions of asexual spores (conidia), which insects disperse to other florets. Later, the sphacelia convert into a hard dry sclerotium inside the husk of the floret. At this stage, alkaloids and lipids accumulate in the sclerotium.
Claviceps species from tropic and subtropic regions produce macro- and microconidia in their honeydew. Macroconidia differ in shape and size between the species, whereas microconidia are rather uniform, oval to globose (5×3 μm). Macroconidia are able to produce secondary conidia. A germ tube emerges from a macroconidium through the surface of a honeydew drop and a secondary conidium of an oval to pearlike shape is formed, to which the contents of the original macroconidium migrates. Secondary conidia form a white, frost-like surface on honeydew drops and spread via the wind. No such process occurs in Claviceps purpurea, Claviceps grohii, Claviceps nigricans and Claviceps zizaniae, all from northern temperate regions.
When a mature sclerotium drops to the ground, the fungus remains dormant until proper conditions (such as the onset of spring or a rain period) trigger its fruiting phase. It germinates, forming one or several fruiting bodies with heads and stipes, variously coloured (resembling a tiny mushroom). In the head, threadlike sexual spores form, which are ejected simultaneously when suitable grass hosts are flowering.
Ergot infection causes a reduction in the yield and quality of grain and hay, and if livestock eat infected grain or hay it may cause a disease called ergotism. Black and protruding sclerotia of C. purpurea are well known. However, many tropical ergots have brown or greyish sclerotia, mimicking the shape of the host seed. For this reason, the infection is often overlooked.
Insects, including flies and moths, carry conidia of Claviceps species, but it is unknown whether insects play a role in spreading the fungus from infected to healthy plants.
Evolution
Regarding the evolution of plant parasitism in the Clavicipitaceae, an amber fossil discovered in 2020 preserves a grass spikelet and an ergot-like parasitic fungus. The fossil shows that the original hosts of the Clavicipitaceae could have been grasses. The discovery also establishes a minimum time for the conceivable presence of psychotropic compounds in fungi.
Several evolutionary processes have acted to diversify the array of ergot alkaloids produced by fungi; these differences in enzyme activities are evident at the levels of substrate specificity (LpsA), product specification (EasA, CloA) or both (EasG and possibly CloA). The "old yellow enzyme", EasA, presents an outstanding example. This enzyme catalyzes reduction of the C8=C9 double-bond in chanoclavine I, but EasA isoforms differ in whether they subsequently catalyze reoxidation of C8–C9 after rotation. This difference distinguishes most Clavicipitaceae from Trichocomaceae, but in Clavicipitaceae it is also the key difference dividing the branch of classical ergot alkaloids from dihydroergot alkaloids, the latter often being preferred for pharmaceuticals due to their relatively few side effects.
Effects on humans, other mammals and LSD
The ergot sclerotium contains high concentrations (up to 2% of dry mass) of the alkaloid ergotamine, a complex molecule consisting of a tripeptide-derived cyclol-lactam ring connected via amide linkage to a lysergic acid (ergoline) moiety, and other alkaloids of the ergoline group that are biosynthesized by the fungus. Ergot alkaloids have a wide range of biological activities including effects on circulation and neurotransmission.
Ergot alkaloids are classified as:
derivatives of 6,8-dimethylergoline and
lysergic acid derivatives.
Ergotism is the name for sometimes severe pathological syndromes affecting humans or other animals that have ingested plant material containing ergot alkaloid, such as ergot-contaminated grains.
The Hospital Brothers of St. Anthony, an order of monks established in 1095, specialized in treating ergotism victims with balms containing tranquilizing and circulation-stimulating plant extracts. The common name for ergotism is "St. Anthony's fire", in reference to this order of monks and to the severe burning sensations in the limbs that were among the symptoms.
There are two types of ergotism. The first is characterized by muscle spasms, fever and hallucinations; victims may appear dazed, be unable to speak, become manic, or have other forms of paralysis or tremors, and may suffer from distorted perceptions. This is caused by serotonergic stimulation of the central nervous system by some of the alkaloids.
The second type of ergotism is marked by violent burning, absent peripheral pulses and shooting pain in the poorly vascularized distal organs, such as the fingers and toes. It is caused by the vasoconstrictive effects of ergot alkaloids on the vascular system, sometimes leading to gangrene and loss of limbs due to severely restricted blood circulation.
The neurotropic activities of the ergot alkaloids may also cause hallucinations and attendant irrational behaviour, convulsions, and even death. Other symptoms include strong uterine contractions, nausea, seizures, high fever, vomiting, loss of muscle strength and unconsciousness.
Since the Middle Ages, controlled doses of ergot have been used to induce abortions and to stop maternal bleeding after childbirth.
Klotz offers a detailed overview of the toxicities in mammalian livestock, stating that the activities are attributable to antagonism or agonism of neurotransmitters, including dopamine, serotonin and norepinephrine. He also states that the adrenergic blockade by ergopeptines (e.g., ergovaline or ergotamine) leads to potent and long-term vasoconstriction, which can reduce blood flow and result in intense burning pain (St. Anthony's fire), edema, cyanosis, dry gangrene and even loss of hooves in cattle or limbs in humans. Reduced prolactin due to ergot alkaloid activity on dopamine receptors in the pituitary is also common in livestock. Reduced serum prolactin is associated with various reproductive problems in cattle, and especially in horses, including agalactia and poor conception, and late-term losses of foals and sometimes mares due to dystocia and thickened placentas.
Although both gangrenous and convulsive symptoms are seen in naturally occurring ergotism resulting from the ingestion of fungus infected rye, only gangrenous ergotism has been reported following the excessive ingestion of ergotamine tartrate.
Ergot extract has been used in pharmaceutical preparations, including ergot alkaloids in products such as Cafergot (containing caffeine and ergotamine) to treat migraine headaches, and ergometrine, used to induce uterine contractions and to control bleeding after childbirth. Clinical ergotism as seen today results almost exclusively from the excessive intake of ergotamine tartrate in the treatment of migraine headache.
In addition to ergot alkaloids, Claviceps paspali also produces tremorgens (paspalitrem), causing "paspalum staggers" in cattle. Fungi of the genera Penicillium and Aspergillus also produce ergot alkaloids, notably some isolates of the human pathogen Aspergillus fumigatus, and ergot alkaloids have also been isolated from plants in the family Convolvulaceae, of which morning glory is the best known. The causative agents of most ergot poisonings are the ergot alkaloid class of fungal metabolites, though some ergot fungi produce distantly related indole-diterpene alkaloids that are tremorgenic.
Ergot does not contain lysergic acid diethylamide (LSD); rather, it contains lysergic acid, the immediate precursor for the synthesis of LSD, as well as ergotamine, which can be hydrolyzed to lysergic acid. Their realized and hypothesized medicinal uses have encouraged intensive research since the 1950s, culminating on the one hand in development of drugs both legal (e.g., bromocriptine) and illegal (e.g., LSD), and on the other hand in extensive knowledge of the enzymes, genetics and diversity of ergot alkaloid biosynthetic pathways.
The January 4, 2007 issue of the New England Journal of Medicine includes a paper documenting a British study of more than 11,000 Parkinson's disease patients. The study found that two ergot-derived drugs, pergolide and cabergoline, commonly used to treat Parkinson's disease, may increase the risk of leaky heart valves by up to 700%.
History
Ergotism is the earliest recorded example of mycotoxicosis, or poisoning caused by toxic molds.
Early references to ergotism date back as far as 600 BC, when an Assyrian tablet referred to it as a "noxious pustule in the ear of grain." In 350 BC, the Parsees described "noxious grasses that cause pregnant women to drop the womb and die in childbed." In ancient Syria, ergot was called "Daughter of Blood." Radulf Glaber described an ailment he called "hidden fire," or ignis occultus, in which a burning of the limb is followed by its separation from the body, often consuming the victim in one night. In 1588, Johannes Thallius wrote that it is called "Mother of Rye," or rockenmutter, and is used to halt bleeding.
Human poisoning due to the consumption of rye bread made from ergot-infected grain was common in Europe in the Middle Ages. The first mention of a plague of gangrenous ergotism in Europe comes from Germany in 857; following this, France and Scandinavia experienced similar outbreaks; England is noticeably absent from the historical regions affected by ergotism as its main source of food was wheat, which is resistant to ergot fungi. In 994, a massive outbreak potentially attributed to ergotism caused 40,000 deaths in the regions of Aquitaine, Limousin, Périgord and Angoumois in France. In Hesse, in 1596, Wendelin Thelius was one of the first to attribute ergotism poisoning to grain. In 1778, S. Tessier, observing a huge epidemic in Sologne, France, in which more than 8,000 people died, recommended drainage of fields, compulsory cleaning of grain, and the substitution of potatoes for affected grain.
In 1722, the Russian Tsar Peter the Great was thwarted in his campaign against the Ottoman Empire as his army, traveling down the Terek steppe, was struck by ergotism and was forced to retreat in order to find edible grains. A diary entry from the time notes that as soon as people ate the poisoned bread, they became dizzy, with such strong nerve contractions that those who did not die on the first day found their hands and feet falling off, akin to frostbite. The outbreak was known as Saint Anthony's fire, or ignis sacer.
Some historical events, such as the Great Fear in France at the outset of the French Revolution, have been linked to ergot poisoning.
Saint Anthony's fire and the Antonites
Saint Anthony was a 3rd-century Egyptian ascetic who lived by the Red Sea and was known for long fasts in which he confronted terrible visions and temptations sent from the Devil. Two noblemen credited him with assisting them in their recovery from the disease; they subsequently founded the Order of St. Anthony in his honor. Anthony was a popular subject for art in the Middle Ages, and his symbol is a large blue "T" sewn onto the shoulder of the order's monks, symbolizing the crutch used by the ill and injured.
The Order of St. Anthony, whose members were known as Antonites, grew quickly, and hospitals spread through France, Germany and Scandinavia and gained wealth and power as grateful patrons bestowed money and charitable goods on the hospitals. By the end of the Middle Ages, there were 396 settlements and 372 hospitals owned by the order, and pilgrimages to such hospitals became popular, as well as the donation of limbs lost to ergotism, which were displayed near shrines to the saint. These hagiotherapeutic centers were the first specialized European medical welfare systems, and the friars of the order were knowledgeable about treatment of ergotism and the horrifying effects of the poison. The sufferers would receive ergot-free meals, wines containing vasodilating and analgesic herbs, and applications of Antonites-balsam, which was the first transdermal therapeutic system (TTS) in medical history. These medical recipes have been lost to time, though some recorded treatments still remain. After 1130, the monks were no longer permitted to perform operations, and so barber surgeons were employed to remove gangrenous limbs and treat open sores. Three barbers founded a hospital in Memmingen in 1214 and accepted those who were afflicted with the gangrenous form of ergotism. Patients were fed and housed, with the more able-bodied individuals acting as orderlies and assistants. Patients with the convulsive form of ergotism, or ergotismus convulsivus, were welcomed for only nine days before they were asked to leave, as convulsive ergotism was seen as less detrimental. Though the sufferers often experienced irreversible effects, they most often returned to their families and resumed their livelihoods.
An important aspect of the Order of St. Anthony's treatment practices was the exclusion of rye bread and other ergot-containing edibles, which halted the progression of ergotism. There was no known cure for ergotism itself; however, the symptoms, which often included constriction of the blood vessels, nervous disorders and hallucinations, could be treated. If sufferers survived the initial poisoning, their limbs would often fall off, and their health would continue to improve if they stopped consuming ergot. The trunk of the body remained relatively untouched by the disease until its final stages, and the victims, not understanding the cause of their ailment, would continue to eat ergot-laden food for weeks until the condition reached their digestive system. It is believed that the peasantry and children were most susceptible to ergotism, though the wealthy were afflicted as well, as, at times, entire villages relied on tainted crops for sustenance, and during times of famine ergotism reached into every house. Ergot fungus is impervious to heat and water, and thus it was most often baked into bread made from rye flour; though other grasses can be infected, it was uncommon in medieval Europe to consume grasses other than rye. The physiological effects of ergot depended on the concentration and combinations of the ingested ergot metabolites, as well as the age and nutritional status of the afflicted individual. The Antonites began to decline after physicians discovered the genesis of ergotism and recommended methods for removing the sclerotium from the rye crops. In 1776, the cloisters of the Antonites were incorporated into the Maltese Knights Hospitaller, losing much of their medical histories in the process and losing the ergotism cures and recipes due to lack of use and lack of preservation.
Usage in gynaecology and obstetrics
Midwives, and a very few doctors, in Europe used extracts from ergot for centuries:
In a Nürnberg manuscript of 1474, powdered ergot was prescribed together with Laurel-fruits and rhizomes of Solomon's seals to cure permutter or heffmutter, which refers to pain in the lower abdomen caused by 'uprising of the womb'
In a printed book of 1582, the German physician Adam Lonicer wrote that three sclerotia of ergot, used several times a day, were used by midwives as a good remedy in case of the "uprising and pain of the womb" (auffſteigen vnd wehethumb der mutter)
Joachim Camerarius the Younger wrote in 1586 that sclerotia of ergot, held under the tongue, would stop bleeding
To prove that ergot is a harmless sort of grain, in 1774 the French pharmacist Antoine-Augustin Parmentier published a letter he had received from Madame Dupile, a midwife of Chaumont-en-Vexin. She had told him that if uterine contractions were too weak in the expulsion stage of childbirth, she and her mother gave peeled ergot, in an amount that would fill a thimble, dispersed in water, wine or broth. The administration of ergot was followed by a mild childbirth within 15 minutes. The French physician Jean-Baptiste Desgranges (1751–1831) published in 1818 that in 1777 he had met midwives in Lyon who successfully treated feeble uterine contractions by administering the powder of ergot. Desgranges added this remedy to his therapeutic arsenal. From 1777 to 1804, he successfully eased childbirth for more than twenty women by administering powdered ergot. He never saw any side-effect of this treatment.
In the United States, in 1807 Dr. John Stearns of Saratoga County, New York wrote to a friend that he had used, over several years, a pulvis parturiens with complete success in patients with "lingering parturition". This pulvis parturiens consisted of ergot, which he called a "spurious growth of rye". He boiled "half a drachm" (ca. 2g) of that powder in half a pint of water and gave one third every twenty minutes, till the pains commenced. In 1813, Dr. Oliver Prescott (1762–1827) of Newburyport, Massachusetts published a dissertation "on the natural history and medical effects of the secale cornutum", in which he described and analysed the experience he had gathered over five years while using ergot in cases of poor uterine action in the second stage of labour in childbirth.
The 1836 Dispensatory of the United States recommended "to a woman in labour fifteen or twenty grains [ca. 1 to 1.3g] of ergot in powder to be repeated every twenty minutes, till its peculiar effects are experienced, or till the amount of a drachm [ca. 3.9g] has been taken".
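The apothecary units quoted in these dosage instructions can be checked against the bracketed gram equivalents with a short conversion. The sketch below (Python, purely illustrative) assumes the standard apothecary values of 1 grain = 64.79891 mg and 1 drachm = 60 grains.

    # Convert the apothecary dosages quoted above into grams.
    GRAIN_G = 0.06479891        # grams per grain (standard apothecary value)
    DRACHM_G = 60 * GRAIN_G     # grams per drachm

    print(f"15 grains = {15 * GRAIN_G:.2f} g")   # ~0.97 g
    print(f"20 grains = {20 * GRAIN_G:.2f} g")   # ~1.30 g
    print(f"1 drachm  = {DRACHM_G:.2f} g")       # ~3.89 g

The results match the approximations given in the text (ca. 1 to 1.3 g for fifteen to twenty grains, and ca. 3.9 g for a drachm).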
In 1837, the French Codex Pharmacopee Francaise required ergot to be kept in all pharmacies.
Low- to very low-quality evidence from clinical trials suggests that prophylactic use of ergot alkaloids, administered intravenously (IV) or intramuscularly (IM) in the third stage of labor, may reduce blood loss and may reduce the risk of moderate to severe hemorrhage following delivery; however, this medication may also be associated with higher blood pressure and more pain. It is not clear whether oral ergot alkaloids are beneficial or harmful, as they have not been well studied. A 2018 Cochrane Systematic Review concluded that other medications, such as oxytocin, syntometrine and prostaglandins, may be preferred over ergot alkaloids.
Though ergot was known to cause abortions in cattle and humans, this was not a recognized use for it, as abortion was illegal in most countries; evidence for its use in abortion is therefore scant. Most often, ergot was used to speed the process of parturition or delivery, not to halt postpartum bleeding, which is a major concern of childbirth. However, until anesthesia became available, there was no antidote or way of controlling the effects of ergot, so if the fetus did not move as expected, the drug could cause the uterus to mold itself around the child, rupturing the uterus and killing the child. David Hosack, an American physician, noted the large number of stillbirths resulting from ergot use and stated that rather than pulvis ad partum, it should be called pulvis ad mortem. He began advocating its use to halt postpartum bleeding. Eventually, doctors determined that the use of ergot in childbirth without an antidote was too dangerous and ultimately restricted its use to expelling the placenta or stopping hemorrhage. Not only did ergot constrict the uterus; it could also increase or decrease blood pressure, induce hypothermia and emesis, and influence pituitary hormone secretions. In 1926, the Swiss psychiatrist Hans Maier suggested using ergotamine for the treatment of vascular headaches of the migraine type.
In the 1930s, abortifacient drugs were marketed to women by various companies under various names such as Molex pills and Cote pills. Since birth control devices and abortifacients were illegal to market and sell at the time, they were offered to women who were "delayed". The recommended dosage was seven grains of ergotin a day. According to the United States Federal Trade Commission (FTC) these pills contained ergotin, aloes, Black Hellebore and other substances. The efficacy and safety of these pills are unknown. The FTC deemed them unsafe and ineffective and demanded that they cease and desist selling the product. Currently, over a thousand compounds have been derived from ergot ingredients.
Speculated cause of hysterics and hallucinations
It has been posited that Kykeon, the beverage consumed by participants in the ancient Greek Eleusinian Mysteries cult, might have been based on hallucinogens from ergotamine, a precursor to the potent hallucinogen LSD, and ergonovine.
An article appearing in the July 23, 1881 edition of Scientific American entitled "A New Exhilarating Substance" describes cases of euphoria upon consuming tincture of ergot of rye, particularly when mixed with phosphate of soda and sweetened water. In rainy years, it was thought that rye bread exceeded 5% ergot.
British author John Grigsby contends that the presence of ergot in the stomachs of some of the so-called 'bog-bodies' (Iron Age human remains from peat bogs of northeast Europe, such as the Tollund Man) is indicative of use of Claviceps purpurea in ritual drinks in a prehistoric fertility cult akin to the Greek Eleusinian Mysteries. In his 2005 book Beowulf and Grendel, he argues that the Anglo-Saxon poem Beowulf is based on a memory of the quelling of this fertility cult by followers of Odin. He writes that Beowulf, which he translates as barley-wolf, suggests a connection to ergot which in German was known as the 'tooth of the wolf'.
Linnda R. Caporael posited in 1976 that the hysterical symptoms of young women that had spurred the Salem witch trials had been the result of consuming ergot-tainted rye. However, Nicholas P. Spanos and Jack Gottlieb, after a review of the historical and medical evidence, later disputed her conclusions. Other authors have likewise cast doubt on ergotism as the cause of the Salem witch trials.
Claviceps purpurea
Claviceps purpurea has long been known to humans, and its appearance has been linked to extremely cold winters that were followed by rainy summers.
The sclerotial stage of C. purpurea, conspicuous on the heads of rye and other such grains, is known as ergot. Favorable temperatures for growth are in the range of 18–30 °C; temperatures above 37 °C cause rapid germination of conidia. Sunlight has a chromogenic effect on the mycelium, producing intense coloration. Cereal mashes and sprouted rye are suitable substrates for growth of the fungus in the laboratory.
Claviceps africana
Claviceps africana infects sorghum. In sorghum and pearl millet, ergot became a problem when growers adopted hybrid technology, which increased host susceptibility. It only infects unfertilized ovaries, so self-pollination and fertilization can decrease the presence of the disease, but male-sterile lines are extremely vulnerable to infection. Symptoms of infection by C. africana include the secretion of honeydew (a fluid with high concentrates of sugar and conidia), which attracts insects like flies, beetles and wasps that feed on it. This helps spread the fungus to uninfected plants.
In sorghum, this honeydew can be spotted exuding from the flowers of the head. A whitish, sticky substance can also be observed on leaves and on the ground.
An ergot epidemic caused by C. africana led to famine in northern Cameroon, West Africa, in 1903–1906; the disease also occurs in eastern and southern Africa, especially Zimbabwe and South Africa. Male-sterile sorghums (also referred to as A-lines) are especially susceptible to infection, as first recognized in the 1960s, and massive losses in seed yield have been noted. Infection is associated with cold night temperatures below 12 °C occurring two to three weeks before flowering.
Sorghum ergot caused by Claviceps africana Frederickson, Mantle and De Milliano is now widespread in all sorghum-growing areas. The species was formerly restricted to Africa and Asia, where it was first recorded more than 90 years ago, but it has been spreading rapidly: by the mid-1990s it had reached Brazil, South Africa and Australia, and by 1997 the disease had spread to most South American countries and the Caribbean, including Mexico, and had reached Texas in the United States.
Management
Partners of the CABI-led programme Plantwise (including the Ministry of Agriculture and Livestock in Zambia) have several recommendations for managing the spread of ergot. These include planting tolerant varieties, disking fields after harvest to prevent sorghum ratoon and volunteer plants from developing, removing any infected plants, and carrying out three-year crop rotations with legumes.
Claviceps paspali
Claviceps paspali infects wild grasses and can be found on the common grass Paspalum. Like C. africana, C. paspali secretes honeydew, which is consumed by bees. The bees then create a honey called fic'e (in the Paraguayan Makai Indian language), which is infused with secretions from the plants and has a pungent aroma. If consumed in large amounts, the honey can cause drunkenness, dizziness and even death.
| Biology and health sciences | Poisonous fungi | Plants |
262927 | https://en.wikipedia.org/wiki/Groundwater | Groundwater | Groundwater is the water present beneath Earth's surface in rock and soil pore spaces and in the fractures of rock formations. About 30 percent of all readily available fresh water in the world is groundwater. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. Groundwater is recharged from the surface; it may discharge from the surface naturally at springs and seeps, and can form oases or wetlands. Groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. The study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology.
Typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost (frozen soil), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. Groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. It is likely that much of Earth's subsurface contains some water, which may be mixed with other fluids in some instances.
Groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. Therefore, it is commonly used for public drinking water supplies. For example, groundwater provides the largest source of usable water storage in the United States, and California annually withdraws the largest amount of groundwater of all the states. Underground reservoirs contain far more water than the capacity of all surface reservoirs and lakes in the US, including the Great Lakes. Many municipal water supplies are derived solely from groundwater. Over 2 billion people rely on it as their primary water source worldwide.
Human use of groundwater causes environmental problems. For example, polluted groundwater is less visible and more difficult to clean up than pollution in rivers and lakes. Groundwater pollution most often results from improper disposal of wastes on land. Major sources include industrial and household chemicals and garbage landfills, excessive fertilizers and pesticides used in agriculture, industrial waste lagoons, tailings and process wastewater from mines, industrial fracking, oil field brine pits, leaking underground oil storage tanks and pipelines, sewage sludge and septic systems. Additionally, groundwater is susceptible to saltwater intrusion in coastal areas and can cause land subsidence when extracted unsustainably, leading to sinking cities (like Bangkok) and loss in elevation (such as the multiple meters lost in the Central Valley of California). These issues are made more complicated by sea level rise and other effects of climate change, particularly those on the water cycle. Human groundwater pumping has even shifted Earth's rotational pole by about 31 inches.
Definition
Groundwater is fresh water located in the subsurface pore space of soil and rocks. It is also water that is flowing within aquifers below the water table. Sometimes it is useful to make a distinction between groundwater that is closely associated with surface water, and deep groundwater in an aquifer (called "fossil water" if it infiltrated into the ground millennia ago).
Role in the water cycle
Groundwater can be thought of in the same terms as surface water: inputs, outputs and storage. The natural input to groundwater is seepage from surface water. The natural outputs from groundwater are springs and seepage to the oceans. Because groundwater turns over slowly, its storage is generally much larger in volume relative to its inputs than is the case for surface water. This difference makes it easy for humans to use groundwater unsustainably for a long time without severe consequences. Nevertheless, over the long term, the average rate of seepage above a groundwater source is the upper bound for average consumption of water from that source.
Groundwater is naturally replenished by surface water from precipitation, streams, and rivers when this recharge reaches the water table.
Groundwater can be a long-term 'reservoir' of the natural water cycle (with residence times from days to millennia), as opposed to short-term water reservoirs like the atmosphere and fresh surface water (which have residence times from minutes to years). Deep groundwater (which is quite distant from the surface recharge) can take a very long time to complete its natural cycle.
The Great Artesian Basin in central and eastern Australia is one of the largest confined aquifer systems in the world, extending for almost 2 million km2. By analysing the trace elements in water sourced from deep underground, hydrogeologists have been able to determine that water extracted from these aquifers can be more than 1 million years old.
By comparing the age of groundwater obtained from different parts of the Great Artesian Basin, hydrogeologists have found it increases in age across the basin. Where water recharges the aquifers along the Eastern Divide, ages are young. As groundwater flows westward across the continent, it increases in age, with the oldest groundwater occurring in the western parts. This means that in order to have travelled almost 1000 km from the source of recharge in 1 million years, the groundwater flowing through the Great Artesian Basin travels at an average rate of about 1 metre per year.
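The velocity quoted above is simply travel distance divided by groundwater age; the short sketch below (Python, purely illustrative) makes the arithmetic explicit.

    # Back-of-the-envelope flow velocity for the Great Artesian Basin,
    # using the round figures from the text.
    distance_m = 1_000_000   # almost 1000 km from the recharge zone
    age_years = 1_000_000    # ~1 million years, from trace-element dating

    velocity = distance_m / age_years
    print(f"Average velocity: {velocity:.1f} m/year")  # about 1 m/year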
Groundwater recharge
Location in aquifers
Characteristics
Temperature
The high specific heat capacity of water and the insulating effect of soil and rock can mitigate the effects of climate and maintain groundwater at a relatively steady temperature. In some places, where groundwater temperatures are maintained by this effect at roughly the local mean annual air temperature, groundwater can be used for controlling the temperature inside structures at the surface. For example, during hot weather relatively cool groundwater can be pumped through radiators in a home and then returned to the ground in another well. During cold seasons, because it is relatively warm, the water can be used in the same way as a source of heat for heat pumps, which is much more efficient than using air.
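As a rough illustration of why such systems work, the sensible heat carried by pumped groundwater can be estimated as flow rate × density × specific heat × temperature change. In the sketch below, the pumping rate and temperature change are assumptions chosen for illustration, not values from the text.

    # Sensible heat exchanged with pumped groundwater: Q = m_dot * c_p * dT.
    RHO_WATER = 1000.0     # kg/m^3, density of water
    CP_WATER = 4186.0      # J/(kg*K), specific heat capacity of water

    flow_m3_per_s = 0.001  # assumed pumping rate of 1 L/s
    delta_t = 5.0          # assumed temperature change across the exchanger, K

    q_watts = flow_m3_per_s * RHO_WATER * CP_WATER * delta_t
    print(f"Heat exchanged: {q_watts / 1000:.1f} kW")  # about 21 kW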
Availability
Groundwater makes up about thirty percent of the world's fresh water supply, which is about 0.76% of the entire world's water, including oceans and permanent ice. About 99% of the world's liquid fresh water is groundwater. Global groundwater storage is roughly equal to the total amount of freshwater stored in the snow and ice pack, including the north and south poles. This makes it an important resource that can act as natural storage, buffering against shortages of surface water, as during times of drought.
The volume of groundwater in an aquifer can be estimated by measuring water levels in local wells and by examining geologic records from well-drilling to determine the extent, depth and thickness of water-bearing sediments and rocks. Before an investment is made in production wells, test wells may be drilled to measure the depths at which water is encountered and collect samples of soils, rock and water for laboratory analyses. Pumping tests can be performed in test wells to determine flow characteristics of the aquifer.
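A first-order version of this estimate multiplies the aquifer's area by its saturated thickness and by its specific yield (the fraction of stored water that drains under gravity). The sketch below is illustrative only; all parameter values are assumptions for a hypothetical unconfined aquifer, not data from any real site.

    # Recoverable volume ~ area * saturated thickness * specific yield.
    area_m2 = 50e6                # assumed aquifer extent of 50 km^2
    saturated_thickness_m = 30.0  # assumed, as if read from well logs
    specific_yield = 0.15         # assumed, typical of sandy sediments

    volume_m3 = area_m2 * saturated_thickness_m * specific_yield
    print(f"Recoverable storage: {volume_m3 / 1e6:.0f} million m^3")  # 225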
The characteristics of aquifers vary with the geology and structure of the substrate and topography in which they occur. In general, the more productive aquifers occur in sedimentary geologic formations. By comparison, weathered and fractured crystalline rocks yield smaller quantities of groundwater in many environments. Unconsolidated to poorly cemented alluvial materials that have accumulated as valley-filling sediments in major river valleys and geologically subsiding structural basins are included among the most productive sources of groundwater.
Fluid flows can be altered in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology.
Uses by humans
Reliance on groundwater will only increase, mainly due to growing water demand by all sectors combined with increasing variation in rainfall patterns. Safe use of groundwater varies substantially with the elements present and the use case, with significant differences between consumption by humans, livestock and different crops.
Quantities
Groundwater is the most accessed source of freshwater around the world, including as drinking water, irrigation, and manufacturing. Groundwater accounts for about half of the world's drinking water, 40% of its irrigation water, and a third of water for industrial purposes.
Another estimate stated that globally groundwater accounts for about one third of all water withdrawals, and surface water for the other two thirds. Groundwater provides drinking water to at least 50% of the global population. About 2.5 billion people depend solely on groundwater resources to satisfy their basic daily water needs.
A similar estimate was published in 2021 which stated that "groundwater is estimated to supply between a quarter and a third of the world's annual freshwater withdrawals to meet agricultural, industrial and domestic demands."
Global freshwater withdrawal was probably around 600 km3 per year in 1900 and increased to 3,880 km3 per year in 2017. The rate of increase was especially high (around 3% per year) during the period 1950–1980, partly due to a higher population growth rate, and partly to rapidly increasing groundwater development, particularly for irrigation. The rate of increase is (as of 2022) approximately 1% per year, in tune with the current population growth rate.
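These figures can be cross-checked by computing the average annual growth rate they imply over the whole period, as in the illustrative sketch below.

    # Implied average annual growth in global freshwater withdrawal, 1900-2017.
    v_1900, v_2017 = 600.0, 3880.0   # km^3/year, figures from the text
    years = 2017 - 1900

    growth = (v_2017 / v_1900) ** (1 / years) - 1
    print(f"Implied average growth: {growth * 100:.1f}% per year")  # ~1.6%

A long-run average of roughly 1.6% per year is consistent with the faster (~3%) growth of 1950–1980 and the slower (~1%) growth of recent years.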
Global groundwater depletion has been calculated to be between 100 and 300 km3 per year. This depletion is mainly caused by "expansion of irrigated agriculture in drylands".
The Asia-Pacific region is the largest groundwater abstractor in the world, containing seven out of the ten countries that extract most groundwater (Bangladesh, China, India, Indonesia, Iran, Pakistan and Turkey). These countries alone account for roughly 60% of the world's total groundwater withdrawal.
Drinking water quality aspects
Groundwater may or may not be a safe water source. In fact, there is considerable uncertainty with groundwater in different hydrogeologic contexts: the widespread presence of contaminants such as arsenic, fluoride and salinity can reduce the suitability of groundwater as a drinking water source. Arsenic and fluoride have been considered as priority contaminants at a global level, although priority chemicals will vary by country.
Hydrogeologic properties are highly heterogeneous, so the salinity of groundwater is often highly variable over space. This contributes to highly variable groundwater security risks even within a specific region. Salinity in groundwater makes the water unpalatable and unusable, and is often worst in coastal areas, especially due to saltwater intrusion from excessive use; this is notable in Bangladesh, eastern and western India, and many island nations.
Due to climate change, groundwater is warming. The temperature of Viennese groundwater increased by 0.9 degrees Celsius between 2001 and 2010, and by 1.4 degrees between 2011 and 2020. In a joint research project, scientists at the Karlsruher Institut für Technologie and the University of Vienna have tried to quantify the drinking water loss to be expected from groundwater warming by the end of the current century. Stressing that regional shallow groundwater warming patterns vary substantially due to spatial variability in climate change and water table depth, these researchers write that we currently lack knowledge about how groundwater responds to surface warming across spatial and temporal scales. Their study shows, however, that following a medium emissions pathway, in 2100 between 77 million and 188 million people are projected to live in areas where groundwater exceeds the highest threshold for drinking water temperatures (DWTs) set by any country.
Water supply for municipal and industrial uses
Municipal and industrial water supplies are provided through large wells. Multiple wells for one water supply source are termed "wellfields", which may withdraw water from confined or unconfined aquifers. Using groundwater from deep, confined aquifers provides more protection from surface water contamination. Some wells, termed "collector wells", are specifically designed to induce infiltration of surface (usually river) water.
Aquifers that provide sustainable fresh groundwater to urban areas and for agricultural irrigation are typically close to the ground surface (within a couple of hundred metres) and have some recharge by fresh water. This recharge is typically from rivers or meteoric water (precipitation) that percolates into the aquifer through overlying unsaturated materials. In cases where the groundwater has unacceptable levels of salinity or specific ions, desalination is a common treatment; however, the resulting brine then requires safe disposal or reuse.
Irrigation
In general, the irrigation of 20% of farming land (with various types of water sources) accounts for 40% of food production. Irrigation techniques across the globe include redirecting surface water through canals, pumping groundwater, and diverting water from dams. Aquifers are critically important in agriculture. Deep aquifers in arid areas have long been water sources for irrigation. A majority of extracted groundwater, 70%, is used for agricultural purposes. Significant investigation has gone into determining safe levels of specific salts for different agricultural uses.
In India, 65% of the irrigation is from groundwater and about 90% of extracted groundwater is used for irrigation.
Occasionally, sedimentary or "fossil" aquifers are used to provide irrigation and drinking water to urban areas. In Libya, for example, Muammar Gaddafi's Great Manmade River project has pumped large amounts of groundwater from aquifers beneath the Sahara to populous areas near the coast. Though this has saved Libya money over the alternative, seawater desalination, the aquifers are likely to run dry in 60 to 100 years.
In developing countries
Challenges
First, flood mitigation schemes, intended to protect infrastructure built on floodplains, have had the unintended consequence of reducing aquifer recharge associated with natural flooding. Second, prolonged depletion of groundwater in extensive aquifers can result in land subsidence, with associated infrastructure damage. Third, it can cause saline intrusion. Fourth, draining acid sulphate soils, often found in low-lying coastal plains, can result in acidification and pollution of formerly freshwater and estuarine streams.
Overdraft
Groundwater is a highly useful and often abundant resource. Most land areas on Earth have some form of aquifer underlying them, sometimes at significant depths. In some cases, these aquifers are rapidly being depleted by the human population. Such over-use, over-abstraction or overdraft can cause major problems to human users and to the environment. The most evident problem (as far as human groundwater use is concerned) is a lowering of the water table beyond the reach of existing wells. As a consequence, wells must be drilled deeper to reach the groundwater; in some places (e.g., California, Texas, and India) the water table has dropped hundreds of feet because of extensive well pumping. The GRACE satellites have collected data demonstrating that 21 of Earth's 37 major aquifers are undergoing depletion. In the Punjab region of India, for example, groundwater levels have dropped 10 meters since 1979, and the rate of depletion is accelerating. A lowered water table may, in turn, cause other problems such as groundwater-related subsidence and saltwater intrusion.
Another cause for concern is that groundwater drawdown from over-allocated aquifers has the potential to cause severe damage to both terrestrial and aquatic ecosystems, in some cases very conspicuously but in others quite imperceptibly because of the extended period over which the damage occurs. The importance of groundwater to ecosystems is often overlooked, even by freshwater biologists and ecologists. Groundwaters sustain rivers, wetlands, and lakes, as well as subterranean ecosystems within karst or alluvial aquifers.
Not all ecosystems need groundwater, of course. Some terrestrial ecosystems (for example, those of the open deserts and similar arid environments) exist on irregular rainfall and the moisture it delivers to the soil, supplemented by moisture in the air. While there are other terrestrial ecosystems in more hospitable environments where groundwater plays no central role, groundwater is in fact fundamental to many of the world's major ecosystems. Water flows between groundwaters and surface waters. Most rivers, lakes, and wetlands are fed by, and (at other places or times) feed groundwater, to varying degrees. Groundwater feeds soil moisture through percolation, and many terrestrial vegetation communities depend directly on either groundwater or the percolated soil moisture above the aquifer for at least part of each year. Hyporheic zones (the mixing zone of streamwater and groundwater) and riparian zones are examples of ecotones largely or totally dependent on groundwater.
A 2021 study found that of approximately 39 million investigated groundwater wells, 6–20% are at high risk of running dry if local groundwater levels decline by a few meters, or, as in many areas and possibly more than half of major aquifers, continue to decline.
Fresh-water aquifers, especially those with limited recharge by snow or rain, also known as meteoric water, can be over-exploited and depending on the local hydrogeology, may draw in non-potable water or saltwater intrusion from hydraulically connected aquifers or surface water bodies. This can be a serious problem, especially in coastal areas and other areas where aquifer pumping is excessive.
Subsidence
Subsidence occurs when too much water is pumped out from underground, deflating the pore space beneath the surface and causing the ground to collapse. The result can look like craters on plots of land. This occurs because, in its natural equilibrium state, the hydraulic pressure of groundwater in the pore spaces of the aquifer and the aquitard supports some of the weight of the overlying sediments. When groundwater is removed from aquifers by excessive pumping, pore pressures in the aquifer drop and compression of the aquifer may occur. This compression may be partially recoverable if pressures rebound, but much of it is not. When the aquifer gets compressed, it may cause land subsidence, a drop in the ground surface.
In unconsolidated aquifers, groundwater is produced from pore spaces between particles of gravel, sand, and silt. If the aquifer is confined by low-permeability layers, the reduced water pressure in the sand and gravel causes slow drainage of water from the adjoining confining layers. If these confining layers are composed of compressible silt or clay, the loss of water to the aquifer reduces the water pressure in the confining layer, causing it to compress from the weight of overlying geologic materials. In severe cases, this compression can be observed on the ground surface as subsidence. Unfortunately, much of the subsidence from groundwater extraction is permanent (elastic rebound is small). Thus, the subsidence is not only permanent, but the compressed aquifer has a permanently reduced capacity to hold water.
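A simple one-dimensional approximation of this permanent compaction multiplies the decline in hydraulic head by the thickness and the inelastic skeletal specific storage of the compressible layer. The sketch below is a rough illustration only; all parameter values are assumptions, not measurements from any particular site.

    # Permanent compaction ~ skeletal specific storage * head decline * thickness.
    skeletal_specific_storage = 1e-3   # 1/m, assumed for a soft clay layer
    head_decline_m = 30.0              # assumed long-term drop in head
    clay_thickness_m = 20.0            # assumed confining-layer thickness

    compaction_m = skeletal_specific_storage * head_decline_m * clay_thickness_m
    print(f"Estimated permanent compaction: {compaction_m:.2f} m")  # 0.60 m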
The city of New Orleans, Louisiana is actually below sea level today, and its subsidence is partly caused by removal of groundwater from the various aquifer/aquitard systems beneath it. In the first half of the 20th century, the San Joaquin Valley experienced significant subsidence, in some places multiple meters, due to groundwater removal. Cities on river deltas, including Venice in Italy and Bangkok in Thailand, have experienced surface subsidence; Mexico City, built on a former lake bed, has experienced especially rapid rates of subsidence.
For coastal cities, subsidence can increase the risk of other environmental issues, such as sea level rise. For example, Bangkok is expected to have 5.138 million people exposed to coastal flooding by 2070 because of these combining factors.
Groundwater becoming saline due to evaporation
If the surface water source is also subject to substantial evaporation, a groundwater source may become saline. This situation can occur naturally under endorheic bodies of water, or artificially under irrigated farmland. In coastal areas, human use of a groundwater source may cause the direction of seepage to ocean to reverse which can also cause soil salinization.
As water moves through the landscape, it collects soluble salts, mainly sodium chloride. Where such water enters the atmosphere through evapotranspiration, these salts are left behind. In irrigation districts, poor drainage of soils and surface aquifers can result in water tables' coming to the surface in low-lying areas. Major land degradation problems of soil salinity and waterlogging result, combined with increasing levels of salt in surface waters. As a consequence, major damage has occurred to local economies and environments.
In semi-arid zones, aquifers beneath surface-irrigated areas run the risk of salination when the unavoidable irrigation water losses that percolate into the underground are reused for supplemental irrigation from wells.
Surface irrigation water normally contains salts on the order of 0.5 g/l or more, and the annual irrigation requirement is on the order of 10,000 m3/ha or more, so the annual import of salt is on the order of 5,000 kg/ha or more.
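The salt balance behind these orders of magnitude is a single multiplication, shown in the illustrative sketch below using the round figures from the text.

    # Annual salt import = salt concentration * annual irrigation volume.
    salt_conc_kg_m3 = 0.5           # 0.5 g/l expressed as kg/m^3
    irrigation_m3_per_ha = 10_000   # annual irrigation requirement per hectare

    salt_import = salt_conc_kg_m3 * irrigation_m3_per_ha
    print(f"Annual salt import: {salt_import:.0f} kg/ha")  # 5000 kg/ha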
Under the influence of continuous evaporation, the salt concentration of the aquifer water may increase continually and eventually cause an environmental problem.
For salinity control in such a case, an amount of drainage water must be discharged from the aquifer each year by means of a subsurface drainage system and disposed of through a safe outlet. The drainage system may be horizontal (i.e. using pipes, tile drains or ditches) or vertical (drainage by wells). To estimate the drainage requirement, a groundwater model with an agro-hydro-salinity component, e.g. SahysMod, may be instrumental.
Seawater intrusion
Aquifers near the coast have a lens of freshwater near the surface and denser seawater beneath it; seawater penetrates the aquifer by diffusing in from the ocean. For porous (i.e., sandy) aquifers near the coast, the thickness of freshwater atop saltwater is about 40 m for every 1 m of freshwater head above sea level. This relationship is called the Ghyben-Herzberg equation. If too much groundwater is pumped near the coast, salt water may intrude into freshwater aquifers, contaminating potable freshwater supplies. Many coastal aquifers, such as the Biscayne Aquifer near Miami and the New Jersey Coastal Plain aquifer, have problems with saltwater intrusion as a result of overpumping and sea level rise.
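The roughly 40:1 ratio follows directly from the density contrast between fresh water and seawater, z = ρf/(ρs − ρf) × h; the sketch below assumes standard densities of 1000 and 1025 kg/m^3.

    # Ghyben-Herzberg relation: depth of the fresh/saltwater interface below
    # sea level as a function of the freshwater head above sea level.
    RHO_FRESH = 1000.0   # kg/m^3
    RHO_SEA = 1025.0     # kg/m^3

    def interface_depth_m(head_m: float) -> float:
        return RHO_FRESH / (RHO_SEA - RHO_FRESH) * head_m

    print(interface_depth_m(1.0))   # 40.0 -> ~40 m of freshwater per 1 m of head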
Seawater intrusion is the flow or presence of seawater into coastal aquifers; it is a case of saltwater intrusion. It is a natural phenomenon but can also be caused or worsened by anthropogenic factors, such as sea level rise due to climate change. In the case of homogeneous aquifers, seawater intrusion forms a saline wedge below a transition zone to fresh groundwater, flowing seaward on the top. These changes can have other effects on the land above the groundwater. For example, coastal groundwater in California would rise in many aquifers, increasing risks of flooding and runoff challenges.
Sea level rise causes the mixing of sea water into coastal groundwater, rendering it unusable once seawater amounts to more than 2–3% of the reservoir. Along an estimated 15% of the US coastline, the majority of local groundwater levels are already below sea level.
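To see why such a small admixture matters, one can compute the salinity of the blend. The sketch below assumes a typical open-ocean salinity of 35 g/l, an initial aquifer salinity of 0.1 g/l, and a potability limit of roughly 0.5 g/l; all three are illustrative assumptions, since guidelines and local conditions vary.

    # Salinity of a fresh/seawater blend for small seawater fractions.
    SEAWATER = 35.0   # g/l, assumed open-ocean salinity
    FRESH = 0.1       # g/l, assumed salinity of the unmixed aquifer

    for fraction in (0.01, 0.02, 0.03):
        mix = fraction * SEAWATER + (1 - fraction) * FRESH
        print(f"{fraction:.0%} seawater -> {mix:.2f} g/l")
    # 1% -> 0.45 g/l, 2% -> 0.80 g/l, 3% -> 1.15 g/l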
Pollution
Climate change
The impacts of climate change on groundwater may be greatest through its indirect effects on irrigation water demand via increased evapotranspiration. A decline in groundwater storage has been observed in many parts of the world. This is due to more groundwater being used for irrigation activities in agriculture, particularly in drylands. Some of this increase in irrigation can be due to water scarcity issues made worse by effects of climate change on the water cycle. Direct redistribution of water by human activities, amounting to ~24,000 km3 per year, is about double the global groundwater recharge each year.
Climate change causes changes to the water cycle which in turn affect groundwater in several ways: there can be a decline in groundwater storage, a reduction in groundwater recharge, and water quality deterioration due to extreme weather events. In the tropics, intense precipitation and flooding events appear to lead to more groundwater recharge.
However, the exact impacts of climate change on groundwater are still under investigation. This is because scientific data derived from groundwater monitoring is still missing, such as changes in space and time, abstraction data and "numerical representations of groundwater recharge processes".
Effects of climate change could have contrasting impacts on groundwater storage: the expected more intense (but fewer) major rainfall events could lead to increased groundwater recharge in many environments, but more intense drought periods could result in soil drying-out and compaction, which would reduce infiltration to groundwater.
For higher-altitude regions, the reduced duration and amount of snow may lead to reduced recharge of groundwater in spring. The impacts of receding alpine glaciers on groundwater systems are not well understood.
Global sea level rise due to climate change has induced seawater intrusion into coastal aquifers around the world, particularly in low-lying areas and small islands. However, groundwater abstraction is usually the main reason for seawater intrusion, rather than sea level rise (see in section on seawater intrusion). Seawater intrusion threatens coastal ecosystems and livelihood resilience. Bangladesh is a vulnerable country for this issue, and mangrove forest of Sundarbans is a vulnerable ecosystem.
Groundwater pollution may also increase indirectly due to climate change: More frequent and intense storms can pollute groundwater by mobilizing contaminants, for example fertilizers, wastewater or human excreta from pit latrines. Droughts reduce river dilution capacities and groundwater levels, increasing the risk of groundwater contamination.
Aquifer systems that are vulnerable to climate change include the following examples (the first four are largely independent of human withdrawals, unlike examples 5 to 8 where the intensity of human groundwater withdrawals plays a key role in amplifying vulnerability to climate change):
low-relief coastal and deltaic aquifer systems
aquifer systems in continental northern latitudes or alpine and polar regions
aquifers in rapidly expanding low-income cities and large displaced and informal communities
shallow alluvial aquifers underlying seasonal rivers in drylands
intensively pumped aquifer systems for groundwater-fed irrigation in drylands
intensively pumped aquifers for dryland cities
intensively pumped coastal aquifers
low-storage/low-recharge aquifer systems in drylands
Climate change adaptation
Using more groundwater, particularly in Sub-Saharan Africa, is seen as a method for climate change adaptation in the case that climate change causes more intense or frequent droughts.
Groundwater-based adaptations to climate change exploit distributed groundwater storage and the capacity of aquifer systems to store seasonal or episodic water surpluses. They incur substantially lower evaporative losses than conventional infrastructure, such as surface dams. For example, in tropical Africa, pumping water from groundwater storage can help to improve the climate resilience of water and food supplies.
Climate change mitigation
The development of geothermal energy, a sustainable energy source, plays an important role in reducing CO2 emissions and thus mitigating climate change. Groundwater is an agent in the storage, movement, and extraction of geothermal energy.
In pioneering nations, such as the Netherlands and Sweden, the ground/groundwater is increasingly seen as just one component (a seasonal source, sink or thermal 'buffer') in district heating and cooling networks.
Deep aquifers can also be used for carbon capture and sequestration, the process of storing carbon to curb accumulation of carbon dioxide in the atmosphere.
Groundwater governance
Groundwater governance processes enable groundwater management, planning and policy implementation. Governance takes place at multiple scales and geographic levels, including regional and transboundary scales.
Groundwater management is action-oriented, focusing on practical implementation activities and day-to-day operations. Because groundwater is often perceived as a private resource (that is, closely connected to land ownership, and in some jurisdictions treated as privately owned), regulation and top–down governance and management are difficult. Governments need to fully assume their role as resource custodians in view of the common-good aspects of groundwater.
Domestic laws and regulations regulate access to groundwater as well as human activities that impact the quality of groundwater. Legal frameworks also need to include protection of discharge and recharge zones and of the area surrounding water supply wells, as well as sustainable yield norms and abstraction controls, and conjunctive use regulations. In some jurisdictions, groundwater is regulated in conjunction with surface water, including rivers.
By country
Groundwater is an important water resource for the supply of drinking water, especially in arid countries.
The Arab region is one of the most water-scarce in the world and groundwater is the most relied-upon water source in at least 11 of the 22 Arab states. Over-extraction of groundwater in many parts of the region has led to groundwater table declines, especially in highly populated and agricultural areas.
| Physical sciences | Hydrology | null |
263274 | https://en.wikipedia.org/wiki/Manakin | Manakin | The manakins are a family, Pipridae, of small suboscine passerine birds. The group contains 55 species distributed through the American tropics. The name is from Middle Dutch mannekijn "little man" (also the source of the different bird name mannikin).
Description
Manakins are compact, stubby birds with short tails, broad and rounded wings, and big heads. Species in the genus Tyranneutes are the smallest manakins, and those in the genus Antilophia are believed to be the largest (since the members of the genus Schiffornis are no longer considered manakins). The bill is short and has a wide gape. Females and first-year males have dull green plumage; most species are sexually dichromatic in their plumage, the males being mostly black with striking colours in patches, and in some species having long, decorative tail or crown feathers or erectile throat feathers. In some species, males from two to four years old have a distinctive subadult plumage.
The syrinx or "voicebox" is distinctive in manakins, setting them apart from the related families Cotingidae and Tyrannidae. Furthermore, it is so acutely variable within the group that genera and even species may be identified by the syrinx alone, unlike birds of most oscine families. The sounds made are whistles, trills, and buzzes.
Distribution and habitat
Manakins occur from southern Mexico to northern Argentina, Paraguay, and southern Brazil, and on Trinidad and Tobago as well. They are highly arboreal and are almost exclusively forest and woodland birds. Most species live in humid tropical lowlands, with a few in dry forests, river forests, and the subtropical Andes. Some highland species have altitudinal migrations.
Behaviour and ecology
Feeding
Manakins feed in the understory on small fruit (but often remarkably large for the size of the bird) including berries, and to a lesser degree, insects. Since they take fruit in flight as other species "hawk" for insects, they are believed to have evolved from insect-eating birds. Females have big territories from which they do not necessarily exclude other birds of their species, instead feeding somewhat socially. Males spend much of their time together at courtship sites. Manakins sometimes join mixed feeding flocks.
Reproduction
Many manakin species have spectacular lekking courtship rituals, which are especially elaborate in the genera Pipra and Chiroxiphia. The rituals are characterized by a unique, species-specific pattern of vocalizations and movements such as jumping, bowing, wing vibration, wing snapping, and acrobatic flight. The members of the genera Machaeropterus and Manacus have heavily modified wing feathers, which they use to make buzzing and snapping sounds. Members of Manacus and Ceratopipra have superfast wing movements. The ability to produce these wing movements is supported by specialized peripheral androgen receptors in the muscular tissue.
Building of the nest (an open cup, generally low in vegetation), the incubation for 18 to 21 days, and care of the young for 13 to 15 days are undertaken by the female alone, since most manakins do not form stable pairs. (The helmeted manakin does form pairs, but the male's contribution is limited to defending the territory.) The normal clutch is two eggs, which are buff or dull white, marked with brown.
Lekking polygyny seems to have been a characteristic of the family's original ancestor, and the associated sexual selection led to an adaptive radiation in which relationships may be traced by similarities in displays. Manakin sexual displays within these leks among the ancestral subfamily Neopelminae are the most simple, while displays among the more evolutionarily recent subfamily Piprinae are the most complex. An evolutionary explanation connecting lekking to fruit-eating has been proposed.
Species list
The family Pipridae was introduced (as Pipraria) by the French polymath Constantine Samuel Rafinesque in 1815. The members of the genus Schiffornis were previously placed in this family, but are now placed in Tityridae.
| Biology and health sciences | Passerida | Animals |
263286 | https://en.wikipedia.org/wiki/Antbird | Antbird | The antbirds are a large passerine bird family, Thamnophilidae, found across subtropical and tropical Central and South America, from Mexico to Argentina. There are more than 230 species, known variously as antshrikes, antwrens, antvireos, fire-eyes, bare-eyes and bushbirds. They are related to the antthrushes and antpittas (family Formicariidae), the tapaculos, the gnateaters and the ovenbirds. Despite some species' common names, this family is not closely related to the wrens, vireos or shrikes.
Antbirds are generally small birds with rounded wings and strong legs. They have mostly sombre grey, white, brown and rufous plumage, which is sexually dimorphic in pattern and colouring. Some species communicate warnings to rivals by exposing white feather patches on their backs or shoulders. Most have heavy bills, which in many species are hooked at the tip.
Most species live in forests, although a few are found in other habitats. Insects and other arthropods form the most important part of their diet, although small vertebrates are occasionally taken. Most species feed in the understory and midstory of the forest, although a few feed in the canopy and a few on the ground. Many join mixed-species feeding flocks, and a few species are core members. To various degrees, around eighteen species specialise in following swarms of army ants to eat the small invertebrates flushed by the ants, and many others may feed in this way opportunistically.
Antbirds are monogamous, mate for life, and defend territories. They usually lay two eggs in a nest that is either suspended from branches or supported on a branch, stump, or mound on the ground. Both parents share the tasks of incubation and of brooding and feeding the nestlings. After fledging, each parent cares exclusively for one chick.
Thirty-eight species are threatened with extinction as a result of human activities. Antbirds are not targeted by either hunters or the pet trade. The principal threat is habitat loss, which causes habitat fragmentation and increased nest predation in habitat fragments.
Systematics
The antbird family Thamnophilidae used to be considered a subfamily, Thamnophilinae, within a larger family Formicariidae that included antthrushes and antpittas. Formerly, that larger family was known as the "antbird family" and the Thamnophilinae were "typical antbirds". In this article, "antbird" and "antbird family" refer to the family Thamnophilidae.
Thamnophilidae was removed from Formicariidae, leaving behind the antthrushes and antpittas, in recognition of differences in the structure of the breastbone (sternum) and syrinx, and on the basis of Sibley and Ahlquist's examination of DNA–DNA hybridization. The Thamnophilidae antbirds are members of the infraorder Tyrannides (or tracheophone suboscines), one of two infraorders in the suborder Tyranni. The Thamnophilidae are now thought to occupy a fairly basal position within the infraorder, i.e., relative to the antthrushes and antpittas, tapaculos, gnateaters, and ovenbirds. The sister group of the Thamnophilidae is thought to be the gnateaters. The ovenbirds, tapaculos, antthrushes and antpittas are thought to represent a different radiation of that early split.
The antbird family contains over 230 species, variously called antwrens, antvireos, antbirds and antshrikes. The names refer to the relative sizes of the birds (increasing in the order given, though with exceptions) rather than any particular resemblance to the true wrens, vireos or shrikes. In addition, members of the genus Phlegopsis are known as bare-eyes, Pyriglena as fire-eyes and Neoctantes and Clytoctantes as bushbirds. Although the systematics of the Thamnophilidae is based on studies from the mid-19th century, when fewer than half the present species were known, comparison of the myoglobin intron 2, GAPDH intron 11 and the mtDNA cytochrome b DNA sequences has largely confirmed it. There are two major clades – most antshrikes and other larger, strong-billed species as well as Herpsilochmus, versus the classical antwrens and other more slender, longer-billed species – and the monophyly of most genera was confirmed.
The Thamnophilidae contains several large or very large genera and numerous small or monotypic ones. Several, which are difficult to assign, seem to form a third, hitherto unrecognised clade independently derived from ancestral antbirds. The results also confirmed suspicions of previous researchers that some species, most notably in Myrmotherula and Myrmeciza, need to be assigned to other genera. Still, due to the difficulties of sampling from such a large number of often poorly known species, the assignment of some genera is still awaiting confirmation.
Morphology
The antbirds are a group of small to medium-sized passerines that range in size from the large giant antshrike, which measures 45 cm (18 in) and weighs 150 g (5.29 oz), to the tiny 8-cm (3 in) pygmy antwren, which weighs 7 g (0.25 oz). In general terms, "antshrikes" are relatively large-bodied birds, "antvireos" are medium-sized and chunky, while "antwrens" include most smaller species; "antbird" genera can vary greatly in size. Members of this family have short rounded wings that provide good manoeuvrability when flying in dense undergrowth. The legs are large and strong, particularly in species that are obligate ant-followers. These species are well adapted to gripping vertical stems and saplings, which are more common than horizontal branches in the undergrowth, and thus the ability to grip them is an advantage for birds following swarms of army ants. The claws of these antbirds are longer than those of species that do not follow ants, and the soles of some species have projections that are tough and gripping when the foot is clenched. Tarsus length in antbirds is related to foraging strategy. Longer tarsi typically occur in genera such as the Thamnophilus antshrikes that forage by perch-gleaning (sitting and leaning forward to snatch insects from the branch), whereas shorter tarsi typically occur in those that catch prey on the wing, such as the Thamnomanes antshrikes.
Most antbirds have proportionately large, heavy bills. Several genera of antshrike have a strongly hooked tip to the bill, and all antbirds have a notch or 'tooth' at the tip of the bill which helps in holding and crushing insect prey. The two genera of bushbirds have upturned chisel-like bills.
The plumage of antbirds is soft and not brightly coloured, although it is occasionally striking. The colour palette of most species comprises blackish shades, whitish shades, rufous, chestnut and brown. Plumages can be uniform in colour or patterned with barring or spots. Sexual dimorphism – differences in plumage colour and pattern between males and females – is common in the family. Overall the pattern within the family is for the males to have combinations of grey, black or white plumage and for the females to have buff, rufous and brown colours. For example, the male dot-winged antwren is primarily blackish, whereas the female has rust-coloured underparts. In some genera, such as Myrmotherula, species are better distinguished by female plumage than by male. Many species of antbirds have a contrasting 'patch' of white (sometimes other colours) feathers on the back (known as interscapular patches), shoulder or underwing. This is usually concealed by the darker feathers on the back, but when the bird is excited or alarmed these feathers can be raised to flash the white patch. Dot-winged antwrens puff out white back patches, whereas in bluish-slate antshrikes and white-flanked antwrens the white patch is on the shoulder.
Voice
The songs and calls of antbirds are generally composed of simple, repeated notes. The family is one of the suboscines (suborder Tyranni), which have simpler syrinxes ("voiceboxes") than other songbirds. Nevertheless, their songs are distinctive and species-specific, allowing field identification by ear. Antbirds rely on their calls for communication, as is typical of birds in dark forests. Most species have at least two types of call, the loudsong and the softsong. The functions of many calls have been deduced from their context; for example some loudsongs have a territorial purpose and are given when birds meet at the edges of their territories, or during the morning rounds of the territory. Pairs in neighbouring territories judge the proximity of rivals by the degradation of the song caused by interference by the environment. In bouts of territorial defence the male will face off with the other male and the female with her counterpart. Loudsong duets are also potentially related to the maintenance of pair bonds. The functions of softsongs are more complex, and possibly related to pair-bond maintenance. In addition to these two main calls a range of other sounds are made; these include scolding in mobbing of predators. The calls of antbirds are also used interspecifically. Some species of antbirds and even other birds will actively seek out ant-swarms using the calls of some species of ant-followers as clues.
Distribution and habitat
The distribution of the antbirds is entirely Neotropical, with the vast majority of the species being found in the tropics. A few species reach southern Mexico and northern Argentina. Some species, such as the barred antshrike, have a continental distribution that spans most of the South and Middle American distribution of the family; others, such as the ash-throated antwren, have a tiny distribution.
Antbirds are mostly birds of humid lowland rainforests. Few species are found at higher elevations, with less than 10% of species having ranges above 2000 m (6500 ft) and almost none with ranges above 3000 m (10000 ft). The highest species diversity is found in the Amazon basin, with up to 45 species being found in single locations in sites across Brazil, Colombia, Bolivia and Peru. The number of species drops dramatically towards the further reaches of the family's range; there are only seven species in Mexico, for example. Areas of lower thamnophilid diversity may contain localised endemics, however. The Yapacana antbird, for example, is restricted to the stunted woodlands that grow in areas of nutrient-poor white-sand soil (the so-called Amazonian caatinga) in Brazil, Venezuela and Colombia. Some species are predominantly associated with microhabitats within a greater ecosystem; for example, the bamboo antshrike is predominantly found in bamboo patches.
Genetic comparison of the whole genomes of higher- and lower-humidity antbirds has shown some differences in genes linked to water balance and temperature regulation. More significantly, these antbirds differ in the regions of the genome that regulate gene activity, suggesting that the differences are a result less of the genes themselves than of how they are deployed.
Behaviour
Antbirds are diurnal: they feed, breed and defend territories during the day. Many of the family are, however, reluctant to enter areas of direct sunlight where it breaks through the forest canopy. Antbirds will engage in anting, a behaviour in which ants (or other arthropods) are rubbed on the feathers before being discarded or eaten. While this has conventionally been considered a way to remove and control feather parasites, it has been suggested that for antbirds it may simply be a way to deal with the distasteful substances in prey items.
Feeding
The main component of the diet of all antbirds is arthropods. These are mostly insects, including grasshoppers and crickets, cockroaches, praying mantises, stick insects and the larvae of butterflies and moths. In addition antbirds often take spiders, scorpions and centipedes. They swallow smaller prey items quickly, whereas they often beat larger items against branches in order to remove wings and spines. Larger species can kill and consume frogs and lizards as well, but generally these do not form an important part of the diet of this family. Other food items may also be eaten, including fruit, eggs and slugs.
The family uses a number of techniques to obtain prey. The majority of antbirds are arboreal, with most of those feeding in the understory, many in the middle story and some in the canopy. A few species feed in the leaf litter; for example, the wing-banded antbird forages in areas of dense leaf-litter. It does not use its feet to scratch the leaf litter, as do some other birds; instead it uses its long bill to turn over leaves rapidly (never picking them up). The antbirds that forage arboreally show a number of techniques and specialisations. Some species perch-glean, perching on a branch watching for prey and snatching it by reaching forward, while others sally from a perch and snatch prey on the wing. In both cases birds will hop through the foliage or undergrowth and pause, scanning for prey, before pouncing or moving on. The time paused varies, although smaller species tend to be more active and pause for shorter times.
Mixed-species feeding flocks
Many species participate in mixed-species feeding flocks, forming a large percentage of the participating species within their range. Some of these are core or "nuclear species". These nuclear species share territories with other nuclear species but exclude conspecifics (members of the same species) and are found in almost all flocks; these are joined by "attendant species". Loud and distinctive calls and conspicuous plumage are important attributes of nuclear species as they promote cohesion in the flock. The composition of these flocks varies geographically; in Amazonia species of Thamnomanes antshrike are the leading nuclear species; elsewhere other species, such as the dot-winged antwrens and checker-throated stipplethroats, fill this role. Other species of antwren and antbird join them along with woodcreepers, ant-tanagers, foliage-gleaners and greenlets. The benefits of the mixed flock are thought to be related to predation, since many eyes are better for spotting predatory hawks and falcons. Comparisons between multi-species feeding flocks in different parts of the world found that instances of flocking were positively correlated with predation risk by raptors. For example, where Thamnomanes antshrikes lead the group they give loud warning calls in the presence of predators. These calls are understood and reacted to by all the other species in the flock. The advantage to the Thamnomanes antshrikes is in allowing the rest of the flock, which are typically gleaners, to act as beaters, flushing prey while foraging which the antshrikes can obtain by sallying. Similar roles are filled in other flocks by other antbird species or other bird families, for example the shrike-tanagers. Within the feeding flocks competition is reduced by microniche partitioning; where dot-winged antwrens, checker-throated stipplethroats and white-flanked antwrens feed in flocks together, the dot-wings feed in the densest vines, the white-flank in less dense vegetation, and the checker-throats in the same density as the latter but in dead foliage only.
Ant followers
Swarms of army ants are an important resource used by some species of antbird, and the one from which the family's common name is derived. Many species of tropical ant form large raiding swarms, but the swarms are often nocturnal or raid underground. While birds visit these swarms when they occur, the species most commonly attended by birds is the Neotropical species Eciton burchellii, which is both diurnal and surface-raiding. It was once thought that attending birds were actually eating the ants, but numerous studies in various parts of Eciton burchellii's range have shown that the ants act as beaters, flushing insects, other arthropods and small vertebrates into the waiting flocks of "ant followers". The improvement in foraging efficiency can be dramatic; a study of spotted antbirds found that they made attempts at prey every 111.8 seconds away from ants, but at swarms they made attempts every 32.3 seconds. While many species of antbirds (and other families) may opportunistically feed at army ant swarms, 18 species of antbird are obligate ant-followers, obtaining most of their diet from swarms. With only three exceptions, these species never regularly forage away from ant swarms. A further four species regularly attend swarms but are as often seen away from them. Obligate ant-followers visit the nesting bivouacs of army ants in the morning to check for raiding activities; other species do not. These species tend to arrive at swarms first, and their calls are used by other species to locate swarming ants.
Because army ants are unpredictable in their movements, it is impractical for obligate ant-followers to maintain a territory that always contains swarms to feed around. Antbirds have evolved a more complicated system than the strict territoriality of most other birds. They generally (details vary among species) maintain breeding territories but travel outside those territories in order to feed at swarms. Several pairs of the same species may attend a swarm, with the dominant pair at the swarm being the pair which holds the territory that the swarm is in. In addition to competition within species, competition among species exists, and larger species are dominant. In its range, the ocellated antbird is the largest of the obligate ant-following antbirds and is dominant over other members of the family, although it is subordinate to various species from other families (including certain woodcreepers, motmots and the rufous-vented ground cuckoo). At a swarm, the dominant species occupies positions above the central front of the swarm, which yields the largest amount of prey. Smaller, less dominant species locate themselves further away from the centre, or higher above the location of the dominant species, where prey is less plentiful.
Breeding
Antbirds are monogamous, in almost all cases forming pair bonds that last the life of the pair. Studies of the dusky antbird and the white-bellied antbird did not find "infidelity". In the white-plumed antbird, divorces between pairs are common, but, as far as known, this species is exceptional. In most species the pair defends a classic territory, although the nesting territories of ant followers are slightly different (see feeding above). Territories vary in size from as small as 0.5 ha for the Manu antbird, to 1500 m (5000 ft) in diameter for the ocellated antbird. Ocellated antbirds have an unusual social system where the breeding pair forms the nucleus of a group or clan that includes their male offspring and their mates. These clans, which can number up to eight birds, work together to defend territories against rivals. Pair bonds are formed with courtship feeding, where the male presents food items to the female. In spotted antbirds males may actually feed females sufficiently for the female to cease feeding herself, although she will resume feeding once copulation has occurred. Mutual grooming also plays a role in courtship in some species.
The nesting and breeding biology of antbirds have not been well studied. Even in relatively well-known species the breeding behaviour can be poorly known; for example the nest of the ocellated antbird was first described in 2004. Nests are constructed by both parents, although the male undertakes more of the work in some species. Antbird nests are cups of vegetation such as twigs, dead leaves and plant fibre, and they follow two basic patterns: either suspended or supported. Suspended cups, which may hang from forks in branches, or between two branches, are the more common style of nest. Supported nests rest upon branches, amongst vines, in hollows, and sometimes on mounds of vegetation on the ground. Each species nests at the level where it forages, so a midstory species would build its nest in the midstory. Closely related species nest in the same ways. For example, antvireos in the genus Dysithamnus are all suspension nesters.
Almost all antbirds lay two eggs. A few species of antshrike lay three eggs, and a smaller number of antbirds lay one egg, but this is unusual. Small clutch sizes are typical of tropical birds compared to more temperate species of the same size, possibly due to nest predation, although this is disputed. Both parents participate in incubation, although only the female incubates at night. The length of time taken for chicks to hatch is 14–16 days in most species, although some, such as the dusky antbird, can take as long as 20 days. The altricial chicks are born naked and blind. Both parents brood the young until they are able to thermoregulate, although, as with incubation, only the female broods at night. In common with many songbirds, the parents take faecal sacs for disposal away from the nest. Both parents feed the chicks, often bringing large prey items. When the chicks reach fledging age, after 8–15 days, attending parents call their chicks. As each chick leaves the nest, it is cared for exclusively from then on by the parent attending it at that moment. After the first chick fledges and leaves with a parent, the remaining parent may increase the supply of food to speed up the process of fledging. After fledging, chicks spend the first few days well hidden as the parents bring them food. Chicks may not become independent of the parents for as long as four months in some antwrens, but two months is more typical for the rest of the family.
Ecology
Antbirds are common components of the avifauna of some parts of the Neotropics and are thought to be important in some ecological processes. They are preyed upon by birds of prey, and their tendency to join flocks is thought to provide protection against such predation. The greater round-eared bat preys on some antbird species, such as the white-bibbed antbird and the scaled antbird; the latter is the bat's preferred prey. Nests, including incubating adults, chicks and eggs, are vulnerable to predators, particularly snakes but also nocturnal mammals. Nesting success is low for many species, particularly in areas of fragmented habitat.
It was once suggested that the relationship between the obligate and regular ant-followers and the army ants, particularly Eciton burchellii, was mutualistic, with the ants benefiting by having the birds chase prey back down towards them. However, experiments where ant followers were excluded have shown that the foraging success of the army ants was 30% lower when the birds were present, suggesting that the birds' relationship was in fact parasitic. This has resulted in a number of behaviours by the ants in order to reduce kleptoparasitism, including hiding of secured prey in the leaf litter and caching of food on trails. It has been suggested that the depressive effect of this parasitism slows the development of E. burchellii swarms and in turn benefits other ant species which are preyed upon by army ants. The ant-following antbirds are themselves followed by three species of butterfly in the family Ithomiinae which feed on their droppings. Bird droppings are usually an unpredictable resource in a rainforest, but the regular behaviour of ant followers makes the exploitation of this resource possible.
Status and conservation
As of April 2008, 38 species are considered by the IUCN to be near threatened or worse and therefore at risk of extinction. Antbirds are neither targeted by the pet trade nor large enough to be hunted; the principal cause of the decline in antbird species is habitat loss. The destruction or modification of forests has several effects on different species of antbirds. The fragmentation of forests into smaller patches affects species that are averse to crossing gaps as small as roads. If these species become locally extinct in a fragment, this reluctance to cross unforested barriers makes their re-establishment unlikely. Smaller forest fragments are unable to sustain mixed-species feeding flocks, leading to local extinctions. Another risk faced by antbirds in fragmented habitat is increased nest predation. An unplanned experiment in fragmentation occurred on Barro Colorado Island, a former hill in Panama that became an isolated island during the flooding caused by the creation of the Panama Canal. Numerous species of antbird formerly resident in the area were extirpated, in no small part due to increased levels of nest predation on the island. While the species lost from Barro Colorado are not globally threatened, they illustrate the vulnerability of species in fragmented habitats and help explain the declines of some species. The majority of threatened species have very small natural ranges. Some are also extremely poorly known; for example the Rio de Janeiro antwren is known only from a single specimen collected in 1982, although there have been unconfirmed reports since 1994 and it is currently listed as critically endangered. Additionally, new species are discovered at regular intervals; the Caatinga antwren was described in 2000, the Acre antshrike in 2004, the sincorá antwren in 2007, and the description of a relative of the Paraná antwren discovered in 2005 in the outskirts of São Paulo is being prepared. Although the species has not yet been scientifically described, conservation efforts have already been necessary, as the site of discovery was set to be flooded to form a reservoir. Consequently, 72 individuals were captured and transferred to another locality.
| Biology and health sciences | Tyranni | null |
263343 | https://en.wikipedia.org/wiki/Model | Model | A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, a measure.
Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science.
In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality.
Model in specific contexts
As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout":
Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper
Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth, wife of designer Charles Frederick Worth, in 1853.
Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model)
Model (organism), a non-human species that is studied to understand biological phenomena in other organisms, e.g. a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person
Model (mimicry), a species that is mimicked by another species
Model (logic), a structure (a set of items, such as the natural numbers 1, 2, 3,..., along with mathematical operations such as addition and multiplication, and relations, such as "less than") that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory; see the sketch after this list
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Model (MVC), the information-representing internal component of a software, as distinct from its user interface
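To make the logical sense of "model" concrete, here is a minimal worked example in LaTeX notation; the particular theory (the monoid axioms) and the structures below are chosen by us purely for illustration, not drawn from a specific source:

    % A theory T: the monoid axioms for a binary operation \circ with identity e
    \[
      \forall x\,\forall y\,\forall z\;\; (x \circ y) \circ z = x \circ (y \circ z),
      \qquad
      \forall x\;\; x \circ e = x \;\wedge\; e \circ x = x.
    \]
    % Interpreting \circ as addition and e as 0 makes every axiom true, so the
    % structure is a model of T; interpreting \circ as subtraction does not.
    \[
      (\mathbb{N}, +, 0) \models T,
      \qquad
      (\mathbb{Z}, -, 0) \not\models T \quad\text{(subtraction is not associative).}
    \]

Any structure in which all the axioms come out true is a model of the theory, and a single theory typically has many different models.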
Physical model
A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers).
The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains.
An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy.
Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment.
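As a rough sketch of how dimensionless-number scaling constrains such a test (the vehicle speed, model scale, and fluid viscosity below are invented for illustration, not taken from any particular study), Reynolds-number similarity fixes the velocity at which a scale model must be run:

    # Reynolds-number similarity: the model reproduces the prototype's flow
    # regime when Re = V * L / nu matches between the two.
    def model_velocity(v_prototype, scale, nu_prototype, nu_model):
        """Test velocity for a scale model, from Re_model = Re_prototype:
        V_m = V_p * (L_p / L_m) * (nu_m / nu_p)."""
        return v_prototype * (1.0 / scale) * (nu_model / nu_prototype)

    # Hypothetical case: a 1:10 model of a vehicle moving at 5 m/s, tested in
    # the same fluid (air, nu ~ 1.5e-5 m^2/s), must be run at ten times the speed.
    print(model_velocity(5.0, scale=0.1, nu_prototype=1.5e-5, nu_model=1.5e-5))  # 50.0

This is one reason model tests are often run in denser or faster media such as pressurised wind tunnels or water tunnels: matching the dimensionless group in the same fluid can demand impractically high speeds.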
Conceptual model
A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents.
Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process.
Examples
Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software
Economic model, a theoretical construct representing economic processes
Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval
Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. ChatGPT
Mathematical model, a description of a system using mathematical concepts and language
Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Medical model, a proposed "set of procedures in which all doctors are trained"
Mental model, in psychology, an internal representation of external reality
Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms
Model (MVC), information-representing component of a software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design
Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures
Standard model (disambiguation)
Properties of models, according to general model theory
According to Herbert Stachowiak, a model is characterized by at least three properties:
1. Mapping
A model always is a model of something—it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model.
2. Reduction
In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user.
3. Pragmatism
A model does not relate unambiguously to its original. It is intended to work as a replacement for the original:
a) for certain subjects (for whom?)
b) within a certain time range (when?)
c) restricted to certain conceptual or physical actions (what for?).
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism).
Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models.
| Physical sciences | Science basics | Basics and measurement |
1867804 | https://en.wikipedia.org/wiki/Base%20load | Base load | The base load (also baseload) is the minimum level of demand on an electrical grid over a span of time, for example, one week. This demand can be met by unvarying power plants or dispatchable generation, depending on which approach has the best mix of cost, availability and reliability in any particular market. The remainder of demand, varying throughout a day, is met by intermittent sources together with dispatchable generation (such as load-following power plants and peaking power plants, which can be turned up or down quickly) or energy storage.
Power plants that do not change their power output quickly, such as some large coal or nuclear plants, are generally called baseload power plants. In the 20th century most or all of base load demand was met with baseload power plants, whereas new capacity based around renewables often employs flexible generation.
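Numerically, the base load is just the minimum of the demand curve over the chosen span, and the varying remainder is what flexible generation or storage must cover. A minimal sketch in Python, with invented hourly demand figures:

    # Base load = minimum demand over the span; the rest is met flexibly.
    hourly_demand_mw = [620, 580, 560, 555, 570, 640, 780, 910,
                        980, 1010, 1000, 970, 950, 940, 960, 990,
                        1040, 1100, 1080, 1000, 900, 800, 700, 650]

    base_load = min(hourly_demand_mw)                          # 555 MW
    variable_load = [d - base_load for d in hourly_demand_mw]  # peaks at 545 MW
    print(base_load, max(hourly_demand_mw))                    # 555 1100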
Description
Grid operators take long- and short-term bids to provide electricity over various time periods and balance supply and demand continuously. The detailed adjustments are known as the unit commitment problem in electrical power production.
While historically large power grids used unvarying power plants to meet the base load, there is no specific technical requirement for this to be so. The base load can equally well be met by the appropriate quantity of intermittent power sources and dispatchable generation.
Unvarying power plants include coal, nuclear, and combined-cycle plants (which may take several days to start up and shut down), as well as hydroelectric, geothermal, biogas, and biomass plants.
The desirable attribute of dispatchability applies to some gas plants and hydroelectricity. Grid operators also use curtailment to shut plants out of the grid when their energy is not needed.
Economics
Grid operators solicit bids to find the cheapest sources of electricity over short and long term buying periods.
Nuclear and coal plants have very high fixed costs, high plant load factor but very low marginal costs. On the other hand, peak load generators, such as natural gas, have low fixed costs, low plant load factor and high marginal costs.
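This fixed-versus-marginal trade-off can be made concrete with a simple screening-curve calculation; the cost figures below are invented for illustration and not drawn from any source:

    # Annual cost per kW of capacity as a function of hours run per year.
    # High-fixed/low-marginal plants win only at high utilisation.
    def annual_cost(fixed_per_kw_yr, marginal_per_kwh, hours_run):
        return fixed_per_kw_yr + marginal_per_kwh * hours_run

    base_fixed, base_marg = 300.0, 0.02   # nuclear-like: $/kW-yr, $/kWh
    peak_fixed, peak_marg = 80.0, 0.12    # gas-turbine-like

    # Breakeven utilisation: base_fixed + base_marg*h = peak_fixed + peak_marg*h
    h = (base_fixed - peak_fixed) / (peak_marg - base_marg)
    print(h)  # 2200.0 hours/year; run longer than this and the baseload plant is cheaper

This is why plants with high fixed costs are run as continuously as possible, while low-fixed-cost peakers are reserved for the relatively few hours of highest demand.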
Some coal and nuclear power plants do not change production to match power consumption demands since it is sometimes more economical to operate them at constant production levels, and not all power plants are designed for it. The IEA has suggested that coal power plants should not run as baseload, because that emits a lot of carbon dioxide, which causes climate change. Some nuclear power stations, such as those in France, are physically capable of being used as load following power plants and do alter their output, to some degree, to help meet varying demands.
Some combined-cycle plants, usually fuelled by gas, can provide baseload power, as well as being able to be cost-effectively cycled up and down to match more rapid fluctuations in consumption.
According to National Grid plc chief executive officer Steve Holliday in 2015, and others, baseload is "outdated". By 2019, Steve Holliday had left his position as CEO of National Grid plc and went on the record to say: "It’s hard to conceive that nuclear does not have an important role to play."
| Technology | Concepts | null |
1868983 | https://en.wikipedia.org/wiki/Bromide | Bromide | A bromide ion is the negatively charged form (Br−) of the element bromine, a member of the halogen group on the periodic table. Most bromides are colorless. Bromides have many practical roles, being found in anticonvulsants, flame-retardant materials, and cell stains. Although uncommon, chronic toxicity from bromide can result in bromism, a syndrome with multiple neurological symptoms. Bromide toxicity can also cause a type of skin eruption; see potassium bromide. The bromide ion has an ionic radius of 196 pm.
Natural occurrence
Bromide is present in typical seawater (35 PSU) with a concentration of around 65 mg/L, which is about 0.2% of all dissolved salts. Seafood and deep sea plants generally have higher levels than land-derived foods. Bromargyrite—natural, crystalline silver bromide—is the most common bromide mineral known but is still very rare. In addition to silver, bromine also occurs in minerals combined with mercury and copper.
Formation and reactions of bromide
Dissociation of bromide salts
Bromide salts of alkali metal, alkaline earth metals, and many other metals dissolve in water (and even some alcohols and a few ethers) to give bromide ions. The classic case is sodium bromide, which fully dissociates in water:
NaBr → Na+ + Br−
Hydrogen bromide, which is a diatomic molecule, takes on salt-like properties upon contact with water to give an ionic solution called hydrobromic acid. The process is often described simplistically as involving formation of the hydronium salt of bromide:
HBr + H2O → H3O+ + Br−
Hydrolysis of bromine
Bromine readily reacts with water, i.e. it undergoes hydrolysis:
Br2 + H2O → HOBr + HBr
This forms hypobromous acid (HOBr), and hydrobromic acid (HBr in water). The solution is called "bromine water". The hydrolysis of bromine is more favorable in the presence of base, for example sodium hydroxide:
Br2 + 2 NaOH → NaOBr + NaBr + H2O
This reaction is analogous to the production of bleach, where chlorine is dissolved in the presence of sodium hydroxide.
Oxidation of bromide
One can test for a bromide ion by adding an oxidizer. One method uses dilute HNO3.
Balard and Löwig's method can be used to extract bromine from seawater and certain brines. For samples with sufficient bromide concentration, addition of chlorine produces bromine (Br2):
Cl2 + 2 Br− → 2 Cl− + Br2
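Applying this reaction to the roughly 65 mg/L bromide concentration of seawater cited earlier gives a feel for the scale of the process; the per-cubic-metre framing and rounding are ours, using standard molar masses:

    # Cl2 + 2 Br- -> 2 Cl- + Br2, applied to seawater at ~65 mg/L bromide.
    M_BR, M_CL2, M_BR2 = 79.90, 70.90, 159.81    # molar masses, g/mol

    bromide_g_per_m3 = 65.0                      # 65 mg/L = 65 g per cubic metre
    mol_br = bromide_g_per_m3 / M_BR             # ~0.81 mol Br- per m^3
    cl2_needed_g = (mol_br / 2) * M_CL2          # one Cl2 oxidises two Br-
    br2_yield_g = (mol_br / 2) * M_BR2           # ... producing one Br2
    print(round(cl2_needed_g), round(br2_yield_g))   # ~29 g Cl2 -> ~65 g Br2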
Applications
Bromide's main commercial value is its use in producing organobromine compounds, which themselves are rather specialized. Organobromine compounds are commonly used as brominated flame retardants. Some brominated flame retardants were identified as persistent, bioaccumulative, and toxic to both humans and the environment and were suspected of causing neurobehavioral effects and endocrine disruption.
Many metal bromides are produced commercially, including LiBr, NaBr, NH4Br, CuBr, ZnBr2 and AlBr3. AgBr is used for the largely obsolete photographic gelatin silver process.
Medicinal and veterinary uses
Folk and passé medicine
Lithium bromide was used as a sedative beginning in the early 1900s. However, it fell into disfavour in the 1940s due to the rising popularity of safer and more efficient sedatives (specifically, barbiturates) and when some heart patients died after using a salt substitute (see lithium chloride). Like lithium carbonate and lithium chloride, it was used as a treatment for bipolar disorder.
From 1954 to 1977, the Australian biochemist Shirley Andrews researched safe ways to use lithium for the treatment of manic-depressive illness while working at the Royal Park Psychiatric Hospital in Victoria. While conducting this research she discovered that bromide caused symptoms of mental illness, leading to a major reduction in its usage.
Bromide compounds, especially potassium bromide, were frequently used as sedatives in the 19th and early 20th centuries. Their use in over-the-counter sedatives and headache remedies (such as Bromo-Seltzer) in the United States extended to 1975 when bromides were withdrawn as ingredients due to chronic toxicity. This use gave the word "bromide" its colloquial connotation of a comforting cliché.
It has been said that during World War I, British soldiers were given bromide to curb their sexual urges.
Bromide salts are used in hot tubs as mild germicidal agents to generate in situ hypobromite.
The bromide ion is antiepileptic, and as a bromide salt it is used in veterinary medicine in the US. The kidneys excrete bromide ions. The half-life of bromide in the human body (12 days) is long compared with many pharmaceuticals, making dosing challenging to adjust (a new dose may require several months to reach equilibrium). Bromide ion concentrations in the cerebrospinal fluid are about 30% of those in blood and are strongly influenced by the body's chloride intake and metabolism.
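The slow equilibration follows from first-order elimination kinetics. Assuming a simple exponential model with the 12-day half-life quoted above (a textbook approximation, not a clinical dosing protocol), the approach to steady state under repeated daily dosing can be sketched as:

    import math

    # With first-order elimination, steady daily dosing approaches its plateau
    # as 1 - exp(-k*t), where k is set by the half-life.
    k = math.log(2) / 12.0            # elimination rate constant, per day

    for days in (12, 24, 48, 60):
        frac = 1.0 - math.exp(-k * days)
        print(days, round(frac, 2))   # 0.5, 0.75, 0.94, 0.97 of steady state

Four to five half-lives, roughly two months here, are needed to come within a few percent of the plateau, which is why each dose change takes a long time to evaluate.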
Since bromide is still used in veterinary medicine in the United States, veterinary diagnostic labs can routinely measure blood bromide levels. However, this is not a conventional test in human medicine in the US, since there are no FDA-approved uses for bromide. Therapeutic bromide levels are measured in European countries like Germany, where bromide is still used therapeutically in human epilepsy.
Biochemistry
Bromide is rarely mentioned in the biochemical context. Some enzymes use bromide as substrate or as a cofactor.
Substrate
Bromoperoxidase enzymes use bromide (typically in seawater) to generate electrophilic brominating agents. Hundreds of organobromine compounds are generated by this process. A notable example is bromoform, thousands of tons of which are produced annually in this way. The historical dye Tyrian purple is produced by similar enzymatic reactions.
Cofactor
In one specialized report, bromide is an essential cofactor in the peroxidase-catalysed formation of sulfilimine crosslinks in collagen IV. This post-translational modification occurs in all animals, and bromine is an essential trace element for humans.
Eosinophils need bromide for fighting multicellular parasites. Hypobromite is produced via eosinophil peroxidase, an enzyme that can use chloride but preferentially uses bromide.
The average concentration of bromide in human blood in Queensland, Australia, varies with age and gender. Much higher levels may indicate exposure to brominated chemicals. Bromide is also found in seafood.
| Physical sciences | Halide salts | Chemistry |
20836295 | https://en.wikipedia.org/wiki/Entomological%20warfare | Entomological warfare | Entomological warfare (EW) is a type of biological warfare that uses insects to interrupt supply lines by damaging crops, or to directly harm enemy combatants and civilian populations. There have been several programs which have attempted to institute this methodology; however, there has been limited application of entomological warfare against military or civilian targets, Japan being the only state known to have verifiably implemented the method against another state, namely the Chinese during World War II. However, EW was used more widely in antiquity, in order to repel sieges or cause economic harm to states. Research into EW was conducted during both World War II and the Cold War by numerous states such as the Soviet Union, United States, Germany and Canada. There have also been suggestions that it could be implemented by non-state actors in a form of bioterrorism. Under the Biological and Toxic Weapons Convention of 1972, use of insects to administer agents or toxins for hostile purposes is deemed to be against international law.
Description
EW is a specific type of biological warfare (BW) that uses insects in a direct attack or as vectors to deliver a biological agent, such as plague or cholera. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas. The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method of entomological warfare is to use uninfected insects, such as bees, to directly attack the enemy.
Early history
Entomological warfare is not a new concept; historians and writers have studied EW in connection with multiple historic events. A 14th-century plague epidemic in Asia Minor that eventually became known as the Black Death (carried by fleas) is one such event that has drawn attention from historians as a possible early incident of entomological warfare. That plague's spread over Europe may have been the result of a biological attack on the Crimean city of Kaffa.
According to Jeffrey Lockwood, author of Six-Legged Soldiers (a book about EW), the earliest incident of entomological warfare was probably the use of bees by early humans. The bees or their nests were thrown into caves to force the enemy out and into the open. Lockwood theorizes that the Ark of the Covenant may have been dangerous when opened because it contained deadly fleas.
During the American Civil War the Confederacy accused the Union of purposely introducing the harlequin bug in the South. These accusations were never proven, and modern research has shown that it is more likely that the insect arrived by other means. The world did not experience large-scale entomological warfare until World War II; Japanese attacks in China were the only verified instance of BW or EW during the war. During, and following, the war other nations began their own EW programs.
World War II
France
France is known to have pursued entomological warfare programs during World War II. Like Germany, France regarded the Colorado potato beetle, directed against the enemy's food sources, as a potential wartime asset. As early as 1939, biological warfare experts in France suggested that the beetle be used against German crops.
Germany
Germany is known to have pursued entomological warfare programs during World War II. The nation pursued the mass-production, and dispersion, of the Colorado potato beetle (Leptinotarsa decemlineata), aimed at the enemy's food sources. The beetle was first found in Germany in 1914, as an invasive species from North America. There are no records that indicate the beetle was ever employed as a weapon by Germany, or any other nation, during the war. Regardless, the Germans had developed plans to drop the beetles on English crops.
Germany carried out testing of its Colorado potato beetle weaponization program south of Frankfurt, where it released 54,000 of the beetles. In 1944, an infestation of Colorado potato beetles was reported in Germany. The source of the infestation is unknown, and speculation offers three theories as to its origin: an Allied entomological attack, an escape from the German testing, or, most likely, a natural occurrence.
Canada
Among the Allied Powers, Canada led the pioneering effort in vector-borne warfare. After Japan became intent on developing the plague flea as a weapon, Canada and the United States followed suit. Cooperating closely with the United States, Dr. G.B. Reed, chief of the Defense Research Laboratory at Queen's University in Kingston, focused his research efforts on mosquito vectors, biting flies, and plague-infected fleas during World War II. Much of this research was shared with, or conducted in concert with, the United States.
Canada's entire bio-weapons program was ahead of the British and the Americans during the war. The Canadians tended to work in areas their allies ignored; entomological warfare was one of these areas. As the U.S. and British programs evolved, the Canadians worked closely with both nations. The Canadian BW work would continue well after the war, including entomological research.
Japan
Japan used entomological warfare on a large scale during World War II in China. Unit 731, Japan's biological warfare unit, led by Lt. General Shirō Ishii, used plague-infected fleas and flies covered with cholera to infect the population in China. Japanese Yagi bombs developed at Pingfan consisted of two compartments, one with houseflies and another with a bacterial slurry that coated the houseflies prior to release. The Japanese military dispersed them from low-flying airplanes, spraying the fleas from them and dropping the Yagi bombs filled with a mixture of insects and disease. Localized and deadly epidemics resulted, and nearly 500,000 Chinese died of disease. An international symposium of historians declared in 2002 that Japanese entomological warfare in China was responsible for the deaths of 440,000. The U.S. and the Soviets granted Japanese Unit 731 officials immunity from prosecution in exchange for their research, and members of the former unit went on to have successful careers in business, academia, and medicine.
United Kingdom
A British scientist, J.B.S. Haldane, suggested that Britain and Germany were both vulnerable to entomological attack via the Colorado potato beetle. In 1942 the United States shipped 15,000 Colorado potato beetles to Britain for study as a weapon.
Cold War
Soviet Union
The Soviet Union researched, developed and tested an entomological warfare program as a major part of an anti-crop and anti-animal BW program. The Soviets developed techniques for using insects to transmit animal pathogens, such as ticks to transmit foot-and-mouth disease and avian ticks to transmit Chlamydophila psittaci to chickens, and claimed to have developed an automated mass insect-breeding facility capable of outputting millions of parasitic insects per day.
United States
The United States seriously researched the potential of entomological warfare during the Cold War. Labs at Fort Detrick were set up to produce 100 million yellow fever-infected mosquitoes per month, deliverable by bombs or missiles. The facility could also breed 50 million fleas per week and later experimented with other diseases such as anthrax, cholera, dengue, dysentery, malaria, relapsing fever, and tularemia. A U.S. Army report titled "Entomological Warfare Target Analysis" listed vulnerable sites within the Soviet Union that the U.S. could attack using entomological vectors. The military also tested mosquitoes' biting capacity by dropping uninfected mosquitoes over U.S. cities.
North Korean and Chinese officials leveled accusations that during the Korean War the United States engaged in biological warfare, including EW, in North Korea. The claim dates to the period of the war and has been thoroughly denied by the U.S. In 1998, Stephen Endicott and Edward Hagerman claimed that the accusations were true in their book, The United States and Biological Warfare: Secrets from the Early Cold War and Korea. The book received mixed reviews: some called it "bad history" and "appalling", while others praised the case the authors made. Other historians have revived the claim in recent decades as well. The same year Endicott and Hagerman's book was published, Kathryn Weathersby and Milton Leitenberg of the Cold War International History Project at the Woodrow Wilson Center in Washington released a cache of Soviet and Chinese documents which revealed the North Korean claim was an elaborate disinformation campaign.
During the 1950s the United States conducted a series of field tests using entomological weapons. Operation Big Itch, in 1954, was designed to test munitions loaded with uninfected fleas (Xenopsylla cheopis). Big Itch went awry when some of the fleas escaped into the plane and bit all three members of the air crew. In May 1955 over 300,000 uninfected mosquitoes (Aedes aegypti) were dropped over parts of the U.S. state of Georgia to determine if the air-dropped mosquitoes could survive to take meals from humans. The mosquito tests were known as Operation Big Buzz. Operation Magic Sword was a 1965 U.S. military operation designed to test the effectiveness of the sea-borne release of insect vectors for biological agents. The U.S. engaged in at least two other EW testing programs, Operation Drop Kick and Operation May Day. A 1981 Army report outlined these tests as well as multiple cost-associated issues that occurred with EW. The report is partially declassified—some information is blacked out, including everything concerning "Drop Kick"—and includes "cost per death" calculations. The cost per death, according to the report, for a vector-borne biological agent achieving a 50% mortality rate in an attack on a city was $0.29 in 1976 dollars (approximately $1.01 today). Such an attack was estimated to result in 625,000 deaths.
At Kadena Air Force Base, an Entomology Branch of the U.S. Army Preventive Medicine Activity, U.S. Army Medical Center was used to grow "medically important" arthropods, including many strains of mosquitoes in a study of disease vector efficiency. The program reportedly supported a research program studying taxonomic and ecological data surveys for the Smithsonian Institution.
The Smithsonian Institution and The National Academy of Sciences and National Research Council administered special research projects in the Pacific. The Far East Section of the Office of the Foreign Secretary (the NAS Foreign Secretary, not the UK office) administered two such projects which focused "on the flora of Okinawa" and "trapping of airborne insects and arthropods for the study of the natural dispersal of insects and arthropods over the ocean." The motivation for civilian research programs of this nature was questioned when it was learned that such international research was in fact funded by and provided to the U.S. Army as part of the U.S. military's biological warfare research.
The United States has also applied entomological warfare research and tactics in non-combat situations. In 1990 the U.S. funded a $6.5 million program designed to research, breed and drop caterpillars. The caterpillars were to be dropped in Peru on coca fields as part of the American War on Drugs.
In 1996 Russia filed charges on behalf of Cuba. The Cubans had been accusing the United States of using insects to spread dengue fever and other crop pests during the Cold War. A committee was formed to investigate the accusation but could neither confirm nor deny the charges.
In 2002 U.S. entomological anti-drug efforts at Fort Detrick were focused on finding an insect vector for a virus that affects the opium poppy.
Bioterrorism
Clemson University's Regulatory and Public Service Program listed "diseases vectored by insects" among bioterrorism scenarios considered "most likely". Because invasive species are already a problem worldwide, one University of Nebraska entomologist considered it likely that the source of any sudden appearance of a new agricultural pest would be difficult, if not impossible, to determine. Lockwood considers insects a more effective means of transmitting biological agents for acts of bioterrorism than the actual agents. In his opinion insect vectors are easily gathered and their eggs are easily transportable without detection. Isolating and delivering biological agents, on the other hand, is extremely challenging and hazardous.
In one of the few suspected acts of entomological bioterrorism, an eco-terror group known as The Breeders claimed to have released Mediterranean fruit flies (medflies) amidst an ongoing California infestation. Lockwood asserts that there is some evidence the group played a role in the event. The pest attacks a variety of crops, and the state of California responded with a large-scale pesticide spraying program. At least one source asserted that there is no doubt that an outside hand played a role in the dense 1989 infestation. The group stated in a letter to then Los Angeles Mayor Tom Bradley that their goals were twofold: they sought to cause the medfly infestation to grow out of control, which, in turn, would render the ongoing malathion spraying program financially infeasible.
Legal status
The Biological and Toxic Weapons Convention (BWC) of 1972 does not specifically mention insect vectors in its text. The language of the treaty, however, does cover vectors. Article I bans "Weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict." Given this text, insect vectors used as an aspect of entomological warfare appear to be covered and outlawed by the convention. The issue is less clear when warfare with uninfected insects against crops is considered.
Genetically engineered insects
US intelligence officials have suggested that insects could be genetically engineered via technologies such as CRISPR to create GMO "killer mosquitoes" or plagues that wipe out staple crops. Research is ongoing to genetically modify mosquitoes to curb the spread of diseases such as Zika and the West Nile virus, by using mosquitoes modified with CRISPR so that they no longer carry the pathogen. However, this research also shows that it may be possible to introduce diseases or pathogens via genetic modification. The Max Planck Institute for Evolutionary Biology has suggested that current US research into genetically modified insects for crop protection, which uses infectious diseases to spread genetic modifications to crops en masse, could lead to the creation of genetically modified insects for use in warfare.
| Technology | Weapon of mass destruction | null |
23661260 | https://en.wikipedia.org/wiki/Zygosity | Zygosity | Zygosity (the noun zygote derives from the Greek for "yoked") is the degree to which both copies of a chromosome or gene have the same genetic sequence. In other words, it is the degree of similarity of the alleles in an organism.
Most eukaryotes have two matching sets of chromosomes; that is, they are diploid. Diploid organisms have the same loci on each of their two sets of homologous chromosomes except that the sequences at these loci may differ between the two chromosomes in a matching pair and that a few chromosomes may be mismatched as part of a chromosomal sex-determination system. If both alleles of a diploid organism are the same, the organism is homozygous at that locus. If they are different, the organism is heterozygous at that locus. If one allele is missing, it is hemizygous, and, if both alleles are missing, it is nullizygous.
The DNA sequence of a gene often varies from one individual to another. These gene variants are called alleles. While some genes have only one allele because there is low variation, others have only one allele because deviation from that allele can be harmful or fatal. But most genes have two or more alleles. The frequency of different alleles varies throughout the population. Some genes may have alleles with equal distributions. Often, the different variations in the genes do not affect the normal functioning of the organism at all. For some genes, one allele may be common, and another allele may be rare. Sometimes, one allele is a disease-causing variation while another allele is healthy.
In diploid organisms, one allele is inherited from the male parent and one from the female parent. Zygosity is a description of whether those two alleles have identical or different DNA sequences. In some cases the term "zygosity" is used in the context of a single chromosome.
Types
The words homozygous, heterozygous, and hemizygous are used to describe the genotype of a diploid organism at a single locus on the DNA. Homozygous describes a genotype consisting of two identical alleles at a given locus, heterozygous describes a genotype consisting of two different alleles at a locus, hemizygous describes a genotype consisting of only a single copy of a particular gene in an otherwise diploid organism, and nullizygous refers to an otherwise-diploid organism in which both copies of the gene are missing.
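To make the four terms concrete, here is a minimal Python sketch that classifies a single locus; the two-symbol representation (with None standing for a missing copy) and the function name are simplifications invented for this illustration, not a standard genetics API.

```python
from typing import Optional

def zygosity(copy1: Optional[str], copy2: Optional[str]) -> str:
    """Classify a locus from its two (possibly missing) allele copies."""
    present = [allele for allele in (copy1, copy2) if allele is not None]
    if not present:
        return "nullizygous"  # both copies missing
    if len(present) == 1:
        return "hemizygous"   # only a single copy present
    return "homozygous" if present[0] == present[1] else "heterozygous"

print(zygosity("P", "P"))    # homozygous
print(zygosity("P", "p"))    # heterozygous
print(zygosity("P", None))   # hemizygous
print(zygosity(None, None))  # nullizygous
```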
Homozygous
A cell is said to be homozygous for a particular gene when identical alleles of the gene are present on both homologous chromosomes.
An individual that is homozygous-dominant for a particular trait carries two copies of the allele that codes for the dominant trait. This allele, often called the "dominant allele", is normally represented by the uppercase form of the letter used for the corresponding recessive trait (such as "P" for the dominant allele producing purple flowers in pea plants). When an organism is homozygous-dominant for a particular trait, its genotype is represented by a doubling of the symbol for that trait, such as "PP".
An individual that is homozygous-recessive for a particular trait carries two copies of the allele that codes for the recessive trait. This allele, often called the "recessive allele", is usually represented by the lowercase form of the letter used for the corresponding dominant trait (such as, with reference to the example above, "p" for the recessive allele producing white flowers in pea plants). The genotype of an organism that is homozygous-recessive for a particular trait is represented by a doubling of the appropriate letter, such as "pp".
Heterozygous
A diploid organism is heterozygous at a gene locus when its cells contain two different alleles (one wild-type allele and one mutant allele) of a gene. The cell or organism is called a heterozygote specifically for the allele in question, and therefore, heterozygosity refers to a specific genotype. Heterozygous genotypes are represented by an uppercase letter (representing the dominant/wild-type allele) and a lowercase letter (representing the recessive/mutant allele), as in "Rr" or "Ss". Alternatively, a heterozygote for gene "R" is assumed to be "Rr". The uppercase letter is usually written first.
If the trait in question is determined by simple (complete) dominance, a heterozygote will express only the trait coded by the dominant allele, and the trait coded by the recessive allele will not be present. In more complex dominance schemes the results of heterozygosity can be more complex.
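As a small worked example of this notation, the following sketch enumerates a hypothetical Rr × Rr monohybrid cross and applies the complete-dominance rule just described; the helper names are invented for illustration, not taken from any genetics library.

```python
from collections import Counter
from itertools import product

def cross(parent1: str, parent2: str) -> Counter:
    """Punnett square: count offspring genotypes of a monohybrid cross."""
    offspring = (
        "".join(sorted(pair, key=lambda allele: (allele.lower(), allele)))
        for pair in product(parent1, parent2)
    )
    return Counter(offspring)

def phenotype(genotype: str) -> str:
    """Under complete dominance, one uppercase (dominant) allele suffices."""
    return "dominant" if any(a.isupper() for a in genotype) else "recessive"

for genotype, count in cross("Rr", "Rr").items():
    print(genotype, count, phenotype(genotype))
# RR 1 dominant
# Rr 2 dominant
# rr 1 recessive  -> the classic 3:1 phenotype ratio
```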
A heterozygous genotype can have a higher relative fitness than either the homozygous-dominant or homozygous-recessive genotype – this is called a heterozygote advantage.
Hemizygous
A chromosome in a diploid organism is hemizygous when only one copy is present. The cell or organism is called a hemizygote. Hemizygosity is also observed when one copy of a gene is deleted, or, in the heterogametic sex, when a gene is located on a sex chromosome. Hemizygosity is not the same as haploinsufficiency, which describes a mechanism for producing a phenotype. For organisms in which the male is heterogametic, such as humans, almost all X-linked genes are hemizygous in males with normal chromosomes, because they have only one X chromosome and few of the same genes are on the Y chromosome. Transgenic mice generated through exogenous DNA microinjection of an embryo's pronucleus are also considered to be hemizygous, because the introduced allele is expected to be incorporated into only one copy of any locus. A transgenic individual can later be bred to homozygosity and maintained as an inbred line to reduce the need to confirm the genotype of each individual.
In cultured mammalian cells, such as the Chinese hamster ovary cell line, a number of genetic loci are present in a functional hemizygous state, due to mutations or deletions in the other alleles.
Nullizygous
A nullizygous organism carries two mutant alleles for the same gene. The mutant alleles are both complete loss-of-function or 'null' alleles, so homozygous null and nullizygous are synonymous. The mutant cell or organism is called a nullizygote.
Autozygous and allozygous
Zygosity may also refer to the origin(s) of the alleles in a genotype. When the two alleles at a locus originate from a common ancestor by way of nonrandom mating (inbreeding), the genotype is said to be autozygous. This is also known as being "identical by descent", or IBD. When the two alleles come from different sources (at least to the extent that the descent can be traced), the genotype is called allozygous. This is known as being "identical by state", or IBS.
Because the alleles of autozygous genotypes come from the same source, they are always homozygous, but allozygous genotypes may be homozygous too. Heterozygous genotypes are often, but not necessarily, allozygous because different alleles may have arisen by mutation some time after a common origin. Hemizygous and nullizygous genotypes do not contain enough alleles to allow for comparison of sources, so this classification is irrelevant for them.
Monozygotic and dizygotic twins
As discussed above, "zygosity" can be used in the context of a specific genetic locus. The word may also be used to describe the genetic similarity or dissimilarity of twins. Identical twins are monozygotic, meaning that they develop from one zygote that splits and forms two embryos. Fraternal twins are dizygotic because they develop from two separate oocytes (egg cells) that are fertilized by two separate sperm. Sesquizygotic twins are halfway between monozygotic and dizygotic and are believed to arise when two sperm fertilize a single oocyte, which subsequently splits into two morulae.
Medicine and disease
Zygosity is an important factor in human medicine. If one copy of an essential gene is mutated, the (heterozygous) carrier is usually healthy. However, more than 1,000 human genes appear to require both copies, that is, a single copy is insufficient for health. This is called haploinsufficiency. For instance, a single copy of the Kmt5b gene leads to haploinsufficiency and results in a skeletal muscle developmental deficit.
Heterozygosity in population genetics
In population genetics, the concept of heterozygosity is commonly extended to refer to the population as a whole, i.e., the fraction of individuals in a population that are heterozygous for a particular locus. It can also refer to the fraction of loci within an individual that are heterozygous.
In an admixed population, whose members derive ancestry from two or more separate sources, heterozygosity is provably at least as great as that of the least heterozygous source population, and it can exceed that of every source population. It reflects the contributions of the population's multiple ancestral groups: admixed populations show high levels of genetic variation because they fuse source populations carrying different genetic variants.
Typically, the observed ($H_O$) and expected ($H_E$) heterozygosities are compared, defined as follows for diploid individuals in a population:
Observed: $H_O = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}(a_{i1} \neq a_{i2})$
where $n$ is the number of individuals in the population, $a_{i1}$ and $a_{i2}$ are the alleles of individual $i$ at the target locus, and $\mathbf{1}(\cdot)$ equals 1 when its argument is true and 0 otherwise.
Expected: $H_E = 1 - \sum_{j=1}^{m} f_j^2$
where $m$ is the number of alleles at the target locus, and $f_j$ is the frequency of the $j$th allele at the target locus.
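A minimal sketch of both quantities in Python; the sample genotypes are hypothetical values invented purely to exercise the formulas.

```python
from collections import Counter

def observed_heterozygosity(genotypes):
    """H_O: the fraction of diploid individuals whose two alleles differ."""
    return sum(a1 != a2 for a1, a2 in genotypes) / len(genotypes)

def expected_heterozygosity(genotypes):
    """H_E: one minus the sum of squared allele frequencies at the locus."""
    alleles = [allele for pair in genotypes for allele in pair]
    total = len(alleles)
    return 1.0 - sum((count / total) ** 2 for count in Counter(alleles).values())

# Hypothetical sample: each tuple holds one individual's two alleles at a locus.
sample = [("A", "A"), ("A", "a"), ("a", "a"), ("A", "a"), ("A", "A")]
print(observed_heterozygosity(sample))  # 0.4  (2 of 5 individuals are heterozygous)
print(expected_heterozygosity(sample))  # 0.48 (1 - 0.6**2 - 0.4**2)
```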
| Biology and health sciences | Genetics | Biology |
23661523 | https://en.wikipedia.org/wiki/Vachellia | Vachellia | Vachellia is a genus of flowering plants in the legume family, Fabaceae, commonly known as thorn trees or acacias. It belongs to the subfamily Mimosoideae. Its species were considered members of the genus Acacia until 2009. Vachellia can be distinguished from other acacias by its capitate inflorescences and spinescent stipules. Before the discovery of the New World, Europeans in the Mediterranean region were familiar with several species of Vachellia, which they knew as sources of medicine, and had names for them that they inherited from the Greeks and Romans.
The wide-ranging genus occurs in a variety of open, tropical to subtropical habitats, and is locally dominant. In parts of Africa, Vachellia species are shaped progressively by grazing animals of increasing size and height, such as gazelle, gerenuk, and giraffe. The genus in Africa has thus developed thorns in defence against such herbivory.
Nomenclature
By 2005, taxonomists had decided that Acacia sensu lato should be split into at least five separate genera. The ICN dictated that under these circumstances the name Acacia should remain with the original type, which was Acacia nilotica. However, that year the General Committee of the IBC decided that Acacia should be given a new type (Acacia penninervis) so that the ~920 species of Australian acacias would not need to be renamed Racosperma. This decision was opposed by 54.9% (247) of the representatives at the 2005 congress, while 45.1% (203 votes) were cast in favor; since a 60% majority was required to overturn the committee's decision, it stood, and a nom. cons. propositum was listed in Appendix III (p. 286). The 2011 congress voted 373 to 172 to uphold the 2005 decision, which means that the name Acacia and its new type follow the majority of the species in Acacia sensu lato, rather than this genus. However, some members of the botanical community remain unconvinced, and the use of Acacia in the scientific literature continues to exceed the use of the new generic names.
Description
The members of Vachellia are trees or shrubs, sometimes climbing, and are always armed. Younger plants, especially, are armed with spines which are modified stipules, situated near the leaf bases. Some (cf. V. tortilis, V. luederitzii and V. reficiens) are also armed with paired, recurved prickles (in addition to the spines). The leaves are alternate and bipinnately arranged, and their pinnae are usually opposite. The racemose inflorescences usually grow from the leaf axils. The yellow or creamy white flowers are produced in spherical heads, or seldom in elongate spikes, which is the general rule in the related genus Senegalia. The flowers are typically bisexual with numerous stamens, but unisexual flowers have been noted in V. nilotica (cf. Sinha, 1971). The calyx and corolla are usually 4 to 5-lobed. Glands are usually present on the rachis and the upper side of the petiole. The seed pod may be straight, curved or curled, and either dehiscent or indehiscent.
Species list
Of the 163 species currently assigned to Vachellia, 52 are native to the Americas, 83 to Africa, Madagascar and the Mascarene Islands, 32 to Asia and 9 to Australia and the Pacific Islands. Vachellia comprises the following species:
Vachellia abyssinica (Hochst. ex. Benth.) Kyal. & Boatwr.—flat top acacia
subsp. abyssinica (Hochst. ex. Benth.) Kyal. & Boatwr.
subsp. calophylla (Brenan) Kyal. & Boatwr.
Vachellia acuifera (Benth.) Seigler & Ebinger—Bahama acacia, cassip, pork-and-doughboy, (Bahamas) rosewood
Vachellia albicortata (Burkart) Seigler & Ebinger
Vachellia allenii (D. H. Janzen) Seigler & Ebinger—Allen acacia
Vachellia amythethophylla (Steud. ex A.Rich.) Kyal. & Boatwr.
Vachellia ancistroclada (Brenan) Kyal. & Boatwr.
Vachellia anegadensis (Britton) Seigler & Ebinger—Anegada acacia, blackbrush-wattle, pokemeboy
Vachellia antunesii (Harms) Kyal. & Boatwr.
Vachellia arenaria (Schinz) Kyal. & Boatwr.—sand acacia
Vachellia aroma (Gillies ex Hook. & Arn.) Seigler & Ebinger
var. aroma Gillies ex Hook. & Arn.
var. huarango Ruíz & J.Macbr.
Vachellia astringens (Gillies in Hook. et Arn.) Speg.
Vachellia baessleri Clarke, Siegler & Ebinger
Vachellia barahonensis (Urb. & Ekman) Seigler & Ebinger
Vachellia bavazzanoi (Pichi-Sermolli) Kyal. & Boatwr.
Vachellia belairioides (Urb.) Seigler & Ebinger—Bellair acacia
Vachellia bellula (Drake) Boatwr.
Vachellia biaciculata (S. Watson) Seigler & Ebinger
Vachellia bidwillii (Benth.) Kodela—corkwood wattle, dogwood. "'Waneu', of the aboriginals of Central Queensland; 'Yadthor', of those of the Cloncurry River, Northern Queensland."
Vachellia bilimekii (J. Macbr.) Seigler & Ebinger
Vachellia bolei (R.P. Subhedar) Ragupathy, Seigler, Ebinger & Maslin
Vachellia borleae (Burtt Davy) Kyal. & Boatwr.—sticky acacia, named for the collector Jeanne M. Borle.
Vachellia brandegeana (I. M. Johnst.) Seigler & Ebinger—Baja California acacia
Vachellia bravoensis (Isely) Seigler & Ebinger
Vachellia bricchettiana (Chiov.) Kyal. & Boatwr.
Vachellia bucheri (Marie-Victorín) Seigler & Ebinger—Bucher acacia
Vachellia bullockii (Brenan) Kyal. & Boatwr.
var. bullockii (Brenan) Kyal. & Boatwr.
var. induta (Brenan) Kyal. & Boatwr.
Vachellia burttii (Bak. f.) Kyal. & Boatwr.
Vachellia bussei (Harms ex Sjöstedt) Kyal. & Boatwr.
Vachellia californica (Brandegee) Seigler & Ebinger
Vachellia campechiana (Mill.) Seigler & Ebinger—boatthorn acacia, spoon-thorn acacia
f. campechiana (Mill.) Seigler & Ebinger
f. houghii (Britton & Rose) Seigler & Ebinger
Vachellia caurina (Barneby & Zanoni) Seigler & Ebinger
Vachellia caven (Molina) Seigler & Ebinger
var. caven (Molina) Seigler & Ebinger
var. dehiscens (Ciald.) Seigler & Ebinger
var. microcarpa (Speg.) Seigler & Ebinger
var. stenocarpa (Speg.) Seigler & Ebinger
Vachellia cernua (Thulin & Hassan) Kyal. & Boatwr.
Vachellia chiapensis (Saff.) Seigler & Ebinger—Chiapas acacia
Vachellia choriophylla (Benth.) Seigler & Ebinger—cinnecord acacia, Florida acacia, (Bahamas) cinnecord
Vachellia clarksoniana (Pedley) Kodela
Vachellia collinsii (Saff.) Seigler & Ebinger—Collins acacia
Vachellia constricta (Benth.) Seigler & Ebinger—Whitethorn acacia, Mescat acacia
Vachellia cookii (Saff.) Seigler & Ebinger—Cook acacia, cockspur acacia
Vachellia cornigera (L.) Seigler & Ebinger—bullhorn wattle, bull's-horn acacia, bull-horn thorn, oxhorn acacia
Vachellia cucuyo (Barneby & Zanoni) Seigler & Ebinger—Cucuyo acacia
Vachellia curvifructa (Burkart) Seigler & Ebinger
Vachellia daemon (Ekman & Urb.) Seigler & Ebinger—Camagüey acacia
Vachellia davyi (N.E.Br.) Kyal. & Boatwr.—corky-bark acacia
Vachellia ditricha (Pedley) Kodela
Vachellia dolichocephala (Harms) Kyal. & Boatwr.
Vachellia douglasica (Pedley) Kodela
Vachellia drepanolobium (Harms ex Sjöstedt) P.J.H. Hurter—whistling thorn
Vachellia dyeri (P.P.Swartz) Kyal. & Boatwr.
Vachellia eburnea (L.f.) P. Hurter & Mabb.
Vachellia ebutsiniorum (P.J.H. Hurter) Kyal. & Boatwr.—Ebutsini acacia
Vachellia edgeworthii (T.Anders.) Kyal. & Boatwr.
Vachellia elatior (Brenan) Kyal. & Boatwr.
subsp. elatior (Brenan) Kyal. & Boatwr.
subsp. turkanae (Brenan) Kyal. & Boatwr.
Vachellia erioloba (E.Mey.) P.J.H. Hurter—camel thorn (Kameeldoring)
Vachellia erythrophloea (Brenan) Kyal. & Boatwr.
Vachellia etbaica (Schweinf.) Kyal. & Boatwr.—savannah thorn
subsp. australis (Brenan) Kyal. & Boatwr.
subsp. etbaica (Schweinf.) Kyal. & Boatwr.
subsp. platycarpa (Brenan) Kyal. & Boatwr.
subsp. uncinata (Brenan) Kyal. & Boatwr.
Vachellia exuvialis (Verdoorn) Kyal. & Boatwr.—flaky-bark acacia
Vachellia farnesiana (L.) Wight & Arn.—huisache
var. farnesiana (L.) Wight & Arn.
var. guanacastensis (H.D.Clarke et al.) Wight & Arn.
var. minuta (M.E.Jones) Wight & Arn.
var. pinetorum (F.J.Herm.) Wight & Arn.—pineland wattle
Vachellia fischeri (Harms) Kyal. & Boatwr.—flat-topped thorn
Vachellia flava (Forssk.) Kyal. & Boatwr.
Vachellia gentlei (Standley) Seigler & Ebinger—gentle acacia
Vachellia gerrardii (Benth.) P.J.H. Hurter—red acacia
var. calvescens (Brenan) P.J.H. Hurter
var. gerrardii (Benth.) P.J.H. Hurter
var. latisiliqua (Brenan) P.J.H. Hurter
Vachellia glandulifera (S. Watson) Seigler & Ebinger
Vachellia globulifera (Saff.) Seigler & Ebinger—globular acacia
Vachellia grandicornuta (Gerstner) Seigler & Ebinger—horned-thorn acacia
Vachellia guanacastensis (Clark, Seigler, & Ebinger) Seigler & Ebinger
Vachellia gummifera (Willd.) Kyal. & Boatwr.—gum-bearing acacia
Vachellia haematoxylon (Willd.) Seigler & Ebinger—gray camel thorn, giraffe thorn
Vachellia harmandiana (Pierre) Maslin, Seigler & Ebinger
Vachellia hebeclada (DC.) Kyal. & Boatwr.—candle-pod acacia
subsp. chobiensis (O.B.Miller) Kyal. & Boatwr.
subsp. hebeclada (DC.) Kyal. & Boatwr.
subsp. tristis (A.Schreiber) Kyal. & Boatwr.
Vachellia hindsii (Benth.) Seigler & Ebinger—Hinds acacia
Vachellia hockii (De Wild.) Seigler & Ebinger
Vachellia horrida (L.) Kyal. & Boatwr.—long white-galled acacia
subsp. benadirensis (Chiov.) Kyal. & Boatwr.
subsp. horrida (L.) Kyal. & Boatwr.
Vachellia hydaspica (J.R. Drumm. ex R. Parker) Ali
Vachellia inopinata (Prain) Maslin, Seigler & Ebinger
Vachellia insulae-iacobi (L. Riley) Seigler & Ebinger
Vachellia jacquemontii (Benth.) Ali—baonḷī, raati-banwali
Vachellia janzenii (Ebinger & Seigler) Seigler & Ebinger—Janzen acacia
Vachellia karroo (Hayne) Banfi & Galasso—Karroo Bush
Vachellia kingii (Prain) Maslin, Seigler & Ebinger
Vachellia kirkii (Oliv.) Kyal. & Boatwr.—flood plain acacia
subsp. kirkii (Oliv.) Kyal. & Boatwr.
var. kirkii (Oliv.) Kyal. & Boatwr.
var. sublaevis (Brenan) Kyal. & Boatwr.
subsp. mildbraedii (Harms) Kyal. & Boatwr.
Vachellia koltermanii R. García, M. Mejía, Ebinger, & Seigler
Vachellia kosiensis (P.P.Sw. ex Coates Palgr.) Kyal. & Boatwr.—dune acacia, dune sweet-thorn
Vachellia lahai (Steud. & Hochst. ex. Benth.) Kyal. & Boatwr.—red-thorn acacia
Vachellia lasiopetala (Oliv.) Kyal. & Boatwr.
Vachellia latispina (J.E.Burrows & S.M.Burrows) Kyal. & Boatwr.
Vachellia leucophloea (Roxb.) Maslin, Seigler & Ebinger—pilang
var. leucophloea (Roxb.) Maslin, Seigler & Ebinger
var. microcephala (Kurz) Maslin, Seigler & Ebinger
Vachellia leucospira (Brenan) Kyal. & Boatwr.
Vachellia luederitzii (Engl.) Kyal. & Boatwr.—bastard umbrella thorn
var. luederitzii (Engl.) Kyal. & Boatwr.—Kalahari-sand acacia
var. retinens (Sim) Kyal. & Boatwr.—balloon-thorn acacia
Vachellia macracantha (Humb. & Bonpl. ex Willd.) Seigler & Ebinger—longspine acacia, French casha, long-spine acacia, porknut, cambrón, long-spined acacia, (Jamaica) parknut, (Virgin Islands) wild tamarind, (Netherlands Antilles) Creole casha, Spanish casha, steel acacia, (Virgin Islands) stink casha, strink casha
Vachellia macrothyrsa (Harms) Kyal. & Boatwr.
Vachellia malacocephala (Harms) Kyal. & Boatwr.—black-galled acacia
Vachellia mayana (Lundell) Seigler & Ebinger—Maya Acacia
Vachellia mbuluensis (Brenan) Kyal. & Boatwr.—hairy-galled acacia
Vachellia melanoceras (Beurl.) Seigler & Ebinger—blackthorn acacia, bullhorn acacia
Vachellia montana (P.P.Swartz) Kyal. & Boatwr.
Vachellia myaingii (Lace) Maslin, Seigler & Ebinger
Vachellia myrmecophila (R.Vig.) Boatwr.
Vachellia natalitia (E.Mey.) Kyal. & Boatwr.—pale-bark acacia, pale-bark sweet thorn
Vachellia nebrownii (Burtt Davy) Seigler & Ebinger—water acacia, water thorn
Vachellia negrii (Pichi-Sermolli) Kyal. & Boatwr.
Vachellia nilotica (L.) P.J.H. Hurter & Mabb.—scented-pod acacia, gum Arabic tree, babul, Amrad gum, thorny mimosa of India
subsp. adstringens (Schumach. & Thonn.) P.J.H. Hurter & Mabb.
subsp. cupressiformis (J.L.Stewart) P.J.H. Hurter & Mabb.
subsp. hemispherica (Ali & Faruqi) P.J.H. Hurter & Mabb.
subsp. indica (Benth.) P.J.H. Hurter & Mabb.—Babul, Prickly acacia
subsp. kraussiana (Benth.) P.J.H. Hurter & Mabb.
subsp. leiocarpa (Brenan) P.J.H. Hurter & Mabb.
subsp. nilotica (L.) P.J.H. Hurter & Mabb.
subsp. subalata (Vatke) P.J.H. Hurter & Mabb.
subsp. tomentosa (Benth.) P.J.H. Hurter & Mabb.
Vachellia nubica (Benth.) Kyal. & Boatwr.
Vachellia oerfota (Forssk) Kyal. & Boatwr.
var. brevifolia (Boulos) Kyal. & Boatwr.
var. oerfota (Forssk) Kyal. & Boatwr.
Vachellia origena (Hunde) Kyal. & Boatwr.
Vachellia ormocarpoides (P.J.H. Hurter) Kyal. & Boatwr.—Leolo thorn
Vachellia oviedoensis (R. García & M. Mejía) Seigler & Ebinger
Vachellia pacensis (Rudd & Carter) Seigler & Ebinger
Vachellia pachyphloia (W. Fitzg.) Kodela
subsp. brevipinnula (Tindale & Kodela) Kodela
subsp. pachyphloia (W. Fitzg.) Kodela
Vachellia pallidifolia (Tindale) Kodela
Vachellia paolii (Chiov.) Kyal. & Boatwr.
subsp. paolii (Chiov.) Kyal. & Boatwr.
subsp. paucijuga (Brenan) Kyal. & Boatwr.
Vachellia pennatula (Schltdl. & Cham.) Seigler & Ebinger—feather acacia
var. parvicephala Seigler & Ebinger
var. pennatula (Schltdl. & Cham.) Seigler & Ebinger
Vachellia permixta (Burtt Davy) Kyal. & Boatwr.—slender acacia
Vachellia pilispina (Pichi-Sermolli) Kyal. & Boatwr.—mpande
Vachellia polypyrigenes (Greenm.) Seigler & Ebinger
Vachellia prasinata (Hunde) Kyal. & Boatwr.
Vachellia pringlei (Rose) Seigler & Ebinger—Pringle acacia
var. californica—California Pringle acacia
var. pringlei (Rose) Seigler & Ebinger—typical Pringle acacia
Vachellia pseudofistula (Harms) Kyal. & Boatwr.—ant-galled acacia
Vachellia qandalensis (Thulin) Kyal. & Boatwr.
Vachellia quintanilhae (Torre) Kyal. & Boatwr.
Vachellia reficiens (Wawra) Kyal. & Boatwr.—red-bark acacia
subsp. misera (Vatke) Kyal. & Boatwr.
subsp. reficiens (Wawra) Kyal. & Boatwr.
Vachellia rehmanniana (Schinz) Kyal. & Boatwr.—silky acacia
Vachellia rigidula (Benth.) Seigler & Ebinger—blackbrush acacia, blackbrush
Vachellia robbertsei (P.P.Swartz) Kyal. & Boatwr.—Sekhukhune acacia
Vachellia robusta (Burch.) Kyal. & Boatwr.—splendid acacia
subsp. clavigera (E.Mey.) Kyal. & Boatwr.—river acacia
subsp. robusta (Burch.) Kyal. & Boatwr.—robust acacia
subsp. usambarensis (Taub.) Kyal. & Boatwr.
Vachellia roigii (Léon) Seigler & Ebinger—Roig acacia
Vachellia rorudiana (Christopherson) Seigler & Ebinger—Galapagos acacia
Vachellia ruddiae (D. H. Janzen) Seigler & Ebinger—Rudd acacia
Vachellia schaffneri (S. Watson) Seigler & Ebinger—Schaffner's wattle, twisted acacia
var. bravoensis (Isely) Seigler & Ebinger
var. schaffneri (S. Watson) Seigler & Ebinger
Vachellia schottii (Torr.) Seigler & Ebinger—Schott's wattle
Vachellia sekhukhuniensis (P.J.H. Hurter) Kyal. & Boatwr.—Sekhukhune thorn
Vachellia seyal (Delile) P.J.H. Hurter—white whistling thorn
var. fistula (Schweinf.) P.J.H. Hurter
var. seyal (Delile) P.J.H. Hurter
Vachellia siamensis (Craib) Maslin, Seigler & Ebinger
Vachellia sieberiana (DC.) Kyal. & Boatwr.—longpod thorn, false paperbark thorn
var. sieberiana (DC.) Kyal. & Boatwr.
var. villosa (A.Chev.) Kyal. & Boatwr.
var. woodii (Burtt Davy) Kyal. & Boatwr.—paperbark acacia, paperbark thorn
Vachellia sphaerocephala (Schltdl. & Cham.) Seigler & Ebinger—roundhead acacia, bee wattle
Vachellia stuhlmannii (Taub.) Kyal. & Boatwr.—olive-barked thorn, vlei acacia
Vachellia suberosa (A. Cunn. ex Benth.) Kodela—corkybark wattle
Vachellia sutherlandii (F. Muell.) Kodela—corkwood wattle
Vachellia swazica (Burtt Davy) Kyal. & Boatwr.—Swazi acacia
Vachellia tenuispina (Verdoorn) Kyal. & Boatwr.—turf acacia
Vachellia tephrophylla (Thulin) Kyal. & Boatwr.
Vachellia theronii (P.P.Sw.) Boatwr.—slender mountain thorn
Vachellia tomentosa (Rottler) Maslin, Seigler & Ebinger—klampis
Vachellia torrei (Brenan) Kyal. & Boatwr.—Mozambique sticky thorn
Vachellia tortilis (Forssk.) Galasso & Banfi—umbrella thorn, umbrella acacia
subsp. heteracantha (Burch.) Galasso & Banfi
subsp. raddiana (Savi) Galasso & Banfi
var. pubescens (A.Chev.) Galasso & Banfi
var. raddiana (Savi) Galasso & Banfi
subsp. spirocarpa (Hochst. ex A.Rich.) Galasso & Banfi
var. crinita (Chiov.) Galasso & Banfi
var. spirocarpa (Hochst. ex. A.Rich.) Galasso & Banfi
subsp. tortilis (Forssk.) Galasso & Banfi
Vachellia tortuosa (L.) Seigler & Ebinger—twisted acacia, acacia bush, casia, catclaw, Dutch casha, huisachillo, Rio Grande acacia, sweet briar, sweet-briar, wild poponax
Vachellia turnbulliana (Brenan) Kyal. & Boatwr.—velvet pod acacia
Vachellia valida (Tindale & Kodela) Kodela
Vachellia vernicosa (Britton & Rose) Seigler & Ebinger—viscid acacia
Vachellia viguieri (Villiers & Du Puy) Boatwr.
Vachellia villaregalis (McVaugh) Seigler & Ebinger
Vachellia walwalensis (Gilliland) Kyal. & Boatwr.
Vachellia xanthophloea (Benth.) P.J.H. Hurter—fever tree
Vachellia zanzibarica (S.Moore) Kyal. & Boatwr.—coastal whistling thorn
var. microphylla (Brenan) Kyal. & Boatwr.
var. zanzibarica (S.Moore) Kyal. & Boatwr.
Vachellia zapatensis (Urb. & Ekman) Seigler & Ebinger
Incertae sedis
These species are suspected to belong to Vachellia, but have not been formally transferred.
Acacia callicoma Meisn.
Acacia harala Thulin & Gifri
Acacia hunteri Oliv.
Acacia johnwoodii Boulos
Acacia planifrons Koenig ex Wight & Arn.
Acacia pseudo-eburnea J.R. Drumm. ex Dunn
Acacia tanjorensis Ragup., Thoth. & Mahad.
Acacia yemenensis Boulos
Hybrids
Vachellia × cedilloi (Rico Arce) Seigler & Ebinger
Vachellia campechiana × pennatula
Vachellia erioloba × haematoxylon
Vachellia × gladiata (Saff.) Seigler & Ebinger
Vachellia kirkii × seyal
Vachellia macracantha × pennatula
Vachellia seyal var. fistula × xanthophloea
Vachellia × standleyi (Saff.) Seigler & Ebinger
| Biology and health sciences | Fabales | Plants |
23664612 | https://en.wikipedia.org/wiki/Halide%20mineral | Halide mineral | Halide minerals are those minerals with a dominant halide anion (F−, Cl−, Br− or I−). Complex halide minerals may also have polyatomic anions.
Examples include the following:
Atacamite
Avogadrite (K,Cs)BF4
Bararite (β)
Bischofite
Brüggenite
Calomel
Carnallite
Cerargyrite/Horn silver AgCl
Chlorargyrite AgCl, bromargyrite AgBr, and iodargyrite AgI
Cryolite
Cryptohalite (α)
Dietzeite
Eglestonite
Embolite AgCl+AgBr
Eriochalcite
Fluorite
Halite NaCl
Lautarite
Marshite CuI
Miersite AgI
Nantokite CuCl
Sal Ammoniac
Sylvite KCl
Terlinguaite
Tolbachite
Villiaumite NaF
Yttrocerite (Ca,Y,Ce)F2
Yttrofluorite (Ca,Y)F2
Zavaritskite (BiO)F
Many of these minerals are water-soluble and are often found in arid areas in crusts and other deposits, as are various borates, nitrates, iodates, bromates and the like. Others, such as the fluorite group, are not water-soluble. As a collective whole, simple halide minerals (containing fluorine through iodine, alkali metals, alkaline earth metals, and other metals/cations) occur abundantly at the surface of the Earth in a variety of geologic settings. More complex minerals, as shown below, are also found.
Commercially significant halide minerals
Two commercially important halide minerals are halite and fluorite. The former is a major source of sodium chloride, in parallel with sodium chloride extracted from sea water or brine wells. Fluorite is a major source of hydrogen fluoride, complementing the supply obtained as a byproduct of fertilizer production. Carnallite and bischofite are important sources of magnesium. Natural cryolite was historically required for the production of aluminium; however, most cryolite used today is produced synthetically.
Many of the halide minerals occur in marine evaporite deposits. Other geologic occurrences include arid environments such as deserts. The Atacama Desert has large quantities of halide minerals as well as chlorates, iodates, oxyhalides, nitrates, borates and other water-soluble minerals. Not only do those minerals occur in subsurface geologic deposits, they also form crusts on the Earth's surface due to the low rainfall (the Atacama is the world's driest desert as well as one of the oldest at 25 million years of age).
Nickel–Strunz Classification -03- Halides
IMA-CNMNC proposes a new hierarchical scheme (Mills et al., 2009). This list uses the Classification of Nickel–Strunz (mindat.org, 10 ed, pending publication).
Abbreviations
REE: rare-earth element (Sc, Y, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu)
PGE: platinum-group element (Ru, Rh, Pd, Os, Ir, Pt)
* : discredited (IMA/CNMNC status)
? : questionable/doubtful (IMA/CNMNC status)
Regarding 03.C Aluminofluorides, 06 Borates, 08 Vanadates (04.H V[5,6] Vanadates), 09 Silicates:
neso-: insular (from Greek nēsos, "island")
soro-: grouped (from Greek sōros, "heap, pile, mound")
cyclo-: ringed (from Greek kyklos, "circle")
ino-: chained (from Greek is, genitive inos, "fibre")
phyllo-: sheeted (from Greek phyllon, "leaf")
tecto-: of three-dimensional framework (from Greek tektonikos, "of building")
Nickel–Strunz code scheme NN.XY.##x
NN: Nickel–Strunz mineral class number
X: Nickel–Strunz mineral division letter
Y: Nickel–Strunz mineral family letter
##x: Nickel–Strunz mineral/group number; x an add-on letter
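As an illustrative sketch of this scheme, the snippet below splits a code such as 03.DA.10a into its parts with a regular expression; the field names are labels chosen here for readability and are not part of the official scheme.

```python
import re

# NN = class number, X = division letter, Y = family letter,
# ## = mineral/group number, x = optional add-on letter.
STRUNZ = re.compile(
    r"^(?P<mineral_class>\d{2})\."
    r"(?P<division>[A-Z])(?P<family>[A-Z])\."
    r"(?P<number>\d{2})(?P<addon>[a-z]?)$"
)

def parse_strunz(code: str) -> dict:
    """Split a Nickel-Strunz code like '03.DA.10a' into named parts."""
    match = STRUNZ.match(code)
    if match is None:
        raise ValueError(f"not a Nickel-Strunz code: {code!r}")
    return match.groupdict()

print(parse_strunz("03.DA.10a"))
# {'mineral_class': '03', 'division': 'D', 'family': 'A', 'number': '10', 'addon': 'a'}
```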
Class: halides
03.A Simple halides, without H2O
03.AA M:X = 1:1, 2:3, 3:5, etc.: Panichiite; 05 Nantokite, 05 Marshite, 05 Miersite; 10 Iodargyrite, 10 Tocornalite; 15 Bromargyrite, 15 Embolite*, 15 Chlorargyrite; 20 Carobbiite, 20 Griceite, 20 Halite, 20 Sylvite, 20 Villiaumite; 25 Sal ammoniac, 25 Lafossaite; 30 Calomel, 30 Kuzminite, 30 Moschelite; 35 Neighborite; 40 Chlorocalcite, 45 Kolarite, 50 Radhakrishnaite; 55 Hephaistosite, 55 Challacolloite
03.AB M:X = 1:2: 05 Tolbachite, 10 Coccinite, 15 Sellaite; 20 Chloromagnesite*, 20 Lawrencite, 20 Scacchite; 25 Frankdicksonite, 25 Fluorite; 30 Tveitite-(Y); 35 Gagarinite-(Y); 35 Zajacite-(Ce)
03.AC M:X = 1:3: 05 Zharchikhite, 10 Molysite; 15 Fluocerite-(Ce), 15 Fluocerite-(La), 20 Gananite
03.B Simple Halides, with H2O
03.BA M:X = 1:1 and 2:3: 05 Hydrohalite, 10 Carnallite
03.BB M:X = 1:2: 05 Eriochalcite, 10 Rokuhnite, 15 Bischofite, 20 Nickelbischofite, 25 Sinjarite, 30 Antarcticite, 35 Tachyhydrite
03.BC M:X = 1:3: 05 Chloraluminite
03.BD Simple Halides with H2O and additional OH: 05 Cadwaladerite, 10 Lesukite, 15 Korshunovskite, 20 Nepskoeite, 25 Koenenite
03.C Complex Halides
03.C: Steropesite, IMA2008-032, IMA2008-039
03.CA Borofluorides: 05 Ferruccite; 10 Avogadrite, 10 Barberiite
03.CB Neso-aluminofluorides: 05 Cryolithionite; 15 Cryolite, 15 Elpasolite, 15 Simmonsite; 20 Colquiriite, 25 Weberite, 30 Karasugite, 35 Usovite; 40 Pachnolite, 40 Thomsenolite; 45 Carlhintzeite, 50 Yaroslavite
03.CC Soro-aluminofluorides: 05 Gearksutite; 10 Acuminite, 10 Tikhonenkovite; 15 Artroeite; 20 Calcjarlite, 20 Jarlite, 20 Jorgensenite
03.CD Ino-aluminofluorides: 05 Rosenbergite, 10 Prosopite
03.CE Phyllo-aluminofluorides: 05 Chiolite
03.CF Tekto-aluminofluorides: 05 Ralstonite, 10 Boldyrevite?, 15 Bogvadite
03.CG Aluminofluorides with CO3, SO4, PO4: 05 Stenonite; 10 Chukhrovite-(Nd), 10 Chukhrovite-(Ce), 10 Chukhrovite-(Y), 10 Meniaylovite; 15 Creedite, 20 Boggildite, 25 Thermessaite
03.CH: 05 Malladrite, 10 Bararite; 15 Cryptohalite, 15 Hieratite; 20 Demartinite, 25 Knasibfite
03.CJ With MX6 complexes; M = Fe, Mn, Cu: 05 Chlormanganokalite, 05 Rinneite; 10 Erythrosiderite, 10 Kremersite; 15 Mitscherlichite, 20 Douglasite, 30 Zirklerite
03.D Oxyhalides, Hydroxyhalides and Related Double Halides
03.DA With Cu, etc., without Pb: 05 Melanothallite; 10a Atacamite, 10a Kempite, 10a Hibbingite, 10b Botallackite, 10b Clinoatacamite, 10b Belloite, 10c Gillardite, 10c Kapellasite, 10c Haydeeite, 10c Paratacamite, 10c Herbertsmithite; 15 Claringbullite, 20 Simonkolleite; 25 Buttgenbachite, 25 Connellite; 30 Abhurite, 35 Ponomarevite; 40 Calumetite, 40 Anthonyite; 45 Khaidarkanite, 50 Bobkingite, 55 Avdoninite, 60 Droninoite
03.DB With Pb, Cu, etc.: 05 Diaboleite, 10 Pseudoboleite, 15 Boleite, 20 Cumengite, 25 Bideauxite, 30 Chloroxiphite, 35 Hematophanite; 40 Asisite, 40 Parkinsonite; 45 Murdochite, 50 Yedlinite
03.DC With Pb (As, Sb, Bi), without Cu: 05 Laurionite, 05 Paralaurionite; 10 Fiedlerite, 15 Penfieldite, 20 Laurelite; 25 Zhangpeishanite, 25 Matlockite, 25 Rorisite, 25 Daubreeite, 25 Bismoclite, 25 Zavaritskite; 30 Nadorite, 30 Perite; 35 Aravaipaite, 37 Calcioaravaipaite, 40 Thorikosite, 45 Mereheadite, 50 Blixite, 55 Pinalite, 60 Symesite; 65 Ecdemite, 65 Heliophyllite; 70 Mendipite, 75 Damaraite, 80 Onoratoite, 85 Cotunnite, 90 Pseudocotunnite, 95 Barstowite
03.DD With Hg: 05 Eglestonite, 05 Kadyrelite; 10 Poyarkovite, 15 Hanawaltite, 20 Terlinguaite, 25 Pinchite; 30 Mosesite, 30 Gianellaite; 35 Kleinite, 40 Tedhadleyite, 45 Vasilyevite, 50 Aurivilliusite, 55 Terlinguacreekite, 60 Kelyanite, 65 Comancheite
03.DE With Rare-Earth Elements: 05 Haleniusite-(La)
03.X Unclassified Strunz Halogenides
03.XX Unknown: 00 Hydrophilite?, 00 Hydromolysite?, 00 Yttrocerite*, 00 Lorettoite?, 00 IMA2009-014, 00 IMA2009-015
| Physical sciences | Minerals | Earth science |
22313860 | https://en.wikipedia.org/wiki/Roystonea%20oleracea | Roystonea oleracea | Roystonea oleracea, sometimes known as the Caribbean royal palm, palmiste, imperial palm or cabbage palm, is a species of palm which is native to the Lesser Antilles, Colombia, Venezuela, and Trinidad and Tobago. It is also reportedly naturalized in Guyana and on the islands of Mauritius and Réunion in the Indian Ocean.
Its specific epithet oleracea means "vegetable" or "herbal" in Latin. The plant's buds were eaten in the West Indies.
Description
Roystonea oleracea is a large palm, the tallest in its genus. Stems are grey or whitish-grey. The upper portion of the stem is encircled by leaf sheaths, forming a green portion known as the crownshaft. Individuals are variously reported to have 16–22 or 20–22 leaves. Leaves are once-pinnate and consist of a long petiole and a rachis. The leaflets are attached to the rachis at various angles, giving the frond a bottlebrush-like appearance. The inflorescence bears white male and female flowers. Fruit turn purplish-black when ripe.
Taxonomy
Roystonea is placed in the subfamily Arecoideae and the tribe Roystoneae. The placement of Roystonea within the Arecoideae is uncertain; a phylogeny based on plastid DNA failed to resolve the position of the genus within the subfamily. As of 2008, there appear to be no molecular phylogenetic studies of Roystonea, and the relationship between R. oleracea and the rest of the genus is uncertain.
The species was first described by Nikolaus von Jacquin in 1763 as Areca oleracea. The epithet oleracea means "vegetable- or herb-like", and is used in botanical Latin for edible or cultivated plants (as in Brassica oleracea or Portulaca oleracea). In 1838, Carl Friedrich Philipp von Martius transferred it to the genus Oreodoxa as O. oleracea. Berthold Carl Seemann transferred it to the genus Kentia in 1868. In 1900 Orator F. Cook proposed a new genus for the royal palms, and moved this species from Oreodoxa to Roystonea the following year.
In 1825 Curt Polycarp Joachim Sprengel described Euterpe caribaea, citing Jacquin's A. oleracea as a synonym. In 1903 Carl Lebrecht Udo Dammer and Ignatz Urban transferred this species to the genus Oreodoxa. Percy Wilson moved it to Roystonea in 1917. Since Sprengel was aware of Jacquin's description, his name is superfluous. Liberty Hyde Bailey described Roystonea venezuelana in 1949 based on a collection by Julian Steyermark. In his 1996 monograph on the genus Roystonea, Scott Zona reported that he was "unable to find any consistent morphological or molecular differences between the two taxa", and placed R. venezuelana in synonymy with R. oleracea.
Based on cultivated plants at the botanical garden in Georgetown, Guyana (then British Guiana), John Frederick Waby described Oreodoxa regia var. jenmanii in 1919. The distinguishing feature of this variety was the fact that it held its lowest leaves at a 45° angle above horizontal. In 1935 Bailey described R. oleracea var. excelsior based on specimens collected from the Georgetown Botanic Gardens. Bailey cited Waby's name as an unpublished synonym, apparently unaware that it was a validly published name. In 1996 Zona coined a new combination, R. oleracea var. jenmannii, to correct Bailey's mistake and update Waby's name. However, he noted that this variety, which was only known from cultivation, did not differ from the typical variety in floral or fruit characters. Rafaël Govaerts merged the variety into synonymy with the typical variety.
Common names
Roystonea oleracea is known as the palmiste in Trinidad and Tobago, the royal palm or cabbage palm in Barbados and chaguaramo or maparó in Venezuela. In Colombia it is known as mapora in Spanish, mapórbot in Jitnu and mapoloboto in Sikuani. It is also called the cabbage tree, palmetto royal, palmier franc and chou palmiste, among other names.
Distribution
Roystonea oleracea is native to Guadeloupe, Dominica and Martinique in the Lesser Antilles, Barbados, Trinidad and Tobago, northern Venezuela and northeastern Colombia. It is naturalised in Antigua, Guyana, Suriname and French Guiana. It often grows in areas that are wet for at least part of the year, such as coastal areas near the sea and gallery forests in seasonally flooded savannas.
Ecology
Roystonea oleracea fruit is an important component of the diet of orange-winged amazon parrots and red-bellied macaws in Nariva Swamp, Trinidad and Tobago. Over the course of a study conducted between 1995 and 1996, R. oleracea fruit was an important element of the diet of both species between June and January, and was their dominant food item from July to November.
Uses
The tallest and "most majestic" royal palm, Roystonea oleracea is often used as an ornamental. The wood can be used for construction. The terminal bud is edible. The sap of young inflorescences can be fermented to produce alcohol. In his 1750 Natural History of Barbados, Griffith Hughes reported that the immature inflorescences could be pickled and eaten as a vegetable.
| Biology and health sciences | Arecales (inc. Palms) | Plants |
22316046 | https://en.wikipedia.org/wiki/Jasminum%20officinale | Jasminum officinale | Jasminum officinale, known as the common jasmine or simply jasmine, is a species of flowering plant in the olive family Oleaceae. It is native to the Caucasus and parts of Asia, also widely naturalized.
It is also known as summer jasmine, poet's jasmine, white jasmine, true jasmine or jessamine, and is particularly valued by gardeners throughout the temperate world for the intense fragrance of its flowers in summer. It is also the national flower of Pakistan.
Description
Jasminum officinale is a vigorous, twining deciduous climber with sharply pointed pinnate leaves and clusters of starry, pure white flowers in summer, which are the source of its heady scent. The leaf has 5 to 9 leaflets.
Etymology
The Latin specific epithet officinale means "useful".
Distribution
It is found in the Caucasus, northern Iran, Afghanistan, Pakistan, the Himalayas, Tajikistan, India, Nepal and western China (Guizhou, Sichuan, Xizang (Tibet), Yunnan). The species is also widely cultivated in many places, and is reportedly naturalized in Spain, France, Italy, Portugal, Romania, Croatia, Bosnia and Herzegovina, Montenegro, Serbia, Algeria, Florida and the West Indies.
Chemical composition
J. officinale has been found to contain alkaloids, coumarins, flavonoids, tannins, terpenoids, glycosides, emodine, leucoanthocyanins, steroids, anthocyanins, phlobatinins, essential oil and saponins.
Garden history
Jasminum officinale is so ancient in cultivation that its country of origin, though somewhere in Central Asia, is not certain. H.L. Li, in The Garden Flowers of China, notes that in the third century CE, jasmines identifiable as J. officinale and J. sambac were recorded among "foreign" plants in Chinese texts, and that in ninth-century Chinese texts J. officinale was said to come from Byzantium. Its Chinese name, Yeh-hsi-ming, is a version of the Persian and Arabic name.
Its entry into European gardens was most likely through the Arab-Norman culture of Sicily, but, as the garden historian John Harvey has said, "surprisingly little is known, historically or archaeologically, of the cultural life of pre-Norman Sicily". In the mid-14th century the Florentine author Boccaccio in his Decameron describes a walled garden in which "the sides of the alleys were all, as it were, walled in with roses white and red and jasmine; insomuch that there was no part of the garden but one might walk there not merely in the morning but at high noon in grateful shade." Jasmine water also features in the story of Salabaetto in the Decameron. Jasminum officinale, "of the household office" where perfumes were distilled, was so thoroughly naturalized that Linnaeus thought it was native to Switzerland. As a garden plant in London it features in William Turner's Names of Herbes, 1548.
Double forms, here as among many flowers, were treasured in the 16th and 17th centuries.
Cultivars
Numerous cultivars have been developed for garden use, often with variegated foliage. The cultivar 'Argenteovariegatum', with cream-white variegation on the leaves, has gained the Royal Horticultural Society's Award of Garden Merit.
Aromatherapy
The essential oil of Jasminum officinale is used in aromatherapy. Jasmine absolute has a heavy, sweet scent valued by perfumers. The flowers release their perfume at dusk, so flowers are picked at night and a tiny amount of oil is obtained from each blossom by solvent extraction. The result is an expensive oil which can be used in low concentrations.
Safety
Jasmine is "generally recognized as safe" (GRAS) as a food ingredient by the U.S. Food and Drug Administration.
It is unknown whether jasmine consumption affects breastmilk, as the safety and efficacy of jasmine in nursing mothers or infants has not been adequately studied. Drinking small amounts of jasmine tea likely are not harmful during nursing.
Allergic reactions to jasmine may occur.
| Biology and health sciences | Lamiales | Plants |
2595791 | https://en.wikipedia.org/wiki/Chaos%20%28genus%29 | Chaos (genus) | Chaos is a genus of single-celled amoeboid organisms in the family Amoebidae. The largest and best-known species, the so-called "giant amoeba" (Chaos carolinensis), can reach lengths up to 5 mm, although most specimens fall between 1 and 3 mm.
Members of this genus closely resemble those of the genus Amoeba and share the same general morphology, producing numerous cylindrical pseudopods, each of which is rounded at the tip. However, while Amoeba have a single nucleus, Chaos can have as many as a thousand. Because of this attribute, C. carolinensis was once placed in the genus Pelomyxa alongside the giant multinucleate amoeba Pelomyxa palustris. Recently, molecular phylogenetic studies of this species have confirmed the view of some earlier researchers that it is more closely related to Amoeba than to Pelomyxa. The species is now placed in the independent genus Chaos, a sister group to Amoeba.
Dietary habits
Chaos species are versatile heterotrophs, able to feed on bacteria, algae, other protists, and even small multicellular invertebrates. Like all Amoebozoa, they take in food by phagocytosis, encircling food particles with their pseudopodia and then enclosing them within a food ball, or vacuole, where they are broken down by enzymes. The cell does not have a mouth or cytostome, nor is there any fixed site on the cell membrane at which phagocytosis normally occurs.
Movement
The cell's membrane, or plasmalemma, is extremely flexible, allowing the organism to change shape from one moment to the next. The cytoplasm within the membrane is conventionally described as having two parts: the internal fluid, or endoplasm, which contains loose granules and food vacuoles, as well as organelles such as nuclei and mitochondria; and a more viscous ectoplasm around the perimeter of the cell, which is relatively clear and contains no conspicuous granules. Like other lobose amoebae, Chaos move by extending pseudopodia. As a new pseudopod is extended, a variable zone of ectoplasm forms at the leading edge and a fountaining stream of endoplasm circulates within. The effort of describing these motions, and explaining how they result in the cell's forward movement, has generated a large body of scientific literature.
Early history and naming controversy
The genus Chaos has had a long and often confusing history. In 1755, Rösel von Rosenhof saw and depicted an amoeboid he named "der kleine Proteus" ("the little Proteus"). Three years later, Linnaeus gave Rösel's creature the name Volvox chaos. However, because the name Volvox had already been applied to a genus of flagellate algae, he later changed it to Chaos chaos. In subsequent decades, as new names and species proliferated, accounts of Chaos, under a variety of synonyms, became so thoroughly entangled with descriptions of similar organisms, that it is virtually impossible to differentiate one historic amoeboid from another. In 1879, Joseph Leidy suggested collapsing all the "common" large, freshwater amoebae into one species, which he proposed to call Amoeba proteus. A dozen species, including several that had been identified as belonging to Chaos, were to be regarded as synonyms of Amoeba proteus. However, in the description he gives of this organism, it is clearly defined as a uninucleate amoeba, unlike the modern Chaos.
In 1900, the biologist H. V. Wilson, at the University of North Carolina, discovered and isolated a giant amoeba that resembled Amoeba proteus but had cellular nuclei numbering in the hundreds. Since there existed already a genus of giant multinucleate amoebae, Pelomyxa, Wilson placed his organism in that taxon, naming it Pelomyxa carolinensis. This amoeba was easily cultivated and became a widely distributed and studied laboratory organism.
In 1926, Asa A. Schaeffer argued that Pelomyxa carolinensis was, in fact, identical to the amoeba that had been seen by Rösel in 1755, the "little Proteus" which Linnaeus had named Chaos chaos. Therefore, he urged that, in keeping with the principle of priority governing biological nomenclature, the name of the organism should be Chaos chaos. Several investigators argued vigorously against the validity of that name, but others adopted it. A third faction accepted the validity of the genus Chaos for Wilson's amoeba, but retained the second half of the binomial, referring to the organism as Chaos carolinensis. By the early 1970s, all three names were in use concurrently, by various investigators. However, studies of the fine structure and physiology of the amoeba made it increasingly clear that there were profound differences between it and the other Pelomyxa (including the complete absence, in true Pelomyxa, of mitochondria). Since then, a nomenclatural consensus has emerged, and today the organism is generally known as Chaos carolinensis, as first proposed by Robert L. King and Theodore L. Jahn in 1948.
Recent phylogeny
Until quite recently, the genus Chaos was included, along with all other protists that extend lobose pseudopods or move about by protoplasmic flow, in the phylum Sarcodina. Molecular phylogenies based on ribosomal DNA have shown that Sarcodina is a polyphyletic grouping: some amoeboids shared a more recent common ancestor with members of other phyla than with other Sarcodina. Consequently, the amoeboids of Sarcodina have been distributed among two newly created supergroups, Rhizaria and Amoebozoa. Chaos and its close relative, Amoeba, are now placed in the latter, within the order Tubulinida: naked amoebae (lacking a test, or shell), either monopodial or possessing somewhat cylindrical pseudopods, with a non-adhesive uroid (a region at the posterior of the cell which has a crumpled appearance).
While the monophyly of Amoebozoa has yet to be established, current information confirms the status of Chaos and Amoeba as closely related taxa within the group. However, the same research raises questions about the monophyly of the genus Chaos, since Chaos nobile may be basal to a group containing Chaos carolinensis and at least two species of Amoeba, following Pawlowski and Burki (2009).
| Biology and health sciences | Eukaryotes | Plants |
2596495 | https://en.wikipedia.org/wiki/Eared%20dove | Eared dove | The eared dove (Zenaida auriculata) is a New World dove. It is a resident breeder throughout South America from Colombia to southern Argentina and Chile, and on the offshore islands from the Grenadines southwards. It may be a relatively recent colonist of Tobago and Trinidad. It appears to be partially migratory, its movements driven by food supplies.
It is a close relative of the North American mourning dove. With that species, the Socorro dove, and possibly the Galápagos dove, it forms a superspecies. The latter two are insular offshoots, the Socorro birds from ancestral mourning doves, and the Galápagos ones from more ancient stock.
Description
The eared dove has a long, wedge-shaped tail. Adult males have mainly olive-brown upperpart plumage, with black spots on the wings. The head has a grey crown, a black line behind the eye, and blue-black on the lower ear coverts. These black markings give the species its English and specific names. The underparts are vinous, and the tail is tipped with cinnamon. The bill is black and the legs dark red.
The female is duller than the male, and immature birds are greyish-brown, very dull, with pale barring. The species' call is a deep soft oo-ah-oo.
Ecology
The eared dove is common to abundant in savannahs and other open areas, including cultivation, and it readily adapts to human habitation, being seen on wires and telephone posts near towns in Trinidad and Venezuela, in almost all public spaces of large urban areas such as Bogotá, Colombia, and feeding near beach resorts in Tobago.
Eared doves feed mainly on seed and grain taken from the ground. They can be agricultural pests. When in season, agricultural plants such as wheat, rice, sorghum, maize and soybeans may comprise the entirety of the diet. Echinochloa colona, a common savannah grass, and Croton jacobinensis are important seed food for these doves. Their diet may also be augmented by animal foods, such as caterpillars, insect pupae, aphids and snails. This is a gregarious bird when not feeding, and forms flocks especially at migration time or at communal roosts.
Its flight is high, fast, and direct, with the regular beats and an occasional sharp flick of the wings, which are characteristic of pigeons in general. It also has a breeding display with a steep climb and semicircular glide down to its original perch. It builds a small stick nest several meters up in a tree and lays two white eggs. These hatch in 12–14 days with another 9 days to fledging. No fixed breeding season is seen in most of their range, and provided with plentiful food and habitat, birds breed almost continuously.
Hunting
Eared doves provide the last big-bag shooting experience in the world. More than 23 million of these doves are thought to be in the fields around Córdoba in northern Argentina, and recent estimates put the figure in the 32-million range. Not uncommonly, a single gun can shoot 1000 birds in a day.
The scale of this wing-shooting recalls the numbers of passenger pigeons taken by North American gunners in the 1800s. That hunting pressure brought the passenger pigeon to rapid extinction, but the eared dove seems to be more resilient. Indeed, as with the passenger pigeons, eared dove populations in Argentina and Bolivia sometimes "darken the skies". Thus, populations on the sporting estates of Argentina seem to be holding their own, with the birds breeding four times a year and thriving on the vast areas of grain, some grown for their benefit, most of it on commercial farms, which are happy to support the dove shooting. Dozens of luxury lodges specialize in dove hunting, and the season extends all year long.
The eared doves around Córdoba do not migrate, and the enormous flocks are described as flying constantly between their roosting woods and the open fields. In the Córdoba region in Argentina, the eared doves are known as palomas doradas because of the shining feathers sometimes present in their plumage.
Further north, in Bolivia, in the Gran Chaco region near the immense soy and sorghum plantations around Santa Cruz de la Sierra, dove shooting is more seasonal, running from May to September, with large flocks arriving from Argentina to raid the grain crops. Locals attest that eared doves, which they call by the Guaraní name totaky, were quite rare in the region just a few decades ago, a testimony not only to the resilience of the species, but also to the huge impact that newly created large feeding grounds have had on dove populations.
| Biology and health sciences | Columbimorphae | Animals |
2599233 | https://en.wikipedia.org/wiki/Siamese%20fighting%20fish | Siamese fighting fish | The Siamese fighting fish (Betta splendens), commonly known as the betta, is a freshwater fish native to Southeast Asia, namely Cambodia, Laos, Myanmar, Malaysia, Indonesia, Thailand, and Vietnam. It is one of 76 species of the genus Betta, but the only one eponymously called "betta", owing to its global popularity as a pet; Betta splendens are among the most popular aquarium fish in the world, due to their diverse and colorful morphology and relatively low maintenance.
Betta fish are endemic to the central plain of Thailand, where they were first domesticated at least 1,000 years ago, one of the longest domestication histories of any fish. They were initially bred for aggression and subjected to gambling matches akin to cockfighting. Bettas became known outside Thailand through King Rama III (1788–1851), who is said to have given some to Theodore Cantor, a Danish physician, zoologist, and botanist. They first appeared in the West in the late 19th century, and within decades became popular as ornamental fish. B. splendens' long history of selective breeding has produced a wide variety of coloration and finnage, earning it the moniker "designer fish of the aquatic world".
Bettas are well known for being highly territorial, with males prone to attacking each other if housed in the same tank; without a means of escape, this will usually result in the death of one or both fish. Female bettas can also become territorial towards one another in confined spaces. Bettas are exceptionally tolerant of low oxygen levels and poor water quality, owing to their special labyrinth organ, a characteristic unique to the suborder Anabantoidei that allows for the intake of surface air.
In addition to its worldwide popularity, the Siamese fighting fish is the national aquatic animal of Thailand, which remains the primary breeder and exporter of bettas for the global aquarium market. Despite their abundance as pets, in the wild, B. splendens is listed as "vulnerable" by the IUCN, due to increasing pollution and habitat destruction. Efforts are being made to support betta fish breeders in Thailand as a result of their popularity as pets, cultural significance, and need for conservation.
Etymology
Outside Southeast Asia, the name "betta" is used specifically to describe B. splendens, despite the term scientifically applying to the entire genus, which includes B. splendens and at least 72 other species. Betta splendens is more accurately called by its scientific name or "Siamese fighting fish" to avoid confusion with the other members of the genus.
English speakers often pronounce betta as "bay-tuh", after the second letter of the Greek alphabet. However, the name is believed to derive from the Malay ikan bettah, with ikan meaning "fish" and bettah referring to an ancient warrior tribe; it is pronounced "bet-tah". Alternative sources suggest the name Betta splendens combines two languages, joining Malay for "enduring fish" with the Latin word for "shining".
Another vernacular name for the Siamese fighting fish is plakat, often applied to the short-finned ornamental strains, derived from the Thai pla kat (Thai: ปลากัด), literally "biting fish". In Thailand this name is used for all members of the genus Betta, which share similar aggressive tendencies, rather than for any specific strain of the Siamese fighting fish; likewise, the term "fighting fish" is applied generically to all Betta species, not just the Siamese fighting fish.
Siamese fighting fish were originally given the scientific name Macropodus pugnax in 1849—literally "aggressive fish with big feet", likely in reference to their elongated pelvic fins. In 1897 they were identified with the genus Betta and became known as Betta pugnax, referring to their aggressiveness. In 1909, the species was finally renamed Betta splendens upon the discovery that an existing species was already named pugnax.
Description
B. splendens usually grows to a length of about . Although aquarium specimens are widely known for their brilliant colours and large, flowing fins, the natural coloration of B. splendens is generally green, brown and grey, while the fins are short; wild fish exhibit strong colours only when agitated. In captivity, Siamese fighting fish have been selectively bred to display a vibrant array of colours and tail types.
Distribution and habitat
According to Witte and Schmidt (1992), Betta splendens is native to Southeast Asia, including the northern Malay Peninsula, central and eastern Thailand, Kampuchea (Cambodia), and southern Vietnam. Based on Vidthayanon (2013), a Thai ichthyologist and senior researcher of biodiversity at WWF Thailand, the species is endemic to Thailand, from the Mae Khlong to Chao Phraya basins, the eastern slope of the Cardamom mountains (Cambodia), and from the Isthmus of Kra. Similarly, a report from Froese and Pauly (2019) identifies Betta splendens as native to Cambodia, Laos, Thailand, and Vietnam. They are also found throughout the neighbouring Malay Peninsula and in adjacent parts of Sumatra, likely due to human introduction.
Wherever they are found, Betta splendens generally inhabit shallow bodies of water with abundant vegetation, including marshes, floodplains, and paddy fields. The historic prevalence of rice farming across Southeast Asia, which provided an ideal habitat for bettas, led to their discovery and subsequent domestication by humans. The combination of shallow water and high air temperature causes rapid evaporation and a significant deficit of dissolved oxygen in the betta's natural habitat. This environment likely drove the evolution of the lung-like labyrinth organ, which allows Siamese fighting fish—like all members of the suborder Anabantoidei—to breathe directly from the air. Consequently, bettas can live and even thrive in harsher environments than other freshwater fish, which in turn leaves them with fewer natural predators and competitors. In the wild, bettas thrive at a fairly low population density of 1.7 individuals per square meter.
The tropical climate of the betta's natural habitat is characterized by sudden and extreme fluctuations in water availability, chemistry, and temperature. Water pH can range from slightly acidic (pH 6.9) to highly alkaline (pH 8.2), while air temperatures can drop as low as 15 °C (59 °F) and rise as high as 40 °C (104 °F). Consequently, Siamese fighting fish are highly adaptable and durable, able to tolerate a variety of harsh or toxic environments; this accounts for their popularity as pets, as well as their ability to successfully colonize bodies of water all over the world.
Wild bettas prefer to live in bodies of water teeming with aquatic vegetation and surface foliage, such as fallen leaves and water lilies. The abundance of plants provides security from predators and a buffer between aggressive males, who coexist by claiming dense sections of plants as territory. Such vegetation also offers protection to females during spawning and to fry during their earliest and most vulnerable stages.
Invasive species
The betta's worldwide popularity has led to its release and establishment in similarly tropical areas, including southeast Australia, Brazil, Colombia, the Dominican Republic, the southeastern United States, and Singapore.
In January 2014, a large population of bettas was discovered in the Adelaide River Floodplain in the Northern Territory, Australia. As an invasive species they pose a threat to native fish, frogs and other wetland wildlife. Bettas have also become established in subtropical areas of the United States, namely southern Texas and Florida, although an assessment by the U.S. Fish and Wildlife Service determined they were no threat to natural ecosystems.
Conservation status
Due to their popularity, Siamese fighting fish are highly abundant in captivity. In the wild, betta habitats are threatened by chemical and agricultural runoff, as well as by pharmaceutical residues entering aquatic ecosystems through sewage. Such contamination can also alter the reproductive behavior of the species, decreasing hatch rates and increasing the likelihood of fathers eating their own eggs. Wild bettas also face habitat loss from the expansion of palm oil plantations in Southeast Asia. The primary threats are habitat destruction and pollution caused by urban and agricultural development across central Thailand. Wild specimens are categorized by the IUCN as vulnerable, indicating the species is likely to become endangered without conservation efforts.
Diet
Betta splendens is naturally carnivorous, feeding on zooplankton, small crustaceans, and the larvae of aquatic insects such as mosquitoes, as well as insects that have fallen into the water and algae. Contrary to some marketing materials in the pet trade, bettas cannot subsist solely on vegetation or the roots of plants.
Bettas can be fed a varied diet of pellets, flakes, or frozen foods like brine shrimp, bloodworms, daphnia and many others. Due to their short digestive tracts—a characteristic of most carnivores—bettas have difficulty processing carbohydrates such as corn and wheat, which are commonly used as fillers in many commercial fish foods. Thus, regardless of the source, a proper betta diet should consist mostly of animal protein.
Bettas are susceptible to overfeeding, which can lead to obesity, constipation, swim bladder disease, and other health problems; excess food may also pollute the water. It is generally advised to feed a betta about once daily, offering only the amount of food it can eat within 3–5 minutes; leftover food should be removed.
Reproduction and early development
If interested in a female, male bettas flare their gills, spread their fins, and twist their bodies in a dance-like performance. Receptive females respond by darkening in color and developing vertical lines known as "breeding bars". Males build bubble nests of various sizes and thicknesses at the surface of the water, which interested females may examine; most males do this regularly even when no female is present.
Plants or rocks that break the surface often form a base for bubble nests. During courtship, the male betta may behave aggressively towards the female, chasing her or nipping at her fins. The act of spawning itself is called a "nuptial embrace", as the male wraps his body around the female; around 10–40 eggs are released during each embrace, until the female is exhausted of eggs. With each deposit of eggs, the male releases milt into the water, and fertilisation takes place externally. During and after spawning, the male uses his mouth to retrieve sinking eggs and place them in the bubble nest; some females assist their partners during mating, but more often they simply devour any eggs they manage to catch. Once the female has released all of her eggs, she is chased away from the male's territory, as she will likely eat the eggs; if she is not removed from the tank, she will most likely be killed by the male.
The eggs remain in the male's care. He carefully keeps them in his bubble nest, making sure none fall to the bottom and repairing the nest as needed. Incubation lasts 24–36 hours; newly hatched larvae remain in the nest for the next two to three days until their yolk sacs are fully absorbed. Afterwards, the fry leave the nest and the free-swimming stage begins. In this first period of their lives, B. splendens fry are totally dependent on their gills; the labyrinth organ, which allows the species to breathe atmospheric oxygen, typically develops at three to six weeks of age, depending on the general growth rate, which can be highly variable. B. splendens can reach sexual maturity as early as 4–5 months. Typically, the morphological differences between males and females can be noticed around two months after hatching. During development, betta fry can be fed either commercial artificial feeds or live prey, which tends to be preferred. Examples of live feed for betta fry include baby brine shrimp, water fleas, and mosquito larvae. Although commonly fed to fish fry, boiled egg yolks are not preferred by the fish.
History
Information on precisely how and when Siamese fighting fish were first domesticated and brought out of Asia is sparse. Genetic analysis implies domestication at least 1,000 years ago. Additional evidence from DNA sampling suggests bettas may have been bred for fighting since the 13th century. Over time, this led to the diverse genetics of modern domestic and wild bettas.
Fighting fish
People in Malaysia and Thailand are known to have collected wild bettas at least by the 19th century, observing their aggressive nature and pitting them against each other in gambling matches akin to cockfights. In the wild, bettas spar for only a few minutes before one fish retreats; domesticated bettas, namely plakat bettas, are bred specifically for heightened aggression and can engage for much longer, with winners determined by a willingness to continue fighting; once a fish retreats, the match is over. Fights to the death were rare, so bets were placed on the bravery of the fish rather than its survival. Because of this genetic legacy of having been bred for fighting, captive ornamental strains tend to be more aggressive than wild bettas.
The popularity of these fights garnered the attention of the King of Siam (Thailand), who regulated and taxed the matches and collected fighting fish of his own. In 1840, he gave some of his prized fish to the Danish physician Theodore Edward Cantor, who worked in the Bengal medical service. Nine years later, Cantor published the first recorded article describing these fish, giving them the name Macropodus pugnax. In 1909, British ichthyologist Charles Tate Regan found that a related species was already named Macropodus pugnax, and thus renamed the domesticated Siamese fighting fish Betta splendens, or "splendid fighter".
Aquarium fish
Betta splendens first entered the Western aquarium trade in the late 19th century; the earliest known arrival is 1874 in France, when French aquaria expert and ichthyologist Pierre Carbonnier began importing and breeding several specimens. In 1896, German tropical fish expert Paul Matte brought the first specimens into Germany from Moscow, most likely from the strain developed by Carbonnier. This indicates bettas were already somewhat established in France and Russia by the turn of the 20th century. Fighting fish were also present in Australia by 1904, based on an article written by British-born zoologist Edgar Ravenswood Waite and published by the Australian Museum in Sydney. Waite indicates that Australian specimens were brought from Penang, Malaysia, near the border with Thailand. He also makes reference to two articles about "fighting fish" published by Carbonnier in 1874 and 1881. Bettas may have first entered the United States in 1910, via importers in California; there is also evidence they were imported in 1927 from Cambodia.
While it is unclear when bettas became popular in the aquarium trade, the early 20th century marked the first known departure from centuries of breeding bettas for aggression, to instead selecting for colour, finnage, and overall beauty for ornamental purposes. In 1927, an article was published in Germany describing the long, flowing fins of the "veiltail" breed, which indicates an emphasis on aesthetic beauty. In the 1950s, an American breeder created a larger and longer-finned veiltail, while around 1960, Indian breeders discovered a genetic mutation that allowed for two caudal fins, producing the "doubletail" variety. Within that decade, a German breeder created the "deltatail" characterised by its broader, triangular fins.
In 1967, a group of betta breeders formed the International Betta Congress (IBC), the first formal interest group dedicated to Siamese fighting fish. The IBC aimed to breed varieties that would be healthier and more symmetrical in fins and body shape, with an emphasis on animal welfare.
In the aquarium
Water
As tropical fish, bettas prefer a water temperature of around , but have been observed surviving temporarily at extremes of to . Aquarium heaters are recommended in colder climates, as cold water weakens the immune system and makes bettas susceptible to certain diseases.
Bettas are also affected by the pH of the water: a neutral pH of 7.0 is ideal, but slightly higher levels are tolerable. Thanks to their labyrinth organ, bettas can endure low oxygen levels, but they cannot survive for long in unmaintained aquaria, as poor water quality makes all tropical fish more susceptible to diseases such as fin rot or scale loss. Thus, notwithstanding the betta's well-known tolerance of still water, a mechanical filter is considered necessary for long-term health and longevity. Similarly, live aquatic plants provide a supplemental source of filtration, in addition to crucial enrichment for the betta.
Aquarium size and cohabitants
Despite frequently being displayed and sold in small containers in the pet trade, bettas do best in larger environments; while they can survive in cups, bowls, and other confined spaces, they will be much happier, healthier, and longer-lived in a larger aquarium. Although some betta enthusiasts claim there is a minimum tank size, any strict baseline is somewhat arbitrary and subject to debate; most keepers consider a 5-gallon tank the minimum.
Although male bettas are solitary and aggressive towards one another, they can generally cohabit with many types of fish and invertebrates if there is adequate space and there are hiding places. However, compatibility varies with the temperament of the individual betta, and it is advised to supervise the betta's interactions with other fish carefully. Tankmates must be tropical, communal, nonterritorial, and must not have a similar body type or long flowing fins; coldwater fish like goldfish have incompatible temperature requirements, while aggressive and predatory fish are likely to nip at the betta's fins or erode its slime coat. Shoaling species such as tetras and danios are considered ideal, since they usually keep to themselves and can endure the territorial nature of bettas through their numbers. Brightly coloured fish with large fins, such as male guppies, should be avoided, as they may invite fin nipping by the male betta. Potential tankmates should usually be added before the male betta so they can establish their respective territories beforehand, rather than compete with the betta.
Female bettas are less aggressive and territorial than males, and thus can live with a greater variety of fish; for example, brightly coloured or large-finned fish will not usually disturb a female. Generally, female fighting fish can also tolerate larger or more numerous tankmates than males. However, like male bettas, a female's tolerance of other fish will vary by individual temperament.
It is not recommended to keep male and female bettas together, except temporarily for breeding purposes, which should always be undertaken with caution and supervision.
Setup
Bettas are fairly intelligent and inquisitive, and thus require stimulation; otherwise they can become bored and depressed, leading to lethargy and a weaker immune system. Decorations such as silk or live plants, rocks, caves, driftwood, and other ornaments provide crucial enrichment—provided they do not have rough textures or jagged edges, which can damage their delicate fins. In the wild, Siamese fighting fish spend most of their time concealing themselves under floating debris or overhanging plants to avoid potential predators. Floating plants and leaves can help bettas feel more secure, while also giving males an anchor from which to build their bubble nests. Abundant vegetation of any kind is generally recommended to provide maximum security and to cater to the betta's instinct to claim protective territory.
Indian almond leaves are increasingly popular for providing something closer to the natural foliage under which bettas would hide in the wild. Their tannins allegedly confer several health benefits, including treating certain ailments like fin rot and bladder disease, and stabilising the pH of the water.
Health and wellness
When properly kept and fed a correct diet, Siamese fighting fish generally live between three and five years in captivity, though in rare cases may live as long as seven to ten years. One study found that bettas kept in tanks of several gallons and provided with proper nutrition and "exercise"—in the form of being chased around by a stick for a short period—lived over nine years; by contrast, a control group of bettas confined to small jars lived far fewer years. A larger tank with proper filtration, regular maintenance, and an abundance of decor and hiding spaces, along with a rich, protein-based diet, increases the likelihood of a long lifespan.
Like all tropical fish in captivity, bettas are susceptible to several kinds of diseases, usually caused by bacterial, fungal, or parasitic infections. Most illnesses result from poor water quality and cold water, both of which weaken the immune system. The four most common illnesses are white spot, velvet, fin rot, and dropsy; with the exception of the latter, which is incurable, these ailments can be treated with a combination of over-the-counter fish medication, increased water temperature, and/or regular water changes.
Varieties
Over a century of intensive selective breeding has produced a wide variety of colours and fin types, and breeders around the world continue to develop new varieties. Often, the males of the species are sold preferentially in stores because of their beauty relative to the females, which almost never develop fins or vibrant colours as showy as their male counterparts; however, some breeders have produced females with fairly long fins and bright colours.
Betta splendens can be hybridised with B. imbellis, B. mahachaiensis, and B. smaragdina, though with the latter, the fry tend to have low survival rates. In addition to these hybrids within the genus Betta, intergeneric hybridisation of Betta splendens and Macropodus opercularis, the paradise fish, has been reported.
Colors
Wild bettas exhibit strong colours only when agitated. Over the centuries, breeders have been able to make this coloration permanent, and a wide variety of hues breed true. Colours among captive bettas include red, orange, yellow, blue, steel blue, turquoise/green, black, pastel, opaque white, and multi-coloured. Recent evidence suggests blue-colored males may show higher levels of aggression than red-colored males. On the other hand, female bettas may prefer red-colored mates over their blue counterparts.
The betta's diverse colours are due to different layers of pigmentation in the skin. The layers, from deepest to outermost, consist of red, yellow, black, iridescent (blue and green), and metallic (not a colour itself, but one that reacts with the other colours). Any combination of these layers can be present, leading to a wide variety of colours within and among bettas.
The shades of blue, turquoise, and green are slightly iridescent and can appear to change colour under different lighting conditions or viewing angles; this is because these colours (unlike black or red) are not due to pigments, but are created through refraction within a layer of translucent guanine crystals. Breeders have also developed different colour patterns such as marble and butterfly, as well as metallic shades like copper, gold, or platinum, which were obtained by crossing B. splendens with other Betta species.
Some bettas change colours throughout their lifetime, a process known as marbling, which is attributed to a transposon: a DNA sequence that can change its position within the genome, thereby altering pigment cells. Koi bettas have mutated over time and in some cases change colours or patterns throughout their lifetime (known as true koi), because the defective gene that causes marbling is not repaired in the colour layers over time.
Common colours:
Super Red
Super Blue
Super Yellow
Opaque
Super Black
Super White
Orange
Marble
Candy
Nemo
Galaxy Nemo
Koi
Alien
Copper
Cellophane
Gold
Galaxy Koi
Rarer colours:
Super Orange
Metallic
Turquoise
Lavender
Mustard Gas
Grizzle
Green
Purple
Finnage variations
Breeders have developed several different finnage and scale variations:
Veiltail – Extended finnage length and non-symmetrical tail; caudal fin rays usually only split once; the most common tail type seen in pet stores.
Crowntail – Fin rays are extended well beyond the membrane and consequently the tail can take on the appearance of a crown; also called fringetail
Combtail – Less extended version of the crown tail, derived from breeding crown and another finnage type
Halfmoon – D-shaped caudal fin that forms a 180° angle, the edges of the tail are crisp and straight
Over-Halfmoon or Super Delta Tail – Caudal fin exceeds 180° angle (a byproduct of trying to breed half-moons), which can sometimes cause problems because the fins are too big for the fish to swim properly
Rosetail – Variation with so much finnage that it overlaps and looks like a rose
Feather tail – Similar to the Rosetail, with a rougher appearance
Plakat – Short fins that resemble the fins seen in wild-type bettas
Halfmoon plakat – Short-finned Halfmoon; plakat and halfmoon cross
Double tail or Full-moon – Tail fin is duplicated into two lobes and the dorsal fin is significantly elongated, the two tails can show different levels of bifurcation depending on the individual
Delta tail – Tail spread less than that of a Halfmoon (less than 180°)
Super Delta (aka SD or SDT) – Enhanced version of the Delta; one step closer to the Halfmoon variety in that their tails have a span between 130–170 degrees
Half-Sun – Combtail with caudal fin going 180°, like a half-moon
Elephant Ear – Pectoral fins are much larger than normal, often white, resembling the ears of an elephant
Spade Tail – Caudal fin has a wide base that narrows to a small point
Behaviour and intelligence
Siamese fighting fish display complex behavioural patterns and social interactions, which vary among individual specimens. Research indicates they are capable of associative learning, in which they adopt a consistent response following exposure to new stimuli. These characteristics have made bettas subject to intensive study by ethologists, neurologists, and comparative psychologists.
Males and females flare or puff out their gill covers (opercula) to appear more impressive, either to intimidate rivals or as an act of courtship. Flaring also occurs when they are intimidated by movement or a change of scene in their environment. In captivity, bettas can be seen flaring at their own reflection, as they do not pass the mirror test for self-recognition. Both sexes display pale horizontal bars if stressed or frightened; however, such colour changes, common in females of any age, are rare in mature males due to their intensity of colour. Females often flare at other females, especially when setting up a pecking order. Flirting fish behave similarly, with vertical instead of horizontal stripes indicating a willingness and readiness to breed.
Betta splendens enjoy a decorated tank, as they seek to establish territory even when housed alone. They may set up a territory centered on a plant or rocky alcove, sometimes becoming highly possessive of it and aggressive toward trespassing rivals; consequently, bettas housed with other fish require at least 45 litres (about 10 gallons) of space. Contrary to popular belief, bettas are compatible with many other species of aquarium fish. Given the proper parameters, bettas will generally only be aggressive towards fish smaller and slower than themselves, such as guppies.
Betta aggression has historically made them objects of gambling; two male fish are pitted against each other to fight, with bets placed on which one will win. Combat is characterised by fin nipping, flared gills, extended fins, and intensified colour. The fight continues until one participant is submissive or tries to retreat; one or both fish may die depending on the seriousness of their injuries, though bettas rarely intend to fight to the death. To avoid fights over territory, male Siamese fighting fish are best isolated from one another. Males will occasionally respond aggressively even to their own reflections. Though this is obviously safer than exposing the fish to another male, prolonged sight of their reflection may lead to stress in some individuals. Not all Siamese fighting fish respond negatively to other males, especially if the tank is large enough for each fish to create their own designated territory.
Aggression in females
In general, studies have shown that females exhibit aggressive behaviours similar to those of males, albeit less frequently and less intensely. One observational study examined a group of female Siamese fighting fish over a period of two weeks, during which they were recorded attacking, flaring, and biting food. It indicated that when females are housed in small groups, they form a stable dominance order, or "pecking order": for example, top-ranked fish showed higher levels of mutual display than lower-ranked fish. The researchers also found that the duration of the displays differed depending on whether an attack occurred. These results suggest that female Siamese fighting fish warrant as much scientific study as males, as they show comparable variation in their behaviours.
Courtship behaviour
There has been much research into the courtship behaviour between male and female Siamese fighting fish. Studies generally focus on the aggressive behaviours of males during the courtship process. For example, one study found that when male fish are in the bubble-nest phase, their aggression toward females is quite low, since the males are attempting to attract potential mates to their nest so that eggs can successfully be laid. It has also been found that in determining a suitable mate, females often "eavesdrop" on pairs of fighting males. When a female witnesses aggressive behaviour between males, she is more likely to be attracted to the winner; in contrast, a female that did not "eavesdrop" on a fight shows no preference in mate choice. As for the males, the "loser" is more likely to attempt to court females that did not "eavesdrop", while the "winner" shows no preference between females that "eavesdropped" and those that did not.
One study considered the ways in which male Siamese fighting fish alter their behaviour during courtship when another male is present. During this experiment, a dummy female was placed in the tank. The researchers expected that males would conceal their courtship from intruders; instead, when another male was present, the male was more likely to engage in courtship behaviours with the dummy female. When no barriers were present, the males were more likely to engage in gill flaring at an intruding male. The researchers concluded that the male was attempting to court the female and communicate with his rival at the same time. These results underline the importance of context in courtship behaviour, as the literature suggests there are many factors that can dramatically affect the ways in which both males and females act in courtship settings.
Metabolic costs of aggression
Studies have found that Siamese fighting fish often begin an encounter with high-cost behaviours and gradually scale them back as the encounter proceeds. This indicates that a Siamese fighting fish will initially expend considerable metabolic energy but will gradually reduce its effort, so as not to waste too much energy on an encounter it may not win. Similarly, researchers have found that when pairs of male Siamese fighting fish were kept together in the same tank for a three-day period, aggressive behaviour was most prevalent during the mornings of the first two days of cohabitation, and fighting between the two males decreased as each day progressed. The male in the dominant position initially had a metabolic advantage, although as the experiment progressed, both fish became metabolically equal. Regarding oxygen consumption, one study found that when two male bettas fought, the metabolic rates of the fish did not differ before or during the fight; however, the winner showed higher oxygen consumption during the evening following the fight, indicating that fighting has lasting effects on metabolism.
Behavioural effects of chemical exposure
Siamese fighting fish are popular models for studying the neurological and physiological impact of certain chemicals, such as hormones, since their aggression is the result of cell signalling and possibly genes.
One study investigated the effect of testosterone on female Siamese fighting fish. Females given testosterone developed changes in fin length, body coloration, and gonads that resembled those of typical male fish. Their aggressive behaviour was elevated when interacting with other females, but reduced when interacting with males. The researchers then allowed the females to interact with a control group of unaltered females; when the testosterone was discontinued, the fish that had been exposed to normal females still exhibited male-typical behaviours, whereas those kept isolated did not.
Another study exposed male Siamese fighting fish to endocrine-disrupting chemicals. The researchers were curious if exposure to these chemicals would affect the ways in which females respond to the exposed males. It was found that when shown videos of the exposed males, the females favoured those who were not exposed to the endocrine-disrupting chemicals, and avoided those males that were exposed. The researchers concluded that exposure to these chemicals can negatively affect the mating success of male Siamese fighting fish.
A psychology study used male Siamese fighting fish to investigate the effects of fluoxetine, an SSRI used primarily as an antidepressant in humans. Siamese fighting fish were selected as prime models due to having comparable serotonin transporter pathways, which accounts for their aggression. It was found that when exposed to fluoxetine, male Siamese fighting fish exhibited less aggressive behaviour than is characteristic of their species. Similarly, research has found that bettas are responsive to serotonin, dopamine, and GABA.
Sleep behavior
Betta fish can exhibit unusual sleep behaviors, often resulting in new betta owners assuming that their betta fish has died. In an aquarium, betta fish sleep anywhere in the tank they feel comfortable, including at the bottom on the substrate, floating at the mid-level, or at the surface. Betta fish will sleep on their side, upside down, with their nose pointing up, or with their tail pointing up. They are also known to curl up or wedge between tight spaces, such as behind a heater. One of the more unusual sleep behaviors that betta fish exhibit is their ability to sleep out of the water, resting on a leaf or any other flat object protruding from the water. This is made possible by the betta's labyrinth organ, which acts like a human lung, pulling oxygen from the air instead of from the water. When betta fish sleep, their bright colors will often fade, and when combined with their unusual napping positions, they can appear dead. Predatory fish will often avoid eating a dead fish because of the risk of contracting diseases and parasites, making this an excellent defensive mechanism.
Genetics
Despite its commercial popularity, little is known about the Betta splendens genome. Current understanding is so limited that there is little evidence for the genetic basis of basic traits, including sex determination. A 2021 review article argued for increased scientific investigation into the genome of the Siamese fighting fish, and listed several areas of interest which are paraphrased below:
monophyly of the genus Betta including a single-versus-multiple origin of mouthbrooding;
the state of cryptic diversity and evolutionary forces driving speciation in the betta lineage;
responsive genes or genetic interaction to parental care, behavioural aggression, pigmentation and other betta biology; and
preservation technology for betta as insurance against accidental loss of biodiversity this century.
Additionally, betta fish have been used in several studies to assess the impacts of various environmental contaminants, including oil. Improved understanding of the betta genome would allow for more accurate generalisations from these studies. Lastly, the betta fish is an excellent candidate for a model organism, particularly for aggression and pigmentation development, due to their extreme phenotypes in these areas.
Currently, the complete B. splendens chromosomal and mitochondrial genomes have been sequenced. Both genomes have yet to be annotated, though a roadmap for future efforts has been outlined. Notably, the mitochondrial genome of the peaceful betta, B. imbellis, has also been sequenced, potentially allowing for meaningful comparison between the species in the future.
Phylogeny and cryptic diversity
There are many species in the genus Betta, the majority of which are morphologically very similar. Within Thailand alone there are twelve nominal species, with new species being discovered every 5–10 years. Past efforts to differentiate Betta species have been based on observable morphology, but given their visible similarity, this approach has masked much of the cryptic diversity in the genus. Recent species-delimitation efforts have used DNA barcoding, specifically comparing the CO1 gene of the mitochondrial genome, resulting in new theories about the relatedness of species and allowing the construction of new phylogenetic trees.
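In the abstract, the barcoding approach reduces to comparing aligned CO1 sequences and flagging pairs whose divergence exceeds a cutoff. The sketch below is purely illustrative: the sequences are invented placeholders, and the 2% threshold is a commonly cited (and debated) convention, not a value drawn from the studies referenced here.

    # Minimal illustration of CO1 DNA barcoding (invented placeholder data).
    def p_distance(seq_a: str, seq_b: str) -> float:
        """Proportion of differing sites between two aligned sequences."""
        assert len(seq_a) == len(seq_b), "sequences must be aligned"
        return sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)

    co1_a = "ATGGCACTTAGCCTACTTATTCGAGCCGAGCTAAGCCAAC"  # placeholder fragment
    co1_b = "ATGGCACTAAGCCTGCTTATTCGAGCTGAGCTAAGTCAAC"  # placeholder fragment

    if p_distance(co1_a, co1_b) > 0.02:  # 2% cutoff: common but debated
        print("divergence above threshold: candidate distinct species")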
The morphological similarity between species that can be distinguished genetically suggests that species radiation with cryptic diversity occurred in the Betta lineage. Current theories about the species radiation and speciation take into account the geographic considerations of their native habitat of Thailand, and suggest that the speciation is best described by a model of either allopatric or parapatric speciation.
Genetics of betta biology
Aggression
B. splendens are known for their intense aggression, the result of selective pressures imposed over many generations of artificial selection. Fighting strains of B. splendens have been bred for aggression for over six centuries, owing to the culture of fighting betta fish and betting money on the results. This has genetically differentiated them from their wild-type counterparts: fighting strains have been shown to be significantly more aggressive than wild bettas, and in addition show differential cortisol responses in new environments.
The extreme, genetically driven aggression in fighting strains of B. splendens, and their differences from the still-observable wild type, make them an excellent candidate model organism through which to study the genetic basis of aggression.
At present, use of the betta fish as a model organism for studying aggression is in its beginning phases. Little is known about the genetic basis of aggression in bettas, though differential degrees of aggression have been observed in different domesticated betta populations.
Research to date
There is evidence that the genetic basis for aggression in betta fish is not exclusively sex-linked: a 2019 study found that female bettas of the fighting strain show significantly higher levels of aggression than their wild-type female counterparts, despite the fact that historically only male bettas have been used in fights and thus artificially selected for aggression. However, these results are of limited usefulness given the lack of scientific consensus on the nature of sex determination in bettas.
A recent study found that a fighting pair of bettas will synchronise their gene expression profiles, with particular emphasis on 37 co-expression gene modules, some of which were only synchronised after a certain duration of time had been spent fighting.
Work to identify the genetic basis for aggression has also been performed more generally in other model species, such as zebrafish. These studies have identified dozens of candidate genes in their respective model organisms which could serve as starting points for research into aggression in betta fish. However, more progress must be made on the annotation of the betta genome before this is feasible.
Pigmentation
Due to the incredible variation in the pigmentation of adult bettas and the visible pigment in developing embryos, bettas are an attractive model organism for studying the genetic basis of coloration. Additionally, producing a specific color on demand would be of great interest to the commercial betta fish industry, as the price of a fish is largely determined by its coloration. Prices for attractively coloured fish can be high: a single fish with the colours of the Thai flag sold for over $1,500.
The genetic basis for the synthesis and regulation of pigmentation in teleost fish is generally poorly understood, and bettas are no exception. Most work in this area has been done on other model organisms such as zebrafish or African cichlid fish; however, as with aggression, work done with other model organisms to identify candidate genes will be tremendously helpful in identifying the genetic basis of pigmentation in bettas.
Work to date
In 1990, genetic differences (polymorphisms at several loci) were found between four different color varieties of bettas, though the variations were noted to be small. Later experiments confirmed the presence of genetic variation in hatchery stocks in Thailand, with low average numbers of alleles per locus and high heterozygosity rates.
Notable color phenotypes in B. splendens include the marbled phenotype and the color-changing phenotype, the latter of which changes color over the course of its lifetime. While theories for the genetic basis of these phenotypes exist, scientific evidence for them is slim to nonexistent.
Other genetic work
Some of the few candidate genes identified in the betta-specific literature are immune-related genes, which were found in the first whole-body transcriptome of B. splendens obtained by high-throughput sequencing.
Breeding facility
Today, Southeast Asia dominates the breeding and distribution of Siamese fighting fish globally. Initially, male and female fry are raised together in communal tanks until the males begin to display aggression at about 4–5 months of age. Males are then kept isolated in half-pint liquor flasks, hundreds of which may be arranged tightly packed on the ground of the breeding facility. Feed, usually bloodworms, is dispensed into each flask daily. At the shoulder of each flask there is typically a cut that allows water to flow out for an easy water change. Meanwhile, females may be kept in communal tanks until shipping day. Roughly 100,000 male bettas are shipped each week from Thailand to countries around the world.
According to Dr. Amonrat Sermwatanakul, Head of Senior Fisheries Experts of the Department of Fisheries (DOF), Bangkok, Thailand, the Thai government plays a role in supporting betta fish breeders. Because the fish are not only a source of revenue for the country but also a symbol of cultural significance, breeders have become a focal point of government assistance, as they often lack the capital and resources to expand their businesses. The DOF and partner agencies created the Ornamental Fish Strategy Plan for 2013–2016, with the intention of making Thailand the number one exporter of aquarium fish in Asia. The three goals of the program were:
1) Improving quality and quantity of ornamental fish
2) Enhancing the ornamental fish trade on a domestic and international level
3) Uplifting ornamental fish farmers to become successful farmer-entrepreneurs
The program also included training in basic English, advertising on social media and the internet, pricing and classification of fish, and value-added and creative products. Dr. Sermwatanakul also highlights that around 40% of the betta fish farmers registered with the DOF are women, and claims that the industry empowers women, noting that they are actively engaged in marketing their bettas online, speak other languages, and succeed in the international market.
In popular culture
In 2019, the pla kat, or Siamese fighting fish, was officially recognized as Thailand's national aquatic animal. The Fisheries Department of Thailand had promoted this recognition the previous year, which was approved by both the National Identity Committee and the National Cultural Committee, then officially announced as adopted in February 2019.
The titular character in the novel Rumble Fish and its subsequent film adaptation is a Siamese fighting fish. In both, the character Motorcycle Boy is fascinated with the creatures and dubs them "rumble fish", speculating that if the fish were set free in the river, they would not behave so aggressively. A common misconception is that B. splendens should live in vases or bowls; however, such housing has been shown to damage their health, shorten their life expectancy, and cause negative behavioural changes.
A scene in the James Bond film From Russia with Love shows three Siamese fighting fish in an aquarium as the villain Ernst Stavro Blofeld likens the modus operandi of his criminal organisation, SPECTRE, to one of the fish that observes as the other two fight to the death, then kills the weakened victor.
In 2020, a Siamese fighting fish named Lala, kept in a home aquarium in Japan, was livestreamed successfully "completing" a copy of Pokémon Sapphire by use of a laser that followed the fish and triggered button inputs mapped on a grid behind the tank. Lala's playthrough was carried out over four months, commencing in June 2020 and concluding in November, and the experiment also led to the discovery of a previously unknown glitch that softlocked the game.
| Biology and health sciences | Acanthomorpha | null |
2599511 | https://en.wikipedia.org/wiki/Smudge%20pot | Smudge pot | A smudge pot (also known as a choofa or orchard heater) is an oil-burning device used to prevent frost on fruit trees. A smudge pot usually has a large round base with a chimney rising from the middle of the base. The pot is placed between trees in an orchard, where the burning oil produces heat, smoke, carbon dioxide, and water vapor; it was believed that this oil-burning heater would help keep the orchard from cooling too much during cold snaps.
History
In 1907, a young inventor, Willis Frederick Charles "W.C." Scheu (December 1, 1868 – April 11, 1942), then in Grand Junction, Colorado, developed an oil-burning stack heater that was more effective than open fires in heating orchards and vineyards. In 1911, he opened the Scheu Manufacturing Company in Upland, California, and began producing a line of orchard heaters; Scheu Steel was still in business as of 2021. The use of smudge pots became widespread after a disastrous freeze in Southern California on January 4–8, 1913, wiped out a whole crop.
Smudge pots were commonly used for seven decades in areas such as California's numerous citrus groves and vineyards. The Redlands district had 462,000 orchard heaters for the winter of 1932–33, reported P. E. Simpson of the supply department of the California Fruit Growers Exchange; a single refilling required 3,693,000 gallons of oil, or about 330 tank car loads. Filling all of the smudge pots in Southern California one time required 2,000 car loads.
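As a rough sanity check on these figures (a back-of-the-envelope sketch; the per-heater and per-car rates below are derived from the quoted numbers, not reported in the source):

    # Back-of-the-envelope check of the 1932-33 Redlands figures above.
    heaters = 462_000              # orchard heaters in the Redlands district
    gallons_per_refill = 3_693_000
    tank_cars = 330

    print(gallons_per_refill / heaters)    # ~8.0 gallons of oil per heater
    print(gallons_per_refill / tank_cars)  # ~11,190 gallons per tank car
    # At ~11,190 gallons per car, the 2,000 car loads cited for all of
    # Southern California imply roughly 22 million gallons per refill.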
Smudge pot use in Redlands, California groves continued into the 1970s, but fell out of favor as oil prices rose and environmental concerns increased. Pots came in two major styles: a single louvered stack above a fuel-oil–filled base, and a slightly taller version that featured a cambered, louvered neck and a galvanized re-breather feed pipe out of the side of the chimney that siphoned stack gas back into the burn chamber and produced more complete combustion. The return-stack heater was developed by the University of California and became commercially available about 1940. Filler caps have a three- or four-hole flue control. The stem into the pot usually has a piece of oil-soaked wood ("down-draft tube and wick") secured inside the neck to aid in lighting the pot. Pots are ignited when the air temperature reaches , and for each additional degree of drop, another hole is opened on the control cap ("draft regulator"). Below 25 degrees, nothing more can be done to enhance the heating effects.
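The draft-regulator rule described above amounts to a simple step function. The Python sketch below is purely illustrative: the ignition threshold is left as a parameter (its exact value is not reproduced in this text), and the hole count is capped at the three or four holes of the filler cap.

    # Illustrative sketch of the draft-regulator rule described above.
    # `ignition_temp_f` is a placeholder parameter; the exact ignition
    # temperature is not reproduced in this text.
    def open_holes(air_temp_f: float, ignition_temp_f: float, max_holes: int = 4) -> int:
        """Return the number of flue-control holes to open (0 = pot unlit)."""
        if air_temp_f > ignition_temp_f:
            return 0  # above the ignition point, the pots are not lit
        # one hole at ignition, plus one more per degree of further drop,
        # capped at the holes available in the filler cap
        drop = int(ignition_temp_f - air_temp_f)
        return min(1 + drop, max_holes)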
Types and usage
Some groves used natural gas pots on lines from a gas source, but these are not smudge pots in the usual sense, and they represented only a fraction of the smudging practice. Experiments using natural gas heaters were conducted in Rialto, California, in 1912. Sometimes, large smudge pots are used for heating large open buildings, such as mechanics' workshops. In Australia they are called "choofers" because of the noise they make when lit: "choofa choofa choofa".
Lighting an Australian "choofer" is a tricky business. Because of the voluminous clouds of oily black smoke they produce when cold, they must be lit outside. This is accomplished by holding a burning rag next to the open damper on the fuel tank. The draught caused by the breeze passing through the chimney will draw air through the open damper into the fuel tank, where the surface of the fuel inside will light and burn instantly. Once the choofer is sufficiently warm, the damper may be closed until a steady rate of burning is attained, when the characteristic "choofa choofa choofa" noise is produced. If the damper is not closed, the choofa may choke itself with its own smoke, causing periodic "explosions" of unburnt gases in the chimney. Such explosions are not dangerous, but they are noisy and they produce a lot of smoke. Once the heater is burning hot enough, the smoke will disappear and the pot may be dragged slowly and carefully inside. They still produce dangerous gas and must only be used in well-ventilated spaces.
Choofers will burn almost any combustible liquid fuel, including kerosene, diesel fuel, or used sump oil.
Prior to the development of battery-powered safety blinkers on saw-horses, many highway departments used small oil-burning safety pot markers to denote work zones, and many railroad systems still rely on oil-fired switch heaters: long tubs of fuel with wicks that fit between the ties and keep snow and ice from fouling the points of a switch. This is generally only done in yard applications; mainline switches are usually heated by natural gas heaters.
The smudge pot was also used at construction sites and other cold places to take the chill out of buildings so workers would be comfortable, and for several decades (1920s–1970s) they were used as emergency night landing illumination at remote airfields without electric runway lights, acting as a series of small bonfires.
Use in war
Smudge pots were used by the Germans, the Japanese, and the United States Navy during World War II, and by the North Vietnamese in their invasions of Laos during the Vietnam War to protect valuable targets. The oily black clouds of smoke produced by these smudge pots were intended to limit the enemy's ability to locate a target. In Vietnam, smoke from smudge pots was used as a defense against laser-guided bombs: the smoke would diffuse the laser beam and break the laser's connection with its intended target.
Other significance
The smudge pot often became a symbolic prize in Southern California high school football rivalries.
Bonita High School and San Dimas High School, affiliated with the Bonita Unified School District in Southern California, compete in varsity football for a silver-plated smudge pot.
In Redlands, California, Redlands High School and Redlands East Valley High School also compete in varsity football for a blue-and-red smudge pot. The game is known among football fans as the "Smudge Bowl".
Sometimes called a "highway torch", smaller smudge pots were historically used to warn oncoming traffic of road maintenance at night.
| Technology | Horticulture | null |
471880 | https://en.wikipedia.org/wiki/Picea%20abies | Picea abies | Picea abies, the Norway spruce or European spruce, is a species of spruce native to Northern, Central and Eastern Europe.
It has branchlets that typically hang downwards, and the largest cones of any spruce, 9–17 cm long. It is very closely related to the Siberian spruce (Picea obovata), which replaces it east of the Ural Mountains and with which it hybridizes freely. The Norway spruce has a wide distribution owing to its being widely planted for its wood, and it is the species used as the main Christmas tree in several countries around the world. It was the first gymnosperm to have its genome sequenced. The Latin specific epithet abies means "like Abies, the fir tree".
Description
Norway spruce is a large, fast-growing evergreen coniferous tree growing tall and with a trunk diameter of 1 to 1.5 m. It can grow fast when young, up to 1 m per year for the first 25 years under good conditions, but becomes slower once over tall. The shoots are orange-brown and glabrous. The leaves are needle-like with blunt tips, 12–14 mm long, quadrangular in cross-section, and dark green on all four sides with inconspicuous stomatal lines. The seed cones are 9–17 cm long (the longest of any spruce), and have bluntly to sharply triangular-pointed scale tips. They are green or reddish, maturing brown 5–7 months after pollination. The seeds are black, 4–5 mm long, with a pale brown 15 mm wing.
The tallest measured Norway spruce is tall and grows near Ribnica na Pohorju, Slovenia.
Range and ecology
The Norway spruce grows throughout Europe from Norway in the northwest and Poland eastward, and also in the mountains of central Europe, southwest to the western end of the Alps, and southeast in the Carpathians and Balkans to the extreme north of Greece. The northern limit is in the Arctic, just north of 70° N in Norway. Its eastern limit in Russia is hard to define, due to extensive hybridization and intergradation with the Siberian spruce, but is usually given as the Ural Mountains. However, trees showing some Siberian spruce characters extend as far west as much of northern Finland, with a few records in northeast Norway. The hybrid is known as Picea × fennica (or P. abies subsp. fennica, if the two taxa are considered subspecies), and can be distinguished by a tendency towards hairy shoots and cones with smoothly rounded scales.
Norway spruce cone scales are used as food by the caterpillars of the tortrix moth Cydia illutana, whereas Cydia duplicana feeds on the bark around injuries or canker.
Taxonomy
Populations in southeast Europe tend to have on average longer cones with more pointed scales; these are sometimes distinguished as Picea abies var. acuminata, but there is extensive overlap in variation with trees from other parts of the range.
Some botanists treat Siberian spruce as a subspecies of Norway spruce, though in their typical forms, they are very distinct, the Siberian spruce having cones only 5–10 cm long, with smoothly rounded scales, and pubescent shoots. Genetically Norway and Siberian spruces have turned out to be extremely similar and may be considered as two closely related subspecies of P. abies.
Another spruce with smoothly rounded cone scales and hairy shoots occurs rarely in the Central Alps in eastern Switzerland. It is also distinct in having thicker, blue-green leaves. Many texts treat this as a variant of Norway spruce, but it is as distinct as many other spruces, and appears to be more closely related to Siberian spruce (Picea obovata), Schrenk's spruce (Picea schrenkiana) from central Asia and Morinda spruce (Picea smithiana) in the Himalaya. Treated as a distinct species, it takes the name Alpine spruce (Picea alpestris). As with Siberian spruce, it hybridizes extensively with Norway spruce; pure specimens are rare. Hybrids are commonly known as Norwegian spruce, which should not be confused with the pure species Norway spruce.
Cultivation
The Norway spruce is one of the most widely planted spruces, both in and outside of its native range, and one of the most economically important coniferous species in Europe. It is used as an ornamental tree in parks and gardens, and is also widely planted for use as a Christmas tree. Every Christmas, the Norwegian capital, Oslo, provides the cities of London (the Trafalgar Square Christmas tree), Edinburgh, and Washington, D.C. with a Norway spruce, which is placed in the most central square of each city, mainly as a sign of gratitude for the aid these countries gave during the Second World War.
In North America, Norway spruce is widely planted, specifically in the Northeastern, Pacific Coast, and Rocky Mountain states, as well as in southeastern Canada. It is naturalised in some parts of North America. There are naturalized populations occurring from Connecticut to Michigan, and it is probable that they occur elsewhere. Norway spruces prefer cool-summer areas and they will grow up to USDA Growing Zone 7.
Seed production begins when the tree is in its fourth decade and total lifespan is up to 300 years in its natural range in Europe. Introduced Norway spruces in the British Isles and North America have a much shorter life expectancy. As the tree ages, its crown thins out and lower branches die off.
In the northern US and Canada, Norway spruce is reported as invasive in some locations; however, it does not pose a problem in Zone 6 and up as the seeds have a significantly reduced germination rate in areas with hot, humid summers.
The Norway spruce tolerates acidic soils well, but does not do well on dry or deficient soils. From 1928 until the 1960s it was planted on surface mine spoils in Indiana.
Cultivars
Several cultivars have been selected as ornamentals ('Barrya', 'Capitata', 'Decumbens', 'Dumosa', 'Clanbrassiliana', 'Gregoryana', 'Inversa', 'Microsperma', 'Nidiformis', 'Ohlendorffii', 'Repens', 'Tabuliformis', 'Maxwellii', 'Virgata', 'Pendula'), with a wide variety of sizes and shapes, from full-sized forest trees to extremely slow-growing, prostrate forms. They are occasionally traded under the obsolete scientific name Picea excelsa (an illegitimate name). The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Acrocona' – tall and broad
'Clanbrassiliana' – tall by broad
'Inversa' – tall by broad
'Little Gem' – tall and broad
'Nidiformis' – tall by broad
Uses
The Norway spruce is used in forestry for softwood timber and for paper production.
The Norwegian company Borregaard produces vanillin, a synthetic substitute for natural vanilla, from Norway spruce. It is currently the only company producing wood-based vanillin, which it claims its customers prefer for, among other reasons, its much lower carbon footprint compared with petrochemically synthesized vanillin.
It is esteemed as a source of tonewood by stringed-instrument makers and is commonly used for violins. One form of the tree, known as hazel spruce, grows in the European Alps and has been recognized by UNESCO as intangible cultural heritage; this form was used by Stradivari for his instruments.
As food
The tree is the source of spruce beer, which was once used to prevent and even cure scurvy.
Norway spruce shoot tips have been used in traditional Austrian medicine internally (as syrup or tea) and externally (as baths, for inhalation, as ointments, as resin application or as tea) for treatment of disorders of the respiratory tract, skin, locomotor system, gastrointestinal tract and infections.
During the production of Mont d'Or cheese, it is wrapped in a "sangle" made from the cambium of a Norway spruce for at least two weeks, which gives the cheese a unique flavour.
In Finland, Norway spruce tips (Finnish: kuusenkerkkä) are used as a spice, for example, in syrup, herbal tea, alcohol, smoothies, salt, and desserts. Spruce tip syrup is also used as a cold medicine.
Longevity
A press release from Umeå University says that a Norway spruce clone named Old Tjikko, carbon-dated as 9,550 years old, is the "oldest living tree". The oldest individual Norway spruce specimen dated by tree rings, found in 2012 in a nature reserve in Buskerud County, Norway, was 532 years old.
However, Pando, a stand of 47,000 quaking aspen clones, is estimated to be between 14,000 and one million years old.
The distinction rests on the difference between the singular "oldest tree" and the multiple "oldest trees", and between "oldest clone" and "oldest non-clone". Old Tjikko is one of a series of genetically identical stems growing from a root system, one part of which is estimated to be 9,550 years old based on carbon dating. The oldest known individual tree that does not reproduce by vegetative cloning is a Great Basin bristlecone pine over 5,000 years old (germination in 3051 BC).
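The carbon dating behind such age estimates follows the standard exponential decay law. As a minimal sketch, assuming the conventional 5,730-year half-life of carbon-14 and ignoring the tree-ring calibration that real radiocarbon work requires, a sample dated to about 9,550 years would retain roughly a third of its original carbon-14:

```python
import math

# Radiocarbon dating in outline: the age t satisfies N/N0 = exp(-lambda * t),
# where lambda = ln(2) / t_half and t_half ~= 5730 years for carbon-14.
T_HALF = 5730.0  # conventional half-life of 14C, in years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving fraction of carbon-14."""
    decay_constant = math.log(2) / T_HALF
    return -math.log(fraction_remaining) / decay_constant

def fraction_at_age(age_years: float) -> float:
    """Inverse: surviving 14C fraction for a sample of a given age."""
    return math.exp(-math.log(2) * age_years / T_HALF)

print(f"{fraction_at_age(9550):.3f}")          # ~0.315 of the 14C remains
print(f"{radiocarbon_age(0.315):,.0f} years")  # ~9,550 years
```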
Genetics
The genome of Picea abies was sequenced in 2013, the first gymnosperm genome to be completely sequenced. The genome contains approximately 20 billion base pairs and is about six times the size of the human genome, despite possessing a similar number of genes. A large proportion of the spruce genome consists of repetitive DNA sequences, including long terminal repeat transposable elements. Despite recent advances in massively parallel DNA sequencing, the assembly of such a large and repetitive genome is a particularly challenging task, mainly from a computational perspective.
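For a rough sense of that scale, the sketch below uses rounded, illustrative figures (about 19.6 Gbp for the spruce and 3.1 Gbp for the human genome), not the exact published values:

```python
# Back-of-the-envelope scale of the P. abies assembly problem,
# using rounded figures rather than the exact published values.
SPRUCE_BP = 19.6e9
HUMAN_BP = 3.1e9

print(f"size ratio: {SPRUCE_BP / HUMAN_BP:.1f}x")  # ~6.3x, i.e. "about six times"

# Even stored at 2 bits per base, the bare sequence is ~4.9 GB ...
print(f"raw sequence: {SPRUCE_BP * 2 / 8 / 1e9:.1f} GB")

# ... and a modest 30x short-read dataset is ~0.6 terabases, all of
# which an assembler must hold, index, and overlap.
coverage = 30
print(f"30x read data: {SPRUCE_BP * coverage / 1e12:.2f} Tbp")
```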
Within populations of Picea abies there is great genetic variability, which most likely reflects the populations' isolation in glacial refugia and their post-glacial evolutionary history. This diversity is particularly evident in how populations respond to climatic conditions, for example in the timing and length of the annual growth period and in differences in spring and autumn frost-hardiness. Recognizing these annual growth patterns is important when choosing the proper reforestation material of Picea abies.
Chemistry
p-Hydroxybenzoic acid glucoside, picein, piceatannol and its glucoside (astringin), isorhapontin (the isorhapontigenin glucoside), catechin and ferulic acid are phenolic compounds found in mycorrhizal and non-mycorrhizal roots of Norway spruces. Piceol and astringin are also found in P. abies.
Research
Extracts from Picea abies have shown inhibitory activity on porcine pancreatic lipase in vitro.
Synonyms
Picea abies (L.) H. Karst is the accepted name of this species. More than 150 synonyms of Picea abies have been published.
Homotypic synonyms of Picea abies are:
Pinus abies L.
Abies picea Mill.
Pinus pyramidalis Salisb.
Pinus abies subsp. vulgaris Voss
Abies abies (L.) Druce
Some heterotypic synonyms of Picea abies are:
Abies alpestris Brügger
Abies carpatica (Loudon) Ravenscr.
Abies cinerea Borkh.
Abies clambrasiliana Lavallée
Abies clanbrassiliana P. Lawson
Abies coerulescens K. Koch
Abies conica Lavallée
Abies elegans Sm. ex J.Knight
Abies eremita K.Koch
Abies erythrocarpa (Purk.) Nyman
Abies excelsa (Lam.) Poir.
Abies extrema Th.Fr.
Abies finedonensis Gordon
Abies gigantea Sm. ex Carrière
Abies gregoryana H. Low. ex Gordon
Abies inverta R. Sm. ex Gordon
Abies lemoniana Booth ex Gordon
Abies medioxima C.Lawson
Abies minuta Poir.
Abies montana Nyman
Abies parvula Knight
Abies subarctica (Schur) Nyman
Abies viminalis Wahlenb.
Picea alpestris (Brügger) Stein
Picea cranstonii Beissn.
Picea elegantissima Beissn.
Picea excelsa (Lam.) Link
Picea finedonensis Beissn.
Picea gregoryana Beissn.
Picea integrisquamis (Carrière) Chiov.
Picea maxwellii Beissn.
Picea montana Schur
Picea remontii Beissn.
Picea rubra A. Dietr.
Picea subarctica Schur
Picea velebitica Simonk. ex Kümmerle
Picea viminalis (Alstr.) Beissn.
Picea vulgaris Link
Pinus excelsa Lam.
Pinus sativa Lam.
Pinus viminalis Alstr.
| Biology and health sciences | Pinaceae | Plants |
472190 | https://en.wikipedia.org/wiki/Batrachoididae | Batrachoididae | Batrachoididae is the only family in the ray-finned fish order Batrachoidiformes. Members of this family are usually called toadfish or frogfish; both the English common name and the scientific name refer to their toad-like appearance (batrakhos is Greek for frog).
Toadfish are benthic ambush predators that favor sandy or muddy substrates, where their cryptic coloration helps them avoid detection by their prey. Toadfish are well known for their ability to "sing", males in particular using the swim bladder as a sound-production device to attract mates.
Evolution
The earliest fossil remains of toadfish are otoliths from the Early Eocene of France. The earliest articulated fossil taxa are Louckaichthys from the Oligocene of the Czech Republic and Zappaichthys from the Miocene of Austria. Bacchiaichthys from the Late Cretaceous (Maastrichtian) of Italy very closely resembles toadfish, but some of its features seem to preclude classification in the Batrachoidiformes as currently defined; nevertheless, toadfish are still thought to have diverged from their closest relatives in the Late Cretaceous.
Description
Toadfish are usually scaleless, with eyes set high on large heads. Their mouths are also large, with both a maxilla and premaxilla, and are often decorated with barbels and skin flaps. They are generally drab in colour, although those living on coral reefs may have brighter patterns. The smallest species is Thalassophryne megalops, and the largest is the Pacuma toadfish.
The gills are small and occur only on the sides of the fish. The pelvic fins sit forward of the pectoral fins, usually under the gills, and have one spine with several soft rays. There are two separate dorsal fins: the first is the smaller and bears spines, while the second has 15 to 25 soft rays. The number of vertebrae ranges from 25 to 47.
Toadfishes of the genus Porichthys, the midshipman fishes, have photophores and four lateral lines. All toadfishes possess sharp spines on the first dorsal fin and on the opercle (gill cover). In fish of the subfamily Thalassophryninae, these are hollow and connect to venom glands capable of delivering a painful wound to predators.
Distribution and habitat
Toadfishes are found worldwide. Most toadfish are marine, though some are found in brackish water and one subfamily, the Thalassophryninae, is found exclusively in freshwater habitats in South America. In particular, Daector quadrizonatus and Thalassophryne amazonica are known from the Atrato River in Colombia and the Amazon River, respectively.
Habits and reproduction
Toadfishes are bottom-dwellers, ranging from near-shore areas to deep waters. They tend to be omnivorous, eating sea worms, crustaceans, mollusks, and other fish. They often hide in rock crevices, among the bottom vegetation, or even dig dens in the bottom sediments, from which they ambush their prey. Toadfish can survive out of water for as long as 24 hours, and some can move across exposed mudflats at low tide using their fins.
Males make nests, and then attract females by "singing", that is, by contracting muscles on their swim bladders to release air. The sound has been called a "hum" or "whistle", and can be loud enough to be clearly audible at the surface. The eggs are sticky on one side, so the female can attach them to the side of the nest. Each male attracts numerous females to his nest, so the eggs within have multiple mothers.
The male then guards the nest against predators. During this period, the male must survive on a limited supply of food, as he is not able to leave the immediate vicinity to hunt. The eggs rapidly develop into embryos, but these remain attached to the side of the nest until the age of about three to four weeks. After this time, they continue to cluster around and hide behind the male, until they are large enough to fend for themselves. This degree of parental care is very unusual among fishes.
Genera
About 83 species of toadfishes are grouped into 23 genera, as follows (a quick tally of the list appears after it):
Order Batrachoidiformes
Family Batrachoididae
Subfamily Batrachoidinae
Genus Amphichthys (two species)
Genus Batrachoides (9 species)
Genus Opsanus (six species)
Genus Potamobatrachus (one species)
Genus Sanopus (six species)
Genus Vladichthys (one species)
Subfamily Halophryninae
Genus Allenbatrachus (three species)
Genus Austrobatrachus (two species)
Genus Barchatus (one species)
Genus Batrachomoeus (five species)
Genus Batrichthys (two species)
Genus Bifax (one species)
Genus Chatrabus (three species)
Genus Colletteichthys (three species)
Genus Halobatrachus - Lusitanian toadfish (one species)
Genus Halophryne (four species)
Genus Perulibatrachus (three species)
Genus Riekertia - broadbodied toadfish (one species)
Genus Triathalassothia (two species)
Subfamily Porichthyinae
Genus Aphos (one species)
Genus Porichthys - midshipmen (14 species)
Subfamily Thalassophryninae
Genus Daector (five species)
Genus Thalassophryne (six species)
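Tallying the species counts exactly as listed above (a sketch using only the numbers given in this classification) yields 23 genera and 82 species, consistent with "about 83":

```python
# Species counts per genus, copied in order from the list above.
batrachoididae = {
    "Batrachoidinae":    [2, 9, 6, 1, 6, 1],
    "Halophryninae":     [3, 2, 1, 5, 2, 1, 3, 3, 1, 4, 3, 1, 2],
    "Porichthyinae":     [1, 14],
    "Thalassophryninae": [5, 6],
}

for subfamily, counts in batrachoididae.items():
    print(f"{subfamily}: {len(counts)} genera, {sum(counts)} species")

total_genera = sum(len(v) for v in batrachoididae.values())
total_species = sum(sum(v) for v in batrachoididae.values())
print(f"total: {total_genera} genera, {total_species} species")
# -> total: 23 genera, 82 species
```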
Timeline of genera
Economics
Toadfish are not normally commercially exploited, but they are taken by local fishermen as food fish, and by trawlers, in which case they usually end up as a source of fishmeal and oil. Some smaller toadfish from brackish-water habitats have been exported as freshwater aquarium fish.
The western Atlantic species Opsanus tau, known as the oyster toadfish, is quite widely used as a research animal, while a few species, most notably Thalassophryne amazonica, are occasionally kept as aquarium fish.
| Biology and health sciences | Acanthomorpha | Animals |
472195 | https://en.wikipedia.org/wiki/Garden%20cress | Garden cress | Cress (Lepidium sativum), sometimes referred to as garden cress (or curly cress) to distinguish it from similar plants also referred to as cress (from Old English cresse), is a rather fast-growing, edible herb.
Garden cress is genetically related to watercress and mustard, sharing their peppery, tangy flavour and aroma. In some regions, garden cress is known as mustard and cress, garden pepper cress, pepperwort, pepper grass, or poor man's pepper.
This annual plant has an erect stem with many branches on its upper part. The small white to pinkish flowers are clustered in small branched racemes.
When consumed raw, cress is a high-nutrient food containing substantial content of vitamins A, C and K and several dietary minerals.
In agriculture
Cultivation of cress is practical both on a mass scale and on an individual scale. Garden cress is suitable for hydroponic cultivation and thrives in slightly alkaline water. In many local markets, demand for hydroponically grown cress can exceed the available supply, partly because cress leaves are unsuitable for distribution in dried form and so can be only partially preserved. Consumers commonly acquire cress as seeds or (in Europe) from markets as boxes of young live shoots.
Edible shoots are typically harvested one to two weeks after planting.
Culinary uses
Garden cress is added to soups, sandwiches and salads for its tangy flavour. It is also eaten as sprouts, and the fresh or dried seed pods can be used as a peppery seasoning (haloon). In the United Kingdom, cut cress shoots are commonly used in sandwiches with boiled eggs and mayonnaise.
Nutrition
Raw cress is 89% water, 6% carbohydrates (including 1% dietary fiber), 3% protein and less than 1% fat (table). In a reference quantity, raw cress supplies food energy and numerous nutrients in significant content, including vitamin K (516% of the Daily Value, DV), vitamin C (83% DV) and vitamin A (43% DV). Among dietary minerals, manganese levels are high (26% DV), while several others, including potassium and magnesium, are present in moderate content (table).
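The Daily Value percentages above are simple ratios of nutrient content to a reference intake. A minimal sketch of the arithmetic follows; the nutrient amounts are placeholders rather than the USDA figures for cress, and the reference DVs used are assumed to be the current US adult values:

```python
# %DV = (amount per serving / reference Daily Value) * 100.
# Reference DVs below are the current US adult values (assumption);
# the example amounts are placeholders, not measured cress data.
DAILY_VALUES = {"vitamin_k_ug": 120.0, "vitamin_c_mg": 90.0}

def percent_dv(nutrient: str, amount: float) -> float:
    """Percent of the Daily Value supplied by `amount` of a nutrient."""
    return amount / DAILY_VALUES[nutrient] * 100.0

# A hypothetical portion with 60 ug vitamin K and 30 mg vitamin C:
print(f"vitamin K: {percent_dv('vitamin_k_ug', 60):.0f}% DV")  # 50% DV
print(f"vitamin C: {percent_dv('vitamin_c_mg', 30):.0f}% DV")  # 33% DV
```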
Other uses
Garden cress, known as chandrashoor, and the seeds, known as aaliv or aleev in Marathi, or halloon in India, are commonly used in the system of Ayurveda. It is also known as asario in India and the Middle East where it is prized as a medicinal herb, called habbat al hamra (literally red seeds) in Arabic. In the Arabian Peninsula, the seeds are traditionally mixed with custard to make a hot drink.
L. sativum is often used in experiments to teach biology in schools. The plant grows readily on damp paper or cotton, and its fast germination and development make it useful for demonstrating plant growth.
Gallery
| Biology and health sciences | Herbs and spices | Plants |
472212 | https://en.wikipedia.org/wiki/Occipital%20lobe | Occipital lobe | The occipital lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The name derives from its position at the back of the head, from Latin roots meaning 'behind' and 'head'.
The occipital lobe is the visual processing center of the mammalian brain containing most of the anatomical region of the visual cortex. The primary visual cortex is Brodmann area 17, commonly called V1 (visual one). Human V1 is located on the medial side of the occipital lobe within the calcarine sulcus; the full extent of V1 often continues onto the occipital pole. V1 is often also called striate cortex because it can be identified by a large stripe of myelin, the stria of Gennari. Visually driven regions outside V1 are called extrastriate cortex. There are many extrastriate regions, and these are specialized for different visual tasks, such as visuospatial processing, color differentiation, and motion perception. Bilateral lesions of the occipital lobe can lead to cortical blindness (see Anton's syndrome).
Structure
The two occipital lobes are the smallest of the four paired lobes in the human brain. Located in the rearmost portion of the skull, the occipital lobes are part of the posterior cerebrum. The lobes of the brain are named after their overlying bones, and the occipital bone overlies the occipital lobes.
The lobes rest on the tentorium cerebelli, a process of dura mater that separates the cerebrum from the cerebellum. They are structurally isolated in their respective cerebral hemispheres by the separation of the cerebral fissure. At the front edge of the occipital lobe are several occipital gyri, separated by the lateral occipital sulcus.
The occipital aspects along the inside face of each hemisphere are divided by the calcarine sulcus. Above the medial, Y-shaped sulcus lies the cuneus, and the area below the sulcus is the lingual gyrus.
Damage to the primary visual areas of the occipital lobe can cause partial or complete blindness.
Function
The occipital lobe is divided into several functional visual areas. Each visual area contains a full map of the visual world. Although there are no anatomical markers distinguishing these areas (except for the prominent striations in the striate cortex), physiologists have used electrode recordings to divide the cortex into different functional regions.
The first functional area is the primary visual cortex. It contains a low-level description of the local orientation, spatial-frequency and color properties within small receptive fields. Primary visual cortex projects to the occipital areas of the ventral stream (visual area V2 and visual area V4), and the occipital areas of the dorsal stream—visual area V3, visual area MT (V5), and the dorsomedial area (DM).
The ventral stream is known for processing the "what" in vision, while the dorsal stream handles the "where/how". This is because the ventral stream provides important information for the identification of stimuli that are stored in memory. With this information in memory, the dorsal stream is able to focus on motor actions in response to the outside stimuli.
Although numerous studies have shown that the two systems are independent and structured separately from one another, there is also evidence that both are essential for successful perception, especially as the stimuli take on more complex forms. For example, a case study using fMRI examined shape and location perception. The first procedure consisted of location tasks. The second took place in a lit room where participants were shown stimuli on a screen for 600 ms. The researchers found that both pathways play a role in shape perception, even though location processing continues to lie within the dorsal stream.
The dorsomedial (DM) stream is not as thoroughly studied. However, there is some evidence suggesting that it interacts with other visual areas. A case study on monkeys revealed that information from the V1 and V2 areas makes up half of the inputs to the DM, with the remaining inputs coming from multiple other sources involved in visual processing.
A significant functional aspect of the occipital lobe is that it contains the primary visual cortex.
Retinal sensors convey stimuli through the optic tracts to the lateral geniculate bodies, where optic radiations continue to the visual cortex. Each visual cortex receives raw sensory information from the outside half of the retina on the same side of the head and from the inside half of the retina on the other side of the head. The cuneus (Brodmann's area 17) receives visual information from the contralateral superior retina representing the inferior visual field. The lingula receives information from the contralateral inferior retina representing the superior visual field. The retinal inputs pass through a "way station" in the lateral geniculate nucleus of the thalamus before projecting to the cortex. Cells on the posterior aspect of the occipital lobes' gray matter are arranged as a spatial map of the retinal field. Functional neuroimaging reveals similar patterns of response in cortical tissue of the lobes when the retinal fields are exposed to a strong pattern.
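The crossed wiring described above can be summarized as a simple lookup: each visual hemifield projects to the opposite hemisphere, the inferior visual field maps to the cuneus, and the superior field maps to the lingual gyrus. The toy function below is purely illustrative; real retinotopy is a continuous map, not four categories:

```python
# Toy encoding of the contralateral, inverted retinotopic mapping
# described in the text. Illustrative only.

def cortical_target(field_side: str, field_height: str) -> str:
    """Map a visual-field quadrant to the hemisphere and gyrus that process it."""
    hemisphere = "right" if field_side == "left" else "left"
    # Inferior visual field -> superior retina -> cuneus;
    # superior visual field -> inferior retina -> lingual gyrus.
    gyrus = "cuneus" if field_height == "inferior" else "lingual gyrus"
    return f"{hemisphere} {gyrus}"

print(cortical_target("left", "superior"))  # right lingual gyrus
print(cortical_target("left", "inferior"))  # right cuneus
```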
Clinical significance
If one occipital lobe is damaged, the result can be homonymous hemianopsia: vision loss from similarly positioned "field cuts" in each eye. Occipital lesions can cause visual hallucinations. Lesions in the parietal-temporal-occipital association area are associated with color agnosia, movement agnosia, and agraphia. Lesions near the left occipital lobe can result in pure alexia (alexia without agraphia). Damage to the primary visual cortex, located on the surface of the posterior occipital lobe, can cause blindness due to holes in the visual map on the surface of the visual cortex resulting from the lesions.
Epilepsy
Studies have linked specific neurological findings to idiopathic occipital lobe epilepsies. Occipital lobe seizures are triggered by a flash, or by a visual image that contains multiple colors. Such triggers are called flicker stimulation (usually from television), and the resulting seizures are referred to as photosensitivity seizures. Patients who have experienced occipital seizures describe them as featuring bright colors and severe blurring of vision (vomiting was also apparent in some patients). Occipital seizures are triggered mainly during the day, through television, video games or any flicker-stimulatory system. They originate from an epileptic focus confined within the occipital lobes, and may be spontaneous or triggered by external visual stimuli. Occipital lobe epilepsies are etiologically idiopathic, symptomatic, or cryptogenic. Symptomatic occipital seizures can start at any age, as well as at any stage during or after the course of the underlying causative disorder. Idiopathic occipital epilepsy usually starts in childhood. Occipital epilepsies account for approximately 5% to 10% of all epilepsies.
Additional images
| Biology and health sciences | Nervous system | Biology |