Fields: id, url, title, text, topic, section, sublist
93084
https://en.wikipedia.org/wiki/Crab
Crab
Crabs are decapod crustaceans of the infraorder Brachyura (meaning "short tail" in Greek), which typically have a very short projecting tail-like abdomen, usually hidden entirely under the thorax. They live in all the world's oceans, in freshwater, and on land. They are generally covered with a thick exoskeleton. They generally have five pairs of legs, and they have pincer claws on the ends of the frontmost pair. They first appeared during the Jurassic period, around 200 million years ago. Description Crabs are generally covered with a thick exoskeleton, composed primarily of highly mineralized chitin. Behind their pair of chelae (claws) are six walking legs and then two swimming legs. The crab breathes through gills on its underside; gills must be at least moist to work. Crabs vary in size from the pea crab, a few millimeters wide, to the Japanese spider crab, with a leg span up to . Several other groups of crustaceans with similar appearances – such as king crabs and porcelain crabs – are not true crabs, but have evolved features similar to true crabs through a process known as carcinisation. Environment Crabs are found in all of the world's oceans, as well as in fresh water and on land, particularly in tropical regions. About 850 species are freshwater crabs. Sexual differences Crabs often show marked sexual dimorphism. Males often have larger claws, a tendency that is particularly pronounced in the fiddler crabs of the genus Uca (Ocypodidae). In fiddler crabs, males have one greatly enlarged claw used for communication, particularly for attracting a mate. Another conspicuous difference is the form of the pleon (abdomen); in most male crabs, this is narrow and triangular in form, while females have a broader, rounded abdomen. This is because female crabs brood fertilised eggs on their pleopods. Life cycle Crabs attract a mate through chemical (pheromones), visual, acoustic, or vibratory means. Pheromones are used by most fully aquatic crabs, while terrestrial and semiterrestrial crabs often use visual signals, such as fiddler crab males waving their large claws to attract females. The vast number of brachyuran crabs have internal fertilisation and mate belly-to-belly. For many aquatic species, mating takes place just after the female has moulted and is still soft. Females can store the sperm for a long time before using it to fertilise their eggs. When fertilisation has taken place, the eggs are released onto the female's abdomen, below the tail flap, secured with a sticky material. In this location, they are protected during embryonic development. Females carrying eggs are called "berried" since the eggs resemble round berries. When development is complete, the female releases the newly hatched larvae into the water, where they are part of the plankton. The release is often timed with the tidal and light/dark diurnal cycle. The free-swimming tiny zoea larvae can float and take advantage of water currents. They have a spine, which probably reduces the rate of predation by larger animals. The zoea of most species must find food, but some crabs provide enough yolk in the eggs that the larval stages can continue to live off the yolk. Each species has a particular number of zoeal stages, separated by moults, before they change into a megalopa stage, which resembles an adult crab, except for having the abdomen (tail) sticking out behind. After one more moult, the crab is a juvenile, living on the bottom rather than floating in the water. 
This last moult, from megalopa to juvenile, is critical, and it must take place in a habitat that is suitable for the juvenile to survive. Most species of terrestrial crabs must migrate down to the ocean to release their larvae; in some cases, this entails very extensive migrations. After living for a short time as larvae in the ocean, the juveniles must do this migration in reverse. In many tropical areas with land crabs, these migrations often result in considerable roadkill of migrating crabs. Once crabs have become juveniles, they still have to keep moulting many more times to become adults. They are covered with a hard shell, which would otherwise prevent growth. The moult cycle is coordinated by hormones. When preparing for moult, the old shell is softened and partly eroded away, while the rudimentary beginnings of a new shell form under it. At the time of moulting, the crab takes in a lot of water to expand and crack open the old shell at a line of weakness along the back edge of the carapace. The crab must then extract all of itself – including its legs, mouthparts, eyestalks, and even the lining of the front and back of the digestive tract – from the old shell. This is a difficult process that takes many hours, and if a crab gets stuck, it will die. After freeing itself from the old shell (now called an exuvia), the crab is extremely soft and hides until its new shell has hardened. While the new shell is still soft, the crab can expand it to make room for future growth. Behaviour Crabs typically walk sideways (hence the term crabwise), because of the articulation of the legs which makes a sidelong gait more efficient. Some crabs walk forward or backward, including raninids, Libinia emarginata and Mictyris platycheles. Some crabs, like the Portunidae and Matutidae, are also capable of swimming, the Portunidae especially so as their last pair of walking legs are flattened into swimming paddles. Crabs are mostly active animals with complex behaviour patterns such as communicating by drumming or waving their pincers. Crabs tend to be aggressive toward one another, and males often fight to gain access to females. On rocky seashores, where nearly all caves and crevices are occupied, crabs may also fight over hiding holes. Fiddler crabs (genus Uca) dig burrows in sand or mud, which they use for resting, hiding, and mating, and to defend against intruders. Crabs are omnivores, feeding primarily on algae, and taking any other food, including molluscs, worms, other crustaceans, fungi, bacteria, and detritus, depending on their availability and the crab species. For many crabs, a mixed diet of plant and animal matter results in the fastest growth and greatest fitness. Some species are more specialised in their diets, based in plankton, clams or fish. Crabs are known to work together to provide food and protection for their family, and during mating season to find a comfortable spot for the female to release her eggs. Human consumption Fisheries Crabs make up 20% of all marine crustaceans caught, farmed, and consumed worldwide, amounting to 1.5 million tonnes annually. One species, Portunus trituberculatus, accounts for one-fifth of that total. Other commercially important taxa include Portunus pelagicus, several species in the genus Chionoecetes, the blue crab (Callinectes sapidus), Charybdis spp., Cancer pagurus, the Dungeness crab (Metacarcinus magister), and Scylla serrata, each of which yields more than 20,000 tonnes annually. 
In some crab species, meat is harvested by manually twisting and pulling off one or both claws and returning the live crab to the water in the knowledge that the crab may survive and regenerate the claws. Crabs as food Crabs are prepared and eaten as a dish in many different ways all over the world. Some species can be eaten whole, including the shell, as soft-shell crabs; with other species, just the claws or legs are eaten. The latter is particularly common for larger crabs, such as the snow crab. In many cultures, the roe of the female crab is also eaten, which usually appears orange or yellow in fertile crabs. This is popular in Southeast Asian cultures, some Mediterranean and Northern European cultures, and on the East, Chesapeake, and Gulf Coasts of the United States. In some regions, spices improve the culinary experience. In Southeast Asia and the Indosphere, masala crab and chilli crab are examples of heavily spiced dishes. In the Chesapeake Bay region, blue crab is often steamed with Old Bay Seasoning. Alaskan king crab or snow crab legs are usually simply boiled and served with garlic or lemon butter. For the British dish dressed crab, the crab meat is extracted and placed inside the hard shell. One American way to prepare crab meat is by extracting it and adding varying amounts of binders, such as egg white, cracker meal, mayonnaise, or mustard, creating a crab cake. Crabs can also be made into a bisque, a global dish of French origin which in its authentic form includes in the broth the pulverized shells of the shellfish from which it is made. Imitation crab, also called surimi, is made from minced fish meat that is crafted and colored to resemble crab meat. While it is sometimes disdained among some elements of the culinaryindustry as an unacceptably low-quality substitute for real crab, this does not hinder its popularity, especially as a sushi ingredient in Japan and South Korea, and in home cooking, where cost is often a chief concern. Indeed, surimi is an important source of protein in most East and Southeast Asian cultures, appearing in staple ingredients such as fish balls and fish cake. Pain Whether crustaceans as a whole experience pain or not is a scientific debate that has ethical implications for crab dish preparation. Crabs are very often boiled alive as part of the cooking process. Evolution The earliest unambiguous crab fossils date from the Early Jurassic, with the oldest being Eocarcinus from the early Pliensbachian of Britain, which likely represents a stem-group lineage, as it lacks several key morphological features that define modern crabs. Most Jurassic crabs are only known from dorsal (top half of the body) carapaces, making it difficult to determine their relationships. Crabs radiated in the Late Jurassic, corresponding with an increase in reef habitats, though they would decline at the end of the Jurassic as the result of the decline of reef ecosystems. Crabs increased in diversity through the Cretaceous and represented the dominant group of decapods by the end of the period. The crab infraorder Brachyura belongs to the group Reptantia, which consists of the walking/crawling decapods (lobsters and crabs). Brachyura is the sister clade to the infraorder Anomura, which contains the hermit crabs and relatives. The cladogram below shows Brachyura's placement within the larger order Decapoda, from analysis by Wolfe et al., 2019. 
Brachyura is separated into several sections, with the basal Dromiacea diverging the earliest in the evolutionary history, around the Late Triassic or Early Jurassic. The group consisting of Raninoida and Cyclodorippoida split off next, during the Jurassic period. The remaining clade Eubrachyura then divided during the Cretaceous period into Heterotremata and Thoracotremata. A summary of the high-level internal relationships within Brachyura can be shown in the cladogram below: There is no consensus on the relationships of the subsequent superfamilies and families. The proposed cladogram below is from analysis by Tsang et al., 2014: Classification The infraorder Brachyura contains approximately 7,000 species in 98 families, as many as the remainder of the Decapoda. The evolution of crabs is characterized by an increasingly robust body, and a reduction in the abdomen. Although many other groups have undergone similar processes, carcinisation is most advanced in crabs. The telson is no longer functional in crabs, and the uropods are absent, having probably evolved into small devices for holding the reduced abdomen tight against the sternum. In most decapods, the gonopores (sexual openings) are found on the legs. Since crabs use their first two pairs of pleopods (abdominal appendages) for sperm transfer, this arrangement has changed. As the male abdomen evolved into a slimmer shape, the gonopores have moved toward the midline, away from the legs, and onto the sternum. A similar change occurred, independently, with the female gonopores. The movement of the female gonopore to the sternum defines the clade Eubrachyura, and the later change in the position of the male gonopore defines the Thoracotremata. It is still a subject of debate whether a monophyletic group is formed by those crabs where the female, but not male, gonopores are situated on the sternum. Families Numbers of extant and extinct (†) species are given in brackets. The superfamily Eocarcinoidea, containing Eocarcinus and Platykotta, was formerly thought to contain the oldest crabs; it is now considered part of the Anomura. 
Section †Callichimaeroida †Callichimaeroidea (1†) Section Dromiacea †Dakoticancroidea (6†) Dromioidea (147, 85†) Glaessneropsoidea (45†) Homolodromioidea (24, 107†) Homoloidea (73, 49†) Section Raninoida (46, 196†) Section Cyclodorippoida (99, 27†) Section Eubrachyura Subsection Heterotremata Aethroidea (37, 44†) Bellioidea (7) Bythograeoidea (14) Calappoidea (101, 71†) Cancroidea (57, 81†) Carpilioidea (4, 104†) Cheiragonoidea (3, 13†) Corystoidea (10, 5†) †Componocancroidea (1†) Dairoidea (4, 8†) Dorippoidea (101, 73†) Eriphioidea (67, 14†) Gecarcinucoidea (349) Goneplacoidea (182, 94†) Hexapodoidea (21, 25†) Leucosioidea (488, 113†) Majoidea (980, 89†) Orithyioidea (1) Palicoidea (63, 6†) Parthenopoidea (144, 36†) Pilumnoidea (405, 47†) Portunoidea (455, 200†) Potamoidea (662, 8†) Pseudothelphusoidea (276) Pseudozioidea (22, 6†) Retroplumoidea (10, 27†) Trapezioidea (58, 10†) Trichodactyloidea (50) Xanthoidea (736, 134†) Subsection Thoracotremata Cryptochiroidea (46) Grapsoidea (493, 28†) Ocypodoidea (304, 14†) Pinnotheroidea (304, 13†) Recent studies have found the following superfamilies and families to not be monophyletic, but rather paraphyletic or polyphyletic: The Thoracotremata superfamily Grapsoidea is polyphyletic The Thoracotremata superfamily Ocypodoidea is polyphyletic The Heterotremata superfamily Calappoidea is polyphyletic The Heterotremata superfamily Eriphioidea is polyphyletic The Heterotremata superfamily Goneplacoidea is polyphyletic The Heterotremata superfamily Potamoidea is paraphyletic with respect to Gecarcinucoidea, which is resolved by placing Gecarcinucidae within Potamoidea The Majoidea families Epialtidae, Mithracidae and Majidae are polyphyletic with respect to each other The Dromioidea family Dromiidae may be paraphyletic with respect to Dynomenidae The Homoloidea family Homolidae is paraphyletic with respect to Latreilliidae The Xanthoidea family Xanthidae is paraphyletic with respect to Panopeidae Cultural influences Both the constellation Cancer and the astrological sign Cancer are named after the crab, and depicted as a crab. William Parsons, 3rd Earl of Rosse drew the Crab Nebula in 1848 and noticed its similarity to the animal; the Crab Pulsar lies at the centre of the nebula. The Moche people of ancient Peru worshipped nature, especially the sea, and often depicted crabs in their art. In Greek mythology, Karkinos was a crab that came to the aid of the Lernaean Hydra as it battled Heracles. One of Rudyard Kipling's Just So Stories, The Crab that Played with the Sea, tells the story of a gigantic crab who made the waters of the sea go up and down, like the tides. The auction for the crab quota in 2019, Russia is the largest revenue auction in the world except the spectrum auctions. In Malay mythology (as related by Hugh Clifford to Walter William Skeat), ocean tides are believed to be caused by water rushing in and out of a hole in the Navel of the Seas (Pusat Tasek), where "there sits a gigantic crab which twice a day gets out in order to search for food". The Kapsiki people of North Cameroon use the way crabs handle objects for divination. The term crab mentality is derived from a type of detrimental social behavior observed in crabs. Explanatory notes
Biology and health sciences
Crustaceans
null
93099
https://en.wikipedia.org/wiki/Ship%20of%20the%20line
Ship of the line
A ship of the line was a type of naval warship constructed during the Age of Sail from the 17th century to the mid-19th century. The ship of the line was designed for the naval tactic known as the line of battle, which involved the two columns of opposing warships manoeuvering to volley fire with the cannons along their broadsides. In conflicts where opposing ships were both able to fire from their broadsides, the faction with more cannons firingand therefore more firepowertypically had an advantage. From the end of the 1840s, the introduction of steam power brought less dependence on the wind in battle and led to the construction of screw-driven wooden-hulled ships of the line; a number of purely sail-powered ships were converted to this propulsion mechanism. However, the rise of the ironclad frigate, starting in 1859, made steam-assisted ships of the line obsolete. The ironclad warship became the ancestor of the 20th-century battleship, whose very designation is itself a contraction of the phrase "ship of the line of battle" or, more colloquially, "battleship of the line". The term "ship of the line" fell into disuse except in historical contexts, after warships and naval tactics evolved and changed from the mid-19th century. Some other languages did keep the name however; the Imperial German Navy called its battleships Linienschiffe until World War I. History Predecessors The heavily armed carrack, first developed in Spain and Portugal for either trade or war in the Atlantic Ocean, was the precursor of the ship of the line. Other maritime European states quickly adopted it in the late 15th and early 16th centuries. These vessels were developed by fusing aspects of the cog of the North Sea and galley of the Mediterranean Sea. The cogs, which traded in the North Sea, in the Baltic Sea and along the Atlantic coasts, had an advantage over galleys in battle because they had raised platforms called "castles" at bow and stern that archers could occupy to fire down on enemy ships or even to drop heavy weights from. At the bow, for instance, the castle was called the forecastle (usually contracted as fo'c'sle or fo'c's'le, and pronounced FOHK-səl). Over time these castles became higher and larger, and eventually were built into the structure of the ship, increasing overall strength. This aspect of the cog remained in the newer-style carrack designs and proved its worth in battles like that at Diu in 1509. The Mary Rose was an early 16th-century English carrack or "great ship". She was heavily armed with 78 guns and 91 after an upgrade in the 1530s. Built in Portsmouth in 1510–1512, she was one of the earliest purpose-built men-of-war in the English navy. She was over 500 tons burthen and had a keel of over and a crew of over 200 sailors, composed of 185 soldiers and 30 gunners. Although the pride of the English fleet, she accidentally sank during the Battle of the Solent, 19 July 1545. Henri Grâce à Dieu (English: "Henry Grace of God"), nicknamed "Great Harry", was another early English carrack. Contemporary with Mary Rose, Henri Grâce à Dieu was long, measuring 1,000–1,500 tons burthen and having a complement of 700–1,000. She was ordered by Henry VIII in response to the Scottish ship Michael, launched in 1511. She was originally built at Woolwich Dockyard from 1512 to 1514 and was one of the first vessels to feature gunports and had twenty of the new heavy bronze cannon, allowing for a broadside. In all, she mounted 43 heavy guns and 141 light guns. 
She was the first English two-decker, and when launched she was the largest and most powerful warship in Europe, but she saw little action. She was present at the Battle of the Solent against Francis I of France in 1545 (in which Mary Rose sank) but appears to have been more of a diplomatic vessel, sailing on occasion with sails of gold cloth. Indeed, the great ships were almost as well known for their ornamental design (some ships, like the Vasa, were gilded on their stern scrollwork) as they were for the power they possessed. Carracks fitted for war carried large-calibre guns aboard. Because of their higher freeboard and greater load-bearing ability, this type of vessel was better suited than the galley to wield gunpowder weapons. Because of their development for conditions in the Atlantic, these ships were more weatherly than galleys and better suited to open waters. The lack of oars meant that large crews were unnecessary, making long journeys more feasible. Their disadvantage was that they were entirely reliant on the wind for mobility. Galleys could still overwhelm great ships, especially when there was little wind and they had a numerical advantage, but as great ships increased in size, galleys became less and less useful. Another detriment was the high forecastle, which interfered with the sailing qualities of the ship; the bow would be forced low into the water while sailing before the wind. But as guns were introduced and gunfire replaced boarding as the primary means of naval combat during the 16th century, the medieval forecastle was no longer needed, and later ships such as the galleon had only a low, one-deck-high forecastle. By the time of the 1637 launching of England's Sovereign of the Seas, the forecastle had disappeared altogether. During the 16th century the galleon evolved from the carrack. It was a narrower ship, with a much reduced forecastle, and was much more manoeuvrable than the carrack. It was particularly favored from an early date by the Spanish for their trans-Atlantic trade. The main ships of the English and Spanish fleets in the Battle of Gravelines of 1588 were galleons; all of the English and most of the Spanish galleons survived the battle and the great storm on the voyage home, even though the Spanish galleons had suffered the heaviest attacks from the English while regrouping their scattered fleet. By the 17th century every major European naval power was building ships like these. With the growing importance of colonies and exploration and the need to maintain trade routes across stormy oceans, galleys and galleasses (a larger, higher type of galley with side-mounted guns, but lower than a galleon) were used less and less, and only in ever more restricted purposes and areas, so that by about 1750, with a few notable exceptions, they were of little use in naval battles. Line-of-battle adoption King Erik XIV of Sweden initiated construction of the ship in 1563; this might have been the first attempt of this battle tactic, roughly 50 years ahead of widespread adoption of the line of battle strategy. Mars was likely the largest ship in the world at the time of her build, equipped with 107 guns at a full-length of . Mars became the first ship to be sunk by gunfire from other ships in a naval battle. In the early to mid-17th century, several navies, particularly those of the Netherlands and England, began to use new fighting techniques. 
Previously battles had usually been fought by great fleets of ships closing with each other and fighting in whatever arrangement they found themselves in, often boarding enemy vessels as opportunities presented themselves. As the use of broadsides (coordinated fire by the battery of cannon on one side of a warship) became increasingly dominant in battle, tactics changed. The evolving line-of-battle tactic, first used in an ad hoc way, required ships to form single-file lines and close with the enemy fleet on the same tack, battering the enemy fleet until one side had had enough and retreated. Any manoeuvres would be carried out with the ships remaining in line for mutual protection. In order that this order of battle, this long thin line of guns, may not be injured or broken at some point weaker than the rest, there is at the same time felt the necessity of putting in it only ships which, if not of equal force, have at least equally strong sides. Logically it follows, at the same moment in which the line ahead became definitively the order for battle, there was established the distinction between the ships 'of the line', alone destined for a place therein, and the lighter ships meant for other uses. The lighter ships were used for various functions, including acting as scouts, and relaying signals between the flagship and the rest of the fleet. This was necessary because from the flagship, only a small part of the line would be in clear sight. The adoption of line-of-battle tactics had consequences for ship design. The height advantage given by the castles fore and aft was reduced, now that hand-to-hand combat was less essential. The need to manoeuvre in battle made the top weight of the castles more of a disadvantage. So they shrank, making the ship of the line lighter and more manoeuvrable than its forebears for the same combat power. As an added consequence, the hull itself grew larger, allowing the size and number of guns to increase as well. Evolution of design In the 17th century fleets could consist of almost a hundred ships of various sizes, but by the middle of the 18th century, ship-of-the-line design had settled on a few standard types: older two-deckers (i.e., with two complete decks of guns firing through side ports) of 50 guns (which were too weak for the battle line but could be used to escort convoys), two-deckers of between 64 and 90 guns that formed the main part of the fleet, and larger three- or even four-deckers with 98 to 140 guns that served as admirals' command ships. Fleets consisting of perhaps 10 to 25 of these ships, with their attendant supply ships and scouting and messenger frigates, kept control of the sea lanes for major European naval powers whilst restricting the sea-borne trade of enemies. The most common size of sail ship of the line was the "74" (named for its 74 guns), originally developed by France in the 1730s, and later adopted by all battleship navies. Until this time the British had 6 sizes of ship of the line, and they found that their smaller 50- and 60-gun ships were becoming too small for the battle line, while their 80s and over were three-deckers and therefore unwieldy and unstable in heavy seas. Their best were 70-gun three-deckers of about long on the gundeck, while the new French 74s were around . In 1747 the British captured a few of these French ships during the War of Austrian Succession. 
In the next decade Thomas Slade (Surveyor of the Navy from 1755, along with co-Surveyor William Bately) broke away from the past and designed several new classes of 74s to compete with these French designs, starting with the and classes. Their successors gradually improved handling and size through the 1780s. Other navies ended up building 74s also as they had the right balance between offensive power, cost, and manoeuvrability. Eventually around half of Britain's ships of the line were 74s. Larger vessels were still built, as command ships, but they were more useful only if they could definitely get close to an enemy, rather than in a battle involving chasing or manoeuvring. The 74 remained the favoured ship until 1811, when Seppings's method of construction enabled bigger ships to be built with more stability. In a few ships the design was altered long after the ship was launched and in service. In the Royal Navy, smaller two-deck 74- or 64-gun ships of the line that could not be used safely in fleet actions had their upper decks removed (or razeed), resulting in a very stout, single-gun-deck warship called a razee. The resulting razeed ship could be classed as a frigate and was still much stronger. The most successful razeed ship in the Royal Navy was , commanded by Sir Edward Pellew. The Spanish ship , was a Spanish first-rate ship of the line with 112 guns. This was increased in 1795–96 to 130 guns by closing in the spar deck between the quarterdeck and forecastle, and around 1802 to 140 guns, thus creating what was in effect a continuous fourth gundeck although the extra guns added were actually relatively small. She was the heaviest-armed ship in the world when rebuilt, and bore the most guns of any ship of the line outfitted in the Age of Sail. (1829), ordered by the Ottoman Sultan Mahmud II and built by the Imperial Naval Arsenal on the Golden Horn in Istanbul, was for many years the largest warship in the world. The ship of the line was armed with 128 cannons on three decks and was manned by 1,280 sailors. She participated in the Siege of Sevastopol (1854–1855) during the Crimean War (1854–1856). She was decommissioned in 1874. The second largest sailing three-decker ship of the line ever built in the West and the biggest French ship of the line was the , launched in 1847. She had vertical sides, which increased significantly the space available for upper batteries, but reduced the stability of the ship; wooden stabilisers were added under the waterline to address the issue. Valmy was thought to be the largest sort of sailing ship possible, as larger dimensions made the manoeuvre of riggings impractical with mere manpower. She participated in the Crimean War, and after her return to France later housed the French Naval Academy under the name Borda from 1864 to 1890. Steam power The first major change to the ship-of-the-line concept was the introduction of steam power as an auxiliary propulsion system. The first military uses of steamships came in the 1810s, and in the 1820s a number of navies experimented with paddle steamer warships. Their use spread in the 1830s, with paddle-steamer warships participating in conflicts like the First Opium War alongside ships of the line and frigates. Paddle steamers, however, had major disadvantages. The paddle wheel above the waterline was exposed to enemy fire, while itself preventing the ship from firing broadsides effectively. 
During the 1840s, the screw propeller emerged as the most likely method of steam propulsion, with both Britain and the US launching screw-propelled warships in 1843. Through the 1840s, the British and French navies launched ever larger and more powerful screw ships, alongside sail-powered ships of the line. In 1845, Viscount Palmerston gave an indication of the role of the new steamships in tense Anglo-French relations, describing the English Channel as a "steam bridge", rather than a barrier to French invasion. It was partly because of the fear of war with France that the Royal Navy converted several old 74-gun ships of the line into 60-gun steam-powered blockships (following the model of Fulton's ), starting in 1845. The blockships were "originally conceived as steam batteries solely for harbour defence, but in September 1845 they were given a reduced [sailing] rig rather than none at all, to make them sea-going ships.… The blockships were to be a cost-effective experiment of great value." They subsequently gave good service in the Crimean War. The French Navy, however, developed the first purpose-built steam battleship with the 90-gun in 1850. She is also considered the first true steam battleship, and the first screw battleship ever. Napoléon was armed as a conventional ship of the line, but her steam engines could give her a speed of , regardless of the wind conditionsa potentially decisive advantage in a naval engagement. Eight sister ships to Napoléon were built in France over a period of ten years, but the United Kingdom soon took the lead in production, in number of both purpose-built and converted units. Altogether, France built 10 new wooden steam battleships and converted 28 from older battleship units, while the United Kingdom built 18 and converted 41. In the end, France and Britain were the only two countries to develop fleets of wooden steam screw battleships, although several other navies made some use of a mixture of screw battleships and paddle-steamer frigates. These included Russia, Turkey, Sweden, Naples, Prussia, Denmark, and Austria. Decline In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Ottoman frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. In the 1860s unarmoured steam line-of-battle ships were replaced by ironclad warships. In the American Civil War, on March 8, 1862, during the first day of the Battle of Hampton Roads, two unarmoured Union wooden frigates were sunk and destroyed by the Confederate ironclad . However, the power implied by the ship of the line would find its way into the ironclad, which would develop during the next few decades into the concept of the battleship. Several navies still use terms equivalent to the "ship of the line" for battleships, including the German (Linienschiff) and Russian (lineyniy korabl` (лине́йный кора́бль) or linkor (линкор) in short) navies. Combat In the North Sea and Atlantic Ocean, the fleets of the Royal Navy, the Netherlands, France, Spain and Portugal fought numerous battles. In the Baltic, the Scandinavian kingdoms and Russia did likewise, while in the Mediterranean Sea, the Ottoman Empire, Spain, France, Britain and the various Barbary pirates battled. By the eighteenth century, the UK had established itself as the world's preeminent naval power. Attempts by Napoleon to challenge the Royal Navy's dominance at sea proved a colossal failure. 
During the Napoleonic Wars, Britain defeated French and allied fleets decisively all over the world including in the Caribbean at the Battle of Cape St. Vincent, the Bay of Aboukir off the Egyptian coast at the Battle of the Nile in 1798, near Spain at the Battle of Trafalgar in 1805, and in the second Battle of Copenhagen (1807). The UK emerged from the Napoleonic Wars in 1815 with the largest and most professional navy in the world, composed of hundreds of wooden, sail-powered ships of all sizes and classes. Overwhelming firepower was of no use if it could not be brought to bear which was not always possible against the smaller leaner ships used by Napoleon's privateers, operating from French New World territories. The Royal Navy compensated by deploying numerous Bermuda sloops. Similarly, many of the East India Company's merchant vessels became lightly armed and quite competent in combat during this period, operating a convoy system under an armed merchantman, instead of depending on small numbers of more heavily armed ships which while effective, slowed the flow of commerce. Restorations and preservation The only original ship of the line remaining today is HMS Victory, preserved as a museum in Portsmouth to appear as she was while under Admiral Horatio Nelson at the Battle of Trafalgar in 1805. Although Victory has been in dry dock since the 1920s, she is still a fully commissioned warship in the Royal Navy and is the oldest commissioned warship in any navy worldwide. Regalskeppet Vasa sank in lake Mälaren in 1628 and was lost until 1956. She was then raised intact, in remarkably good condition, in 1961 and is presently on display at the Vasa Museum in Stockholm, Sweden. At the time she was the largest Swedish warship ever built. Today the Vasa Museum is the most visited museum in Sweden. The last ship-of-the-line afloat was the French ship Duguay-Trouin, renamed after being captured by the British, which survived until 1949. The last ship-of-the-line to be sunk by enemy action was , which was sunk by an air raid in 1940, during the Second World War; she was briefly re-floated in 1948 before being broken up. List List of ships of the line of Denmark List of ships of the line of the Dutch Republic List of ships of the line of France List of ships of the line of Spain List of ships of the line of Italy List of ships of the line of Malta List of ships of the line of the Ottoman Empire List of ships of the line of Russia List of ships of the line of the Royal Swedish Navy List of ships of the line of the Royal Navy List of ships of the line of the United States Navy
Technology
Naval warfare
null
93188
https://en.wikipedia.org/wiki/Triple-alpha%20process
Triple-alpha process
The triple-alpha process is a set of nuclear fusion reactions by which three helium-4 nuclei (alpha particles) are transformed into carbon. Triple-alpha process in stars Helium accumulates in the cores of stars as a result of the proton–proton chain reaction and the carbon–nitrogen–oxygen cycle. The nuclear fusion of two helium-4 nuclei produces beryllium-8, which is highly unstable and decays back into smaller nuclei with an extremely short half-life, unless within that time a third alpha particle fuses with the beryllium-8 nucleus to produce an excited resonance state of carbon-12, called the Hoyle state, which nearly always decays back into three alpha particles, but once in about 2421.3 times releases energy and changes into the stable base form of carbon-12. When a star runs out of hydrogen to fuse in its core, it begins to contract and heat up. If the central temperature rises to 10⁸ K, six times hotter than the Sun's core, alpha particles can fuse fast enough to get past the beryllium-8 barrier and produce significant amounts of stable carbon-12:

4He + 4He → 8Be  (−0.0918 MeV)
8Be + 4He → 12C + 2 γ  (+7.367 MeV)

The net energy release of the process is 7.275 MeV. As a side effect of the process, some carbon nuclei fuse with additional helium to produce a stable isotope of oxygen and energy:

12C + 4He → 16O + γ  (+7.162 MeV)

Nuclear fusion of helium with hydrogen produces lithium-5, which also is highly unstable and decays back into smaller nuclei with an extremely short half-life. Fusing with additional helium nuclei can create heavier elements in a chain of stellar nucleosynthesis known as the alpha process, but these reactions are only significant at higher temperatures and pressures than in cores undergoing the triple-alpha process. This creates a situation in which stellar nucleosynthesis produces large amounts of carbon and oxygen, but only a small fraction of those elements are converted into neon and heavier elements. Oxygen and carbon are the main "ash" of helium-4 burning. Primordial carbon The triple-alpha process is ineffective at the pressures and temperatures early in the Big Bang. One consequence of this is that no significant amount of carbon was produced in the Big Bang. Resonances Ordinarily, the probability of the triple-alpha process is extremely small. However, the beryllium-8 ground state has almost exactly the energy of two alpha particles. In the second step, 8Be + 4He has almost exactly the energy of an excited state of 12C. This resonance greatly increases the probability that an incoming alpha particle will combine with beryllium-8 to form carbon. The existence of this resonance was predicted by Fred Hoyle before its actual observation, based on the physical necessity for it to exist, in order for carbon to be formed in stars. The prediction and then discovery of this energy resonance and process gave very significant support to Hoyle's hypothesis of stellar nucleosynthesis, which posited that all chemical elements had originally been formed from hydrogen, the true primordial substance. The anthropic principle has been cited to explain the fact that nuclear resonances are sensitively arranged to create large amounts of carbon and oxygen in the universe. Nucleosynthesis of heavy elements With further increases of temperature and density, fusion processes produce nuclides only up to nickel-56 (which decays later to iron); heavier elements (those beyond Ni) are created mainly by neutron capture. The slow capture of neutrons, the s-process, produces about half of elements beyond iron. 
The other half are produced by rapid neutron capture, the r-process, which probably occurs in core-collapse supernovae and neutron star mergers. Reaction rate and stellar evolution The triple-alpha steps are strongly dependent on the temperature and density of the stellar material. The power released by the reaction is approximately proportional to the temperature to the 40th power, and the density squared. In contrast, the proton–proton chain reaction produces energy at a rate proportional to the fourth power of temperature, the CNO cycle at about the 17th power of the temperature, and both are linearly proportional to the density. This strong temperature dependence has consequences for the late stage of stellar evolution, the red-giant stage. For lower mass stars on the red-giant branch, the helium accumulating in the core is prevented from further collapse only by electron degeneracy pressure. The entire degenerate core is at the same temperature and pressure, so when its density becomes high enough, fusion via the triple-alpha process rate starts throughout the core. The core is unable to expand in response to the increased energy production until the pressure is high enough to lift the degeneracy. As a consequence, the temperature increases, causing an increased reaction rate in a positive feedback cycle that becomes a runaway reaction. This process, known as the helium flash, lasts a matter of seconds but burns 60–80% of the helium in the core. During the core flash, the star's energy production can reach approximately 1011 solar luminosities which is comparable to the luminosity of a whole galaxy, although no effects will be immediately observed at the surface, as the whole energy is used up to lift the core from the degenerate to normal, gaseous state. Since the core is no longer degenerate, hydrostatic equilibrium is once more established and the star begins to "burn" helium at its core and hydrogen in a spherical layer above the core. The star enters a steady helium-burning phase which lasts about 10% of the time it spent on the main sequence (the Sun is expected to burn helium at its core for about a billion years after the helium flash). In higher mass stars, which evolve along the asymptotic giant branch, carbon and oxygen accumulate in the core as helium is burned, while hydrogen burning shifts to further-out layers, resulting in an intermediate helium shell. However, the boundaries of these shells do not shift outward at the same rate due to differing critical temperatures and temperature sensitivities for hydrogen and helium burning. When the temperature at the inner boundary of the helium shell is no longer high enough to sustain helium burning, the core contracts and heats up, while the hydrogen shell (and thus the star's radius) expand outward. Core contraction and shell expansion continue until the core becomes hot enough to reignite the surrounding helium. This process continues cyclically – with a period on the order of 1000 years – and stars undergoing this process have periodically variable luminosity. These stars also lose material from their outer layers in a stellar wind driven by radiation pressure, which ultimately becomes a superwind as the star enters the planetary nebula phase. Discovery The triple-alpha process is highly dependent on carbon-12 and beryllium-8 having resonances with slightly more energy than helium-4. Based on known resonances, by 1952 it seemed impossible for ordinary stars to produce carbon as well as any heavier element. 
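The approximate scalings quoted above can be written compactly as power laws; this is an illustrative sketch of the proportionalities stated in the text, not a full rate expression:

```latex
% Approximate power-law scalings quoted in the text above (illustrative only)
\varepsilon_{3\alpha} \;\propto\; \rho^{2}\, T^{40}, \qquad
\varepsilon_{\mathrm{pp}} \;\propto\; \rho\, T^{4}, \qquad
\varepsilon_{\mathrm{CNO}} \;\propto\; \rho\, T^{17}
```

Under these scalings, a 5% rise in temperature boosts the triple-alpha output by roughly 1.05^40 ≈ 7, compared with only about 1.05^4 ≈ 1.2 for the proton–proton chain, which is why helium ignition in a degenerate core can run away as the helium flash described above.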
Nuclear physicist William Alfred Fowler had noted the beryllium-8 resonance, and Edwin Salpeter had calculated the reaction rate for 8Be, 12C, and 16O nucleosynthesis taking this resonance into account. However, Salpeter calculated that red giants burned helium at temperatures of 2·10⁸ K or higher, whereas other recent work hypothesized temperatures as low as 1.1·10⁸ K for the core of a red giant. Salpeter's paper mentioned in passing the effects that unknown resonances in carbon-12 would have on his calculations, but the author never followed up on them. It was instead astrophysicist Fred Hoyle who, in 1953, used the abundance of carbon-12 in the universe as evidence for the existence of a carbon-12 resonance. The only way Hoyle could find that would produce an abundance of both carbon and oxygen was through a triple-alpha process with a carbon-12 resonance near 7.68 MeV, which would also eliminate the discrepancy in Salpeter's calculations. Hoyle went to Fowler's lab at Caltech and said that there had to be a resonance of 7.68 MeV in the carbon-12 nucleus. (There had been reports of an excited state at about 7.5 MeV.) Fred Hoyle's audacity in doing this is remarkable, and initially, the nuclear physicists in the lab were skeptical. Finally, a junior physicist, Ward Whaling, fresh from Rice University, who was looking for a project, decided to look for the resonance. Fowler permitted Whaling to use an old Van de Graaff generator that was not being used. Hoyle was back in Cambridge when Fowler's lab discovered a carbon-12 resonance near 7.65 MeV a few months later, validating his prediction. The nuclear physicists put Hoyle as first author on a paper delivered by Whaling at the summer meeting of the American Physical Society. A long and fruitful collaboration between Hoyle and Fowler soon followed, with Fowler even coming to Cambridge. The final reaction product lies in a 0+ state (spin 0 and positive parity). Since the Hoyle state was predicted to be either a 0+ or a 2+ state, electron–positron pairs or gamma rays were expected to be seen. However, when experiments were carried out, the gamma emission reaction channel was not observed, and this meant the state must be a 0+ state. This state completely suppresses single gamma emission, since single gamma emission must carry away at least 1 unit of angular momentum. Pair production from an excited 0+ state is possible because their combined spins (0) can couple to a reaction that has a change in angular momentum of 0. Improbability and fine-tuning Carbon is a necessary component of all known life. 12C, a stable isotope of carbon, is abundantly produced in stars due to three factors: The decay lifetime of a 8Be nucleus is four orders of magnitude larger than the time for two 4He nuclei (alpha particles) to scatter. An excited state of the 12C nucleus exists a little (0.3193 MeV) above the energy level of 8Be + 4He. This is necessary because the ground state of 12C is 7.3367 MeV below the energy of 8Be + 4He; a 8Be nucleus and a 4He nucleus cannot reasonably fuse directly into a ground-state 12C nucleus. However, 8Be and 4He use the kinetic energy of their collision to fuse into the excited 12C (kinetic energy supplies the additional 0.3193 MeV necessary to reach the excited state), which can then transition to its stable ground state. 
According to one calculation, the energy level of this excited state must be between about 7.3 MeV and 7.9 MeV to produce sufficient carbon for life to exist, and must be further "fine-tuned" to between 7.596 MeV and 7.716 MeV in order to produce the abundant level of 12C observed in nature. The Hoyle state has been measured to be about 7.65 MeV above the ground state of 12C. In the reaction 12C + 4He → 16O, there is an excited state of oxygen which, if it were slightly higher, would provide a resonance and speed up the reaction. In that case, insufficient carbon would exist in nature; almost all of it would have converted to oxygen. Some scholars argue the 7.656 MeV Hoyle resonance, in particular, is unlikely to be the product of mere chance. Fred Hoyle argued in 1982 that the Hoyle resonance was evidence of a "superintellect"; Leonard Susskind in The Cosmic Landscape rejects Hoyle's intelligent design argument. Instead, some scientists believe that different universes, portions of a vast "multiverse", have different fundamental constants: according to this controversial fine-tuning hypothesis, life can only evolve in the minority of universes where the fundamental constants happen to be fine-tuned to support the existence of life. Other scientists reject the hypothesis of the multiverse on account of the lack of independent evidence.
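As a consistency check, the energy values quoted in this article fit together as follows (all numbers are taken from the text above):

```latex
% Hoyle-state energy relative to the 12C ground state, from values in the text
E_{\mathrm{Hoyle}} - E_{^{12}\mathrm{C}\,(\mathrm{g.s.})}
  \approx 7.3367\ \mathrm{MeV} + 0.3193\ \mathrm{MeV} = 7.656\ \mathrm{MeV}

% Net energy release of the triple-alpha chain, from the two reaction steps
Q_{3\alpha} = -0.0918\ \mathrm{MeV} + 7.367\ \mathrm{MeV} \approx 7.275\ \mathrm{MeV}
```

The first sum matches the measured Hoyle-state energy of about 7.65 MeV quoted above, and the second reproduces the stated net release of 7.275 MeV.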
Physical sciences
Stellar astronomy
Astronomy
93817
https://en.wikipedia.org/wiki/Data%20type
Data type
In computer science and computer programming, a data type (or simply type) is a collection or grouping of data values, usually specified by a set of possible values, a set of allowed operations on these values, and/or a representation of these values as machine types. A data type specification in a program constrains the possible values that an expression, such as a variable or a function call, might take. On literal data, it tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support basic data types of integer numbers (of varying sizes), floating-point numbers (which approximate real numbers), characters and Booleans. Concept A data type may be specified for many reasons: similarity, convenience, or to focus the attention. It is frequently a matter of good organization that aids the understanding of complex definitions. Almost all programming languages explicitly include the notion of data type, though the possible data types are often restricted by considerations of simplicity, computability, or regularity. An explicit data type declaration typically allows the compiler to choose an efficient machine representation, but the conceptual organization offered by data types should not be discounted. Different languages may use different data types or similar types with different semantics. For example, in the Python programming language, int represents an arbitrary-precision integer which has the traditional numeric operations such as addition, subtraction, and multiplication. However, in the Java programming language, the type int represents the set of 32-bit integers ranging in value from −2,147,483,648 to 2,147,483,647, with arithmetic operations that wrap on overflow. In Rust this 32-bit integer type is denoted i32 and panics on overflow in debug mode. Most programming languages also allow the programmer to define additional data types, usually by combining multiple elements of other types and defining the valid operations of the new data type. For example, a programmer might create a new data type named "complex number" that would include real and imaginary parts, or a color data type represented by three bytes denoting the amounts each of red, green, and blue, and a string representing the color's name. Data types are used within type systems, which offer various ways of defining, implementing, and using them. In a type system, a data type represents a constraint placed upon the interpretation of data, describing representation, interpretation and structure of values or objects stored in computer memory. The type system uses data type information to check correctness of computer programs that access or manipulate the data. A compiler may use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-precision floating point numbers. They will thus use floating-point-specific microprocessor operations on those values (floating-point addition, multiplication, etc.). Most data types in statistics have comparable types in computer programming, and vice versa, as shown in the following table: Definition identified five definitions of a "type" that were used—sometimes implicitly—in the literature: Syntactic A type is a purely syntactic label associated with a variable when it is declared. 
Although useful for advanced type systems such as substructural type systems, such definitions provide no intuitive meaning of the types. Representation A type is defined in terms of a composition of more primitive types—often machine types. Representation and behaviour A type is defined as its representation and a set of operators manipulating these representations. Value space A type is a set of possible values which a variable can possess. Such definitions make it possible to speak about (disjoint) unions or Cartesian products of types. Value space and behaviour A type is a set of values which a variable can possess and a set of functions that one can apply to these values. The definition in terms of a representation was often done in imperative languages such as ALGOL and Pascal, while the definition in terms of a value space and behaviour was used in higher-level languages such as Simula and CLU. Types including behavior align more closely with object-oriented models, whereas a structured programming model would tend to not include code, and are called plain old data structures. Classification Data types may be categorized according to several factors: Primitive data types or built-in data types are types that are built-in to a language implementation. User-defined data types are non-primitive types. For example, Java's numeric types are primitive, while classes are user-defined. A value of an atomic type is a single data item that cannot be broken into component parts. A value of a composite type or aggregate type is a collection of data items that can be accessed individually. For example, an integer is generally considered atomic, although it consists of a sequence of bits, while an array of integers is certainly composite. Basic data types or fundamental data types are defined axiomatically from fundamental notions or by enumeration of their elements. Generated data types or derived data types are specified, and partly defined, in terms of other data types. All basic types are atomic. For example, integers are a basic type defined in mathematics, while an array of integers is the result of applying an array type generator to the integer type. The terminology varies - in the literature, primitive, built-in, basic, atomic, and fundamental may be used interchangeably. Examples Machine data types All data in computers based on digital electronics is represented as bits (alternatives 0 and 1) on the lowest level. The smallest addressable unit of data is usually a group of bits called a byte (usually an octet, which is 8 bits). The unit processed by machine code instructions is called a word (, typically 32 or 64 bits). Machine data types expose or make available fine-grained control over hardware, but this can also expose implementation details that make code less portable. Hence machine types are mainly used in systems programming or low-level programming languages. In higher-level languages most data types are abstracted in that they do not have a language-defined machine representation. The C programming language, for instance, supplies types such as Booleans, integers, floating-point numbers, etc., but the precise bit representations of these types are implementation-defined. The only C type with a precise machine representation is the char type that represents a byte. Boolean type The Boolean type represents the values true and false. 
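The contrast drawn above between Python's arbitrary-precision int and Java's fixed-width, wrapping 32-bit int can be sketched in a few lines of Python; the helper name wrap32 below is purely illustrative and simply emulates 32-bit two's-complement wrap-around:

```python
# Sketch: Python ints are arbitrary-precision and never overflow, whereas a
# 32-bit machine integer wraps around. wrap32 (an illustrative helper, not a
# library function) emulates the wrapping behaviour described above.

def wrap32(n: int) -> int:
    """Map an arbitrary Python int onto a signed 32-bit two's-complement value."""
    n &= 0xFFFFFFFF                       # keep only the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

if __name__ == "__main__":
    big = 2_147_483_647                   # the largest signed 32-bit value
    print(big + 1)                        # 2147483648: Python does not overflow
    print(wrap32(big + 1))                # -2147483648: the 32-bit value wraps
```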
Although only two values are possible, they are more often represented as a byte or word rather as a single bit as it requires more machine instructions to store and retrieve an individual bit. Many programming languages do not have an explicit Boolean type, instead using an integer type and interpreting (for instance) 0 as false and other values as true. Boolean data refers to the logical structure of how the language is interpreted to the machine language. In this case a Boolean 0 refers to the logic False. True is always a non zero, especially a one which is known as Boolean 1. Numeric types Almost all programming languages supply one or more integer data types. They may either supply a small number of predefined subtypes restricted to certain ranges (such as short and long and their corresponding unsigned variants in C/C++); or allow users to freely define subranges such as 1..12 (e.g. Pascal/Ada). If a corresponding native type does not exist on the target platform, the compiler will break them down into code using types that do exist. For instance, if a 32-bit integer is requested on a 16 bit platform, the compiler will tacitly treat it as an array of two 16 bit integers. Floating point data types represent certain fractional values (rational numbers, mathematically). Although they have predefined limits on both their maximum values and their precision, they are sometimes misleadingly called reals (evocative of mathematical real numbers). They are typically stored internally in the form (where and are integers), but displayed in familiar decimal form. Fixed point data types are convenient for representing monetary values. They are often implemented internally as integers, leading to predefined limits. For independence from architecture details, a Bignum or arbitrary precision numeric type might be supplied. This represents an integer or rational to a precision limited only by the available memory and computational resources on the system. Bignum implementations of arithmetic operations on machine-sized values are significantly slower than the corresponding machine operations. Enumerations The enumerated type has distinct values, which can be compared and assigned, but which do not necessarily have any particular concrete representation in the computer's memory; compilers and interpreters can represent them arbitrarily. For example, the four suits in a deck of playing cards may be four enumerators named CLUB, DIAMOND, HEART, SPADE, belonging to an enumerated type named suit. If a variable V is declared having suit as its data type, one can assign any of those four values to it. Some implementations allow programmers to assign integer values to the enumeration values, or even treat them as type-equivalent to integers. String and text types Strings are a sequence of characters used to store words or plain text, most often textual markup languages representing formatted text. Characters may be a letter of some alphabet, a digit, a blank space, a punctuation mark, etc. Characters are drawn from a character set such as ASCII or Unicode. Character and string types can have different subtypes according to the character encoding. The original 7-bit wide ASCII was found to be limited, and superseded by 8, 16 and 32-bit sets, which can encode a wide variety of non-Latin alphabets (such as Hebrew and Chinese) and other symbols. Strings may be of either variable length or fixed length, and some programming languages have both types. They may also be subtyped by their maximum size. 
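A minimal Python sketch of the playing-card suit enumeration described above; the class name Suit and the particular integer values are illustrative choices:

```python
# Sketch: the enumerated type "suit" with the four enumerators named in the
# text, using Python's enum module.
from enum import Enum

class Suit(Enum):
    CLUB = 1
    DIAMOND = 2
    HEART = 3
    SPADE = 4

v = Suit.HEART              # a variable whose value is one of the four enumerators
print(v, v.value)           # Suit.HEART 3
print(v is Suit.HEART)      # True: enum members compare by identity
```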
Since most character sets include the digits, it is possible to have a numeric string, such as "1234". These numeric strings are usually considered distinct from numeric values such as 1234, although some languages automatically convert between them. Union types A union type definition will specify which of a number of permitted subtypes may be stored in its instances, e.g. "float or long integer". In contrast with a record, which could be defined to contain a float and an integer, a union may only contain one subtype at a time. A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type for enhanced type safety. Algebraic data types An algebraic data type (ADT) is a possibly recursive sum type of product types. A value of an ADT consists of a constructor tag together with zero or more field values, with the number and type of the field values fixed by the constructor. The set of all possible values of an ADT is the set-theoretic disjoint union (sum) of the sets of all possible values of its variants (product of fields). Values of algebraic types are analyzed with pattern matching, which identifies a value's constructor and extracts the fields it contains. If there is only one constructor, then the ADT corresponds to a product type similar to a tuple or record. A constructor with no fields corresponds to the empty product (unit type). If all constructors have no fields then the ADT corresponds to an enumerated type. One common ADT is the option type, defined in Haskell as data Maybe a = Nothing | Just a. Data structures Some types are very useful for storing and retrieving data and are called data structures. Common data structures include: An array (also called vector, list, or sequence) stores a number of elements and provides random access to individual elements. The elements of an array are typically (but not in all contexts) required to be of the same type. Arrays may be fixed-length or expandable. Indices into an array are typically required to be integers (if not, one may stress this relaxation by speaking about an associative array) from a specific range (if not all indices in that range correspond to elements, it may be a sparse array). Record (also called tuple or struct) Records are among the simplest data structures. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members. An object contains a number of data fields, like a record, and also offers a number of subroutines for accessing or modifying them, called methods. The singly linked list, which can be used to implement a queue, and the binary tree, which allows fast searching, can both be defined as recursive ADTs in Haskell (a sketch of such definitions appears below). Abstract data types An abstract data type is a data type that does not specify the concrete representation of the data. Instead, a formal specification based on the data type's operations is used to describe it. Any implementation of a specification must fulfill the rules given. For example, a stack has push/pop operations that follow a Last-In-First-Out rule, and can be concretely implemented using either a list or an array. Abstract data types are used in formal semantics and program verification and, less strictly, in design. 
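The option type, singly linked list and binary tree mentioned above can all be written as recursive ADTs. The sketch below is in Haskell; Maybe' mirrors the standard library's option type (renamed only to avoid clashing with the Prelude), while List and Tree are illustrative definitions, not quotations from the original text.

    -- The option type: either no value, or exactly one value of type a.
    data Maybe' a = Nothing' | Just' a

    -- A singly linked list: empty, or a head element followed by the rest of the list.
    data List a = Nil | Cons a (List a)

    -- A binary tree: empty, or a node holding a value and left and right subtrees.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Pattern matching identifies a value's constructor and extracts its fields.
    listLength :: List a -> Int
    listLength Nil         = 0
    listLength (Cons _ xs) = 1 + listLength xs

    -- Searching an ordered binary tree.
    member :: Ord a => a -> Tree a -> Bool
    member _ Leaf = False
    member x (Node l y r)
      | x == y    = True
      | x < y     = member x l
      | otherwise = member x r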
Pointers and references The main non-composite, derived type is the pointer, a data type whose value refers directly to (or "points to") another value stored elsewhere in the computer memory using its address. It is a primitive kind of reference. (In everyday terms, a page number in a book could be considered a piece of data that refers to another one). Pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" a pointer whose value was never a valid memory address would cause a program to crash. To ameliorate this potential problem, a pointer type is typically considered distinct from the corresponding integer type, even if the underlying representation is the same. Function types Functional programming languages treat functions as a distinct datatype and allow values of this type to be stored in variables and passed to functions. Some multi-paradigm languages such as JavaScript also have mechanisms for treating functions as data. Most contemporary type systems go beyond JavaScript's simple type "function object" and have a family of function types differentiated by argument and return types, such as the type Int -> Bool denoting functions taking an integer and returning a Boolean. In C, a function is not a first-class data type but function pointers can be manipulated by the program. Java and C++ originally did not have function values but have added them in C++11 and Java 8. Type constructors A type constructor builds new types from old ones, and can be thought of as an operator taking zero or more types as arguments and producing a type. Product types, function types, power types and list types can be made into type constructors. Quantified types Universally-quantified and existentially-quantified types are based on predicate logic. Universal quantification is written as ∀x. f(x) or forall x. f x and is the intersection over all types x of the body f x, i.e. the value is of type f x for every x. Existential quantification is written as ∃x. f(x) or exists x. f x and is the union over all types x of the body f x, i.e. the value is of type f x for some x. In Haskell, universal quantification is commonly used, but existential types must be encoded by transforming exists a. f a to forall r. (forall a. f a -> r) -> r or a similar type. Refinement types A refinement type is a type endowed with a predicate which is assumed to hold for any element of the refined type. For instance, the type of natural numbers greater than 5 may be written as {n ∈ ℕ | n > 5}. Dependent types A dependent type is a type whose definition depends on a value. Two common examples of dependent types are dependent functions and dependent pairs. The return type of a dependent function may depend on the value (not just type) of one of its arguments. A dependent pair may have a second value whose type depends on the first value. Intersection types An intersection type is a type containing those values that are members of two specified types. For example, in Java the class Boolean implements both the Serializable and the Comparable interfaces. Therefore, an object of type Boolean is a member of the type Serializable & Comparable. Considering types as sets of values, this intersection type is the set-theoretic intersection of Serializable and Comparable. It is also possible to define a dependent intersection type, in which the second type may depend on a term variable ranging over the first. Meta types Some programming languages represent the type information as data, enabling type introspection and reflective programming (reflection). 
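As a rough illustration of the function types and quantified types just described, the Haskell sketch below defines a simple Int -> Bool function, an explicitly universally quantified function, and an existential type encoded with a rank-2 forall; the names Showable, mkShowable and runShowable are invented for this example.

    {-# LANGUAGE RankNTypes #-}

    -- A function type: takes an Int and returns a Bool.
    isPositive :: Int -> Bool
    isPositive n = n > 0

    -- Universal quantification: one definition usable at every type a.
    firstOrDefault :: forall a. a -> [a] -> a
    firstOrDefault def []      = def
    firstOrDefault _   (x : _) = x

    -- An existential ("some type a, a value of that type, and a way to render it"),
    -- encoded in the forall r. (forall a. ... -> r) -> r style mentioned above.
    newtype Showable = Showable (forall r. (forall a. (a -> String) -> a -> r) -> r)

    mkShowable :: (a -> String) -> a -> Showable
    mkShowable render x = Showable (\k -> k render x)

    runShowable :: Showable -> String
    runShowable (Showable k) = k (\render x -> render x)

    main :: IO ()
    main = putStrLn (runShowable (mkShowable show (42 :: Int)))  -- prints 42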
In contrast, higher-order type systems, while allowing types to be constructed from other types and passed to functions as values, typically avoid basing computational decisions on them. Convenience types For convenience, high-level languages and databases may supply ready-made "real world" data types, for instance times, dates, and monetary values (currency). These may be built into the language or implemented as composite types in a library.
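As a sketch of how such a "real world" convenience type can be built as a composite type in a library, the Haskell fragment below wraps an integer count of cents in a Money type; the names and the cents-based representation are assumptions made for the example, not a standard API.

    -- A simple fixed-point money type layered over a primitive integer type,
    -- storing whole cents to avoid binary floating-point rounding surprises.
    newtype Money = Money { cents :: Integer }
      deriving (Eq, Ord, Show)

    fromDollarsAndCents :: Integer -> Integer -> Money
    fromDollarsAndCents dollars c = Money (dollars * 100 + c)

    addMoney :: Money -> Money -> Money
    addMoney (Money a) (Money b) = Money (a + b)

    main :: IO ()
    main = print (addMoney (fromDollarsAndCents 19 99) (fromDollarsAndCents 0 5))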
Technology
Programming languages
null
93827
https://en.wikipedia.org/wiki/Human%20nutrition
Human nutrition
Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food security, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development. Recommended Dietary Allowances The Recommended Dietary Allowances (RDAs) are scientifically determined levels of essential nutrient intake, deemed sufficient by the Food and Nutrition Board to meet the nutritional needs of nearly all healthy individuals. The first RDAs were published in 1943, during World War II, with the aim of setting standards for optimal nutrition. The initial editions outlined daily nutrient recommendations for various age groups, reflecting the latest scientific insights at the time (NRC, 1943). The history and evolution of the RDAs have been extensively detailed by the chair of the first Committee on Recommended Dietary Allowances (Roberts, 1958). Over the years, the RDAs have been periodically updated, with the current version being the tenth edition. Originally intended to address nutrition issues related to national defense, the RDAs now serve multiple roles, including guiding food supply planning for population groups, interpreting dietary intake data, establishing standards for food assistance programs, assessing the nutritional adequacy of food supplies, designing nutrition education initiatives, aiding in the development of new food products, and setting guidelines for food labeling. However, the data underpinning these nutrient requirement estimates are often limited. The nutritional requirements system adopted by the United States and Canada refers to Dietary Reference Intake (DRI). The DRI is a set of nutritional guidelines developed by the National Academy of Medicine (NAM), part of the National Academies in the United States. Established in 1997, the DRI was created to expand upon the previous standards known as the Recommended Dietary Allowances (RDAs). Unlike the RDAs, the DRI encompasses a broader range of nutritional recommendations. The DRI values are distinct from those found on food and dietary supplement labels in the U.S. and Canada, which use Reference Daily Intakes (RDIs) and Daily Values (%). These labeling standards were originally based on RDAs from 1968 but were updated in 2016. Dietary Reference Values (DRVs) represent the nutritional standards set by the United Kingdom's Department of Health and the European Food Safety Authority (EFSA) for assessing and planning dietary intakes. The UK's Department of Health introduced these guidelines in 1991 with the publication of Dietary Reference Values for Food Energy and Nutrients for the United Kingdom. This document provides recommended nutrient intakes for the UK population, offering a framework for ensuring adequate nutrition. DRVs are categorized into three main types: Reference Nutrient Intake (RNI), which covers the nutritional needs of 95% of the population; Estimated Average Requirement (EAR), meeting the needs of 50%; and Lower Recommended Nutritional Intake (LRNI), which addresses the requirements of 5% of the population. These categories help to tailor dietary recommendations to different segments of the population, ensuring a more personalized approach to nutrition. 
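One way to read the 50%, 95% and 5% coverage figures above: if individual requirements for a nutrient are assumed to be roughly normally distributed with mean μ and standard deviation σ, the three DRV categories are conventionally placed at the mean and at about two standard deviations either side of it (a simplifying assumption, since not every nutrient requirement is in fact normally distributed):

    \mathrm{EAR} \approx \mu, \qquad \mathrm{RNI} \approx \mu + 2\sigma, \qquad \mathrm{LRNI} \approx \mu - 2\sigma

so the EAR meets the needs of about half the population, the RNI sits near the upper tail and covers the large majority, and the LRNI is adequate for only the few percent with the lowest requirements.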
Nutrients The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients, but do not provide energy. The micronutrients are minerals and vitamins. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins provide 17 kJ approximately (4 kcal) of energy per gram, while fats provide 37 kJ (9 kcal) per gram. However, the net energy derived from the macronutrients depends on such factors as absorption and digestive effort, which vary substantially from instance to instance. Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. A third class of dietary material, fiber (i.e., nondigestible material such as cellulose), seems also to be required, for both mechanical and biochemical reasons, though the exact reasons remain unclear. For all age groups, males on average need to consume higher amounts of macronutrients than females. In general, intakes increase with age until the second or third decade of life. Some nutrients can be stored – the fat-soluble vitamins – while others are required more or less continuously. Poor health can be caused by a lack of required nutrients, or for some vitamins and minerals, too much of a required nutrient. Essential nutrients cannot be synthesized by the body, and must be obtained from food. Molecules of carbohydrates and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose, fructose, galactose) to complex polysaccharides (starch, glycogen). Fats are triglycerides, made of assorted fatty acid monomers bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental components of protein are nitrogen-containing amino acids, some of which are essential in the sense that humans cannot make them internally. Some of the amino acids can be converted (with the expenditure of energy) to glucose and can be used for energy production just as ordinary glucose, in a process known as gluconeogenesis. By breaking down existing protein, some glucose can be produced internally; the remaining amino acids are discarded, primarily as urea in urine. This occurs naturally when atrophy takes place, or during periods of starvation. The list of nutrients that people are known to require is, in the words of Marion Nestle, "almost certainly incomplete". Carbohydrates Carbohydrates may be classified as monosaccharides, disaccharides or polysaccharides depending on the number of monomer (sugar) units they contain. They are a diverse group of substances, with a range of chemical, physical and physiological properties. 
They make up a large part of foods such as rice, noodles, bread, and other grain-based products, but they are not an essential nutrient, meaning a human does not need to eat carbohydrates. Monosaccharides contain one sugar unit, disaccharides two, and polysaccharides three or more. Monosaccharides include glucose, fructose and galactose. Disaccharides include sucrose, lactose, and maltose; purified sucrose, for instance, is used as table sugar. Polysaccharides, which include starch and glycogen, are often referred to as 'complex' carbohydrates because they are typically long multiple-branched chains of sugar units. Traditionally, simple carbohydrates were believed to be absorbed quickly, and therefore raise blood-glucose levels more rapidly than complex carbohydrates. This is inaccurate. Some simple carbohydrates (e.g., fructose) follow different metabolic pathways (e.g., fructolysis) that result in only a partial catabolism to glucose, while, in essence, many complex carbohydrates may be digested at the same rate as simple carbohydrates. The World Health Organization recommends that added sugars should represent no more than 10% of total energy intake. The most common plant carbohydrate nutrient starch varies in its absorption. Starches have been classified as rapidly digestible starch, slowly digestible starch and resistant starch. Starches in plants are resistant to digestion (resistant starch), but cooking the starch in the presence of water can break down the starch granule and releases the glucose chains, making them more easily digestible by human digestive enzymes. Historically, food was less processed and starches were contained within the food matrix, making them less digestible. Modern food processing has shifted carbohydrate consumption from less digestible and resistant starch to much more rapidly digestible starch. For instance, the resistant starch content of a traditional African diet was 38 grams/day. The resistant starch consumption from countries with high starch intakes has been estimated to be 30-40 grams/day. In contrast, the average consumption of resistant starch in the United States was estimated to be 4.9 grams/day (range 2.8-7.9 grams of resistant starch/day). Fat A molecule of dietary fat typically consists of several fatty acids (containing long chains of carbon and hydrogen atoms), bonded to a glycerol. They are typically found as triglycerides (three fatty acids attached to one glycerol backbone). Fats may be classified as saturated or unsaturated depending on the chemical structure of the fatty acids involved. Saturated fats have all of the carbon atoms in their fatty acid chains bonded to hydrogen atoms, whereas unsaturated fats have some of these carbon atoms double-bonded, so their molecules have relatively fewer hydrogen atoms than a saturated fatty acid of the same length. Unsaturated fats may be further classified as monounsaturated (one double-bond) or polyunsaturated (many double-bonds). Furthermore, depending on the location of the double-bond in the fatty acid chain, unsaturated fatty acids are classified as omega-3 or omega-6 fatty acids. Trans fats are a type of unsaturated fat with trans-isomer bonds; these are rare in nature and in foods from natural sources; they are typically created in an industrial process called (partial) hydrogenation. There are nine kilocalories in each gram of fat. 
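As a worked illustration of the energy figures above, consider a hypothetical meal of 50 g carbohydrate, 20 g protein and 10 g fat, using the rounded values of 4 kcal (17 kJ) per gram for carbohydrate and protein and 9 kcal (37 kJ) per gram for fat, and ignoring differences in absorption:

    E \approx 4 \times (50 + 20) + 9 \times 10 = 280 + 90 = 370\ \text{kcal} \qquad (\approx 17 \times 70 + 37 \times 10 = 1560\ \text{kJ})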
Fatty acids such as conjugated linoleic acid, catalpic acid, eleostearic acid and punicic acid, in addition to providing energy, represent potent immune modulatory molecules. Saturated fats (typically from animal sources) have been a staple in many world cultures for millennia. Unsaturated fats (e. g., vegetable oil) are considered healthier, while trans fats are to be avoided. Saturated and some trans fats are typically solid at room temperature (such as butter or lard), while unsaturated fats are typically liquids (such as olive oil or flaxseed oil). Trans fats are very rare in nature, and have been shown to be highly detrimental to human health, but have properties useful in the food processing industry, such as rancidity resistance. Essential fatty acids Most fatty acids are non-essential, meaning the body can produce them as needed, generally from other fatty acids and always by expending energy to do so. However, in humans, at least two fatty acids are essential and must be included in the diet. An appropriate balance of essential fatty acids—omega-3 and omega-6 fatty acids—seems also important for health, although definitive experimental demonstration has been elusive. Both of these "omega" long-chain polyunsaturated fatty acids are substrates for a class of eicosanoids known as prostaglandins, which have roles throughout the human body. The omega-3 eicosapentaenoic acid (EPA), which can be made in the human body from the omega-3 essential fatty acid alpha-linolenic acid (ALA), or taken in through marine food sources, serves as a building block for series 3 prostaglandins (e.g., weakly inflammatory PGE3). The omega-6 dihomo-gamma-linolenic acid (DGLA) serves as a building block for series 1 prostaglandins (e.g. anti-inflammatory PGE1), whereas arachidonic acid (AA) serves as a building block for series 2 prostaglandins (e.g. pro-inflammatory PGE 2). Both DGLA and AA can be made from the omega-6 linoleic acid (LA) in the human body, or can be taken in directly through food. An appropriately balanced intake of omega-3 and omega-6 partly determines the relative production of different prostaglandins. In industrialized societies, people typically consume large amounts of processed vegetable oils, which have reduced amounts of the essential fatty acids along with too much of omega-6 fatty acids relative to omega-3 fatty acids. The conversion rate of omega-6 DGLA to AA largely determines the production of the prostaglandins PGE1 and PGE2. Omega-3 EPA prevents AA from being released from membranes, thereby skewing prostaglandin balance away from pro-inflammatory PGE2 (made from AA) toward anti-inflammatory PGE1 (made from DGLA). The conversion (desaturation) of DGLA to AA is controlled by the enzyme delta-5-desaturase, which in turn is controlled by hormones such as insulin (up-regulation) and glucagon (down-regulation). Fiber Dietary fiber is a carbohydrate, specifically a polysaccharide, which is incompletely absorbed in humans and in some animals. Fiber slows down the absorption of sugar in the gut. The microbiome converts fiber into signals that stimulate gut hormones, which in turn control how quickly the stomach empties, regulate blood sugar levels, and influence feelings of hunger. Like all carbohydrates, when fiber is digested, it can produce four calories (kilocalories) of energy per gram, but in most circumstances, it accounts for less than that because of its limited absorption and digestibility. The two subcategories are insoluble and soluble fiber. 
Insoluble dietary fiber Includes cellulose, a large carbohydrate polymer that is indigestible by humans, because humans do not have the required enzymes to break it down, and the human digestive system does not harbor enough of the types of microbes that can do so. Includes resistant starch, an insoluble starch that resists digestion either because it is protected by a shell or food matrix (Type 1 resistant starch, RS1), maintains the natural starch granule (Type 2 resistant starch, RS2), is retrograded and partially crystallized (Type 3 resistant starch, RS3), has been chemically modified (Type 4 resistant starch, RS4) or has complexed with a lipid (Type 5 resistant starch, RS5). Natural sources of resistant starch (RS1, RS2 and RS3) are fermented by the microbes in the human digestive system to produce short-chain fatty acids which are utilized as food for the colonic cells or absorbed. Soluble dietary fiber Comprises a variety of oligosaccharides, waxes, esters, and other carbohydrates that dissolve or gelatinize in water. Many of these soluble fibers can be fermented or partially fermented by microbes in the human digestive system to produce short-chain fatty acids which are absorbed and therefore introduce some caloric content. Whole grains, beans, and other legumes, fruits (especially plums, prunes, and figs), and vegetables are good sources of dietary fiber. Fiber has three primary mechanisms, which in general determine their health impact: bulking, viscosity and fermentation. Fiber provides bulk to the intestinal contents, and insoluble fiber facilitates peristalsis – the rhythmic muscular contractions of the intestines which move contents along the digestive tract. Some soluble and insoluble fibers produce a solution of high viscosity; this is essentially a gel, which slows the movement of food through the intestines. Fermentable fibers are used as food by the microbiome, mildly increasing bulk, and producing short-chain fatty acids and other metabolites, including vitamins, hormones, and glucose. One of these metabolites, butyrate, is important as an energy source for colon cells, and may improve metabolic syndrome. In 2016, the U.S. FDA approved a qualified health claim stating that resistant starch might reduce the risk of type 2 diabetes, but with qualifying language for product labels that only limited scientific evidence exists to support this claim. The FDA requires specific labeling language, such as the guideline concerning resistant starch: "High-amylose maize resistant starch may reduce the risk of type 2 diabetes. FDA has concluded that there is limited scientific evidence for this claim." Amino acids Proteins are the basis of many animal body structures (e.g. muscles, skin, and hair) and form the enzymes that control chemical reactions throughout the body. Each protein molecule is composed of amino acids which contain nitrogen and sometimes sulphur (these components are responsible for the distinctive smell of burning protein, such as the keratin in hair). The body requires amino acids to produce new proteins (protein retention) and to replace damaged proteins (maintenance). Amino acids are soluble in the digestive juices within the small intestine, where they are absorbed into the blood. Once absorbed, they cannot be stored in the body, so they are either metabolized as required or excreted in the urine. Proteins consist of amino acids in different proportions. 
The most important aspect and defining characteristic of protein from a nutritional standpoint is its amino acid composition. For all animals, some amino acids are essential (an animal cannot produce them internally so they must be eaten) and some are non-essential (the animal can produce them from other nitrogen-containing compounds). About twenty amino acids are found in the human body, and about ten of these are essential. The synthesis of some amino acids can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress, and those are called conditionally essential. A diet that contains adequate amounts of amino acids (especially those that are essential) is particularly important in some situations: during early development and maturation, pregnancy, lactation, or injury (a burn, for instance). A complete protein source contains all the essential amino acids; an incomplete protein source lacks one or more of the essential amino acids. It is possible with protein combinations of two incomplete protein sources (e.g., rice and beans) to make a complete protein source, and characteristic combinations are the basis of distinct cultural cooking traditions. However, complementary sources of protein do not need to be eaten at the same meal to be used together by the body. Excess amino acids from protein can be converted into glucose and used for fuel through a process called gluconeogenesis. There is an ongoing debate about the differences in nutritional quality and adequacy of protein from vegan, vegetarian and animal sources, though many studies and institutions have found that a well-planned vegan or vegetarian diet contains enough high-quality protein to support the protein requirements of both sedentary and active people at all stages of life. Water Water is excreted from the body in multiple forms; including urine and feces, sweating, and by water vapour in the exhaled breath. Therefore, it is necessary to adequately rehydrate to replace lost fluids. Early recommendations for the quantity of water required for maintenance of good health suggested that six to eight glasses of water daily is the minimum to maintain proper hydration. However, the notion that a person should consume eight glasses of water per day cannot be traced to a credible scientific source. The original water intake recommendation in 1945 by the Food and Nutrition Board of the National Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." More recent comparisons of well-known recommendations on fluid intake have revealed large discrepancies in the volumes of water we need to consume for good health. Therefore, to help standardize guidelines, recommendations for water consumption are included in two recent European Food Safety Authority (EFSA) documents (2010): (i) Food-based dietary guidelines and (ii) Dietary reference values for water or adequate daily intakes (ADI). These specifications were provided by calculating adequate intakes from measured intakes in populations of individuals with "desirable osmolarity values of urine and desirable water volumes per energy unit consumed". For healthful hydration, the current EFSA guidelines recommend total water intakes of 2.0 L/day for adult females and 2.5 L/day for adult males. These reference values include water from drinking water, other beverages, and from food. 
About 80% of our daily water requirement comes from the beverages we drink, with the remaining 20% coming from food. Water content varies depending on the type of food consumed, with fruit and vegetables containing more than cereals, for example. These values are estimated using country-specific food balance sheets published by the Food and Agriculture Organisation of the United Nations. The EFSA panel also determined intakes for different populations. Recommended intake volumes in the elderly are the same as for adults as despite lower energy consumption, the water requirement of this group is increased due to a reduction in renal concentrating capacity. Pregnant and breastfeeding women require additional fluids to stay hydrated. The EFSA panel proposes that pregnant women should consume the same volume of water as non-pregnant women, plus an increase in proportion to the higher energy requirement, equal to 300 mL/day. To compensate for additional fluid output, breastfeeding women require an additional 700 mL/day above the recommended intake values for non-lactating women. Dehydration and over-hydration – too little and too much water, respectively – can have harmful consequences. Drinking too much water is one of the possible causes of hyponatremia, i.e., low serum sodium. Minerals Dietary minerals are inorganic chemical elements required by living organisms, other than the four elements carbon, hydrogen, nitrogen, and oxygen that are present in nearly all organic molecules. Some have roles as cofactors, while others are electrolytes. The term "mineral" is archaic, since the intent is to describe simply the less common elements in the diet. Some are heavier than the four just mentioned – including several metals, which often occur as ions in the body. Some dietitians recommend that these be supplied from foods in which they occur naturally, or at least as complex compounds, or sometimes even from natural inorganic sources (such as calcium carbonate from ground oyster shells). Some are absorbed much more readily in the ionic forms found in such sources. On the other hand, minerals are often artificially added to the diet as supplements; the most well-known is likely iodine in iodized salt which prevents goiter. Macrominerals Elements with recommended dietary allowance (RDA) greater than 150 mg/day are, in alphabetical order: Calcium (Ca2+) is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports the synthesis and function of blood cells. For example, calcium is used to regulate the contraction of muscles, nerve conduction, and the clotting of blood. It can play this role because the Ca2+ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubility, enabling the formation of the skeleton. Food sources include yogurt, milk, cheese, leafy greens, tofu, and fortified beverages. Chlorine as chloride ions; electrolyte; see sodium, below. Magnesium, required for processing ATP and related reactions (builds bone, causes strong peristalsis, increases flexibility, increases alkalinity). Approximately 50% is in bone, the remaining 50% is almost all inside body cells, with only about 1% located in extracellular fluid. Food sources include oats, buckwheat, tofu, nuts, caviar, green leafy vegetables, legumes, and chocolate. Phosphorus, required component of bones; essential for energy processing. 
Approximately 80% is found in the inorganic portion of bones and teeth. Phosphorus is a component of every cell, as well as important metabolites, including DNA, RNA, ATP, and phospholipids. Also important in pH regulation. It is an important electrolyte in the form of phosphate. Food sources include cheese, egg yolk, milk, meat, fish, poultry, whole-grain cereals, and many others. Potassium, an electrolyte (heart and nerve function). With sodium, potassium is involved in maintaining normal water balance, osmotic equilibrium, and acid-base balance. In addition to calcium, it is important in the regulation of neuromuscular activity. Food sources include bananas, avocados, nuts, vegetables, potatoes, legumes, fish, and mushrooms. Sodium, a common food ingredient and electrolyte, found in most foods and manufactured consumer products, typically as sodium chloride (salt). Excessive sodium consumption can deplete calcium and magnesium. Sodium has a role in the etiology of hypertension demonstrated from studies showing that a reduction of table salt intake may reduce blood pressure. Trace minerals Many elements are required in smaller amounts (microgram quantities), usually because they play a catalytic role in enzymes. Some trace mineral elements (RDA < 200 mg/day) are, in alphabetical order: Cobalt as a component of the vitamin B12 family of coenzymes Copper required component of many redox enzymes, including cytochrome c oxidase (see Copper in health) Chromium required for sugar metabolism Iodine required not only for the biosynthesis of thyroxin, but probably, for other important organs as breast, stomach, salivary glands, thymus etc. (see Iodine deficiency); for this reason iodine is needed in larger quantities than others in this list, and sometimes classified with the macrominerals; Nowadays it is most easily found in iodized salt, but there are also natural sources such as Kombu. Iron required for many enzymes, and for hemoglobin and some other proteins Manganese (processing of oxygen) Molybdenum required for xanthine oxidase and related oxidases Selenium required for peroxidase (antioxidant proteins) Zinc required for several enzymes such as carboxypeptidase, liver alcohol dehydrogenase, carbonic anhydrase Ultratrace minerals Ultratrace minerals are an as yet unproven aspect of human nutrition, and may be required at amounts measured in very low ranges of μg/day. Many ultratrace elements have been suggested as essential, but such claims have usually not been confirmed. Definitive evidence for efficacy comes from the characterization of a biomolecule containing the element with an identifiable and testable function. These include: Bromine Arsenic Nickel Fluorine Boron Lithium Strontium Silicon Vanadium Vitamins Except for vitamin D, vitamins are essential nutrients, necessary in the diet for good health. Vitamin D can be synthesized in the skin in the presence of UVB radiation. (Many animal species can synthesize vitamin C, but humans cannot.) Certain vitamin-like compounds that are recommended in the diet, such as carnitine, are thought useful for survival and health, but these are not "essential" dietary nutrients because the human body has some capacity to produce them from other compounds. Moreover, thousands of different phytochemicals have recently been discovered in food (particularly in fresh vegetables), which may have desirable properties including antioxidant activity (see below); experimental demonstration has been suggestive but inconclusive. 
Other essential nutrients not classed as vitamins include essential amino acids (see above), essential fatty acids (see above), and the minerals discussed in the preceding section. Vitamin deficiencies may result in disease conditions: goiter, scurvy, osteoporosis, impaired immune system, disorders of cell metabolism, certain forms of cancer, symptoms of premature aging, and poor psychological health (including eating disorders), among many others. Excess levels of some vitamins are also dangerous to health. The Food and Nutrition Board of the Institute of Medicine has established Tolerable Upper Intake Levels (ULs) for seven vitamins. Malnutrition The term malnutrition addresses 3 broad groups of conditions: Undernutrition, which includes wasting (low weight-for-height), stunting (low height-for-age) and underweight (low weight-for-age) Micronutrient-related malnutrition, which includes micronutrient deficiencies or insufficiencies (a lack of important vitamins and minerals) or micronutrient excess Overweight, obesity and diet-related noncommunicable diseases (such as heart disease, stroke, diabetes and some cancers). In developed countries, the diseases of malnutrition are most often associated with nutritional imbalances or excessive consumption; there are more people in the world who are malnourished due to excessive consumption. According to the United Nations World Health Organization, the greatest challenge in developing nations today is not starvation, but insufficient nutrition – the lack of nutrients necessary for the growth and maintenance of vital functions. The causes of malnutrition are directly linked to inadequate macronutrient consumption and disease, and are indirectly linked to factors like "household food security, maternal and child care, health services, and the environment". Insufficient The U.S. Food and Nutrition Board sets Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for vitamins and minerals. EARs and RDAs are part of Dietary Reference Intakes. The DRI documents describe nutrient deficiency signs and symptoms. Excessive The U.S. Food and Nutrition Board sets Tolerable Upper Intake Levels (known as ULs) for vitamins and minerals when evidence is sufficient. ULs are set a safe fraction below amounts shown to cause health problems. ULs are part of Dietary Reference Intakes. The European Food Safety Authority also reviews the same safety questions and set its own ULs. Unbalanced When too much of one or more nutrients is present in the diet to the exclusion of the proper amount of other nutrients, the diet is said to be unbalanced. High calorie food ingredients such as vegetable oils, sugar and alcohol are referred to as "empty calories" because they displace from the diet foods that also contain protein, vitamins, minerals and fiber. Illnesses caused by underconsumption and overconsumption Other substances Alcohol (ethanol) Pure ethanol provides 7 calories per gram. For distilled spirits, a standard serving in the United States is 1.5 fluid ounces, which at 40% ethanol (80 proof), would be 14 grams and 98 calories. Wine and beer contain a similar range of ethanol for servings of 5 ounces and 12 ounces, respectively, but these beverages also contain non-ethanol calories. A 5-ounce serving of wine contains 100 to 130 calories. A 12-ounce serving of beer contains 95 to 200 calories. According to the U.S. 
Department of Agriculture, based on NHANES 2013–2014 surveys, women ages 20 and up consume on average 6.8 grams/day and men consume on average 15.5 grams/day. Ignoring the non-alcohol contribution of those beverages, the average ethanol calorie contributions are 48 and 108 cal/day. Alcoholic beverages are considered empty calorie foods because other than calories, these contribute no essential nutrients. Phytochemicals Phytochemicals such as polyphenols are compounds produced naturally in plants (phyto means "plant" in Greek). In general, the term identifies compounds that are prevalent in plant foods, but are not proven to be essential for human nutrition, as of 2018. There is no conclusive evidence in humans that polyphenols or other non-nutrient compounds from plants confer health benefits, mainly because these compounds have poor bioavailability, i.e., following ingestion, they are digested into smaller metabolites with unknown functions, then are rapidly eliminated from the body. Intestinal microbiome The intestines contain a large population of gut flora. In humans, the four dominant phyla are Bacillota, Bacteroidota, Actinomycetota, and Pseudomonadota. They are essential to digestion and are also affected by food that is consumed. Bacteria are essential for metabolizing food substrates and thereby increasing energy output, and produce a great variety of metabolites, including vitamins and short-chain fatty acids that contribute to the metabolism in a wide variety of ways. These metabolites are responsible for stimulating cell growth, repressing the growth of harmful bacteria, priming the immune system to respond only to pathogens, helping to maintain a healthy gut barrier, control gene expression by epigenetic regulation and defending against some infectious diseases. Global nutrition challenges The challenges facing global nutrition are disease, child malnutrition, obesity, and vitamin deficiency. Disease The most common non-infectious diseases worldwide, that contribute most to the global mortality rate, are cardiovascular diseases, various cancers, diabetes, and chronic respiratory problems, all of which are linked to poor nutrition. Nutrition and diet are closely associated with the leading causes of death, including cardiovascular disease and cancer. Obesity and high sodium intake can contribute to ischemic heart disease, while consumption of fruits and vegetables can decrease the risk of developing cancer. Food-borne and infectious diseases can result in malnutrition, and malnutrition exacerbates infectious disease. Poor nutrition leaves children and adults more susceptible to contracting life-threatening diseases such as diarrheal infections and respiratory infections. According to the WHO, in 2011, 6.9 million children died of infectious diseases like pneumonia, diarrhea, malaria, and neonatal conditions, of which at least one third were associated with undernutrition. Child malnutrition According to UNICEF, in 2011, 101 million children across the globe were underweight and one in four children, 165 million, were stunted in growth. Simultaneously, there are 43 million children under five who are overweight or obese. Nearly 20 million children under five suffer from severe acute malnutrition, a life-threatening condition requiring urgent treatment. According to estimations at UNICEF, hunger will be responsible for 5.6 million deaths of children under the age of five this year. These all represent significant public health emergencies. 
This is because proper maternal and child nutrition has immense consequences for survival, acute and chronic disease incidence, normal growth, and economic productivity of individuals. Childhood malnutrition is common and contributes to the global burden of disease. Childhood is a particularly important time to achieve good nutrition status, because poor nutrition has the capability to lock a child in a vicious cycle of disease susceptibility and recurring sickness, which threatens cognitive and social development. Undernutrition and bias in access to food and health services leave children less likely to attend or perform well in school. Undernutrition UNICEF defines undernutrition as "the outcome of insufficient food intake (hunger) and repeated infectious diseases". Undernutrition includes being underweight for one's age, too short for one's age (stunted growth), dangerously thin (muscle wasting), and deficient in vitamins and minerals (micronutrient malnutrition). Undernutrition causes 53% of deaths of children under five across the world. It has been estimated that undernutrition is the underlying cause for 35% of child deaths. The Maternal and Child Nutrition Study Group estimates that undernutrition, "including fetal growth restriction, stunting, wasting, deficiencies of vitamin A and zinc along with suboptimum breastfeeding", was a cause of 3.1 million child deaths, or 45% of all child deaths, in 2011. When humans are undernourished, they can no longer maintain normal bodily functions such as growth and resistance to infection, and they may lack the energy for everyday tasks, leading to unsatisfactory performance in school or work. Major causes of undernutrition in young children include lack of proper breast feeding for infants and illnesses such as diarrhea, pneumonia, malaria, and HIV/AIDS. According to UNICEF, 146 million children across the globe, or one out of four under the age of five, are underweight. The proportion of underweight children decreased from 33 percent to 28 percent between 1990 and 2004. Underweight and stunted children are more susceptible to infection, more likely to fall behind in academics and develop non-infectious diseases, ultimately affecting their livelihood. Therefore, undernutrition can result in an accumulation of afflictions and health deficiencies which results in reduced productivity for individuals and communities. Many children are born with the inherent disadvantage of low birth weight, often caused by intrauterine growth restriction and poor maternal nutrition, which results in affected growth, development and health throughout the course of their lifetime. Children born at low birth weight (less than 5.5 pounds or 2.5 kg) are less likely to be healthy and are more susceptible to disease and early death. Those born at low birth weight also are likely to have a depressed immune system, which can increase their chances of heart disease and diabetes later on in life. Because 96% of low birth weight occurs in the developing world, low birth weight has been associated with childbirth in impoverished areas where the birth mother typically exhibits poor nutritional status under harsh and demanding living conditions. Stunting and other forms of undernutrition reduce a child's chance of survival and hinder their optimal growth and health. Stunting has a demonstrated association with poor brain development, which reduces cognitive ability, academic performance and future earning potential. 
Important determinants of stunting include the quality and frequency of infant and child feeding, infectious disease susceptibility, and the mother's nutrition and health status. Undernourished mothers are more likely to give birth to stunted children, perpetuating a cycle of undernutrition and poverty. Stunted children are more likely to develop obesity and chronic diseases upon reaching adulthood. Therefore, malnutrition resulting in stunting can further worsen the obesity epidemic, especially in low and middle income countries. This creates new economic and social challenges for vulnerable impoverished groups. Data on global and regional food supply shows that consumption rose from 2011 to 2012 in all regions. Diets became more diverse, with a decrease in consumption of cereals and roots and an increase in fruits, vegetables, and meat products. However, this increase masks discrepancies between nations; Africa, in particular, saw a decrease in food consumption over the same years. This information is derived from food balance sheets that reflect national food supplies; however, these do not necessarily reflect the distribution of micronutrients and macronutrients. Inequality in food access often leaves distribution uneven, resulting in undernourishment for some and obesity for others. Undernourishment, or hunger, according to the Food and Agriculture Organization (FAO), is dietary intake below the minimum daily energy requirement. The amount of undernourishment is calculated utilizing the average amount of food available for consumption, the size of the population, the relative disparities in access to the food, and the minimum calories required for each individual. According to FAO, 868 million people (12% of the global population) were undernourished in 2012. This has decreased across the world since 1990, in all regions except for Africa, where undernourishment has steadily increased. However, the rates of decrease are not sufficient to meet the first Millennium Development Goal of halving hunger between 1990 and 2015. The global financial, economic, and food price crisis in 2008 drove many people to hunger, especially women and children. The spike in food prices prevented many people from escaping poverty, because the poor spend a larger proportion of their income on food and farmers are net consumers of food. High food prices cause consumers to have less purchasing power and to substitute more-nutritious foods with low-cost alternatives. Adult overweight and obesity Malnutrition in industrialized nations is primarily due to non-nutritious carbohydrate sources resulting in excess caloric intake, which has contributed to the obesity epidemic affecting both developed and certain developing nations. In 2008, 35% of adults above the age of 20 years were overweight (BMI ≥ 25 kg/m2), a prevalence that doubled worldwide between 1980 and 2008. In addition, 10% of men and 14% of women were obese, with a body mass index (BMI) greater than 30. Rates of overweight and obesity vary across the globe, with the highest prevalence in the Americas, followed by European nations, where over 50% of the population is overweight or obese. Obesity is more prevalent among upper-middle to high income groups than among lower income divisions. Women are more likely than men to be obese; the rate of obesity in women doubled from 8% to 14% between 1980 and 2008. 
Being overweight as a child has become an increasingly important statistic as an indicator for later development of obesity and non-infectious diseases such as cardiovascular disease. In several western European nations, the prevalence of overweight and obese children rose by 10% from 1980 to 1990, a rate that has begun to accelerate recently. Vitamin and mineral malnutrition Vitamins and minerals are essential to the proper functioning and maintenance of the human body. There are 20 trace elements and minerals that are essential in small quantities to body function and overall human health. Iron deficiency is the most common inadequate nutrient worldwide, affecting approximately 2 billion people. Globally, anemia affects 1.6 billion people, and represents a public health emergency in mothers and children under five. The World Health Organization estimates that there exists 469 million women of reproductive age and approximately 600 million preschool and school-age children worldwide who are anemic. Anemia, especially iron-deficient anemia, is a critical problem for cognitive developments in children, and its presence leads to maternal deaths and poor brain and motor development in children. The development of anemia affects mothers and children more because infants and children have higher iron requirements for growth. Health consequences for iron deficiency in young children include increased perinatal mortality, delayed mental and physical development, negative behavioral consequences, reduced auditory and visual function, and impaired physical performance. The harm caused by iron deficiency during child development cannot be reversed and result in reduced academic performance, poor physical work capacity, and decreased productivity in adulthood. Mothers are also very susceptible to iron-deficient anemia because women lose iron during menstruation, and rarely supplement it in their diet. Maternal iron deficiency anemia increases the chances of maternal mortality, contributing to at least 18% of maternal deaths in low and middle income countries. Vitamin A plays an essential role in developing the immune system in children, therefore, it is considered an essential micronutrient that can greatly affect health. However, because of the expense of testing for deficiencies, many developing nations have not been able to fully detect and address vitamin A deficiency, leaving vitamin A deficiency considered a silent hunger. According to estimates, subclinical vitamin A deficiency, characterized by low retinol levels, affects 190 million pre-school children and 19 million mothers worldwide. The WHO estimates that 5.2 million of these children under five are affected by night blindness, which is considered clinical vitamin A deficiency. Severe vitamin A deficiency (VAD) for developing children can result in visual impairments, anemia and weakened immunity, and increase their risk of morbidity and mortality from infectious disease. This also presents a problem for women, with WHO estimating that 9.8 million women are affected by night blindness. Clinical vitamin A deficiency is particularly common among pregnant women, with prevalence rates as high as 9.8% in South-East Asia. Estimates say that 28.5% of the global population is iodine deficient, representing 1.88 billion individuals. Although salt iodization programs have reduced the prevalence of iodine deficiency, this is still a public health concern in 32 nations. 
Moderate deficiencies are common in Europe and Africa, and over consumption is common in the Americas. Iodine-deficient diets can interfere with adequate thyroid hormone production, which is responsible for normal growth in the brain and nervous system. This ultimately leads to poor school performance and impaired intellectual capabilities. Infant and young child feeding Improvement of breast feeding practices, like early initiation and exclusive breast feeding for the first two years of life, could save the lives of 1.5 million children annually. Nutrition interventions targeted at infants aged 0–5 months first encourages early initiation of breastfeeding. Though the relationship between early initiation of breast feeding and improved health outcomes has not been formally established, a recent study in Ghana suggests a causal relationship between early initiation and reduced infection-caused neo-natal deaths. Also, experts promote exclusive breastfeeding, rather than using formula, which has shown to promote optimal growth, development, and health of infants. Exclusive breastfeeding often indicates nutritional status because infants that consume breast milk are more likely to receive all adequate nourishment and nutrients that will aid their developing body and immune system. This leaves children less likely to contract diarrheal diseases and respiratory infections. Besides the quality and frequency of breastfeeding, the nutritional status of mothers affects infant health. When mothers do not receive proper nutrition, it threatens the wellness and potential of their children. Well-nourished women are less likely to experience risks of birth and are more likely to deliver children who will develop well physically and mentally. Maternal undernutrition increases the chances of low-birth weight, which can increase the risk of infections and asphyxia in fetuses, increasing the probability of neonatal deaths. Growth failure during intrauterine conditions, associated with improper mother nutrition, can contribute to lifelong health complications. Approximately 13 million children are born with intrauterine growth restriction annually. Anorexia nervosa Anorexia nervosa stands out as the psychiatric disorder with the highest mortality rate. It affects approximately 0.3% of young women and is especially common among teenage girls, with the average onset at around 15 years old. The disorder predominantly impacts females, with 80-90% of those diagnosed being women. Anorexia is the leading cause of significant weight loss in young women and is the primary reason for their admission to child and adolescent hospital services. In most cases, a clear diagnosis of weight loss driven by psychological factors can be made without resorting to a series of complex tests. Basic medical evaluations, including blood tests, electrocardiograms, and tracking the patient's weight and measurements, not only help in identifying underlying issues but also provide a reason for the patient to return for follow-up discussions. These follow-ups can often reveal psychological challenges. When weight loss is hidden, symptoms such as depression, obsessive behaviors, infertility, or amenorrhea may be the first signs that there is cause for concern. Although relatively uncommon, eating disorders can negatively affect menstruation, fertility, and maternal and fetal well-being. 
Among infertile women with amenorrhea or oligomenorrhea due to eating disorders, 58% had menstrual irregularities, according to preliminary research in 1990. Recent research has shown no significant difference in fertility between women with a history of anorexia nervosa and those without, suggesting that despite experiencing high rates of menstrual irregularities, women with anorexia nervosa are still achieving pregnancy. Nutrition literacy The findings of the 2003 National Assessment of Adult Literacy (NAAL), conducted by the US Department of Education, provide a basis upon which to frame the nutrition literacy problem in the U.S. NAAL introduced the first-ever measure of "the degree to which individuals have the capacity to obtain, process and understand basic health information and services needed to make appropriate health decisions" – an objective of Healthy People 2010 and of which nutrition literacy might be considered an important subset. On a scale of below basic, basic, intermediate and proficient, NAAL found 13 percent of adult Americans have proficient health literacy, 44% have intermediate literacy, 29 percent have basic literacy and 14 percent have below basic health literacy. The study found that health literacy increases with education and people living below the level of poverty have lower health literacy than those above it. Another study examining the health and nutrition literacy status of residents of the lower Mississippi Delta found that 52 percent of participants had a high likelihood of limited literacy skills. While a precise comparison between the NAAL and Delta studies is difficult, primarily because of methodological differences, Zoellner et al. suggest that health literacy rates in the Mississippi Delta region are different from the U.S. general population and that they help establish the scope of the problem of health literacy among adults in the Delta region. For example, only 12 percent of study participants identified the MyPyramid graphic two years after it had been launched by the United States Department of Agriculture (USDA). The study also found significant relationships between nutrition literacy and income level and nutrition literacy and educational attainment further delineating priorities for the region. These statistics point to the complexities surrounding the lack of health/nutrition literacy and reveal the degree to which they are embedded in the social structure and interconnected with other problems. Among these problems are the lack of information about food choices, a lack of understanding of nutritional information and its application to individual circumstances, limited or difficult access to healthful foods, and a range of cultural influences and socioeconomic constraints such as low levels of education and high levels of poverty that decrease opportunities for healthful eating and living. The links between low health literacy and poor health outcomes has been widely documented and there is evidence that some interventions to improve health literacy have produced successful results in the primary care setting. More must be done to further our understanding of nutrition literacy specific interventions in non-primary care settings in order to achieve better health outcomes. International food insecurity and malnutrition According to UNICEF, South Asia has the highest levels of underweight children under five, followed by sub-Saharan Africans nations, with Industrialized countries and Latin nations having the lowest rates. 
Industrialized countries
According to UNICEF, the Commonwealth of Independent States has the lowest rates of stunting and wasting, at 14 percent and 3 percent respectively. The nations of Estonia, Finland, Iceland, Lithuania and Sweden have the lowest prevalence of low-birthweight children in the world, at 4%. Proper prenatal nutrition is responsible for this small prevalence of low-birthweight infants. However, low-birthweight rates are increasing, due to the use of fertility drugs resulting in multiple births, women bearing children at an older age, and the advancement of technology allowing more pre-term infants to survive. Industrialized nations more often face malnutrition in the form of over-nutrition from excess calories and non-nutritious carbohydrates, which has contributed greatly to the public health epidemic of obesity. Disparities according to gender, geographic location and socio-economic position, both within and between countries, represent the biggest threat to child nutrition in industrialized countries. These disparities are a direct product of social inequalities, which are rising throughout the industrialized world, particularly in Europe.

North America
United States
In the United States, 2% of children are underweight, with under 1% stunted and 6% wasted. Dietitians are registered (RD) or licensed (LD) with the Commission on Dietetic Registration and the American Dietetic Association, and are only able to use the title "dietitian", as described by the business and professions codes of each respective state, when they have met specific educational and experiential prerequisites and passed a national registration or licensure examination, respectively. Anyone, including unqualified practitioners, may call themselves a nutritionist, as the term is unregulated. Some states, such as Florida, have begun to include the title "nutritionist" in state licensure requirements. Most governments provide guidance on nutrition, and some also impose mandatory disclosure and labeling requirements for processed-food manufacturers and restaurants to assist consumers in complying with such guidance. Nutritional standards and recommendations are established jointly by the US Department of Agriculture and the US Department of Health and Human Services. Dietary and physical activity guidelines from the USDA are presented in the concept of a plate of food which, in 2011, superseded the MyPyramid food pyramid that had replaced the Food Guide Pyramid. The United States Senate Committee on Agriculture, Nutrition, and Forestry is currently responsible for oversight of the USDA. The U.S. Department of Health and Human Services provides a sample week-long menu which fulfills the nutritional recommendations of the government.

Canada
Canada's Food Guide is an evidence-based education and policy tool provided by Health Canada that is designed to promote healthy eating.

South Asia
South Asia has the highest percentage and number of underweight children under five in the world, at approximately 78 million children. Patterns of stunting and wasting are similar: 44% have not reached optimal height and 15% are wasted, rates much higher than in any other region. This region of the world has extremely high rates of underweight children. According to a 2006 UNICEF study, 46% of its child population under five is underweight. The same study indicates that India, Bangladesh, and Pakistan combined account for half the globe's underweight child population.
South Asian nations have made progress towards the MDGs: the underweight rate has decreased from 53% since 1990. However, a 1.7% decrease in underweight prevalence per year will not be sufficient to meet the 2015 goal. Some nations, such as Afghanistan, Bangladesh, and Sri Lanka, have made significant improvements, each decreasing their prevalence by half in ten years. While India and Pakistan have made modest improvements, Nepal has made no significant improvement in underweight child prevalence. Other forms of undernutrition have continued to persist with high resistance to improvement, such as the prevalence of stunting and wasting, which has not changed significantly in the past 10 years. Causes of this poor nutrition include energy-insufficient diets, poor sanitation, and gender disparities in educational and social status. Girls and women face discrimination especially in nutrition status: South Asia is the only region in the world where girls are more likely to be underweight than boys. In South Asia, 60% of children in the lowest wealth quintile are underweight, compared to only 26% in the highest quintile, and the rate of reduction of underweight is slower amongst the poorest.

Eastern and Southern Africa
The Eastern and Southern African nations have shown no improvement since 1990 in the rate of underweight children under five. They have also made no progress towards halving hunger by 2015, a key Millennium Development Goal. This is due primarily to the prevalence of famine, declining agricultural productivity, food emergencies, drought, conflict, and increased poverty. This, along with HIV/AIDS, has inhibited the nutrition development of nations such as Lesotho, Malawi, Mozambique, Swaziland, Zambia and Zimbabwe. Botswana has made remarkable achievements in reducing underweight prevalence, which dropped 4% in 4 years, despite having the second-highest adult HIV prevalence in the world. South Africa, the wealthiest nation in this region, has the second-lowest proportion of underweight children, at 12%, but its underweight prevalence has been steadily increasing since 1995. Almost half of Ethiopian children are underweight, and together with Nigeria they account for almost one-third of the underweight children under five in all of sub-Saharan Africa.

West and Central Africa
West and Central Africa has the highest rate of underweight children under five in the world. Of the countries in this region, the Congo has the lowest rate at 14%, while the Democratic Republic of the Congo, Ghana, Guinea, Mali, Nigeria, Senegal and Togo are improving slowly. In Gambia, rates decreased from 26% to 17% in four years, and coverage of vitamin A supplementation reaches 91% of vulnerable populations. The region has the next-highest proportion of wasted children, with 10% of the population under five not at optimal weight. Little improvement was made between 1990 and 2004 in reducing the rate of underweight children under five, which stayed approximately the same. Sierra Leone has the highest child under-five mortality rate in the world, due predominantly to its extreme infant mortality rate, at 238 deaths per 1,000 live births. Other contributing factors include the high rate of low-birthweight children (23%) and low levels of exclusive breastfeeding (4%). Anemia is prevalent in these nations, with unacceptable rates of iron-deficiency anemia.
The nutritional status of children in the region is further indicated by the high (10%) rate of child wasting. Wasting is a significant problem in the Sahelian countries – Burkina Faso, Chad, Mali, Mauritania and Niger – where rates fall between 11% and 19% of under-fives, affecting more than 1 million children. In Mali, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Aga Khan Foundation trained women's groups to make equinut, a healthy and nutritious version of the traditional recipe di-dèguè (comprising peanut paste, honey and millet or rice flour). The aim was to boost nutrition and livelihoods by producing a product that women could make and sell, and which would be accepted by the local community because of its local heritage.

Middle East and North Africa
Six countries in the Middle East and North Africa region are on target to meet goals for reducing underweight children by 2015, and 12 countries have prevalence rates below 10%. However, the nutrition of children in the region as a whole has deteriorated over the past ten years due to the increasing share of underweight children in three populous nations – Iraq, Sudan, and Yemen. Forty-six percent of all children in Yemen are underweight, a percentage that has worsened by 4% since 1990. In Yemen, 53% of children under five are stunted and 32% are born at low birth weight. Sudan has an underweight prevalence of 41% and the highest proportion of wasted children in the region, at 16%. Only one percent of households in Sudan consume iodized salt. Iraq has also seen an increase in child underweight since 1990. Djibouti, Jordan, the Occupied Palestinian Territory (OPT), Oman, the Syrian Arab Republic and Tunisia are all projected to meet minimum nutrition goals, with the OPT, the Syrian Arab Republic, and Tunisia improving fastest. The region demonstrates that undernutrition does not always improve with economic prosperity: the United Arab Emirates, for example, despite being a wealthy nation, has child death rates due to malnutrition similar to those seen in Yemen.

East Asia and the Pacific
The East Asia and Pacific region has reached its goals on nutrition, in part due to the improvements contributed by China, the region's most populous country. China reduced its underweight prevalence from 19 percent to 8 percent between 1990 and 2002. China played the largest role in the world in decreasing the rate of underweight children under five between 1990 and 2004, halving the prevalence. This reduction of underweight prevalence has helped lower the under-five mortality rate from 49 to 31 per 1,000 live births. China also has a low-birthweight rate of 4%, comparable to industrialized countries, and over 90% of households consume adequately iodized salt. However, large disparities exist between children in rural and urban areas; in five provinces in China, 1.5 million children remain iodine deficient and susceptible to disease. Singapore, Vietnam, Malaysia, and Indonesia are all projected to reach the nutrition MDGs. Singapore has the lowest under-five mortality rate of any nation in the world besides Iceland, at about 3 per 1,000. Cambodia has the highest rate of child mortality in the region (141 per 1,000 live births), while its proportion of underweight children increased by 5 percentage points to 45% in 2000. Further nutrient indicators show that only 12 per cent of Cambodian babies are exclusively breastfed and only 14 per cent of households consume iodized salt.
Latin America and the Caribbean
This region has undergone the fastest progress in the world in reducing poor nutrition status among children. The Latin American region reduced the prevalence of underweight children by 3.8% per year between 1990 and 2004, reaching a current rate of 7% underweight. The region also has the lowest rate of child mortality in the developing world, at only 31 deaths per 1,000 live births, and the highest iodine consumption. Cuba saw improvement from 9 to 4 percent underweight under five between 1996 and 2004. Prevalence has also decreased in the Dominican Republic, Jamaica, Peru, and Chile; Chile's rate of underweight under five is merely 1%. The most populous nations, Brazil and Mexico, have relatively low rates of underweight under five, at 6% and 8% respectively. Guatemala has the highest percentage of underweight and stunted children in the region, with rates above 45%. There are disparities amongst different populations in the region: for example, children in rural areas have twice the prevalence of underweight, at 13%, compared to 5% in urban areas.

Nutrition access disparities
Occurring throughout the world, lack of proper nutrition is both a consequence and a cause of poverty. Impoverished individuals are less likely to have access to nutritious food and to escape from poverty than those who have healthy diets. Disparities in socioeconomic status, both between and within nations, pose the largest threat to child nutrition in industrialized nations, where social inequality is on the rise. According to UNICEF, children living in the poorest households are twice as likely to be underweight as those in the richest. Those in the lowest wealth quintile and whose mothers have the least education demonstrate the highest rates of child mortality and stunting. Throughout the developing world, childhood malnutrition falls disproportionately on the lower income brackets, regardless of the general rate of malnutrition. According to UNICEF, children in rural locations are more than twice as likely to be underweight as children under five in urban areas. In Latin American and Caribbean nations, "Children living in rural areas in Bolivia, Honduras, Mexico and Nicaragua are more than twice as likely to be underweight as children living in urban areas. That likelihood doubles to four times in Peru." Concurrently, the greatest increase in childhood obesity has been seen in the lower-middle-income bracket. In the United States, the incidence of low birthweight is on the rise among all populations, but particularly among minorities. According to UNICEF, boys and girls have almost identical rates of underweight as children under age 5 across the world, except in South Asia.

Nutrition policy
Nutrition interventions
Nutrition directly influences progress towards meeting the Millennium Development Goals of eradicating hunger and poverty through health and education. Nutrition interventions therefore take a multi-faceted approach to improving the nutrition status of various populations. Policy and programming must target both individual behavioral change and policy approaches to public health. While most nutrition interventions focus on delivery through the health sector, non-health-sector interventions targeting agriculture, water and sanitation, and education are important as well. Global micronutrient deficiencies are often addressed through large-scale programmes deployed by major governmental and non-governmental organizations.
For example, in 1990, iodine deficiency was particularly prevalent, with one in five households, or 1.7 billion people, not consuming adequate iodine, leaving them at risk of developing associated diseases. A global campaign to iodize salt therefore set out to eliminate iodine deficiency and successfully boosted the share of households worldwide consuming adequate amounts of iodine to 69%. Emergencies and crises often exacerbate undernutrition, as their aftermath brings food insecurity, poor health resources, unhealthy environments, and poor healthcare practices. The repercussions of natural disasters and other emergencies can therefore sharply increase the rates of macro- and micronutrient deficiencies in populations. Disaster relief interventions often take a multi-faceted public health approach. UNICEF's programming targeting nutrition services in disaster settings includes nutrition assessments, measles immunization, vitamin A supplementation, provision of fortified foods and micronutrient supplements, support for breastfeeding and complementary feeding for infants and young children, and therapeutic and supplementary feeding. For example, during Niger's food crisis of 2005, 300,000 children received therapeutic feeding through the collaboration of UNICEF, the Niger government, the World Food Programme, and 24 NGOs using community- and facility-based feeding schemes.

Interventions aimed at pregnant women, infants, and children take a behavioral and program-based approach. Behavioral intervention objectives include promoting proper breastfeeding, the immediate initiation of breastfeeding, and its continuation through 2 years and beyond. UNICEF recognizes that, to promote these behaviors, environments conducive to them must be established, including healthy hospital environments, skilled health workers, support in public and in the workplace, and the removal of negative influences. Finally, other interventions include provision of adequate micro- and macronutrients, such as iron (to combat anemia) and vitamin A supplements, as well as vitamin-fortified foods and ready-to-use products. Programs addressing micronutrient deficiencies, such as those aimed at anemia, have attempted to provide iron supplementation to pregnant and lactating women. However, because supplementation often occurs too late, these programs have had little effect. Interventions such as women's nutrition, early and exclusive breastfeeding, appropriate complementary food and micronutrient supplementation have been shown to reduce stunting and other manifestations of undernutrition. A Cochrane review of community-based maternal health packages showed that this community-based approach improved the initiation of breastfeeding within one hour of birth. Some programs have had adverse effects. One example is the "Formula for Oil" relief program in Iraq, which resulted in the replacement of breastfeeding with formula and has negatively affected infant nutrition.

Implementation and delivery platforms
In April 2010, the World Bank and the IMF released a policy briefing entitled "Scaling up Nutrition (SUN): A Framework for Action" that represented a partnered effort to address the Lancet's series on undernutrition and the goals it set out for improving undernutrition.
They emphasized the first 1,000 days, from conception to a child's second birthday, as the prime window for effective nutrition intervention, encouraging programming that was cost-effective and showed significant cognitive improvement in populations, as well as enhanced productivity and economic growth. This document was labeled the SUN framework and was launched by the UN General Assembly in 2010 as a road map encouraging coherence among stakeholders such as governments, academia, UN system organizations and foundations in working towards reducing undernutrition. The SUN framework has initiated a transformation in global nutrition, calling for country-based nutrition programs, increasing evidence-based and cost-effective interventions, and "integrating nutrition within national strategies for gender equality, agriculture, food security, social protection, education, water supply, sanitation, and health care". Government often plays a role in implementing nutrition programs through policy. For instance, several East Asian nations have enacted legislation to increase the iodization of salt in order to increase household consumption. Political commitment in the form of evidence-based, effective national policies and programs, trained and skilled community nutrition workers, and effective communication and advocacy can all work to decrease malnutrition. Market and industrial production can play a role as well. For example, in the Philippines, improved production and market availability of iodized salt increased household consumption. While most nutrition interventions are delivered directly through governments and health services, other sectors, such as agriculture, water and sanitation, and education, are vital for nutrition promotion as well.

Advice and guidance
Government policies
Canada's Food Guide is an example of a government-run nutrition program. Produced by Health Canada, the guide advises food quantities, provides education on balanced nutrition, and promotes physical activity in accordance with government-mandated nutrient needs. Like other nutrition programs around the world, Canada's Food Guide divides nutrition into four main food groups: vegetables and fruit, grain products, milk and alternatives, and meat and alternatives. Unlike its American counterpart, the Canadian guide references and provides alternatives to meat and dairy, which can be attributed to the growing vegan and vegetarian movements. In the US, nutritional standards and recommendations are established jointly by the US Department of Agriculture and the US Department of Health and Human Services (HHS), and these recommendations are published as the Dietary Guidelines for Americans. Dietary and physical activity guidelines from the USDA are presented in the concept of MyPlate, which superseded the food pyramid, which in turn replaced the Four Food Groups. The Senate committee currently responsible for oversight of the USDA is the Agriculture, Nutrition and Forestry Committee. Committee hearings are often televised on C-SPAN. The U.S. HHS provides a sample week-long menu that fulfills the nutritional recommendations of the government.

Government programs
Governmental organizations have been working on nutrition literacy interventions in non-primary health care settings to address the nutrition information problem in the U.S. Some programs include:

The Family Nutrition Program (FNP) is a free nutrition education program serving low-income adults around the U.S.
This program is funded by the Food and Nutrition Service (FNS) of the United States Department of Agriculture (USDA), usually through a local state academic institution that runs the program. The FNP has developed a series of tools to help families participating in the Food Stamp Program stretch their food dollar and form healthful eating habits, including nutrition education.

The Expanded Food and Nutrition Education Program (EFNEP) is a unique program that currently operates in all 50 states and in American Samoa, Guam, Micronesia, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands. It is designed to assist limited-resource audiences in acquiring the knowledge, skills, attitudes, and changed behavior necessary for nutritionally sound diets, and to contribute to their personal development and the improvement of the total family diet and nutritional well-being.

An example of a state initiative to promote nutrition literacy is Smart Bodies, a public-private partnership between the state's largest university system and largest health insurer, the Louisiana State Agricultural Center and the Blue Cross and Blue Shield of Louisiana Foundation. Launched in 2005, this program promotes lifelong healthful eating patterns and physically active lifestyles for children and their families. It is an interactive educational program designed to help prevent childhood obesity through classroom activities that teach children healthful eating habits and physical exercise.

Education
Nutrition is taught in schools in many countries. In England and Wales, the Personal and Social Education and Food Technology curricula include nutrition, stressing the importance of a balanced diet and teaching how to read nutrition labels on packaging. In many schools, a nutrition class falls within the Family and Consumer Science (FCS) or Health departments. In some American schools, students are required to take a certain number of FCS- or health-related classes. Nutrition is offered at many schools and, if it is not a class of its own, nutrition is included in other FCS or health classes such as Life Skills, Independent Living, Single Survival, Freshmen Connection, and Health. In many nutrition classes, students learn about the food groups, the food pyramid, Recommended Daily Allowances, calories, vitamins, minerals, malnutrition, physical activity, healthful food choices, portion sizes, and how to live a healthy life. A 1985 US National Research Council report entitled Nutrition Education in US Medical Schools concluded that nutrition education in medical schools was inadequate. Only 20% of the schools surveyed taught nutrition as a separate, required course. A 2006 survey found that this number had risen to 30%. Membership of physicians in leading professional nutrition societies such as the American Society for Nutrition has generally declined since the 1990s.

Professional organizations
In the US, registered dietitian nutritionists (RDs or RDNs) are health professionals qualified to provide safe, evidence-based dietary advice, which includes a review of what is eaten, a thorough review of nutritional health, and a personalized nutritional treatment plan through diet. They also provide preventive and therapeutic programs at workplaces, schools and similar institutions.
Certified Clinical Nutritionists, or CCNs, are trained health professionals who also offer dietary advice on the role of nutrition in chronic disease, including possible prevention or remediation by addressing nutritional deficiencies before resorting to drugs. Government regulation, especially in terms of licensing, is currently less universal for the CCN than for the RD or RDN. Another advanced nutrition professional is the Certified Nutrition Specialist, or CNS. These board-certified nutritionists typically specialize in obesity and chronic disease. To become board certified, prospective CNS candidates must pass an examination, much like registered dietitians. The exam covers specific domains within the health sphere, including clinical intervention and human health. The National Board of Physician Nutrition Specialists offers board certification for physicians practicing nutrition medicine.

Nutrition for special populations
Sports nutrition
The protein requirement for each individual differs, as do opinions about whether and to what extent physically active people require more protein. The 2005 Recommended Dietary Allowances (RDA), aimed at the general healthy adult population, provide for an intake of 0.8 grams of protein per kilogram of body weight per day. A review panel concluded that "no additional dietary protein is suggested for healthy adults undertaking resistance or endurance exercise". The main fuel used by the body during exercise is carbohydrate, which is stored in muscle as glycogen, a form of sugar. During exercise, muscle glycogen reserves can be used up, especially when activities last longer than 90 minutes.

Maternal nutrition
Maternal nutrition is crucial during pregnancy and the child's first 1,000 days of life, encompassing the period from conception to the second birthday. During the first six months, infants rely exclusively on breast milk, which remains nutritionally sufficient despite maternal nutritional challenges. Even so, the mother's overall health and diet directly affect the child's well-being. Maternal nutrition is a critical influence on a child's development during this pivotal period, as supported by recent studies. The child's growth is divided into four key stages: (1) pregnancy, from conception to birth; (2) breastfeeding, from birth to six months; (3) the introduction of solid foods, from six to 12 months; and (4) the transition to a family diet after 12 months, with each stage requiring specific nutritional considerations for optimal development. Additionally, there is a significant connection between nutrition, overall health, and learning, with proper nutritional intake being vital for maintaining healthy body weight and supporting normal growth during infancy, childhood, and adolescence. Given the rapid growth during infancy, this phase demands the highest relative energy and nutrient intake of any stage of development. Proper nutrition during pregnancy plays a vital role in the development of the brain, requiring essential nutrients such as specific lipids, protein, folate, zinc, iodine, iron, and copper. Ensuring that children receive adequate nutrition during the first 1,000 days, from conception to the second birthday, significantly increases their chances of being born at a healthy weight.
Additionally, it lowers the risk of various health conditions, including obesity and type 2 diabetes, while also fostering better learning abilities, fewer behavioral issues during early childhood, and improved overall health and economic stability in the long term.

Pediatric nutrition
Adequate nutrition is essential for the growth of children from infancy through adolescence. Some nutrients are specifically required for growth on top of those required for normal body maintenance, in particular those involved in calcium and iron metabolism. Childhood dietary patterns are influenced by various factors, including feeding challenges and nutritional needs, with significant long-term consequences. During the first year, an infant's birth weight triples, and by age five, their birth length doubles. Brain volume doubles within the first 12 months and triples by 36 months. To support this rapid growth, solid foods are introduced after six months to supplement breast milk or infant formula. As children begin to consume more table foods in their second year, they are exposed to the same diet as their caregivers, which, along with more complex food combinations, shapes their dietary habits by 24 months. Imbalances in diet during this critical period can lead to malnutrition, with the highest risk occurring around the time of weaning, typically at 12 months in the U.S. and later in the second year globally. As a child transitions from breast milk or formula, dairy milk often becomes a key nutritional source, making the quality of the diet essential for continued growth and development. Various feeding challenges can increase the risk of malnutrition in young children. These include individual factors such as food neophobia, temperament, and sensitivity to bitter tastes, as well as family-related factors such as education, income, food insecurity, and cultural norms. Young children tend to accept foods that are familiar and routine, as preferences are shaped through repeated exposure. Successful food acceptance requires caregivers to be patient, persistent, and willing to offer previously rejected foods multiple times. However, when caregivers label their child as "picky" or selective, they often stop offering rejected foods after just 3-5 attempts, mistakenly attributing limited food acceptance to genetics rather than learned behavior. Bribing or pressuring children to eat, along with a permissive feeding style that caters to the child's preferences, can lead to food rejection. It is common for young children to experience "food jags" (repeatedly wanting the same food) and to have shifting food preferences. While some children may exhibit a strong aversion to new foods, these reactions are usually not permanent. To address these challenges, providing a variety of nutrient-rich foods at every meal and snack is essential, allowing children to explore and develop their preferences. The concept of "responsive feeding", which involves a reciprocal relationship between the child and the caregiver during meals, is widely recommended. This approach is also supported by the U.S. Dietary Guidelines for Americans and the Centers for Disease Control and Prevention.

Elderly nutrition
Malnutrition in older adults is a significant health concern, linked to increased mortality, morbidity, and physical decline, which adversely affect daily activities and overall quality of life. The condition is common among the elderly and can also contribute to the development of geriatric syndromes.
In older adults, malnutrition is typically indicated by unintentional weight loss or a low body mass index, though hidden deficiencies, such as those involving micronutrients, are often harder to detect and frequently go unnoticed, especially in community-dwelling seniors. The prevalence of malnutrition is generally higher among the elderly, but it has different aspects in developed and developing countries. In developed countries, the most common cause of malnutrition is illness, as both acute and chronic conditions can lead to or worsen nutritional deficits. As age increases the likelihood of disease, older adults are at the highest risk of nutritional challenges or malnutrition. The causes of malnutrition are complex and multifaceted, with aging processes further contributing to its development. The concerns with nutritional markers for the elderly are highlighted by the prevalence and determinants of malnutrition in adults over 65, which encompass factors ranging from age-related changes to disease-related risks. Addressing, understanding, identifying, and treating malnutrition remain key challenges; in some cases, targeted supplementation of macro- and micronutrients may be necessary when diet alone does not meet age-specific nutritional needs. The World Health Organization (WHO) has identified healthy aging as a key priority from 2016 to 2030, developing a policy framework that advocates for action across multiple sectors. The program aims to help older adults (those aged 65 and over) maintain functional ability, ensuring their well-being and active participation in society. Older adults are the fastest-growing age group, and United Nations projections indicate that by 2050 their numbers will be double those of children under five and will exceed the population of adolescents aged 15 to 24. By 2050, global life expectancy, which was 72.6 years in 2019, is expected to increase by approximately five years. Maintaining good nutritional status and adequate nutrient intake is essential for health, quality of life, and overall well-being in older age, and it plays a crucial role in healthy aging as defined by the WHO.

Elderly Nutrition: Protein
While energy needs decrease with age, the demand for protein and certain nutrients actually rises to support normal bodily functions. Deficiencies in specific nutrients are also linked to cognitive decline, a common issue among older adults. Reduced daily food intake in the elderly often leads to insufficient protein consumption, contributing to sarcopenia, a condition marked by the loss of muscle mass. Approximately 30% of those aged 60 and above, and over 50% of individuals aged 80 and older, are affected by this condition. The inability to meet protein needs exacerbates health issues, including chronic muscle wasting and deteriorating bone health, leading to functional decline and frailty. To mitigate this, older adults are advised to distribute protein intake evenly across meals: breakfast, lunch, and dinner. As aging diminishes the body's ability to synthesize muscle protein, consuming adequate essential amino acids, especially leucine, is crucial. A leucine intake of at least 3 g per meal, achieved through 25-30 g of high-quality protein, is necessary for effective muscle protein synthesis. Data from the National Health and Nutrition Examination Survey III indicate that the average protein intake among the elderly is 0.9 g/kg of body weight per day, with half of this intake occurring at dinner. A simple per-meal calculation along these lines is sketched below.
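To make the arithmetic concrete, the following Python sketch combines the figures quoted above: the general adult RDA of 0.8 g of protein per kilogram of body weight, the higher 1.2-1.5 g/kg range suggested for ill older adults, and the target of roughly 25-30 g of high-quality protein (about 3 g of leucine) per meal. It is an illustrative calculation only; the function names and the assumption of three equal meals are choices made here, not part of any cited guideline, and none of this constitutes medical advice.

```python
def daily_protein_target(weight_kg, per_kg_requirement=0.8):
    """Total daily protein (g) for a given body weight.

    per_kg_requirement: 0.8 g/kg is the general adult RDA quoted in the text;
    1.2-1.5 g/kg has been suggested for older adults with acute or chronic
    illness (illustrative values only).
    """
    return weight_kg * per_kg_requirement


def per_meal_targets(weight_kg, per_kg_requirement, meals=3):
    """Split the daily target evenly across meals, as recommended for older
    adults, and flag whether each meal reaches the ~25-30 g of high-quality
    protein associated with ~3 g of leucine."""
    total = daily_protein_target(weight_kg, per_kg_requirement)
    per_meal = total / meals
    meets_per_meal_target = per_meal >= 25  # lower bound of the 25-30 g range
    return total, per_meal, meets_per_meal_target


if __name__ == "__main__":
    # A hypothetical 70 kg older adult at the 0.8 g/kg RDA versus 1.2 g/kg.
    for requirement in (0.8, 1.2):
        total, per_meal, ok = per_meal_targets(70, requirement)
        print(f"{requirement} g/kg -> {total:.0f} g/day, "
              f"{per_meal:.0f} g/meal, per-meal target met: {ok}")
```

Run for a hypothetical 70 kg adult, the sketch shows that intake near the general RDA leaves each of three meals well below the 25-30 g per-meal range, which is consistent with the distribution problem described in the surrounding text.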
Such an uneven distribution of intake can lead to sub-optimal protein synthesis and to increased use of dietary amino acids for other processes, such as fat storage. Evenly distributing protein intake, with roughly 30 g at each meal, is therefore recommended to enhance protein turnover and prevent muscle loss. Older adults, particularly those with acute or chronic illnesses, may require a higher protein intake, ranging from 1.2 to 1.5 g/kg per day, due to a reduced anabolic response. Some studies suggest that an intake of 1 g/kg per day is sufficient, while others recommend 1.3 to 1.73 g/kg per day for better health outcomes. Research shows that muscle mass preservation is more effectively supported by animal protein, which has a higher essential amino acid content, than by plant protein. The timing of protein intake, the protein source, and the amino acid content are key factors in optimizing protein absorption in the elderly.

Elderly Nutrition: Zinc
Zinc is a vital micronutrient that plays a crucial role in enzymatic catabolism, immune cell function, DNA synthesis, and the metabolism of various other micronutrients. In the elderly, low serum zinc levels have been reported; these weaken the immune system, making older adults more susceptible to infections and increasing their risk of morbidity. Aging impairs T cell function, particularly under zinc deficiency, and the reduced synthesis of metallothionein disrupts zinc balance in the gut and other tissues. This deficiency is primarily due to inadequate dietary zinc intake, compounded by factors such as poor mastication, oral health issues, use of medications that interfere with absorption, and psychosocial factors that limit food intake. Additionally, epigenetic changes such as DNA methylation may impair zinc transporters, leading to decreased zinc absorption as people age. Structural changes in the gut, including altered villus shape, mitochondrial changes, crypt elongation, collagen alterations, and increased cell replication time in the crypts, also significantly affect zinc absorption in the elderly. The recommended daily allowance of zinc is 11 mg for older men and 8 mg for older women, with an upper tolerable limit of 25–40 mg per day, including both dietary and supplemental sources. However, individuals over 60 often consume less than 50% of the recommended zinc intake, which is crucial for proper body function. Data from the Third National Health and Nutrition Examination Survey in the United States revealed that only 42.5% of adults over 71 years old met adequate zinc intake levels, with many suffering from zinc deficiency. Zinc intake from both food and supplements, up to the upper tolerable limit of 40 mg per day, can be used to help normalize serum zinc levels in deficient elderly individuals. Dietary sources such as seafood, poultry, red meat, beans, fortified cereals, whole grains, nuts, and dairy products are beneficial for maintaining adequate zinc levels, though absorption is higher from animal proteins than from plant-based sources.

Elderly Nutrition: Vitamin-B Complex
The vitamin-B complex, which includes eight water-soluble vitamins, plays a crucial role in maintaining cellular function and preventing brain atrophy. Among the elderly, deficiencies in vitamins B12, B6, and folate are linked to cognitive decline and depressive symptoms. The Recommended Dietary Allowance (RDA) for vitamin B12 is 0.9-2.4 μg/day, while the estimated average requirement in the U.S. and Canada is 0.7-2 μg/day.
Elderly individuals with plasma vitamin B12 levels below 148 pmol/L are considered severely deficient, and those with levels between 148 and 221 pmol/L are marginally deficient. A deficiency in these B vitamins, particularly B6, B12, and folate, is associated with elevated homocysteine levels, which increase the risk of Alzheimer's disease and dementia. Increased intake of these vitamins can lower homocysteine levels and reduce the risk of these conditions. According to the National Health and Nutrition Examination Survey, about 6% of elderly Americans over 70 are severely deficient in vitamin B12, and more than 20% of those over 60 are mildly deficient. This deficiency is often due to insufficient food intake and to malabsorption caused by degenerative digestive conditions, as indicated by elevated plasma gastrin levels in older adults. The prevalence of vitamin B6 deficiency among institutionalized elderly people in Europe ranges from below 1% to 75%. B vitamins are primarily found in animal-based foods, making deficiencies more common among those with limited animal food intake for cultural, religious, or economic reasons. For vegetarians, fortified foods can be a viable alternative for ensuring adequate vitamin B12 levels, especially when combined with reducing laxative use to improve absorption.

Elderly Nutrition: Calcium & Vitamin D3
Aging is often marked by a decline in bone mineral density, leading to an increased risk of osteoporotic fractures and reduced mobility, especially among elderly women. Women experience greater bone loss, around 2-3% per year, particularly after menopause due to estrogen deficiency. This deficiency reduces intestinal calcium absorption, decreases calcium reabsorption by the kidneys, and increases parathyroid hormone secretion, all of which contribute to bone resorption. Additionally, vitamin D3 deficiency, common in older adults due to reduced skin synthesis and limited sun exposure, further disrupts calcium homeostasis by decreasing the intestinal absorption of calcium. As kidney function declines with age, the conversion of vitamin D3 to its active form is impaired, exacerbating the deficiency. Serum 25(OH)D levels below 50 nmol/L are linked to muscle weakness and reduced physical function, while levels below 25-30 nmol/L increase the risk of falls and fractures. Older adults typically consume less calcium, around 600 mg/day, which heightens their susceptibility to fractures. For optimal bone health, a calcium intake of 1000–1200 mg/day is recommended, along with 800 IU/day of vitamin D3 for those with adequate sun exposure, and up to 2000 IU/day for those with limited sun exposure or obesity. However, dietary factors such as phytates, oxalates, tannins, and high sodium can impair calcium absorption and retention, underscoring the need to maintain sufficient levels of both calcium and vitamin D3 through diet or supplementation to reduce the risk of pathologic fractures.

Elderly Nutrition: Iron
Iron deficiency is prevalent among the elderly and is a significant contributor to anemia in this population. As people age, the body's ability to balance iron storage and supply diminishes, leading to this condition. Multiple factors contribute to iron deficiency in older adults, including reduced food intake, frequent medication use, gastrointestinal malabsorption, and occult bleeding. Malabsorption can also result in excessive iron accumulation, further complicating the issue.
Age-related anemia may also be linked to increased levels of hepcidin, a hormone that reduces iron absorption in the intestine, leading to low iron levels. The recommended daily intake of iron for both men and women is 8 mg, with an upper limit of 45 mg/day. According to the World Health Organization, hemoglobin levels below 12 g/dl in women and 13 g/dl in men indicate anemia. The NHANES III survey found that anemia affects 10.2% of women and 11% of men over 65, with prevalence increasing with age. Low iron levels not only decrease quality of life but are also associated with depression, fatigue, cognitive impairment, and muscle wasting. Dietary components significantly influence iron absorption: tannins and polyphenols in tea and coffee inhibit it, while vitamin C enhances it. However, the interaction between iron and vitamin C can generate free radicals, particularly in cases of iron overload; in iron deficiency, vitamin C aids absorption. Aspirin use in the elderly, often for cardiovascular disease, is linked to lower serum ferritin levels. Iron deficiency can be managed through an iron-rich diet or supplementation. Severe iron-deficiency anemia may require oral iron therapy, typically with 300 mg of ferrous sulfate containing 60 mg of elemental iron. For those who do not respond to oral treatment, intravenous iron infusion, or iron chelation in cases of iron overload, may be necessary.

Clinical nutrition
On admission to an intensive care unit, energy and protein requirements are calculated to determine the targets of nutritional therapy. Enteral nutrition (administering nutrition via a feeding tube) is started within 24 to 48 hours of admission, with feeding targets increased every week. The risk of aspiration (inhalation of fluid or food particles while drinking or eating) can be reduced by elevating the head, using a prokinetic agent, and using a chlorhexidine mouthwash. Although the presence of bowel sounds and the volume of gastric residue aspirated after feeding can be used to assess the functionality of the gastrointestinal tract before feeding is started, starting nutritional therapy regardless of functional status is feasible and safe within 36 to 48 hours of admission. Parenteral nutrition (administering nutrition intravenously) should be started when enteral nutrition is not possible or not sufficient, or in high-risk subjects. Before undergoing surgery, a patient should avoid long periods of fasting, and oral feeding should be re-established as soon as possible after surgery. Other aspects of nutritional care include control of blood glucose, reduction of risk factors that cause stress-related catabolism or impair gastrointestinal function, and encouragement of early physical activity to promote protein synthesis and muscle function.

History of human nutrition
Early human nutrition was largely determined by the availability and palatability of foods. Humans evolved as omnivorous hunter-gatherers, though the diet of humans has varied significantly depending on location and climate. The diet in the tropics tended to depend more heavily on plant foods, while the diet at higher latitudes tended more towards animal products. Analyses of postcranial and cranial remains of humans and animals from the Neolithic, along with detailed bone-modification studies, have shown that cannibalism also occurred among prehistoric humans.
Agriculture developed at different times in different places, starting about 11,500 years ago, providing some cultures with a more abundant supply of grains (such as wheat, rice and maize) and potatoes, and giving rise to staples such as bread, pasta, and tortillas. The domestication of animals provided some cultures with milk and dairy products. In 2020, archeological research discovered a frescoed thermopolium (a fast-food counter) in an exceptional state of preservation, dating from AD 79, in Pompeii, including 2,000-year-old foods still present in some of the deep terra cotta jars.

Nutrition in antiquity
During classical antiquity, diets consisted of simple fresh or preserved whole foods that were either locally grown or transported from neighboring areas during times of crisis.

18th century until today: food processing and nutrition
Since the Industrial Revolution in the 18th and 19th centuries, the food processing industry has invented many technologies that both help keep foods fresh longer and alter the fresh state of food as it appears in nature. Cooling and freezing are the primary technologies used to maintain freshness, whereas many more technologies have been invented to allow foods to last longer without becoming spoiled. These latter technologies include pasteurisation, autoclavation, drying, salting, and separation of various components, all of which appear to alter the original nutritional content of food. Pasteurisation and autoclavation (heating techniques) have no doubt improved the safety of many common foods, preventing epidemics of bacterial infection. Modern separation techniques such as milling, centrifugation, and pressing have enabled the concentration of particular components of food, yielding flour, oils, juices, and so on, and even separate fatty acids, amino acids, vitamins, and minerals. Inevitably, such large-scale concentration changes the nutritional content of food, retaining certain nutrients while removing others. Heating techniques may also reduce the content of many heat-labile nutrients such as certain vitamins and phytochemicals, and possibly other yet-to-be-discovered substances. Because of this reduced nutritional value, processed foods are often enriched or fortified with some of the most critical nutrients (usually certain vitamins) lost during processing. Nonetheless, processed foods tend to have an inferior nutritional profile compared to whole, fresh foods, in terms of their content of sugar and high-GI starches, potassium relative to sodium, vitamins, fiber, and intact, unoxidized (essential) fatty acids. In addition, processed foods often contain potentially harmful substances such as oxidized fats and trans fatty acids. A dramatic example of the effect of food processing on a population's health is the history of epidemics of beri-beri in people subsisting on polished rice. Removing the outer layer of rice by polishing removes with it the essential vitamin thiamine, causing beri-beri. Another example is the development of scurvy among infants in the late 19th century in the United States. It turned out that the vast majority of those affected were being fed milk that had been heat-treated (as suggested by Pasteur) to control bacterial disease. Pasteurisation was effective against bacteria, but it destroyed the vitamin C.

Research on nutrition and nutritional science
Antiquity: Start of scientific research on nutrition
Around 3000 BC the Vedic texts made mention of scientific research on nutrition.
The first recorded dietary advice, carved into a Babylonian stone tablet in about 2500 BC, cautioned those with pain inside to avoid eating onions for three days. Scurvy, later found to be a vitamin C deficiency, was first described around 1500 BC in the Ebers Papyrus. According to Walter Gratzer, the study of nutrition probably began during the 6th century BC. In China, the concept of qi developed, a spirit or "wind" similar to what Western Europeans later called pneuma. Food was classified into "hot" (for example, meats, blood, ginger, and hot spices) and "cold" (green vegetables) in China, India, Malaya, and Persia. Humours developed perhaps first in China alongside qi. Ho the Physician concluded that diseases are caused by deficiencies of elements (Wu Xing: fire, water, earth, wood, and metal), and he classified diseases as well as prescribed diets. At about the same time, in Italy, Alcmaeon of Croton (a Greek) wrote of the importance of equilibrium between what goes in and what goes out, and warned that imbalance would result in disease marked by obesity or emaciation. Around 475 BC, Anaxagoras wrote that food is absorbed by the human body and therefore contains "homeomerics" (generative components), suggesting the existence of nutrients. Around 400 BC, Hippocrates, who recognized and was concerned with obesity, which may have been common in southern Europe at the time, said, "Let food be your medicine and medicine be your food." The works that are still attributed to him, the Corpus Hippocraticum, called for moderation and emphasized exercise. Salt, pepper and other spices were prescribed for various ailments in various preparations, for example mixed with vinegar. In the 2nd century BC, Cato the Elder believed that cabbage (or the urine of cabbage-eaters) could cure digestive diseases, ulcers, warts, and intoxication. Living around the turn of the millennium, Aulus Celsus, an ancient Roman doctor, believed in "strong" and "weak" foods (bread, for example, was strong, as were older animals and vegetables). The Book of Daniel, dated to the second century BC, contains a description of a comparison between the health of captured people following Jewish dietary laws and that of those eating the diet of the soldiers of the king of Babylon. (The story may be legendary rather than historical.)

1st to 17th century
Galen was physician to gladiators in Pergamon and, in Rome, physician to Marcus Aurelius and the three emperors who succeeded him. Galen's teachings were in use from his lifetime in the 2nd century AD until the 17th century, and for 1,500 years it was heresy to disagree with them. Most of Galen's teachings were gathered and enhanced in the late 11th century by Benedictine monks at the School of Salerno in the Regimen sanitatis Salernitanum, which still had users in the 17th century. Galen believed in the bodily humours of Hippocrates, and he taught that pneuma is the source of life. Four elements (earth, air, fire and water) combine into "complexion", which combines into states (the four temperaments: sanguine, phlegmatic, choleric, and melancholic). The states are made up of pairs of attributes (hot and moist, cold and moist, hot and dry, and cold and dry), which are made of four humours: blood, phlegm, green (or yellow) bile, and black bile (the bodily form of the elements). Galen thought it scandalous for a person to have gout, kidney stones, or arthritis, an attitude Gratzer likens to that of Samuel Butler's Erewhon (1872), in which sickness is a crime. In the 1500s, Paracelsus was probably the first to criticize Galen publicly.
Also in the 16th century, the scientist and artist Leonardo da Vinci compared metabolism to a burning candle. Leonardo did not publish his works on this subject, but he was not afraid of thinking for himself and he definitely disagreed with Galen. Ultimately, the 16th-century works of Andreas Vesalius, sometimes called the father of modern human anatomy, overturned Galen's ideas. Vesalius was followed by thinkers whose piercing insights were amalgamated with the era's mysticism and religion, and sometimes fueled by the mechanics of Newton and Galileo. Jan Baptist van Helmont, who discovered several gases such as carbon dioxide, performed the first quantitative experiment. Robert Boyle advanced chemistry. Sanctorius measured body weight. The physician Herman Boerhaave modeled the digestive process. The physiologist Albrecht von Haller worked out the difference between nerves and muscles.

18th and 19th century: Lind, Lavoisier and modern science
Sometimes overlooked during his lifetime, James Lind, a physician in the British navy, performed the first scientific nutrition experiment in 1747. Lind discovered that lime juice saved sailors who had been at sea for years from scurvy, a deadly and painful bleeding disorder. Between 1500 and 1800, an estimated two million sailors had died of scurvy. The discovery was ignored for forty years, but after about 1850 British sailors became known as "limeys" because of the limes carried and consumed aboard ship. The essential vitamin C within citrus fruits would not be identified by scientists until 1932. Around 1770, Antoine Lavoisier discovered the details of metabolism, demonstrating that the oxidation of food is the source of body heat. In what has been called the most fundamental chemical discovery of the 18th century, Lavoisier established the principle of conservation of mass. His ideas made the phlogiston theory of combustion obsolete. In 1790, George Fordyce recognized calcium as necessary for the survival of fowl. In the early 19th century, the elements carbon, nitrogen, hydrogen, and oxygen were recognized as the primary components of food, and methods to measure their proportions were developed. In 1816, François Magendie discovered that dogs fed only carbohydrates (sugar), fat (olive oil), and water died, evidently of starvation, while dogs also fed protein survived, identifying protein as an essential dietary component. In 1827, William Prout was the first person to divide foods into carbohydrates, fat, and protein. In 1840, Justus von Liebig discovered the chemical makeup of carbohydrates (sugars), fats (fatty acids) and proteins (amino acids). During the 19th century, Jean-Baptiste Dumas and von Liebig quarrelled over their shared belief that animals get their protein directly from plants (that animal and plant protein are the same and that humans do not create organic compounds). With a reputation as the leading organic chemist of his day but with no credentials in animal physiology, von Liebig grew rich making food extracts such as beef bouillon and infant formula that were later found to be of questionable nutritional value. In the early 1880s, Kanehiro Takaki observed that Japanese sailors (whose diets consisted almost entirely of white rice) developed beriberi (or endemic neuritis, a disease causing heart problems and paralysis), but British sailors and Japanese naval officers did not. Adding various types of vegetables and meats to the diets of Japanese sailors prevented the disease.
(This was not because of the increased protein, as Takaki supposed, but because it introduced a few parts per million of thiamine to the diet.) In the 1860s, Claude Bernard discovered that body fat can be synthesized from carbohydrate and protein, showing that the energy in blood glucose can be stored as fat or as glycogen. In 1896, Eugen Baumann observed iodine in thyroid glands. In 1897, Christiaan Eijkman worked with natives of Java, who also had beriberi. Eijkman observed that chickens fed the native diet of white rice developed the symptoms of beriberi but remained healthy when fed unprocessed brown rice with the outer bran intact. His assistant, Gerrit Grijns, correctly identified and described the anti-beriberi substance in rice. Eijkman cured the natives by feeding them brown rice, demonstrating that food can cure disease. Over two decades later, nutritionists learned that the outer rice bran contains vitamin B1, also known as thiamine.

Early 20th century
In the early 20th century, Carl von Voit and Max Rubner independently measured caloric energy expenditure in different species of animals, applying principles of physics to nutrition. In 1906, Edith G. Willcock and Frederick Hopkins showed that the amino acid tryptophan aids the well-being of mice but does not assure their growth. In the middle of twelve years of attempts to isolate them, Hopkins said in a 1906 lecture that "unsuspected dietetic factors", other than calories, protein, and minerals, are needed to prevent deficiency diseases. In 1907, Stephen M. Babcock and Edwin B. Hart started the single-grain cow-feeding experiment, which took nearly four years to complete. In 1912, Casimir Funk coined the term vitamin for a vital factor in the diet, from the words "vital" and "amine", because these unknown substances, which prevented scurvy, beriberi, and pellagra, were then thought to derive from ammonia. In 1913, Elmer McCollum discovered the first vitamins: fat-soluble vitamin A and water-soluble vitamin B (identified in 1915 and later shown to be a complex of several water-soluble vitamins); he also named vitamin C as the then-unknown substance preventing scurvy. Lafayette Mendel (1872–1935) and Thomas Osborne (1859–1929) also performed pioneering work on vitamins A and B. In 1919, Sir Edward Mellanby incorrectly identified rickets as a vitamin A deficiency because he could cure it in dogs with cod liver oil. In 1922, McCollum destroyed the vitamin A in cod liver oil but found that it still cured rickets. Also in 1922, H.M. Evans and L.S. Bishop discovered vitamin E as essential for rat pregnancy, originally calling it "food factor X" until 1925. In 1925, Hart discovered that iron absorption requires trace amounts of copper. In 1927, Adolf Otto Reinhold Windaus synthesized vitamin D, for which he won the Nobel Prize in Chemistry in 1928. In 1928, Albert Szent-Györgyi isolated ascorbic acid, and in 1932 he proved that it is vitamin C by showing that it prevents scurvy. In 1935 he synthesized it, and in 1937 he won a Nobel Prize for his efforts. Szent-Györgyi concurrently elucidated much of the citric acid cycle. In the 1930s, William Cumming Rose identified the essential amino acids, necessary protein components that the body cannot synthesize. In 1935, Eric Underwood and Hedley Marston independently discovered the necessity of cobalt. In 1936, Eugene Floyd DuBois showed that work and school performance are related to caloric intake. In 1938, Erhard Fernholz discovered the chemical structure of vitamin E. It was synthesised the same year by Paul Karrer.
Oxford University closed down its nutrition department after World War II because the subject seemed to have been completed between 1912 and 1944. Institutionalization of nutritional science in the 1950s Nutritional science was institutionalized as a separate, independent discipline in the 1950s. At the instigation of the British physiologist John Yudkin at the University of London, the degrees Bachelor of Science and Master of Science in nutritional science were established. The first students were admitted in 1953, and in 1954 the Department of Nutrition was officially opened. In Germany, institutionalization followed in November 1956, when Hans-Diedrich Cremer was appointed to the chair for human nutrition in Giessen. Over time, seven other German universities established similar institutions. From the 1950s to the 1970s, nutritional science focused on dietary fat and sugar. From the 1970s to the 1990s, attention shifted to diet-related chronic diseases and supplementation.
Biology and health sciences
Health and fitness: General
Health
93829
https://en.wikipedia.org/wiki/Agricultural%20biotechnology
Agricultural biotechnology
Agricultural biotechnology, also known as agritech, is an area of agricultural science involving the use of scientific tools and techniques, including genetic engineering, molecular markers, molecular diagnostics, vaccines, and tissue culture, to modify living organisms: plants, animals, and microorganisms. Crop biotechnology is one aspect of agricultural biotechnology that has been greatly developed in recent times. Desired traits are transferred from a particular species of crop to an entirely different species. These transgenic crops possess desirable characteristics in terms of flavor, color of flowers, growth rate, size of harvested products and resistance to diseases and pests. History Farmers have manipulated plants and animals through selective breeding for tens of thousands of years in order to create desired traits. In the 20th century, a surge in technology resulted in an increase in agricultural biotechnology through the selection of traits like increased yield, pest resistance, drought resistance, and herbicide resistance. The first food product produced through biotechnology was sold in 1990, and by 2003, 7 million farmers were utilizing biotech crops. More than 85% of these farmers were located in developing countries. Crop modification techniques Traditional breeding Traditional crossbreeding has been used for centuries to improve crop quality and quantity. Crossbreeding mates two sexually compatible species to create a new variety with the desired traits of the parents. For example, the honeycrisp apple exhibits a specific texture and flavor due to the crossbreeding of its parents. In traditional practices, pollen from one plant is placed on the female part of another, which leads to a hybrid that contains genetic information from both parent plants. Plant breeders select the plants with the traits they wish to pass on and continue to breed those plants. Note that crossbreeding can only be utilized within the same or closely related species. Mutagenesis Mutations can occur randomly in the DNA of any organism. In order to create variety within crops, scientists can randomly induce mutations within plants. Mutagenesis uses mutagenic chemicals, such as ethyl methanesulfonate, or radiation to induce random mutations in the hope of stumbling upon the desired trait. Atomic gardens are used to mutate crops: a radioactive core is located in the center of a circular garden and raised out of the ground to irradiate the surrounding crops, generating mutations within a certain radius. Mutagenesis through radiation was the process used to produce ruby red grapefruits. Polyploidy Polyploidy can be induced to modify the number of chromosomes in a crop in order to influence its fertility or size. Usually, organisms have two sets of chromosomes, a condition known as diploidy. However, either naturally or through the use of chemicals, that number of chromosomes can change, resulting in fertility changes or size modification within the crop. Seedless watermelons are created in this manner; a 4-set chromosome watermelon is crossed with a 2-set chromosome watermelon to create a sterile (seedless) watermelon with three sets of chromosomes. Protoplast fusion Protoplast fusion is the joining of cells or cell components to transfer traits between species. For example, the trait of male sterility is transferred from radishes to red cabbages by protoplast fusion. 
This male sterility helps plant breeders make hybrid crops. RNA interference RNA interference (RNAi) is the process in which a cell's RNA-to-protein mechanism is turned down or off in order to suppress genes. This method of genetic modification works by interfering with messenger RNA to stop the synthesis of proteins, effectively silencing a gene. Transgenics Transgenics involves the insertion of one piece of DNA into another organism's DNA in order to introduce new genes into the original organism. This addition of genes into an organism's genetic material creates a new variety with desired traits. The DNA must be prepared and packaged in a test tube and then inserted into the new organism. New genetic information can be inserted with gene guns/biolistics. An example of a gene gun transgenic is the rainbow papaya, which is modified with a gene that gives it resistance to the papaya ringspot virus. Genome editing Genome editing is the use of an enzyme system to modify the DNA directly within the cell. Genome editing is used to develop herbicide-resistant canola to help farmers control weeds. Improved nutritional content Agricultural biotechnology has been used to improve the nutritional content of a variety of crops in an effort to meet the needs of an increasing population. Genetic engineering can produce crops with a higher concentration of vitamins. For example, golden rice contains three genes that allow plants to produce compounds that are converted to vitamin A in the human body. This nutritionally improved rice is designed to combat the world's leading cause of blindness—vitamin A deficiency. Similarly, the Banana 21 project has worked to improve the nutrition in bananas to combat micronutrient deficiencies in Uganda. By genetically modifying bananas to contain vitamin A and iron, Banana 21 has helped foster a solution to micronutrient deficiencies through the vessel of a staple food and major starch source in Africa. Additionally, crops can be engineered to reduce toxicity or to produce varieties with removed allergens. Genes and traits of interest for crops Agronomic traits Insect resistance One highly sought after trait is insect resistance. This trait increases a crop's resistance to pests and allows for a higher yield. An example of this trait is crops genetically engineered to make insecticidal proteins originally discovered in Bacillus thuringiensis, a bacterium that produces insecticidal proteins which are harmless to humans. The genes responsible for this insect resistance have been isolated and introduced into many crops. Bt corn and cotton are now commonplace, and cowpeas, sunflower, soybeans, tomatoes, tobacco, walnut, sugar cane, and rice are all being studied in relation to Bt. Herbicide tolerance Weeds have proven to be an issue for farmers for thousands of years; they compete for soil nutrients, water, and sunlight and can be deadly to crops. Biotechnology has offered a solution in the form of herbicide tolerance. Chemical herbicides are sprayed directly on plants in order to kill weeds and therefore the competition, and herbicide-resistant crops then have the opportunity to flourish. Disease resistance Often, crops are afflicted by disease spread through insects (like aphids). The spread of disease among crop plants is extremely difficult to control and was previously managed only by completely removing the affected crop. The field of agricultural biotechnology offers a solution through genetically engineered virus resistance. 
GE disease-resistant crops now in development include cassava, maize, and sweet potato. Temperature tolerance Agricultural biotechnology can also provide a solution for plants in extreme temperature conditions. In order to maximize yield and prevent crop death, genes can be engineered that help to regulate cold and heat tolerance. For example, tobacco plants have been genetically modified to be more tolerant to hot and cold conditions, with genes originally found in Carica papaya. Other traits include water use efficiency, nitrogen use efficiency and salt tolerance. Quality traits Quality traits include increased nutritional or dietary value, improved food processing and storage, or the elimination of toxins and allergens in crop plants. Common GMO crops Currently, only a small number of genetically modified crops are available for purchase and consumption in the United States. The USDA has approved soybeans, corn, canola, sugar beets, papaya, squash, alfalfa, cotton, apples, and potatoes. GMO apples (Arctic apples) are non-browning apples that eliminate the need for anti-browning treatments, reduce food waste, and preserve flavor. The production of Bt cotton has skyrocketed in India, with 10 million hectares planted for the first time in 2011, resulting in a 50% reduction in insecticide application. In 2014, Indian and Chinese farmers planted more than 15 million hectares of Bt cotton. Safety testing and government regulations Agricultural biotechnology regulation in the US falls under three main government agencies: the Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA). The USDA must approve the release of any new GMOs, the EPA regulates insecticides, and the FDA evaluates the safety of a particular crop sent to market. On average, it takes nearly 13 years and $130 million of research and development for a genetically modified organism to come to market. The regulation process takes up to 8 years in the United States. The safety of GMOs has become a topic of debate worldwide, and scientific studies are being conducted to test the safety of consuming GMOs in addition to the FDA's work. One such study concluded that Bt rice did not adversely affect digestion and did not induce horizontal gene transfer.
Technology
Biotechnology
null
94102
https://en.wikipedia.org/wiki/Solid%20angle
Solid angle
In geometry, a solid angle (symbol: Ω) is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point. The point from which the object is viewed is called the apex of the solid angle, and the object is said to subtend its solid angle at that point. In the International System of Units (SI), a solid angle is expressed in a dimensionless unit called a steradian (symbol: sr), which is equal to one square radian, 1 sr = 1 rad². One steradian corresponds to one unit of area (of any shape) on the unit sphere surrounding the apex, so an object that blocks all rays from the apex would cover a number of steradians equal to the total surface area of the unit sphere, 4π. Solid angles can also be measured in squares of angular measures such as degrees, minutes, and seconds. A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Indeed, as viewed from any point on Earth, both objects have approximately the same solid angle (and therefore apparent size). This is evident during a solar eclipse. Definition and properties The magnitude of an object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the apex, that the object covers. Giving the area of a segment of a unit sphere in steradians is analogous to giving the length of an arc of a unit circle in radians. Just as the magnitude of a plane angle in radians at the vertex of a circular sector is the ratio of the length of its arc to its radius, the magnitude of a solid angle in steradians is the ratio of the area covered on a sphere by an object to the square of the radius of the sphere. The formula for the magnitude of the solid angle in steradians is Ω = A/r², where A is the area (of any shape) on the surface of the sphere and r is the radius of the sphere. Solid angles are often used in astronomy, physics, and in particular astrophysics. The solid angle of an object that is very far away is roughly proportional to the ratio of area to squared distance. Here "area" means the area of the object when projected along the viewing direction. The solid angle of a sphere measured from any point in its interior is 4π sr. The solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. The solid angle subtended at the corner of a cube (an octant) or spanned by a spherical octant is π/2 sr, one-eighth of the solid angle of a sphere. Solid angles can also be measured in square degrees (1 sr = (180/π)² ≈ 3282.8 square degrees), in square arc-minutes and square arc-seconds, or in fractions of the sphere (1 sr = 1/(4π) of the sphere), also known as spat (1 sp = 4π sr). In spherical coordinates there is a formula for the differential, dΩ = sin θ dθ dφ, where θ is the colatitude (angle from the North Pole) and φ is the longitude. The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral Ω = ∬_S (r̂ · n̂ / r²) dΣ, where r̂ = r/r is the unit vector corresponding to r, the position vector of an infinitesimal area of surface dΣ with respect to point P, and where n̂ represents the unit normal vector to dΣ. 
Even if the projection on the unit sphere to the surface is not isomorphic, the multiple folds are correctly considered according to the surface orientation described by the sign of the scalar product r̂ · n̂. Thus one can approximate the solid angle subtended by a small facet having flat surface area dS, orientation n̂, and distance r from the viewer as Ω ≈ (dS / r²)(r̂ · n̂), where the surface area of a sphere of radius r is 4πr². Practical applications Defining luminous intensity and luminance, and the correspondent radiometric quantities radiant intensity and radiance Calculating spherical excess of a spherical triangle The calculation of potentials by using the boundary element method (BEM) Evaluating the size of ligands in metal complexes, see ligand cone angle Calculating the electric field and magnetic field strength around charge distributions Deriving Gauss's Law Calculating emissive power and irradiation in heat transfer Calculating cross sections in Rutherford scattering Calculating cross sections in Raman scattering The solid angle of the acceptance cone of the optical fiber The computation of nodal densities in meshes. Solid angles for common objects Cone, spherical cap, hemisphere The solid angle of a cone with its apex at the apex of the solid angle, and with apex angle 2θ, is the area of a spherical cap on a unit sphere: Ω = 2π(1 − cos θ). For small θ such that cos θ ≈ 1 − θ²/2, this reduces to Ω ≈ πθ², the area of a circle. The above is found by computing the following double integral using the unit surface element in spherical coordinates: Ω = ∫₀^2π ∫₀^θ sin θ′ dθ′ dφ = 2π(1 − cos θ). This formula can also be derived without the use of calculus. Over 2200 years ago Archimedes proved that the surface area of a spherical cap is always equal to the area of a circle whose radius equals the distance from the rim of the spherical cap to the point where the cap's axis of symmetry intersects the cap. For a unit sphere that radius is 2 sin(θ/2), hence the solid angle of the spherical cap is Ω = π (2 sin(θ/2))² = 4π sin²(θ/2) = 2π(1 − cos θ). When θ = π/2, the spherical cap becomes a hemisphere having a solid angle 2π. The solid angle of the complement of the cone is 4π − 2π(1 − cos θ) = 2π(1 + cos θ). This is also the solid angle of the part of the celestial sphere that an astronomical observer positioned at latitude θ can see as the Earth rotates. At the equator all of the celestial sphere is visible; at either pole, only one half. The solid angle subtended by a segment of a spherical cap cut by a plane at a given angle from the cone's axis and passing through the cone's apex can also be written in closed form; in the appropriate limiting case it reduces to the spherical cap formula above. Tetrahedron Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where a, b and c are the vector positions of the vertices A, B and C. Define the vertex angle θ_a to be the angle BOC, and define θ_b and θ_c correspondingly. Let φ_ab be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC, and define φ_ac and φ_bc correspondingly. The solid angle Ω subtended by the triangular surface ABC is given by Ω = (φ_ab + φ_ac + φ_bc) − π. This follows from the theory of spherical excess, and it leads to an analogue of the theorem that "the sum of internal angles of a planar triangle is equal to π": for the sum of the four internal solid angles of a tetrahedron, ∑ Ω = 2 ∑ φ − 4π, where φ ranges over all six of the dihedral angles between any two planes that contain the tetrahedral faces OAB, OAC, OBC and ABC. 
A useful formula for calculating the solid angle Ω of the tetrahedron at the origin O that is purely a function of the vertex angles θ_a, θ_b, θ_c is given by L'Huilier's theorem as tan(Ω/4) = √[ tan(θ_s/2) tan((θ_s − θ_a)/2) tan((θ_s − θ_b)/2) tan((θ_s − θ_c)/2) ], where θ_s = (θ_a + θ_b + θ_c)/2. Another interesting formula involves expressing the vertices as vectors in 3-dimensional space. Let a, b and c be the vector positions of the vertices A, B and C, and let a, b and c be the magnitude of each vector (the origin-point distance). The solid angle Ω subtended by the triangular surface ABC is given by tan(Ω/2) = [a b c] / (abc + (a·b)c + (a·c)b + (b·c)a), where [a b c] = a·(b × c) denotes the scalar triple product of the three vectors and a·b denotes the scalar product. Care must be taken here to avoid negative or incorrect solid angles. One source of potential errors is that the scalar triple product can be negative if a, b, c have the wrong winding. Computing the absolute value is a sufficient solution since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. In this case the arctangent returns a negative value that must be increased by π. Pyramid The solid angle of a four-sided right rectangular pyramid with apex angles a and b (dihedral angles measured to the opposite side faces of the pyramid) is Ω = 4 arcsin( sin(a/2) sin(b/2) ). If both the side lengths (α and β) of the base of the pyramid and the distance (d) from the center of the base rectangle to the apex of the pyramid (the center of the sphere) are known, then the above equation can be manipulated to give Ω = 4 arctan( αβ / (2d √(4d² + α² + β²)) ). The solid angle of a right n-gonal pyramid, where the pyramid base is a regular n-sided polygon of circumradius r and the pyramid height is h, also has a closed-form expression. The solid angle of an arbitrary pyramid with an n-sided base defined by the sequence of unit vectors representing its edges can be efficiently computed from a running product of complex factors, where the real part of each factor is built from scalar products (· ·) of the edge vectors, the imaginary part from scalar triple products [· · ·], and the indices are cycled so that the vector after the last is the first again. The complex products add the phase associated with each vertex angle of the polygon. However, a multiple of 2π is lost in the branch cut of the complex argument and must be kept track of separately. Also, the running product of complex phases must be scaled occasionally to avoid underflow in the limit of nearly parallel segments. Latitude-longitude rectangle The solid angle of a latitude-longitude rectangle on a globe is Ω = (sin φ_N − sin φ_S)(θ_E − θ_W), where φ_N and φ_S are north and south lines of latitude (measured from the equator in radians with angle increasing northward), and θ_E and θ_W are east and west lines of longitude (where the angle in radians increases eastward). Mathematically, this represents an arc of angle φ_N − φ_S swept around a sphere by θ_E − θ_W radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of a sphere, 4π sr. A latitude-longitude rectangle should not be confused with the solid angle of a rectangular pyramid. All four sides of a rectangular pyramid intersect the sphere's surface in great circle arcs. With a latitude-longitude rectangle, only lines of longitude are great circle arcs; lines of latitude are not. Celestial objects By using the definition of angular diameter, the formula for the solid angle of a celestial object can be written in terms of the radius of the object, R, and the distance from the observer to the object, d: Ω = 2π (1 − √(d² − R²)/d). Inputting the appropriate average values for the Sun and the Moon (in relation to Earth) gives their average solid angles and the small fractions of the total celestial sphere that each subtends. 
As these solid angles are about the same size, the Moon can cause both total and annular solar eclipses depending on the distance between the Earth and the Moon during the eclipse. Solid angles in arbitrary dimensions The solid angle subtended by the complete (d − 1)-dimensional spherical surface of the unit sphere in d-dimensional Euclidean space can be defined in any number of dimensions d. One often needs this solid angle factor in calculations with spherical symmetry. It is given by the formula Ω_d = 2π^(d/2) / Γ(d/2), where Γ is the gamma function. When d is an integer, the gamma function can be computed explicitly: for even d, Ω_d = 2π^(d/2) / (d/2 − 1)!, and for odd d, Ω_d = 2^((d+1)/2) π^((d−1)/2) / (d − 2)!!. This gives the expected results of 4π steradians for the 3D sphere bounded by a surface of area 4πr² and 2π radians for the 2D circle bounded by a circumference of length 2πr. It also gives the slightly less obvious 2 for the 1D case, in which the origin-centered 1D "sphere" is the interval [−r, r], and this is bounded by two limiting points. The counterpart to the vector formula in arbitrary dimension was derived by Aomoto and independently by Ribando. It expresses the solid angle as an infinite multivariate Taylor series. Given d unit vectors v_i defining the angle, let V denote the matrix formed by combining them so that the ith column is v_i, and let α_ij = v_i · v_j = α_ji with α_ii = 1. The variables α_ij (i < j) form a multivariable α. For a "congruent" integer multiexponent a, whose components are non-negative integers (natural numbers beginning with 0), define α^a as the corresponding product of powers of the α_ij; for an index l, the term a_l means the sum over all components of a in which l appears as either the first or second index. Where this series converges, it converges to the solid angle defined by the vectors.
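Several of the closed-form expressions above lend themselves to a quick numerical check. The sketch below is a minimal illustration in Python (assuming NumPy is available; all function names are illustrative rather than taken from any particular library). It implements the spherical-cap formula, the vector formula for a triangle (using atan2 and an absolute value to sidestep the sign pitfalls noted earlier), the latitude-longitude rectangle, the solid angle of a distant sphere, and the total solid angle in d dimensions, and it reproduces values quoted in the text: 2π for a hemisphere, 2π/3 for a cube face seen from the centre, 4π for the full sphere, and roughly 6.8 × 10⁻⁵ sr and 6.4 × 10⁻⁵ sr for the Sun and the Moon.

```python
import math
from math import pi, gamma

import numpy as np


def cap_solid_angle(theta):
    """Solid angle of a cone / spherical cap with half-angle theta (radians)."""
    return 2 * pi * (1 - math.cos(theta))


def triangle_solid_angle(a, b, c):
    """Solid angle at the origin subtended by triangle ABC, given the vertex
    position vectors a, b, c (tangent half-angle formula quoted in the text)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    na, nb, nc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    numer = np.dot(a, np.cross(b, c))                     # scalar triple product [a b c]
    denom = (na * nb * nc + np.dot(a, b) * nc
             + np.dot(a, c) * nb + np.dot(b, c) * na)
    # atan2 copes with a negative divisor; abs() copes with the winding of a, b, c
    return abs(2 * math.atan2(numer, denom))


def latlon_rectangle(phi_n, phi_s, theta_e, theta_w):
    """Solid angle of a latitude-longitude rectangle (all angles in radians)."""
    return (math.sin(phi_n) - math.sin(phi_s)) * (theta_e - theta_w)


def sphere_from_distance(R, d):
    """Solid angle of a sphere of radius R seen from distance d > R."""
    return 2 * pi * (1 - math.sqrt(d * d - R * R) / d)


def full_solid_angle(dim):
    """Total solid angle of the unit (dim-1)-sphere in dim-dimensional space."""
    return 2 * pi ** (dim / 2) / gamma(dim / 2)


if __name__ == "__main__":
    print(cap_solid_angle(pi / 2), 2 * pi)                # hemisphere
    # one face of a cube seen from its centre, split into two triangles: 2*pi/3
    face = (triangle_solid_angle((1, 1, 1), (1, -1, 1), (1, -1, -1))
            + triangle_solid_angle((1, 1, 1), (1, -1, -1), (1, 1, -1)))
    print(face, 2 * pi / 3)
    print(latlon_rectangle(pi / 2, -pi / 2, 2 * pi, 0), 4 * pi)   # whole sphere
    # Sun and Moon from Earth, using approximate mean radii and distances in km
    print(sphere_from_distance(696_000, 149_600_000))     # ~6.8e-5 sr
    print(sphere_from_distance(1_737, 384_400))           # ~6.4e-5 sr
    print(full_solid_angle(3), 4 * pi)                    # 3D sphere
    print(full_solid_angle(2), 2 * pi)                    # 2D circle
```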
Mathematics
Three-dimensional space
null
94124
https://en.wikipedia.org/wiki/Muzzleloader
Muzzleloader
A muzzleloader is any firearm in which the user loads the projectile and the propellant charge into the muzzle end of the gun (i.e., from the forward, open end of the gun's barrel). This is distinct from the modern designs of breech-loading firearms, in which user loads the ammunition into the breech end of the barrel. The term "muzzleloader" applies to both rifled and smoothbore type muzzleloaders, and may also refer to the marksman who specializes in the shooting of such firearms. The firing methods, paraphernalia and mechanism further divide both categories as do caliber (from cannons to small-caliber palm guns). Modern muzzleloading firearms range from reproductions of sidelock, flintlock and percussion long guns, to in-line rifles that use modern inventions such as a closed breech, sealed primer and fast rifling to allow for considerable accuracy at long ranges. Modern mortars use a shell with the propelling charge and primer attached at the base. Unlike older muzzleloading mortars, which were loaded the same way as muzzleloading cannon, the modern mortar is fired by dropping the shell down the barrel where a pin fires the primer, igniting the main propelling charge. Both the modern mortar and the older mortar were used for high angle fire. However, the fact that the mortar is not loaded in separate steps may make its definition as a muzzleloader a matter of opinion. Muzzleloading can apply to anything from cannons to pistols but in modern parlance the term most commonly applies to black powder small arms. It usually, but not always, involves the use of a loose propellant (i.e., gunpowder) and projectile, as well as a separate method of ignition or priming. Loading In general, the sequence of loading is to put in first gunpowder, by pouring in a measured amount of loose powder, historically mostly by using a powder flask (or powder horn), or by inserting a pre-measured bag or paper packet of gunpowder (called a cartridge) or by inserting solid propellant pellets. The gunpowder used is typically black powder or black powder substitutes like Pyrodex. Sometimes two types of gunpowder (and two flasks) were used consisting of finer priming powder for the flash pan and coarser powder for the main charge behind the ball. This was particularly the case with earlier muzzleloaders like matchlocks but appear to have been less common with flintlocks and was irrelevant with percussion locks since they used percussion caps rather than priming powder. Wadding is made from felt, paper, cloth or card and has several different uses. In shotguns, a card wad or other secure wadding is used between the powder and the shot charge to prevent pellets from dropping into the powder charge and on top of the shot charge to hold it in place in the barrel. In smooth bore muskets and most rifles used prior to cartridges being introduced in the mid-to late nineteenth century, wadding was used primarily to hold the powder in place. On most naval cannons, one piece of wadding was used to hold the powder in place and served the purpose of creating a better seal around the shot. Another was used to act as a plug to stop the shot rolling out because of the swaying of the ship. The use of cartridges with both gunpowder charge and ball, made up in batches by the shooter or a servant, was known from very early on, but until roughly around 1800 loading using a powder flask and a bag of balls was more common outside of the military. 
The measuring stage for the barrel charge of gunpowder could be avoided by carrying a number of pre-measured charges in small containers of wood, metal or cloth, often carried on a bandolier. These were known by various names, including "chargers" or "apostles" as 12 were often carried. For most of the time muzzleloaders were in use, a round ball and pre-measured powder charge could be carried in a paper or cloth wrapping. The shooter would bite off the end of the paper cartridge with his teeth and pour the powder into the barrel followed by the ball encased in the paper wrapping. The projectiles and wads were then pushed down into the breech with a ramrod until they were firmly seated on the propellant charge. Priming powder could be carried in a separate priming flask and poured into the priming pan or a little powder from the cartridge was used, and the frizzen was pushed down to hold the priming powder in place. After the gunpowder and projectile or shot charge were placed in the barrel a ramrod was used to firmly pack everything down at the base of the barrel. Then either a priming charge was placed in the priming pan or a percussion cap was placed on the nipple, the firing mechanism initiated; the cock or hammer was then cocked to make the firearm ready to fire. Projectile types and history Muzzleloading firearms generally use round balls, cylindrical conical projectiles, and shot charges. In some types of rifles firing round ball, a lubricated patch (see Kentucky rifle) of fabric is wrapped around a ball which is slightly smaller than the barrel diameter. In other types of round ball firing rifles, a ramrod and hammer is used to force the round ball down through the rifling. When fired, either the lead ball or the wrapping grips the rifling and imparts spin to the ball which usually gives improved accuracy. In rifles firing Minié balls, the patch, often the paper wrapping from the cartridge, is used as an initial seal and to hold powder in place during loading. The Minié ball replaced the round ball in most firearms, especially for military use, in the 1840s and 1850s. It has a hollow base which expands to grip the rifling. The combination of the spinning Minié ball and the consistent velocity provided by the improved seal gave far better accuracy than the smoothbore muzzleloaders that it replaced. Modern usage When aiming for great accuracy, muzzle-loaders are usually cleaned ("swabbed") before reloading, so that there is no residue left in the barrel to reduce accuracy, though in competitions run by the international governing body, the MLAIC, this is prohibited for military rifle and musket events. However, in small arms muzzleloading rifles, swabbing is only done after every 5-10 shots. Large caliber muzzle-loaders such as cannons are always swabbed between shots to prevent accidents caused by live sparks igniting the fresh charge of powder as it is being loaded. Muzzleloading Muzzleloading is the sport or pastime of firing muzzleloading guns. Muzzleloading guns, both antique and reproduction, are used for target shooting, hunting, historical re-enactment and historical research. The sport originated in the United States in the 1930s, just as the last original users and makers of muzzleloading arms were dying out. The sport received a tremendous boost in the 1960s and 1970s. The Muzzle Loaders Associations International Committee (www.MLAIC.org) was formed in 1970 and held its first World Championship in 1971. 
Since then a flourishing industry manufacturing working reproductions of historic firearms now exists in the United States and Europe, particularly in northern Italy, for example at Gardone Val Trompia, in the Province of Brescia. In the United States muzzleloading guns are, subject to a number of qualifications, generally not considered firearms. Subject to state law they may be possessed by persons who might otherwise not be legally allowed to own a firearm. The American National Muzzle Loading Rifle Association holds two national tournaments a year in Friendship, Indiana as well as the Western National Shoot Event held in Phoenix, Arizona. The Muzzle Loaders Associations International Committee (MLAIC) governs international competition with muzzle-loading arms. The MLAIC holds a Short Range World Championship in even-numbered years and a Long Range World Championship () on odd numbered years (South Africa has won the last 5 Long Range World Championships). Modern use Driven by demand for muzzleloaders for special extended primitive hunting seasons, firearms manufacturers have developed in-line muzzleloading rifles with designs similar to modern breech-loading centerfire designs. Knight Rifles pioneered the in-line muzzleloader in the mid-1980s, manufacturing and selling them to this day. Savage Arms has created the 10ML-II, which can be used with smokeless powder, reducing the cleaning required. However, Savage has discontinued the production of smokeless muzzleloaders. Remington Firearms also have a muzzleloader in production, the model "700 Ultimate" or "700 SL Ultimate". There are several custom gun makers that are currently building smokeless muzzleloaders on new or donor bolt actions.
Technology
Mechanisms_2
null
94139
https://en.wikipedia.org/wiki/Rifling
Rifling
Rifling is the term for helical grooves machined into the internal surface of a firearms's barrel for imparting a spin to a projectile to improve its aerodynamic stability and accuracy. It is also the term (as a verb) for creating such grooves. Rifling is measured in twist rate, the distance the rifling takes to complete one full revolution, expressed as a ratio with 1 as its base (e.g., 1:). A shorter distance/lower ratio indicates a faster twist, generating a higher spin rate (and greater projectile stability). The combination of length, weight, and shape of a projectile determines the twist rate needed to gyroscopically stabilize it: barrels intended for short, large-diameter projectiles such as spherical lead balls require a very low twist rate, such as 1 turn in 48 inches (122 cm). Barrels intended for long, small-diameter projectiles, such as the ultra-low-drag 80-grain 0.223 inch bullets (5.2 g, 5.56 mm), use twist rates of 1 turn in 8 inches (20 cm) or faster. Rifling which increases the twist rate from breech to muzzle is called a gain or progressive twist; a rate which decreases down the length of a barrel is undesirable because it cannot reliably stabilize the projectile as it travels down the bore. An extremely long projectile, such as a flechette, requires impractically high twist rates to stabilize; it is often stabilized aerodynamically instead. An aerodynamically stabilized projectile can be fired from a smoothbore barrel without a reduction in accuracy. History Muskets are smoothbore, large caliber weapons using ball-shaped ammunition fired at relatively low velocity. Due to the high cost, great difficulty of precision manufacturing, and the need to load readily and speedily from the muzzle, musket balls were generally a loose fit in the barrels. Consequently, on firing the balls would often bounce off the sides of the barrel when fired and the final destination after leaving the muzzle was less predictable. This was countered when accuracy was more important, for example when hunting, by using a tighter-fitting combination of a closer-to-bore-sized ball and a patch. The accuracy was improved, but still not reliable for precision shooting over long distances. Like the invention of gunpowder itself, the inventor of barrel rifling is not yet definitely known. Straight grooving had been applied to small arms since at least 1480, originally intended as "soot grooves" to collect gunpowder residue. Some of the earliest recorded European attempts of spiral-grooved musket barrels were of Gaspard Kollner, a gunsmith of Vienna in 1498 and Augustus Kotter of Nuremberg in 1520. Some scholars allege that Kollner's works at the end of the 15th century only used straight grooves, and it was not until he received help from Kotter that a working spiral-grooved firearm was made. There may have been attempts even earlier than this, as the main inspiration of rifled firearms came from archers and crossbowmen who realized that their projectiles flew far faster and more accurately when they imparted rotation through twisted fletchings. Though true rifling dates from the 16th century, it had to be engraved by hand and consequently did not become commonplace until the mid-19th century. Due to the laborious and expensive manufacturing process involved, early rifled firearms were primarily used by wealthy recreational hunters, who did not need to fire their weapons many times in rapid succession and appreciated the increased accuracy. 
Rifled firearms were not popular with military users since they were difficult to clean, and loading projectiles presented numerous challenges. If the bullet was of sufficient diameter to take up the rifling, a large mallet was required to force it down the bore. If, on the other hand, it was of reduced diameter to assist in its insertion, the bullet would not fully engage the rifling and accuracy was reduced. The first practical military weapons using rifling with black powder were breech loaders such as the Queen Anne pistol. Twist rate For best performance, the barrel should have a twist rate sufficient to spin stabilize any bullet that it would reasonably be expected to fire, but not significantly more. Large diameter bullets provide more stability, as the larger radius provides more gyroscopic inertia, while long bullets are harder to stabilize, as they tend to be very backheavy and the aerodynamic pressures have a longer arm ("lever") to act on. The slowest twist rates are found in muzzle-loading firearms meant to fire a round ball; these will have twist rates as low as 1 in , or slightly longer, although for a typical multi-purpose muzzleloader rifle, a twist rate of 1 in is very common. The M16A2 rifle, which is designed to fire the 5.56×45mm NATO SS109 ball and L110 tracer bullets, has a 1 in or 32 calibers twist. Civilian AR-15 rifles are commonly found with 1 in or 54.8 calibers for older rifles and 1 in or 41.1 calibers for most newer rifles, although some are made with 1 in or 32 calibers twist rates, the same as used for the M16 rifle. Rifles, which generally fire longer, smaller diameter bullets, will in general have higher twist rates than handguns, which fire shorter, larger diameter bullets. There are three methods in use to describe the twist rate: The, traditionally speaking, most common method expresses the twist rate in terms of the 'travel' (length) required to complete one full projectile revolution in the rifled barrel. This method does not give an easy or straightforward understanding of whether a twist rate is relatively slow or fast when bores of different diameters are compared. The second method describes the 'rifled travel' required to complete one full projectile revolution in calibers or bore diameters: where is the twist rate expressed in bore diameters; is the twist length required to complete one full projectile revolution (in mm or in); and is the bore diameter (diameter of the lands, in mm or in). The twist travel and the bore diameter must be expressed in a consistent unit of measure, i.e. metric (mm) or imperial (in). The third method simply reports the angle of the grooves relative to the bore axis, measured in degrees. The latter two methods have the inherent advantage of expressing twist rate as a ratio and give an easy understanding if a twist rate is relatively slow or fast even when comparing bores of differing diameters. In 1879, George Greenhill, a professor of mathematics at the Royal Military Academy (RMA) at Woolwich, London, UK developed a rule of thumb for calculating the optimal twist rate for lead-core bullets. This shortcut uses the bullet's length, needing no allowances for weight or nose shape. The eponymous Greenhill Formula, still used today, is: where is 150 (use 180 for muzzle velocities higher than 2,800 f/s); is the bullet's diameter in inches; is the bullet's length in inches; and is the bullet's specific gravity (10.9 for lead-core bullets, which cancels out the second half of the equation). 
The original value of was 150, which yields a twist rate in inches per turn, when given the diameter and the length of the bullet in inches. This works to velocities of about 840 m/s (2800 ft/s); above those velocities, a of 180 should be used. For instance, with a velocity of 600 m/s (2000 ft/s), a diameter of and a length of , the Greenhill formula would give a value of 25, which means 1 turn in . Improved formulas for determining stability and twist rates include the Miller Twist Rule and the McGyro program developed by Bill Davis and Robert McCoy. If an insufficient twist rate is used, the bullet will begin to yaw and then tumble; this is usually seen as "keyholing", where bullets leave elongated holes in the target as they strike at an angle. Once the bullet starts to yaw, any hope of accuracy is lost, as the bullet will begin to veer off in random directions as it precesses. Conversely, too high a rate of twist can also cause problems. The excessive twist can cause accelerated barrel wear, and coupled with high velocities also induce a very high spin rate which can cause projectile jacket ruptures causing high velocity spin stabilized projectiles to disintegrate in flight. Projectiles made out of mono metals cannot practically achieve flight and spin velocities such that they disintegrate in flight due to their spin rate. Smokeless powder can produce muzzle velocities of approximately for spin stabilized projectiles and more advanced propellants used in smoothbore tank guns can produce muzzle velocities of approximately . A higher twist than needed can also cause more subtle problems with accuracy: Any inconsistency within the bullet, such as a void that causes an unequal distribution of mass, may be magnified by the spin. Undersized bullets also have problems, as they may not enter the rifling exactly concentric and coaxial to the bore, and excess twist will exacerbate the accuracy problems this causes. A bullet fired from a rifled barrel can spin at over 300,000 rpm (5 kHz), depending on the bullet's muzzle velocity and the barrel's twist rate. The general definition of the spin of an object rotating around a single axis can be written as: where is the linear velocity of a point in the rotating object (in units of distance/time) and refers to the circumference of the circle that this measuring point performs around the axis of rotation. A bullet that matches the rifling of the firing barrel will exit that barrel with a spin: where is the muzzle velocity and is the twist rate. For example, an M4 Carbine with a twist rate of 1 in and a muzzle velocity of will give the bullet a spin of 930 m/s / 0.1778 m = 5.2 kHz (314,000 rpm). Excessive rotational speed can exceed the bullet's designed limits and the resulting centrifugal force can cause the bullet to disintegrate radially during flight. Design A barrel of circular bore cross-section is not capable of imparting a spin to a projectile, so a rifled barrel has a non-circular cross-section. Typically the rifled barrel contains one or more grooves that run down its length, giving it a cross-section resembling an internal gear, though it can also take the shape of a polygon, usually with rounded corners. Since the barrel is not circular in cross-section, it cannot be accurately described with a single diameter. Rifled bores may be described by the bore diameter (the diameter across the lands or high points in the rifling), or by groove diameter (the diameter across the grooves or low points in the rifling). 
Differences in naming conventions for cartridges can cause confusion; for example, the projectiles of the .303 British are actually slightly larger in diameter than the projectiles of the .308 Winchester, because the ".303" refers to the bore diameter in inches (bullet is .312), while the ".308" refers to the bullet diameter in inches (7.92 mm and 7.82 mm, respectively). Despite differences in form, the common goal of rifling is to deliver the projectile accurately to the target. In addition to imparting the spin to the bullet, the barrel must hold the projectile securely and concentrically as it travels down the barrel. This requires that the rifling meet a number of tasks: It must be sized so that the projectile will swage or obturate upon firing to fill the bore. The diameter should be consistent, and must not increase towards the muzzle. The rifling should be consistent down the length of the bore, without changes in cross-section, such as variations in groove width or spacing. It should be smooth, with no scratches lying perpendicular to the bore, so it does not abrade material from the projectile. The chamber and crown must smoothly transition the projectile into and out of the rifling. Rifling may not begin immediately forward of the chamber. There may be an unrifled throat ahead of the chamber so a cartridge may be chambered without pushing the bullet into the rifling. This reduces the force required to load a cartridge into the chamber, and prevents leaving a bullet stuck in the rifling when an unfired cartridge is removed from the chamber. The specified diameter of the throat may be somewhat greater than groove diameter, and may be enlarged by use if hot powder gas melts the interior barrel surface when the rifle is fired. Freebore is a groove-diameter length of smoothbore barrel without lands forward of the throat. Freebore allows the bullet to transition from static friction to sliding friction and gain linear momentum prior to encountering the resistance of increasing rotational momentum. Freebore may allow more effective use of propellants by reducing the initial pressure peak during the minimum volume phase of internal ballistics before the bullet starts moving down the barrel. Barrels with freebore length exceeding the rifled length have been known by a variety of trade names including paradox. Manufacture An early method of introducing rifling to a pre-drilled barrel was to use a cutter mounted on a square-section rod, accurately twisted into a spiral of the desired pitch, mounted in two fixed square-section holes. As the cutter was advanced through the barrel it twisted at a uniform rate governed by the pitch. The first cut was shallow. The cutter points were gradually expanded as repeated cuts were made. The blades were in slots in a wooden dowel which were gradually packed out with slips of paper until the required depth was obtained. The process was finished off by casting a slug of molten lead into the barrel, withdrawing it and using it with a paste of emery and oil to smooth the bore. 
Most rifling is created by either: Cutting one groove at a time with a tool (cut rifling or single point cut rifling); Cutting all grooves in one pass with a special progressive broaching bit (broached rifling); Pressing all grooves at once with a tool called a "button" that is pushed or pulled down the barrel (button rifling); Forging the barrel over a mandrel containing a reverse image of the rifling, and often the chamber as well (hammer forging); Flow forming the barrel preform over a mandrel containing a reverse image of the rifling (rifling by flow forming) Using non-contact forces such as chemical reaction or heat from laser source to etch the rifling pattern (etching rifling) Machining the rifling grooves texture on a thin metal plate, then folding the plate into the inner bore of the barrel (liner rifling) The grooves are the spaces that are cut out, and the resulting ridges are called lands. These lands and grooves can vary in number, depth, shape, direction of twist (right or left), and twist rate. The spin imparted by rifling significantly improves the stability of the projectile, improving both range and accuracy. Typically rifling is a constant rate down the barrel, usually measured by the length of travel required to produce a single turn. Occasionally firearms are encountered with a gain twist, where the rate of spin increases from chamber to muzzle. While intentional gain twists are rare, due to manufacturing variance, a slight gain twist is in fact fairly common. Since a reduction in twist rate is very detrimental to accuracy, gunsmiths who are machining a new barrel from a rifled blank will often measure the twist carefully so they may put the faster rate, no matter how minute the difference is, at the muzzle end. Projectiles The original firearms were loaded from the muzzle by forcing a ball from the muzzle to the chamber. Whether using a rifled or smooth bore, a good fit was needed to seal the bore and provide the best possible accuracy from the gun. To ease the force required to load the projectile, these early guns used an undersized ball, and a patch made of cloth, paper, or leather to fill the windage (the gap between the ball and the walls of the bore). The patch acted as a wadding and provided some degree of pressure sealing, kept the ball seated on the charge of black powder, and kept the ball concentric to the bore. In rifled barrels, the patch also provided a means to transfer the spin from the rifling to the bullet, as the patch is engaged rather than the ball. Until the advent of the hollow-based Minié ball, which expands and obturates upon firing to seal the bore and engage the rifling, the patch provided the best means of getting the projectile to engage the rifling. In breech-loading firearms, the task of seating the projectile into the rifling is handled by the throat of the chamber. Next is the freebore, which is the portion of the throat down which the projectile travels before the rifling starts. The last section of the throat is the throat angle, where the throat transitions into the rifled barrel. The throat is usually sized slightly larger than the projectile, so the loaded cartridge can be inserted and removed easily, but the throat should be as close as practical to the groove diameter of the barrel. Upon firing, the projectile expands under the pressure from the chamber, and obturates to fit the throat. The bullet then travels down the throat and engages the rifling, where it is engraved, and begins to spin. 
Engraving the projectile requires a significant amount of force, and in some firearms there is a significant amount of freebore, which helps keep chamber pressures low by allowing the propellant gases to expand before being required to engrave the projectile. Minimizing freebore improves accuracy by decreasing the chance that a projectile will distort before entering the rifling. When the projectile is swaged into the rifling, it takes on a mirror image of the rifling, as the lands push into the projectile in a process called engraving. Engraving takes on not only the major features of the bore, such as the lands and grooves, but also minor features, like scratches and tool marks. The relationship between the bore characteristics and the engraving on the projectile are often used in forensic ballistics. Recent developments The grooves most commonly used in modern rifling have fairly sharp edges. More recently, polygonal rifling, a throwback to the earliest types of rifling, has become popular, especially in handguns. Polygonal barrels tend to have longer service lives because the reduction of the sharp edges of the land (the grooves are the spaces that are cut out, and the resulting ridges are called lands) reduces erosion of the barrel. Supporters of polygonal rifling also claim higher velocities and greater accuracy. Polygonal rifling is currently seen on pistols from CZ, Heckler & Koch, Glock, Tanfoglio, and the Kahr Arms (P series only), as well as the Desert Eagle. For field artillery pieces, the extended range, full bore (ERFB) concept developed in early 1970s by Dennis Hyatt Jenkins and Luis Palacio of Gerald Bull's Space Research Corporation for the GC-45 howitzer replaces the bourrelet with small nubs, which both tightly fit into lands of the barrel. Guns capable of firing these projectiles have achieved significant increases in range, but this is compensated with a significantly (3–4 times) decreased accuracy, due to which they were not adopted by NATO militaries. Unlike a shell narrower than the gun's bore with a sabot, ERFB shells use the full bore, permitting a larger payload. Examples include the South African G5 and the German PzH 2000. ERFB may be combined with base bleed. Variable pitch rifling A gain-twist or progressive rifling begins with a slow twist rate that gradually increases down the bore, resulting in very little initial change in the projectile's angular momentum during the first few inches of bullet travel after it enters the throat. This enables the bullet to remain essentially undisturbed and trued to the case mouth. After engaging the rifling at the throat, the bullet is progressively subjected to accelerated angular momentum as it is propelled down the barrel. The theoretical advantage is that by gradually increasing the spin rate, torque is imparted along a much longer bore length, allowing thermomechanical stress to be spread over a larger area rather than being focused predominantly at the throat, which typically wears out much faster than other parts of the barrel. Gain-twist rifling was used prior to and during the American Civil War (1861–65). Colt Army and Navy revolvers both employed gain-twist rifling. Gain-twist rifling, however, is more difficult to produce than uniform rifling, and therefore is more expensive. The military has used gain-twist rifling in a variety of weapons such as the M61 Vulcan Gatling gun used in some current fighter jets and the larger GAU-8 Avenger Gatling gun used in the A10 Thunderbolt II close air support jet. 
In these applications it allows lighter construction of the barrels by decreasing chamber pressures through the use of low initial twist rates but ensuring the projectiles have sufficient stability once they leave the barrel. It is seldom used in commercially available products, though notably on the Smith & Wesson Model 460 (X-treme Velocity Revolver).
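The twist-rate rules of thumb described earlier reduce to a couple of one-line calculations. The following sketch is a minimal Python illustration (the helper names and the example bullet dimensions are assumptions for demonstration, not figures taken from the text); it applies the Greenhill rule, in which the constant 150 is replaced by 180 above roughly 2,800 ft/s, and reproduces the M4 carbine spin-rate example quoted above.

```python
import math


def greenhill_twist(bullet_length_in, bullet_diameter_in, specific_gravity=10.9, c=150.0):
    """Greenhill rule of thumb: twist (inches per turn) needed to stabilise a bullet.
    Use c = 180 for muzzle velocities above roughly 2,800 ft/s; the specific-gravity
    term cancels for lead-core bullets (SG = 10.9)."""
    return c * bullet_diameter_in ** 2 / bullet_length_in * math.sqrt(specific_gravity / 10.9)


def spin_rate_hz(muzzle_velocity, twist_length):
    """Spin in revolutions per second: muzzle velocity divided by twist length
    (both expressed in the same unit of distance)."""
    return muzzle_velocity / twist_length


if __name__ == "__main__":
    # Illustrative bullet: 0.308 in diameter, 1.1 in long (not an example from the text)
    print(greenhill_twist(1.1, 0.308))        # about 13 inches per turn

    # M4 carbine figures quoted above: 930 m/s muzzle velocity, 1 turn in 0.1778 m
    spin = spin_rate_hz(930.0, 0.1778)
    print(spin, spin * 60)                    # ~5.2 kHz, ~314,000 rpm
```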
Technology
Mechanisms_2
null
94166
https://en.wikipedia.org/wiki/Bolt%20action
Bolt action
Bolt-action is a type of manual firearm action that is operated by directly manipulating the bolt via a bolt handle, most commonly placed on the right-hand side of the firearm (as most users are right-handed). The majority of bolt-action firearms are rifles, but there are also some variants of shotguns and handguns that are bolt-action. Bolt-action firearms are generally repeating firearms, but many single-shot designs are available particularly in shooting sports where single-shot firearms are mandated, such as most Olympic and ISSF rifle disciplines. From the late 19th century all the way through both World Wars, bolt-action rifles were the standard infantry service weapons for most of the world's military forces, with the exception of the United States Armed Forces, who used the M1 Garand Semi-automatic rifle. In modern military and law enforcement after the Second World War, bolt-action firearms have been largely replaced by semi-automatic and selective-fire firearms, and have remained only as sniper rifles due to the design's inherent potential for superior accuracy and precision, as well as ruggedness and reliability compared to self-loading designs. Most bolt-action firearms use a rotating bolt operation, where the handle must first be rotated upward to unlock the bolt from the receiver, then pulled back to open the breech and allowing any spent cartridge case to be extracted and ejected. This also cocks the striker within the bolt (either on opening or closing of the bolt depending on the gun design) and engages it against the sear. When the bolt is returned to the forward position, a new cartridge (if available) is pushed out of the magazine and into the barrel chamber, and finally the breech is closed tight by rotating the handle down so the bolt head relocks on the receiver. A less common bolt-action type is the straight-pull mechanism, where no upward handle-turning is needed and the bolt unlocks automatically when the handle is pulled rearwards by the user's hand. History The first bolt-action rifle was produced in 1824 by Johann Nikolaus von Dreyse, following work on breechloading rifles that dated to the 18th century. Von Dreyse would perfect his Nadelgewehr (Needle Rifle) by 1836, and it was adopted by the Prussian Army in 1841. While it saw limited service in the German Revolutions of 1848, it was not fielded widely until the 1864 victory over Denmark. In 1850 a metallic centerfire bolt-action breechloader was patented by Béatus Beringer. In 1852 another metallic centerfire bolt-action breechloader was patented by Joseph Needham and improved upon in 1862 with another patent. Two different systems for primers –the mechanism to ignite a metallic cartridge's powder charge – were invented in the 1860s as well, the Berdan and the Boxer systems. The United States purchased 900 Greene rifles (an under hammer, percussion capped, single-shot bolt-action that used paper cartridges and an ogival bore rifling system) in 1857, which saw service at the Battle of Antietam in 1862, during the American Civil War; however, this weapon was ultimately considered too complicated for issue to soldiers and was supplanted by the Springfield Model 1861, a conventional muzzle loading rifle. During the American Civil War, the bolt-action Palmer carbine was patented in 1863, and by 1865, 1000 were purchased for use as cavalry weapons. The French Army adopted its first bolt-action rifle, the Chassepot rifle, in 1866 and followed with the metallic cartridge bolt-action Gras rifle in 1874. 
European armies continued to develop bolt-action rifles through the latter half of the 19th century, first adopting tubular magazines as on the Kropatschek rifle and the Lebel rifle. The first bolt-action repeating rifle was patented in Britain in 1855 by an unidentified inventor through the patent agent Auguste Edouard Loradoux Bellford using a gravity-operated tubular magazine in the stock. Another more well-known bolt-action repeating rifle was the Vetterli rifle of 1867 and the first bolt-action repeating rifle to use centerfire cartridges was the weapon designed by the Viennese gunsmith Ferdinand Fruwirth in 1871. Ultimately, the military turned to bolt-action rifles using a box magazine; the first of its kind was the M1885 Remington–Lee, but the first to be generally adopted was the British 1888 Lee–Metford. World War I marked the height of the bolt-action rifle's use, with all of the nations in that war fielding troops armed with various bolt-action designs. During the buildup prior to World War II, the military bolt-action rifle began to be superseded by semi-automatic rifles and later fully automatic rifles, though bolt-action rifles remained the primary weapon of most of the combatants for the duration of the war; and many American units, especially the USMC, used bolt-action M1903 Springfield rifles until sufficient numbers of M1 Garand rifles were made available. The bolt-action is still common today among many sniper rifles, as the design has the potential for superior accuracy, reliability, reduced weight, and the ability to control loading over the faster rate of fire that all semi-automatic rifle alternatives allow. There are, however, many semi-automatic rifle designs used especially in the designated marksman role. Today, bolt-action rifles are chiefly used as hunting and target rifles. These rifles can be used to hunt anything from vermin to deer and to large game, especially big game caught on a safari, as they are adequate to deliver a single lethal shot from a safe distance. Target shooters favour single-shot bolt actions for their simplicity of design, reliability, and accuracy. Bolt-action shotguns are considered a rarity among modern firearms but were formerly a commonly used action for .410 entry-level shotguns, as well as for low-cost 12-gauge shotguns. The M26 Modular Accessory Shotgun System (MASS) is the most recent and advanced example of a bolt-action shotgun, albeit one designed to be attached to an M16 rifle or M4 carbine using an underbarrel mount (although with the standalone kit, the MASS can become a standalone weapon). Mossberg 12-gauge bolt-action shotguns were briefly popular in Australia after the 1997 changes to firearms laws, but the shotguns themselves were awkward to operate and had only a three-round magazine, thus offering no practical or real advantages over a conventional double-barreled shotgun. Some pistols use a bolt-action system, although this is uncommon, and such examples are typically specialized hunting and target handguns. Major bolt-action systems Rotating bolt Most of the bolt-action designs use a rotating bolt (or "turn pull") design, which involves the shooter doing an upward "rotating" movement of the handle to unlock the bolt from the breech and cock the firing pin, followed by a rearward "pull" to open the breech, extract the spent cartridge case, then reverse the whole process to chamber the next cartridge and relock the breech. 
There are four major turn-bolt designs: the Remington Model 700, possibly the most numerous rifle ever produced and now also used as the basis for most custom competition rifle actions, along with the Mauser system, the Lee–Enfield system, and the Mosin–Nagant system. All four differ in the way the bolt fits into the receiver, how the bolt rotates as it is being operated, the number of locking lugs holding the bolt in place as the gun is fired, and whether the action is cocked on the opening of the bolt (as in both the Mauser system and the Mosin–Nagant system) or the closing of the bolt (as in the Lee–Enfield system). The vast majority of modern bolt-action rifles, numbering in the tens of millions, have been made for the post-war commercial market: Remington's Model 700 accounts for a large share, most of the others use the Mauser system, and designs such as the Lee–Enfield and Mosin–Nagant systems see only limited use. Mauser The Mauser bolt-action system is based on 19th-century Mauser bolt-action rifle designs and was finalized in the Gewehr 98 designed by Paul Mauser. It is the most common bolt-action system in the world, being used in nearly all modern hunting rifles and in the majority of military bolt-action rifles until the middle of the 20th century. The Mauser system is stronger than the Lee–Enfield system, due to two locking lugs just behind the bolt head, which make it better able to handle higher-pressure cartridges (i.e. magnum cartridges). The 9.3×64mm Brenneke and 8×68mm S magnum rifle cartridge "families" were designed for the Mauser M 98 bolt-action. A novel safety feature was the introduction of a third locking lug at the rear of the bolt that normally did not lock the bolt, since it would introduce asymmetrical locking forces. The Mauser system features "cock on opening", meaning the upward rotation of the bolt when the rifle is opened cocks the action. A drawback of the Mauser M 98 system is that it is difficult to mass-produce cheaply. Many Mauser M 98-inspired derivatives feature technical alterations, such as omitting the third safety locking lug, to simplify production. The controlled-feed Mauser M 98 bolt-action system is a simple, strong, safe, and well-thought-out design that has inspired other military and sporting rifle designs that became available during the 20th century, including the Gewehr 98/Standardmodell/Karabiner 98k, the M24 series, the vz. 24/vz. 33, the Type 24 rifle, the M1903 Springfield, the Pattern 1914 Enfield, the M1917 Enfield, the Arisaka Type 38/Type 99, the M48 Mauser, the Kb wz. 98a/Karabinek wz. 1929, the FR 7/FR 8, modern hunting/sporting rifles like the CZ 550, Heym Express Magnum, Winchester Model 70 and the Mauser M 98, and modern sniper rifles like the Sako TRG, Accuracy International Arctic Warfare and GOL Sniper Magnum. Versions of the Mauser action designed prior to the Gewehr 98's introduction, such as that of the Swedish Mauser rifles and carbines, lack the third locking lug and feature a "cock on closing" operation. Lee–Enfield The Lee–Enfield bolt-action system was introduced in 1889 with the Lee–Metford and later Lee–Enfield rifles (the bolt system is named after the designer James Paris Lee and the barrel rifling after the Royal Small Arms Factory in the London Borough of Enfield), and is a "cock on closing" action in which the forward thrust of the bolt cocks the action. This enables a shooter to keep eyes on sights and targets uninterrupted when cycling the bolt.
An advantage of the rearward-located bolt lugs is the ability of the bolt to flex between the lugs and the chamber, which also keeps the shooter safer in case of a catastrophic chamber overpressure failure. The disadvantage of the rearward-located bolt lugs is that a larger part of the receiver, between chamber and lugs, must be made stronger and heavier to resist stretching forces. Also, the bolt ahead of the lugs may flex on firing; although this is a safety advantage, with repeated firing over time it may lead to a stretched receiver and excessive headspace, which, if perceived as a problem, can be remedied by changing the removable bolt head to a larger-sized one (Lee–Enfield bolt manufacture involved a mass-production method in which, at final assembly, the bolt body was fitted with one of three standard-size bolt heads for correct headspace). In the years leading up to World War II, the Lee–Enfield bolt system was used in numerous commercial sporting and hunting rifles manufactured by such firms in the United Kingdom as BSA, LSA, and Parker–Hale, as well as by SAF Lithgow in Australia. Vast numbers of ex-military SMLE Mk III rifles were sporterised post-WWII to create cheap, effective hunting rifles, and the Lee–Enfield bolt system is used in the M10 and No 4 Mk IV rifles manufactured by Australian International Arms. Rifle Factory Ishapore of India manufactures a hunting and sporting rifle chambered in .315 which also employs the Lee–Enfield action. Rifles using the Lee–Enfield action thus include the Lee–Enfield itself (all marks and models), the Ishapore 2A1, various hunting/sporting rifles manufactured by BSA, LSA, SAF Lithgow, and Parker Hale, the Australian International Arms M10 and No 4 Mk IV hunting/sporting rifles, and Rifle Factory Ishapore's hunting Lee–Enfield rifle in .315. Mosin–Nagant The Mosin–Nagant action, created in 1891 and named after the designers Sergei Mosin and Léon Nagant, differs significantly from the Mauser and Lee–Enfield bolt-action designs. The Mosin–Nagant design has a separate bolthead that rotates with the bolt and the bearing lugs, in contrast to the Mauser system, where the bolthead is a non-removable part of the bolt. The Mosin–Nagant is also unlike the Lee–Enfield system, where the bolthead remains stationary and the bolt body itself rotates. The Mosin–Nagant bolt is a somewhat complicated affair, but is extremely rugged and durable; like the Mauser, it uses a "cock on open" system. This bolt system has rarely been used in commercial sporting rifles (the Vostok brand target rifles being the most recognized) and has never been exported outside of Russia, although large numbers of military-surplus Mosin–Nagant rifles have been sporterized for use as hunting rifles in the years since the end of World War II. Swing The Swing was developed in 1970 in the United Kingdom as a purpose-built target rifle for use in NRA competition. Fullbore target rifle competitions historically used accurised examples of the prevailing service rifle, but it was felt these had reached the end of their development potential. The Swing bolt featured four lugs on the bolt head, at 45 degrees when closed, splitting the difference between the vertically locking Mauser and horizontally locking Enfield bolt designs. Supplied with Schultz & Larsen barrels and a trigger derived from the Finnish Mantari, the Swing was commercially successful, with the basic design reused in the Paramount, RPA Quadlock and Millennium rifles. Other designs The Vetterli rifle was the first bolt-action repeating rifle introduced by an army. It was used by the Swiss army from 1869 to circa 1890.
Modified Vetterlis were also used by the Italian Army. Another notable design is the Norwegian Krag–Jørgensen, which was used by Norway, Denmark, and briefly the United States. It is unusual among bolt-action rifles in that it is loaded through a gate on the right side of the receiver, and thus can be reloaded without opening the bolt. The Norwegian and Danish versions of the Krag have two locking lugs, while the American version has only one. In all versions, the bolt handle itself serves as an emergency locking lug. The Krag's major disadvantage compared to other bolt-action designs is that it is usually loaded by hand, one round at a time, although a box-like device was made that could drop five rounds into the magazine at once. This still made it slower to reload than other designs, which used stripper or en bloc clips. Another historically important bolt-action system was the Gras system, used on the French Mle 1874 Gras rifle, the Mle 1886 Lebel rifle (which was the first to introduce ammunition loaded with nitrocellulose-based smokeless powder), and the Berthier series of rifles. Straight pull Straight-pull bolt-actions differ from conventional turn-pull bolt-action mechanisms in that the bolt can be cycled back and forward without rotating the handle, so only a linear motion is required, as opposed to a traditional bolt-action, where the user has to axially rotate the bolt in addition to the linear motions to perform chambering and primary extraction. The bolt locking of a straight-pull action is achieved without separate manual input, so the entire operating cycle requires the shooter to perform only two movements (pull back and push forward) instead of four (rotate up, pull back, push forward, and rotate down); this greatly increases the practical rate of fire of the gun. In 1993, the German Blaser company introduced the Blaser R93, a new straight-pull action in which locking is achieved by a series of concentric "claws" that protrude from and retract into the bolthead, a design that is referred to as Radialbundverschluss ("radial connection"). As of 2017, Rifle Shooter magazine listed its successor, the Blaser R8, as one of the three most popular straight-pull rifles, together with the Merkel Helix and the Browning Maral. Some other notable modern straight-pull rifles are made by Beretta, C.G. Haenel, Chapuis, Heym, Lynx, Rößler, Savage Arms, Strasser, and Steel Action. Most straight-pull rifles have a firing mechanism without a hammer, but there are some hammer-fired models, such as the Merkel Helix. Firearms using a hammer usually have a comparatively longer lock time than hammerless mechanisms. In the sport of biathlon, because shooting speed is an important performance factor and semi-automatic guns are illegal for race use, straight-pull actions are quite common and are used almost exclusively in the Biathlon World Cup. The first company to make a straight-pull action for .22 caliber was J. G. Anschütz; Peter Fortner junior designed the "Fortner Action", which was incorporated into the Anschütz 1827 Fortner. The Fortner action is specifically a straight-pull ball-bearing lock action, which features spring-loaded ball bearings on the side of the bolt that lock into a groove inside the bolt's housing. With the new design came a new dry-fire method; instead of the bolt being turned up slightly, the action is locked back to catch the firing pin. The action was later used in the centre-fire Heym SR 30.
Operating the bolt Typically, the bolt consists of a tube of metal inside which the firing mechanism is housed, and which has at the front or rear of the tube several metal knobs, or "lugs", which serve to lock the bolt in place. The operation can be done via a rotating bolt, a lever, cam action, a locking piece, or a number of other systems. Straight-pull designs have seen a great deal of use, though manual turn-bolt designs are what is most commonly thought of in reference to a bolt-action design, due to the type's ubiquity. As a result, the term bolt-action is often reserved for more modern types of rotating-bolt designs when talking about a specific weapon's type of action. However, both straight-pull and rotating-bolt rifles are types of bolt-action rifles. Lever-action and pump-action weapons must still operate the bolt, but they are usually grouped separately from bolt-actions that are operated by a handle directly attached to a rotating bolt. Early bolt-action designs, such as the Dreyse needle gun and the Mauser Model 1871, locked by dropping the bolt handle or bolt guide rib into a notch in the receiver; this method is still used in .22 rimfire rifles. The most common locking method is a rotating bolt with two lugs on the bolt head, which was used by the Lebel Model 1886 rifle, the Model 1888 Commission Rifle, the Mauser M 98, the Mosin–Nagant and most bolt-action rifles. The Lee–Enfield has a lug and guide rib at the rear end of the bolt, which lock into the receiver. Bolt knob The bolt knob is the part of the bolt handle that the user grips when loading and reloading the firearm, and it thereby acts as a cocking handle. On many older firearms, the bolt knob is welded to the bolt handle, making it an integral part of the bolt handle itself. On many newer firearms, the bolt knob is instead threaded onto the handle, allowing the user to exchange the original bolt knob for an aftermarket one, whether for aesthetic reasons, to achieve a better grip, or the like. The type of thread used varies between firearms. European firearms often use either M6×1 or M8×1.25 threads; for example, M6 is used on the SIG Sauer 200 STR, Blaser R93, Blaser R8, CZ 457 and Bergara rifles, while M8 is used on the Sako TRG and SIG Sauer 404. Many American firearms instead use 1/4"×28 TPI (6.35 mm diameter, 0.907 mm pitch) or 5/16"×24 TPI (7.9375 mm diameter, 1.058 mm pitch) threads. Some other thread types are also used, for example No. 10×32 TPI (4.826 mm diameter, 0.794 mm pitch) as used by Mausingfield. There are also aftermarket slip-on bolt handle covers, often made of rubber or plastic, which are mounted without having to remove the existing bolt handle. Reloading Most bolt-action firearms are fed by an internal magazine loaded by hand, by en bloc clips, or by stripper clips, though a number of designs have had a detachable or independent magazine, or even no magazine at all, thus requiring that each round be independently loaded. Generally, the magazine capacity is limited to between two and ten rounds, as this permits the magazine to sit flush with the bottom of the rifle, reduces the weight, or prevents mud and dirt from entering. A number of bolt-actions have a tube magazine, typically running along the length of the barrel. In weapons other than large rifles, such as pistols and cannons, there were also some manually operated breech-loading designs. However, the Dreyse needle-fire rifle was the first breech loader to use a rotating bolt design.
Johann Nikolaus von Dreyse's rifle of 1838 was accepted into service by Prussia in 1841 and was in turn developed into the Prussian Model of 1849. The design was a single-shot breech-loader and had the now-familiar arm sticking out from the side of the bolt, used to turn and open the chamber. The entire reloading sequence was more complex than in later designs, however, as the firing pin had to be independently primed and activated, and the lever was used only to move the bolt.
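The metric equivalents quoted above for the inch-based bolt-knob threads follow directly from the definition of threads per inch (pitch in millimetres = 25.4 / TPI). The short Python sketch below merely checks those conversions; the thread names and nominal diameters are the ones mentioned in the text, and nothing in it is specific to any particular rifle.

# Sketch: converting inch-based thread specifications (nominal diameter in
# inches, threads per inch) to the metric figures quoted in the text.
MM_PER_INCH = 25.4

def tpi_to_pitch_mm(tpi):
    # Thread pitch in millimetres for a given threads-per-inch count.
    return MM_PER_INCH / tpi

imperial_threads = {
    '1/4"-28 TPI': (0.250, 28),    # expected: 6.35 mm diameter, 0.907 mm pitch
    '5/16"-24 TPI': (0.3125, 24),  # expected: 7.9375 mm diameter, 1.058 mm pitch
    'No. 10-32 TPI': (0.190, 32),  # expected: 4.826 mm diameter, 0.794 mm pitch
}

for name, (diameter_in, tpi) in imperial_threads.items():
    print(f"{name}: {diameter_in * MM_PER_INCH:.4f} mm diameter, "
          f"{tpi_to_pitch_mm(tpi):.3f} mm pitch")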
Technology
Mechanisms_2
null
3329659
https://en.wikipedia.org/wiki/Font
Font
In metal typesetting, a font (American English) or fount (Commonwealth English) is a particular size, weight and style of a typeface, defined as the set of fonts that share an overall design. For instance, the typeface Bauer Bodoni (shown in the figure) includes fonts "Roman" (or "regular"), "italic" and "bold"; each of these exists in a variety of sizes. In the digital description of fonts (computer fonts), the terms "font" and "typeface" are often used interchangeably. For example, when used in computers, each style is stored in a separate digital font file. In both traditional typesetting and computing, the word "font" refers to the delivery mechanism of an instance of the typeface. In traditional typesetting, the font would be made from metal or wood type: to compose a page may require multiple fonts from the typeface or even multiple typefaces. Spelling and etymology The word font (US) or fount (traditional UK; both pronounced the same) derives from Middle French fonte, meaning "a casting". The term refers to the process of casting metal type at a type foundry. The spelling font is mainly used in the United States, whereas fount was historically used in most Commonwealth countries. Metal type In a manual printing (letterpress) house the word "font" would refer to a complete set of metal type that would be used to typeset an entire page. Upper- and lowercase letters get their names because of which case the metal type was located in for manual typesetting: the more distant upper case or the closer lower case. The same distinction is also referred to with the terms majuscule and minuscule. Unlike a digital typeface, a metal font would not include a single definition of each character, but commonly used characters (such as vowels and periods) would have more physical type-pieces included. A font when bought new would often be sold as (for example in a Roman alphabet) 12pt 14A 34a, meaning that it would be a size 12-point font containing 14 uppercase "A"s and 34 lowercase "a"s. The rest of the characters would be provided in quantities appropriate for the distribution of letters in that language. Some metal type characters required in typesetting, such as dashes, spaces and line-height spacers, were not part of a specific font, but were generic pieces that could be used with any font. Line spacing is still often called "leading", because the strips used for line spacing were made of lead (rather than the harder alloy used for other pieces). This spacing strip was made from lead because lead was a softer metal than the traditional cast metal type pieces (which were made from an alloy of lead, antimony and tin) and would compress more easily when "locked up" in the printing "chase" (i.e. a carrier for holding all the type together). In the 1880s–1890s, "hot lead" typesetting was invented, in which type was cast as it was set, either piece by piece (as in the Monotype technology) or in entire lines of type at one time (as in the Linotype technology). Characteristics In addition to the character height, when using the mechanical sense of the term, there are several characteristics which may distinguish fonts, though they would also depend on the script(s) that the typeface supports. In European alphabetic scripts, i.e. Latin, Cyrillic, and Greek, the main such properties are the stroke width, called weight; the style or angle; and the character width. The regular or standard font is sometimes labeled roman, both to distinguish it from bold or thin and from italic or oblique.
The keyword for the default, regular case is often omitted for variants and never repeated; otherwise it would be Bulmer regular italic, Bulmer bold regular and even Bulmer regular regular. Roman can also refer to the language coverage of a font, acting as a shorthand for "Western European". Different fonts of the same typeface may be used in the same work for various degrees of readability and emphasis, or in a specific design to add visual interest. Weight The weight of a particular font is the thickness of the character outlines relative to their height. A typeface may come in fonts of many weights, from ultra-light to extra-bold or black; four to six weights are not unusual, and a few typefaces have as many as a dozen. Many typefaces for office, web and non-professional use come with a normal and a bold weight which are linked together. If no bold weight is provided, many renderers (browsers, word processors, graphic and DTP programs) support a bolder font by rendering the outline a second time at an offset, or smearing it slightly at a diagonal angle. The base weight differs among typefaces; that means one font may appear bolder than another font. For example, fonts intended to be used in posters are often bold by default while fonts for long runs of text are rather light. Weight designations in font names may differ in regard to the actual absolute stroke weight or density of glyphs in the font. Attempts to systematize a range of weights led to a numerical classification first used in 1957 by Adrian Frutiger with the Univers typeface: 35 Extra Light, 45 Light, 55 Medium or Regular, 65 Bold, 75 Extra Bold, 85 Extra Black, 95 Ultra Bold or Black. Deviants of these were the "6 series" (italics), e.g. 46 Light Italics etc., the "7 series" (condensed versions), e.g. 57 Medium Condensed etc., and the "8 series" (condensed italics), e.g. 68 Bold Condensed Italics. From this brief numerical system it is easier to determine exactly what a font's characteristics are, for instance "Helvetica 67" (HE67) translates to "Helvetica Bold Condensed". The first algorithmic description of fonts was made by Donald Knuth in his 1986 Metafont description language and interpreter. The TrueType font format introduced a scale from 100 through 900, which is also used in CSS and OpenType, where 400 is regular (roman or plain). The Mozilla Developer Network provides the following rough mapping to typical font weight names: 100 Thin (Hairline), 200 Extra Light (Ultra Light), 300 Light, 400 Normal (Regular), 500 Medium, 600 Semi Bold (Demi Bold), 700 Bold, 800 Extra Bold (Ultra Bold), and 900 Black (Heavy). Font mapping varies by font designer. A good example is Bigelow and Holmes's Go font family. In this family, the "fonts have CSS numerical weights of 400, 500, and 600. Although CSS specifies 'Bold' as a 700 weight and 600 as Semibold or Demibold, the Go numerical weights match the actual progression of the ratios of stem thicknesses: Normal:Medium = 400:500; Normal:Bold = 400:600". The terms normal, regular and plain (sometimes book) are used for the standard-weight font of a typeface. Where both appear and differ, book is often lighter than regular, but in some typefaces it is bolder. Before the arrival of computers, each weight had to be drawn manually. As a result, many older multi-weight families such as Gill Sans and Monotype Grotesque have considerable differences in weights from light to extra-bold. Since the 1980s, it has become common to use automation to construct a range of weights as points along a trend, multiple master or other parameterized font design.
This means that many modern digital fonts such as Myriad and TheSans are offered in a large range of weights which offer a smooth and continuous transition from one weight to the next, although some digital fonts are created with extensive manual corrections. As digital font design allows more variants to be created faster, a common development in professional font design is the use of "grades": slightly different weights intended for different types of paper and ink, or printing in a different region with different ambient temperature and humidity. For example, a thin design printed on book paper and a thicker design printed on high-gloss magazine paper may come out looking identical, since in the former case the ink will soak and spread out more. Grades are offered with characters having the same width on all grades, so that a change of printing materials does not affect copy-fit. Grades are common on serif fonts with their finer details. Fonts in which the bold and non-bold letters have the same width are "duplexed". Style Slope In European typefaces, especially Roman ones, a slope or slanted style is used to emphasize important words. This is called italic type or oblique type. These designs normally slant to the right in left-to-right scripts. Oblique styles are often called italic, but differ from "true italic" styles. Italic styles are more flowing than the normal typeface, approaching a more handwritten, cursive style, possibly using ligatures more commonly or gaining swashes. Although rarely encountered, a typographic face may be accompanied by a matching calligraphic face (cursive, script), giving an exaggeratedly italic style. In many sans-serif and some serif typefaces, especially in those with strokes of even thickness, the characters of the italic fonts are only slanted, which is often done algorithmically, without otherwise changing their appearance. Such oblique fonts are not true italics, because lowercase letter shapes do not change, but they are often marketed as such. Fonts normally do not include both oblique and italic styles: the designer chooses to supply one or the other. Since italic styles clearly look different than regular (roman) styles, it is possible to have "upright italic" designs that take a more cursive form but remain upright; Computer Modern is an example of a font that offers this style. In Latin-script countries, upright italics are rare but are sometimes used in mathematics or in complex documents where a section of text already in italics needs a "double italic" style to add emphasis to it. For example, the Cyrillic minuscule "т" may look like a smaller form of its majuscule "Т" or more like a roman small "m" as in its standard italic appearance; in this case, the distinction between styles is also a matter of local preference. Other style attributes In Frutiger's nomenclature the second digit for upright fonts is a 5, for italic fonts a 6 and for condensed italic fonts an 8. The two Japanese syllabaries, katakana and hiragana, are sometimes seen as two styles or typographic variants of each other, but usually are considered separate character sets as a few of the characters have separate kanji origins and the scripts are used for different purposes. The gothic style of the roman script with broken letter forms, on the other hand, is usually considered a mere typographic variant. Cursive-only scripts such as Arabic also have different styles, in this case for example Naskh and Kufic, although these often depend on application, area or era. 
There are other aspects that can differ among font styles, but more often these are considered intrinsic features of the typeface. These include the look of digits (text figures) and the minuscules, which may be smaller versions of the capital letters (small caps) although the script has developed characteristic shapes for them. Some typefaces do not include separate glyphs for the cases at all, thereby abolishing the bicamerality. While most of these use uppercase characters only, some labeled unicase exist which choose either the majuscule or the minuscule glyph at a common height for both characters. Titling fonts are designed for headlines and displays, and have stroke widths optimized for large sizes. Width Some typefaces include fonts that vary the width of the characters (stretch), although this feature is usually rarer than weight or slope. Narrower fonts are usually labeled compressed, condensed or narrow. In Frutiger's system, the second digit of condensed fonts is a 7. Wider fonts may be called wide, extended or expanded. Both can be further classified by prepending extra, ultra or the like. Compressing a font design to a condensed weight is a complex task, requiring the strokes to be slimmed down proportionally and often making the capitals straight-sided. It is particularly common to see condensed fonts for sans-serif and slab-serif families, since it is relatively practical to modify their structure to a condensed weight. Serif text faces are often only issued in the regular width. These separate fonts have to be distinguished from techniques that alter the letter-spacing to achieve narrower or smaller words, especially for justified text alignment. Most typefaces either have proportional or monospaced (for example, those resembling typewriter output) letter widths, if the script provides the possibility. Some superfamilies include both proportional and monospaced fonts. Some fonts also provide both proportional and fixed-width (tabular) digits, where the former usually coincide with lowercase text figures and the latter with uppercase lining figures. The width of a font will depend on its intended use. Times New Roman was designed with the goal of having small width, to fit more text into a newspaper. On the other hand, Palatino has large width to increase readability. The "billing block" on a movie poster often uses extremely condensed type in order to meet union requirements on the people who must be credited and the font height relative to the rest of the poster. Optical size Optical sizes refer to different versions of the same typefaces optimised for specific font sizes. For instance, thinner stroke weight might be used if a font style is intended for large-size display use, or ink traps might be added to the design if it is to be printed at small size on poor-quality paper. This was a natural feature in the metal type period for most typefaces, since each size would be cut separately and made to its own slightly different design. As an example of this, experienced Linotype designer Chauncey H. Griffith commented in 1947 that for a type he was working on intended for newspaper use, the 6 point size was not 50% as wide as the 12 point size, but about 71%. Optical sizing declined in use as pantograph engraving emerged, while phototypesetting and digital fonts further made printing the same font at any size simpler. A mild revival has taken place in recent years, although typefaces with optical sizes remain rare. 
The recent variable font technology further allows designers to include an optical size axis for a typeface, which means end users can manually adjust optical sizing on a continuous scale. Examples of variable fonts with such an axis are Roboto Flex and Helvetica Now Variable. Optical sizes are more common for serif fonts, since their typically finer detail and higher contrast benefits more from being bulked up for smaller sizes and made less overpowering at larger ones. Furthermore, it is often desirable for mathematical fonts (i.e., typefaces designed for typesetting mathematical equations) to have two optical sizes below "Regular", typically for higher-order superscripts and subscripts which are very small in sizes. Examples of such mathematical fonts include Minion Math and MathTime 2. Naming convention Naming schemes for optical sizes vary. One such scheme, invented and popularised by Adobe, labels the variant designs by their typical usages (with the intended point sizes varying slightly by typefaces): Poster: Extremely large sizes, usually larger than 72 point Display: Large sizes, typically 19–72 point Subhead: Large text, typically about 14–18 point "Regular" or "Text": Usually left unnamed, typically about 10–13 point Small Text (SmText): Typically about 8–10 point Caption: Very small, typically about 4–8 point Other type designers and publishers might use different naming schemes. For instance, the smaller optical size of Helvetica Now is labelled "Micro", while the display variant of Hoefler Text is called "Titling". Another example is Times, whose variants are labelled by their intended point sizes, such as Times Ten, Times Eighteen, and Times New Roman Seven. Variable fonts typically do not use any naming scheme, because the inclusion of an adjustable optical size axis means optical sizes are not released as separate products. Metrics Font metrics refers to metadata consisting of numeric values relating to size and space in the font overall, or in its individual glyphs. Font-wide metrics include cap height (the height of the capitals), x-height (the height of the lowercase letters) and ascender height, descender depth, and the font bounding box. Glyph-level metrics include the glyph bounding box, the advance width (the proper distance between the glyph's initial pen position and the next glyph's initial pen position), and sidebearings (space that pads the glyph outline on either side). Many digital (and some metal type) fonts are able to be kerned so that characters can be fitted more closely; the pair "Wa" is a common example of this. Some fonts, especially those intended for professional use, are duplexed: made with multiple weights having the same character width so that (for example) changing from regular to bold or italic does not affect word wrap. Sabon as originally designed was a notable example of this. (This was a standard feature of the Linotype hot metal typesetting system with regular and italic being duplexed, requiring awkward design choices as italics normally are narrower than the roman.) A particularly important basic set of fonts that became an early standard in digital printing was the Core Font Set included in the PostScript printing system developed by Apple and Adobe. To avoid paying licensing fees for this set, many computer companies commissioned "metrically compatible" knock-off fonts with the same spacing, which could be used to display the same document without it seeming clearly different. 
Arial and Century Gothic are notable examples of this, being functional equivalents to the PostScript standard fonts Helvetica and ITC Avant Garde respectively. Some of these sets were created in order to be freely redistributable, for example Red Hat's Liberation fonts and Google's Croscore fonts, which duplicate the PostScript set and other common fonts used in Microsoft software such as Calibri. It is not a requirement that a metrically compatible design be identical to its origin in appearance apart from width. Serifs Although most typefaces are characterised by their use of serifs, there are superfamilies that incorporate serif (antiqua) and sans-serif (grotesque) or even intermediate slab serif (Egyptian) or semi-serif fonts with the same base outlines. A more common font variant, especially of serif typefaces, is that of alternate capitals. They can have swashes to go with italic minuscules or they can be of a flourish design for use as initials (drop caps). Character variants Typefaces may be made in variants for different uses. These may be issued as separate font files, or the different characters may be included in the same font file if the font is a modern format such as OpenType and the application used can support this. Alternative characters are often called stylistic alternates. These may be switched on to allow users more flexibility to customise the typeface to suit their needs. The practice is not new: in the 1930s, Gill Sans, a British design, was sold abroad with alternative characters to make it resemble typefaces such as Futura popular in other countries, while Bembo from the same period has two shapes of "R": one with a stretched-out leg, matching its fifteenth-century model, and one less-common shorter version. With modern digital fonts, it is possible to group related alternative characters into stylistic sets, which may be turned on and off together. For example, in Williams Caslon Text, a revival of the 18th century Caslon typeface, the default italic forms have many swashes matching the original design. For a more spare appearance, these can all be turned off at once by engaging stylistic set 4. Junicode, intended for academic publishing, uses ss15 to enable a variant form of "e" used in medieval Latin. A corporation commissioning a modified version of a commercial computer font for their own use, meanwhile, might request that their preferred alternates be set to default. It is common for typefaces intended for use in books for young children to use simplified, single-storey forms of the lowercase letters a and g (sometimes also t, y, l and the digit 4); these may be called infant or schoolbook alternates. They are traditionally believed to be easier for children to read and less confusing as they resemble the forms used in handwriting. Often schoolbook characters are released as a supplement to popular families such as Akzidenz-Grotesk, Gill Sans and Bembo; a well-known font intended specifically for school use is Sassoon Sans. Besides alternate characters, in the metal type era The New York Times commissioned custom condensed single sorts for common long names that might often appear in news headings, such as "Eisenhower", "Chamberlain" or "Rockefeller". Digits Fonts can have multiple kinds of digits, including, as described above, proportional (variable width) and tabular (fixed width) as well as lining (uppercase height) and text (lowercase height) figures. They may also include separate shapes for superscript and subscript digits. 
Professional computer fonts may include even more complex settings for typesetting digits, such as digits intended to match the height of small caps. In addition, some fonts such as Adobe's Acumin and Christian Schwartz's Neue Haas Grotesk digitisation offer two heights of lining (uppercase height) figures: one slightly lower than cap height, intended to blend better into continuous text, and one at exactly the cap height to look better in combination with capitals for uses such as UK postcodes. With the OpenType format, it is possible to bundle all these into a single digital font file, but earlier font releases may have only one type per file.
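As an illustration of the glyph-level metrics described earlier (advance widths and pair kerning), the following sketch computes the total advance of a short string; the metric values are invented placeholders in arbitrary font units, not measurements from any real font.

# Sketch of glyph-level font metrics: the rendered width of a string is the
# sum of each glyph's advance width plus kerning adjustments for adjacent
# pairs. All values below are invented placeholders in font units.
ADVANCE_WIDTHS = {"W": 950, "a": 520, "v": 490, "e": 500}   # hypothetical glyph advances
KERNING_PAIRS = {("W", "a"): -80, ("a", "v"): -15}          # hypothetical kerning values

def line_advance(text):
    # Total horizontal advance of `text`, including pair kerning.
    width = sum(ADVANCE_WIDTHS[ch] for ch in text)
    for left, right in zip(text, text[1:]):
        width += KERNING_PAIRS.get((left, right), 0)
    return width

print(line_advance("Wave"))  # 950 + 520 + 490 + 500 - 80 - 15 = 2365

A duplexed family, as mentioned above, would simply reuse the same advance-width table across its regular, bold and italic fonts, so that changing style does not change line breaks.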
Technology
Printing
null
3331228
https://en.wikipedia.org/wiki/Optical%20power
Optical power
In optics, optical power (also referred to as dioptric power, refractive power, focusing power, or convergence power) is the degree to which a lens, mirror, or other optical system converges or diverges light. It is equal to the reciprocal of the focal length of the device: P = 1/f. High optical power corresponds to short focal length. The SI unit for optical power is the inverse metre (m⁻¹), which, in this case, is commonly called the dioptre (symbol: dpt or D). Converging lenses have positive optical power, while diverging lenses have negative power. When a lens is immersed in a refractive medium, its optical power and focal length change. For two or more thin lenses close together, the optical power of the combined lenses is approximately equal to the sum of the optical powers of each lens: P ≈ P1 + P2. Similarly, the optical power of a single lens is roughly equal to the sum of the powers of each surface. These approximations are commonly used in optometry. An eye that has too much or too little refractive power to focus light onto the retina has a refractive error. A myopic eye has too much power so light is focused in front of the retina. This is noted as a minus power. Conversely, a hyperopic eye has too little power so when the eye is relaxed, light is focused behind the retina. An eye with a refractive power in one meridian that is different from the refractive power of the other meridians has astigmatism. This is also known as a cylindrical power. Anisometropia is the condition in which one eye has a different refractive power than the other eye.
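As a worked illustration of the relations just stated (P = 1/f, and the approximate additivity of thin-lens powers), the sketch below combines two hypothetical thin lenses in contact; the focal lengths are arbitrary example values.

# Worked sketch of optical power: P = 1/f gives dioptres when f is in metres,
# and for thin lenses in contact the powers approximately add.
# The focal lengths below are arbitrary example values.
def power_dioptres(focal_length_m):
    return 1.0 / focal_length_m

p_converging = power_dioptres(0.5)    # f = +0.5 m  -> +2.0 D
p_diverging = power_dioptres(-1.0)    # f = -1.0 m  -> -1.0 D

p_combined = p_converging + p_diverging   # approximately +1.0 D
f_combined = 1.0 / p_combined             # approximately +1.0 m

print(p_combined, f_combined)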
Physical sciences
Optics
Physics
3331499
https://en.wikipedia.org/wiki/Thin%20lens
Thin lens
In optics, a thin lens is a lens with a thickness (distance along the optical axis between the two surfaces of the lens) that is negligible compared to the radii of curvature of the lens surfaces. Lenses whose thickness is not negligible are sometimes called thick lenses. The thin lens approximation ignores optical effects due to the thickness of lenses and simplifies ray tracing calculations. It is often combined with the paraxial approximation in techniques such as ray transfer matrix analysis. Focal length The focal length, f, of a lens in air is given by the lensmaker's equation: 1/f = (n − 1)[1/R1 − 1/R2 + (n − 1)d/(n R1 R2)], where n is the index of refraction of the lens material, R1 and R2 are the radii of curvature of the two surfaces, and d is the thickness of the lens. Here R1 is taken to be positive if the first surface is convex, and negative if the surface is concave. The signs are reversed for the back surface of the lens: R2 is positive if the surface is concave, and negative if it is convex. This is an arbitrary sign convention; some authors choose different signs for the radii, which changes the equation for the focal length. For a thin lens, d is much smaller than one of the radii of curvature (either R1 or R2). In these conditions, the last term of the lensmaker's equation becomes negligible, and the focal length of a thin lens in air can be approximated by 1/f ≈ (n − 1)(1/R1 − 1/R2). Derivation using Snell's law Consider a thin lens with a first surface of radius R and a flat rear surface, made of material with index of refraction n. Applying Snell's law, light entering the first surface is refracted according to sin θ1 = n sin θ2, where θ1 is the angle of incidence on the interface and θ2 is the angle of refraction. For the second surface, n sin θ3 = sin θ4, where θ3 is the angle of incidence and θ4 is the angle of refraction. For small angles, sin θ ≈ θ. The geometry of the problem then gives θ3 = θ1 − θ2, so that the exit angle is θ4 ≈ n(θ1 − θ2) ≈ (n − 1)θ1. If the incoming ray is parallel to the optical axis and a distance x from it, then θ1 ≈ x/R. Substituting into the expression above, one gets θ4 ≈ (n − 1)x/R. This ray crosses the optical axis at a distance f, given by f ≈ x/θ4. Combining the two expressions gives f ≈ R/(n − 1). It can be shown that if two such lenses of radii R1 and R2 are placed close together, the inverses of the focal lengths can be added up, giving the thin lens formula: 1/f ≈ (n − 1)(1/R1 − 1/R2). Image formation Certain rays follow simple rules when passing through a thin lens, in the paraxial ray approximation: Any ray that enters parallel to the axis on one side of the lens proceeds towards the focal point on the other side. Any ray that arrives at the lens after passing through the focal point on the front side comes out parallel to the axis on the other side. Any ray that passes through the center of the lens will not change its direction. If three such rays are traced from the same point on an object in front of the lens (such as the top), their intersection will mark the location of the corresponding point on the image of the object. By following the paths of these rays, the relationship between the object distance so and the image distance si (these distances are with respect to the lens) can be shown to be 1/so + 1/si = 1/f, which is known as the Gaussian thin lens equation; in the sign convention used here, both distances are positive for a real object and a real image on opposite sides of the lens. There are other sign conventions, such as the Cartesian sign convention, where the thin lens equation is written as 1/si − 1/so = 1/f. For a thick lens, the same form of lens equation is applicable, with the modification that the parameters in the equation are taken with respect to the principal planes of the lens. Physical optics In scalar wave optics, a lens is a part which shifts the phase of the wavefront.
Mathematically, this can be understood as multiplication of the incoming wavefront by the transmission function t(x, y) = exp(−i k (x² + y²)/(2f)), where k is the wavenumber and f is the focal length of the lens.
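A short numerical sketch of the formulas above: it computes a focal length from the thin-lens form of the lensmaker's equation and then an image distance and magnification from the Gaussian thin lens equation. The refractive index, radii, and object distance are arbitrary example values.

# Sketch: thin-lens focal length from 1/f = (n - 1)(1/R1 - 1/R2), then image
# distance and magnification from the Gaussian thin lens equation
# 1/s_o + 1/s_i = 1/f. All inputs are arbitrary example values in metres.
def thin_lens_focal_length(n, r1, r2):
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

def image_distance(f, s_o):
    # Solve 1/s_o + 1/s_i = 1/f for s_i.
    return 1.0 / (1.0 / f - 1.0 / s_o)

f = thin_lens_focal_length(n=1.5, r1=0.10, r2=-0.10)  # symmetric biconvex lens: f = 0.10 m
s_o = 0.30                                            # object 0.30 m in front of the lens
s_i = image_distance(f, s_o)                          # image forms 0.15 m behind the lens
magnification = -s_i / s_o                            # -0.5: image is inverted and half-size

print(f, s_i, magnification)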
Technology
Optical components
null
16946852
https://en.wikipedia.org/wiki/Rabies
Rabies
Rabies is a viral disease that causes encephalitis in humans and other mammals. It was historically referred to as hydrophobia ("fear of water") because its victims would panic when offered liquids to drink. Early symptoms can include fever and abnormal sensations at the site of exposure. These are followed by one or more of the following symptoms: nausea, vomiting, violent movements, uncontrolled excitement, fear of water, an inability to move parts of the body, confusion, and loss of consciousness. Once symptoms appear, the result is virtually always death. The time period between contracting the disease and the start of symptoms is usually one to three months but can vary from less than one week to more than one year. The time depends on the distance the virus must travel along peripheral nerves to reach the central nervous system. Rabies is caused by lyssaviruses, including the rabies virus and Australian bat lyssavirus. It is spread when an infected animal bites or scratches a human or other animals. Saliva from an infected animal can also transmit rabies if the saliva comes into contact with the eyes, mouth, or nose. Globally, dogs are the most common animal involved. In countries where dogs commonly have the disease, more than 99% of rabies cases in humans are the direct result of dog bites. In the Americas, bat bites are the most common source of rabies infections in humans, and less than 5% of cases are from dogs. Rodents are very rarely infected with rabies. The disease can be diagnosed only after the start of symptoms. Animal control and vaccination programs have decreased the risk of rabies from dogs in a number of regions of the world. Immunizing people before they are exposed is recommended for those at high risk, including those who work with bats or who spend prolonged periods in areas of the world where rabies is common. In people who have been exposed to rabies, the rabies vaccine and sometimes rabies immunoglobulin are effective in preventing the disease if the person receives the treatment before the start of rabies symptoms. Washing bites and scratches for 15 minutes with soap and water, povidone-iodine, or detergent may reduce the number of viral particles and may be somewhat effective at preventing transmission. Only fourteen people have been documented to have survived a rabies infection after showing symptoms. However, research conducted in 2010 among a population of people in Peru with a self-reported history of one or more bites from vampire bats (commonly infected with rabies) found that out of 73 individuals reporting previous bat bites, seven people had rabies virus-neutralizing antibodies (rVNA). Since only one member of this group reported prior vaccination for rabies, the findings of the research suggest previously undocumented cases of infection and viral replication followed by an abortive infection. This could indicate that people may have an exposure to the virus without treatment and develop natural antibodies as a result. Rabies causes about 59,000 deaths worldwide per year, about 40% of which are in children under the age of 15. More than 95% of human deaths from rabies occur in Africa and Asia. Rabies is present in more than 150 countries and on all continents but Antarctica. More than 3 billion people live in regions of the world where rabies occurs. A number of countries, including Australia and Japan, as well as much of Western Europe, do not have rabies among dogs. Many Pacific islands do not have rabies at all.
It is classified as a neglected tropical disease. The global cost of rabies is estimated to be around US$8.6 billion per year including lost lives and livelihoods, medical care and associated costs, as well as uncalculated psychological trauma. Etymology The name rabies is derived from the Latin rabies, meaning "madness". The Greeks derived the word lyssa, from lud or "violent"; this root is used in the genus name of the rabies virus, Lyssavirus. Signs and symptoms The period between infection and the first symptoms (incubation period) is typically one to three months in humans. This period may be as short as four days or longer than six years, depending on the location and severity of the wound and the amount of virus introduced. Initial symptoms of rabies are often nonspecific, such as fever and headache. As rabies progresses and causes inflammation of the brain and meninges, symptoms can include slight or partial paralysis, anxiety, insomnia, confusion, agitation, abnormal behavior, paranoia, terror, and hallucinations. The person may also have fear of water. The symptoms eventually progress to delirium and coma. Death usually occurs two to ten days after first symptoms. Survival is almost unknown once symptoms have presented, even with intensive care. Rabies has also occasionally been referred to as hydrophobia ("fear of water") throughout its history. It refers to a set of symptoms in the later stages of an infection in which the person has difficulty swallowing, shows panic when presented with liquids to drink, and cannot quench their thirst. Saliva production is greatly increased, and attempts to drink, or even the intention or suggestion of drinking, may cause excruciatingly painful spasms of the muscles in the throat and larynx. Since the infected individual cannot swallow saliva and water, the virus has a much higher chance of being transmitted, because it multiplies and accumulates in the salivary glands and is transmitted through biting. Hydrophobia is commonly associated with furious rabies, which affects 80% of rabies-infected people. This form of rabies causes irrational aggression in the host, which aids in the spreading of the virus through animal bites; a "foaming at the mouth" effect, caused by the accumulation of saliva, is also commonly associated with rabies in the public perception and in popular culture. The remaining 20% may experience a paralytic form of rabies that is marked by muscle weakness, loss of sensation, and paralysis; this form of rabies does not usually cause fear of water. Cause Rabies is caused by a number of lyssaviruses including the rabies virus and Australian bat lyssavirus. Duvenhage lyssavirus may cause a rabies-like infection. The rabies virus is the type species of the Lyssavirus genus, in the family Rhabdoviridae, order Mononegavirales. Lyssavirions have helical symmetry, with a length of about 180 nm and a cross-section of about 75 nm. These virions are enveloped and have a single-stranded RNA genome with negative sense. The genetic information is packed as a ribonucleoprotein complex in which RNA is tightly bound by the viral nucleoprotein. The RNA genome of the virus encodes five genes whose order is highly conserved: nucleoprotein (N), phosphoprotein (P), matrix protein (M), glycoprotein (G), and the viral RNA polymerase (L). To enter cells, trimeric spikes on the exterior of the membrane of the virus interact with a specific cell receptor, the most likely one being the acetylcholine receptor. 
The cellular membrane pinches in a process known as pinocytosis and allows entry of the virus into the cell by way of an endosome. The virus then exploits the acidic environment of that endosome, which is necessary for this step, binding to the endosomal membrane and releasing its five proteins and single-stranded RNA into the cytoplasm. Once within a muscle or nerve cell, the virus undergoes replication. The L protein then transcribes five mRNA strands and a positive strand of RNA, all from the original negative-strand RNA, using free nucleotides in the cytoplasm. These five mRNA strands are then translated into their corresponding proteins (P, L, N, G and M proteins) at free ribosomes in the cytoplasm. Some proteins require post-translational modifications. For example, the G protein travels through the rough endoplasmic reticulum, where it undergoes further folding, and is then transported to the Golgi apparatus, where a sugar group is added to it (glycosylation). When there are enough viral proteins, the viral polymerase will begin to synthesize new negative strands of RNA from the template of the positive-strand RNA. These negative strands will then form complexes with the N, P, L and M proteins and then travel to the inner membrane of the cell, where a G protein has embedded itself in the membrane. The G protein then coils around the N-P-L-M complex of proteins, taking some of the host cell membrane with it, which will form the new outer envelope of the virus particle. The virus then buds from the cell. From the point of entry, the virus is neurotropic, traveling along the neural pathways into the central nervous system. The virus usually first infects muscle cells close to the site of infection, where it is able to replicate without being 'noticed' by the host's immune system. Once enough virus has been replicated, it begins to bind to acetylcholine receptors at the neuromuscular junction. The virus then travels through the nerve cell axon via retrograde transport, as its P protein interacts with dynein, a protein present in the cytoplasm of nerve cells. Once the virus reaches the cell body, it travels rapidly to the central nervous system (CNS), replicating in motor neurons and eventually reaching the brain. After the brain is infected, the virus travels centrifugally to the peripheral and autonomic nervous systems, eventually migrating to the salivary glands, where it is ready to be transmitted to the next host. Transmission All warm-blooded species, including humans, may become infected with the rabies virus and develop symptoms. Birds were first artificially infected with rabies in 1884; however, infected birds are largely, if not wholly, asymptomatic, and recover. Other bird species have been known to develop rabies antibodies, a sign of infection, after feeding on rabies-infected mammals. The virus has also adapted to grow in cells of cold-blooded vertebrates. Most animals can be infected by the virus and can transmit the disease to humans. Worldwide, about 99% of human rabies cases come from domestic dogs. Other sources of rabies in humans include bats, monkeys, raccoons, foxes, skunks, cattle, wolves, coyotes, cats, and mongooses (normally either the small Asian mongoose or the yellow mongoose). Rabies may also spread through exposure to infected bears, domestic farm animals, groundhogs, weasels, and other wild carnivorans.
However, lagomorphs, such as hares and rabbits, and small rodents, such as chipmunks, gerbils, guinea pigs, hamsters, mice, rats, and squirrels, are almost never found to be infected with rabies and are not known to transmit rabies to humans. Bites from mice, rats, or squirrels rarely require rabies prevention because these rodents are typically killed by any encounter with a larger, rabid animal, and would, therefore, not be carriers. The Virginia opossum (a marsupial, unlike the other mammals named in this paragraph, which are all eutherians/placental), has a lower internal body temperature than the rabies virus prefers and therefore is resistant but not immune to rabies. Marsupials, along with monotremes (platypuses and echidnas), typically have lower body temperatures than similarly sized eutherians. In 2024, reports emerged that rabies is spreading in South African Cape fur seals, possibly making it the first outbreak documented in marine mammals. The virus is usually present in the nerves and saliva of a symptomatic rabid animal. The route of infection is usually, but not always, by a bite. In many cases, the infected animal is exceptionally aggressive, may attack without provocation, and exhibits otherwise uncharacteristic behavior. This is an example of a viral pathogen modifying the behavior of its host to facilitate its transmission to other hosts. After a typical human infection by bite, the virus enters the peripheral nervous system. It then travels retrograde along the efferent nerves toward the central nervous system. During this phase, the virus cannot be easily detected within the host, and vaccination may still confer cell-mediated immunity to prevent symptomatic rabies. When the virus reaches the brain, it rapidly causes encephalitis, the prodromal phase, which is the beginning of the symptoms. Once the patient becomes symptomatic, treatment is almost never effective and mortality is over 99%. Rabies may also inflame the spinal cord, producing transverse myelitis. Although it is theoretically possible for rabies-infected humans to transmit it to others by biting or otherwise, no such cases have ever been documented, because infected humans are usually hospitalized and necessary precautions taken. Casual contact, such as touching a person with rabies or contact with non-infectious fluid or tissue (urine, blood, feces), does not constitute an exposure and does not require post-exposure prophylaxis. But as the virus is present in sperm and vaginal secretions, it might be possible for rabies to spread through sex. There are only a small number of recorded cases of human-to-human transmission of rabies, and all occurred through organ transplants, most frequently with corneal transplantation, from infected donors. Diagnosis Rabies can be difficult to diagnose because, in the early stages, it is easily confused with other diseases or even with a simple aggressive temperament. The reference method for diagnosing rabies is the fluorescent antibody test (FAT), an immunohistochemistry procedure, which is recommended by the World Health Organization (WHO). The FAT relies on the ability of a detector molecule (usually fluorescein isothiocyanate) coupled with a rabies-specific antibody, forming a conjugate, to bind to and allow the visualisation of rabies antigen using fluorescent microscopy techniques. 
Microscopic analysis of samples is the only direct method that allows for the identification of rabies virus-specific antigen in a short time and at a reduced cost, irrespective of geographical origin and status of the host. It has to be regarded as the first step in diagnostic procedures for all laboratories. Autolysed samples can, however, reduce the sensitivity and specificity of the FAT. The RT PCR assays proved to be a sensitive and specific tool for routine diagnostic purposes, particularly in decomposed samples or archival specimens. The diagnosis can be reliably made from brain samples taken after death. The diagnosis can also be made from saliva, urine, and cerebrospinal fluid samples, but this is not as sensitive or reliable as brain samples. Cerebral inclusion bodies called Negri bodies are 100% diagnostic for rabies infection but are found in only about 80% of cases. If possible, the animal from which the bite was received should also be examined for rabies. Some light microscopy techniques may also be used to diagnose rabies at a tenth of the cost of traditional fluorescence microscopy techniques, allowing identification of the disease in less-developed countries. A test for rabies, known as LN34, is easier to run on a dead animal's brain and might help determine who does and does not need post-exposure prevention. The test was developed by the Centers for Disease Control and Prevention (CDC) in 2018. The differential diagnosis in a case of suspected human rabies may initially include any cause of encephalitis, in particular infection with viruses such as herpesviruses, enteroviruses, and arboviruses such as West Nile virus. The most important viruses to rule out are herpes simplex virus type one, varicella zoster virus, and (less commonly) enteroviruses, including coxsackieviruses, echoviruses, polioviruses, and human enteroviruses 68 to 71. New causes of viral encephalitis are also possible, as was evidenced by the 1999 outbreak in Malaysia of 300 cases of encephalitis with a mortality rate of 40% caused by Nipah virus, a newly recognized paramyxovirus. Likewise, well-known viruses may be introduced into new locales, as is illustrated by the outbreak of encephalitis due to West Nile virus in the eastern United States. Prevention Almost all human exposure to rabies was fatal until a vaccine was developed in 1885 by Louis Pasteur and Émile Roux. Their original vaccine was harvested from infected rabbits, from which the virus in the nerve tissue was weakened by allowing it to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, as they are much cheaper than modern cell culture vaccines. The human diploid cell rabies vaccine was started in 1967. Less expensive purified chicken embryo cell vaccine and purified vero cell rabies vaccine are now available. A recombinant vaccine called V-RG has been used in Belgium, France, Germany, and the United States to prevent outbreaks of rabies in undomesticated animals. Immunization before exposure has been used in both human and nonhuman populations, where, as in many jurisdictions, domesticated animals are required to be vaccinated. 
The Missouri Department of Health and Senior Services Communicable Disease Surveillance 2007 Annual Report states the following can help reduce the risk of contracting rabies: Vaccinating dogs, cats, and ferrets against rabies Keeping pets under supervision Not handling wild animals or strays Contacting an animal control officer upon observing a wild animal or a stray, especially if the animal is acting strangely If bitten by an animal, washing the wound with soap and water for 10 to 15 minutes and contacting a healthcare provider to determine if post-exposure prophylaxis is required 28 September is World Rabies Day, which promotes the information, prevention, and elimination of the disease. In Asia and in parts of the Americas and Africa, dogs remain the principal host. Mandatory vaccination of animals is less effective in rural areas. Especially in developing countries, pets may not be privately kept and their destruction may be unacceptable. Oral vaccines can be safely distributed in baits, a practice that has successfully reduced rabies in rural areas of Canada, France, and the United States. In Montreal, Quebec, Canada, baits are successfully used on raccoons in the Mount-Royal Park area. Vaccination campaigns may be expensive, but cost-benefit analysis suggests baits may be a cost-effective method of control. In Ontario, a dramatic drop in rabies was recorded when an aerial bait-vaccination campaign was launched. The number of recorded human deaths from rabies in the United States has dropped from 100 or more annually in the early 20th century to one or two per year because of widespread vaccination of domestic dogs and cats and the development of human vaccines and immunoglobulin treatments. Most deaths now result from bat bites, which may go unnoticed by the victim and hence untreated. Treatment After exposure Treatment after exposure can prevent the disease if given within 10 days. The rabies vaccine is 100% effective if given before symptoms of rabies appear. Every year, more than 15 million people get vaccinated after potential exposure. While this works well, the cost is significant. In the US it is recommended people receive one dose of human rabies immunoglobulin (HRIG) and four doses of rabies vaccine over a 14-day period. HRIG is expensive and makes up most of the cost of post-exposure treatment, ranging as high as several thousand dollars. In the UK, one dose of HRIG costs the National Health Service £1,000, although this is not flagged as a "high-cost medication". A full course of vaccine costs £120–180. As much as possible of HRIG should be injected around the bites, with the remainder being given by deep intramuscular injection at a site distant from the vaccination site. People who have previously been vaccinated against rabies do not need to receive the immunoglobulin—only the postexposure vaccinations on days 0 and 3. The side effects of modern cell-based vaccines are similar to the side effects of flu shots. The old nerve-tissue-based vaccination required multiple injections into the abdomen with a large needle but is inexpensive. It is being phased out and replaced by affordable World Health Organization intradermal-vaccination regimens. In children less than a year old, the lateral thigh is recommended. Thoroughly washing the wound as soon as possible with soap and water for approximately five minutes is effective in reducing the number of viral particles. Povidone-iodine or alcohol is then recommended to reduce the virus further. 
Awakening to find a bat in the room, or finding a bat in the room of a previously unattended child or mentally disabled or intoxicated person, is an indication for post-exposure prophylaxis (PEP). The recommendation for the precautionary use of PEP in bat encounters where no contact is recognized has been questioned in the medical literature, based on a cost–benefit analysis. However, a 2002 study has supported the protocol of precautionary administration of PEP where a child or mentally compromised individual has been alone with a bat, especially in sleep areas, where a bite or exposure may occur without the victim being aware. After onset Once rabies develops, death almost certainly follows. Palliative care in a hospital setting is recommended, with administration of large doses of pain medication and sedatives in preference to physical restraint. Ice fragments can be given by mouth for thirst, but there is no good evidence that intravenous hydration is of benefit. A treatment known as the Milwaukee protocol, which involves putting people with rabies symptoms into a chemically induced coma and using antiviral medications in an attempt to protect their brain until their body has had time to produce rabies antibodies, has occasionally been used. It was initially attempted in 2004 on Jeanna Giese, a teenage girl from Wisconsin, who subsequently became the first human known to have survived rabies without receiving post-exposure prophylaxis before symptom onset. Giese did require extensive rehabilitation afterward, and her balance and neural function remained impaired. The protocol has since been used on many rabies victims but has been judged a failure; some survivors of the acute initial phase later died of rabies. Concerns have also been raised about its monetary costs and its ethics. Antiviral therapy to combat the effects of rabies has also been researched; favipiravir, for example, has shown potential for inhibiting the development of encephalitis. It has been suggested that such therapy in combination with immunotherapy and neuroprotective measures could be beneficial. Prognosis Vaccination after exposure, PEP, is highly successful in preventing rabies. In unvaccinated humans, rabies is almost certainly fatal after neurological symptoms have developed. Epidemiology In 2010, an estimated 26,000 people died from rabies, down from 54,000 in 1990. The majority of the deaths occurred in Asia and Africa. India (approximately 20,847), followed by China (approximately 6,000) and the Democratic Republic of the Congo (5,600), had the most cases. A 2015 collaboration between the World Health Organization, World Organization for Animal Health (OIE), Food and Agriculture Organization of the United Nations (FAO), and Global Alliance for Rabies Control has a goal of eliminating deaths from rabies by 2030. India India has the highest rate of human rabies in the world, primarily because of stray dogs, whose number has greatly increased since a 2001 law forbade the killing of dogs. Effective control and treatment of rabies in India is hindered by a form of mass hysteria known as puppy pregnancy syndrome (PPS). Dog bite victims with PPS, male as well as female, become convinced that puppies are growing inside them, and often seek help from faith healers rather than medical services. An estimated 20,000 people die every year from rabies in India, more than a third of the global total. 
Australia Australia has an official rabies-free status, although Australian bat lyssavirus (ABLV), a rabies-related virus discovered in 1996, is prevalent in native bat populations. United States Canine-specific rabies has been eradicated in the United States, but rabies is common among wild animals, and an average of 100 dogs become infected from other wildlife each year. High public awareness of the virus, efforts at vaccination of domestic animals and curtailment of feral populations, and availability of postexposure prophylaxis have made rabies very rare in humans in the United States. From 1960 to 2018, a total of 125 such cases were reported in the United States; of them, 36 (28%) were attributed to dog bites suffered during international travel. Among the 89 infections acquired in the United States, 62 (70%) were attributed to bats. The most recent rabies death in the United States was in November 2021: a Texas child was bitten by a bat in late August 2021, but his parents did not get him treatment, and he died less than three months later. Europe Few or no cases of rabies are reported in Europe each year; cases are contracted both during travel abroad and within Europe. In Switzerland the disease was virtually eliminated after scientists placed chicken heads laced with live attenuated vaccine in the Swiss Alps. Foxes, proven to be the main source of rabies in the country, ate the chicken heads and became immunized. Italy, after being declared rabies-free from 1997 to 2008, witnessed a reemergence of the disease in wild animals in the Triveneto regions (Trentino-Alto Adige/Südtirol, Veneto and Friuli-Venezia Giulia) due to the spread of an epidemic in the Balkans that also affected Austria. An extensive wild animal vaccination campaign eliminated the virus from Italy again, and it regained rabies-free status in 2013, the last case having been reported in a red fox in early 2011. The United Kingdom has been free of rabies since the early 20th century except for a rabies-like virus (EBLV-2) in a few Daubenton's bats; there has been one fatal case of EBLV-2 transmission to a human. There have been four deaths from rabies contracted abroad through dog bites since 2000. The last infection in the UK occurred in 1922, and the last death from indigenous rabies was in 1902. Sweden and mainland Norway have been free of rabies since 1886. Bat rabies antibodies (but not the virus) have been found in bats. On Svalbard, animals can cross the arctic ice from Greenland or Russia. Mexico Mexico was certified by the World Health Organization as free of dog-transmitted rabies in 2019 because no case of dog-to-human transmission had been recorded in two years. Asian countries Although rabies is preventable and has been successfully controlled in regions such as North America, South Korea and Western Europe, it remains endemic in many South and Southeast Asian countries, including Cambodia, Bangladesh, Bhutan, North Korea, India, Indonesia, Myanmar, Nepal, Sri Lanka, and Thailand. About half of all global rabies deaths, approximately 26,000 per year, occur in this region. Much of what prevents these countries from implementing the same measures as others is cost: treating wild dog populations is the primary means of preventing rabies, yet it costs roughly ten times more than treating people as they present with bites, and research adds further cost. 
As a result, India and other surrounding countries are unable to apply many preventive measures because of financial restrictions. Thailand In 2013, human rabies was nearly eradicated in Thailand after new measures were put into place requiring the vaccination of all domestic dogs, along with programs seeking to vaccinate wild dogs and large animals. However, neighboring countries unable to afford rabies control measures – Cambodia, Laos, and Myanmar – allowed infected animals to continue to cross the border and infect the Thai population, leading to about 100 cases a year. These border areas are called Rabies Red areas; Thailand will continue to struggle with eradication there until the surrounding countries eliminate the virus. Thailand has the resources and medicine needed to tackle rabies, such as regulations requiring all children to receive a rabies vaccination before attending school and clinics available for those bitten or scratched by a possibly rabid animal. However, individual choice is still a factor, and about 10 people per year die because they refuse to seek treatment. Cambodia Cambodia has approximately 800 cases of human rabies per year, giving it one of the highest incidences of human rabies in the world. Much of this stems from poor animal control: Cambodia has hundreds of thousands of animals infected with rabies, another global high, yet little surveillance of these animals and few laws requiring pets and other household animals to be vaccinated. In recent years, Cambodia has significantly improved its medical response to human rabies, opening clinics across the country with treatments and vaccines on hand and adding rabies-related education to school classes. However, animal surveillance and treatment are still lacking, which allows the disease to spill over into surrounding countries. History Rabies has been known since around 2000 BC. The first written record of rabies is in the Mesopotamian Codex of Eshnunna (), which dictates that the owner of a dog showing symptoms of rabies should take preventive measures against bites. If another person were bitten by a rabid dog and later died, the owner was heavily fined. In ancient Greece, rabies was supposed to be caused by Lyssa, the spirit of mad rage. Ineffective folk remedies abounded in the medical literature of the ancient world. The physician Scribonius Largus prescribed a poultice of cloth and hyena skin; Antaeus recommended a preparation made from the skull of a hanged man. Rabies appears to have originated in the Old World, the first epizootic in the New World occurring in Boston in 1768. Rabies was considered a scourge for its prevalence in the 19th century. In France and Belgium, where Saint Hubert was venerated, the "St Hubert's Key" was heated and applied to cauterize the wound. By an application of magical thinking, dogs were branded with the key in hopes of protecting them from rabies. It was not uncommon for a person bitten by a dog merely suspected of being rabid to commit suicide or to be killed by others. In ancient times the attachment of the tongue (the lingual frenulum, a mucous membrane) was cut and removed, as this was where rabies was thought to originate. This practice ceased with the discovery of the actual cause of rabies. Louis Pasteur's 1885 nerve tissue vaccine was successful, and was progressively improved to reduce often severe side-effects. 
In modern times, the fear of rabies has not diminished, and the disease and its symptoms, particularly agitation, have served as an inspiration for several works of zombie or similarly themed fiction, often portraying rabies as having mutated into a stronger virus which fills humans with murderous rage or incurable illness, bringing about a devastating, widespread pandemic. Other animals Rabies is infectious to mammals; three stages of central nervous system infection are recognized. The clinical course is often shorter in animals than in humans, but results in similar symptoms and almost always death. The first stage is a one- to three-day period characterized by behavioral changes and is known as the prodromal stage. The second is the excitative stage, which lasts three to four days. This stage is often known as "furious rabies" for the tendency of the affected animal to be hyper-reactive to external stimuli and to bite or attack anything nearby. In some cases, animals skip the excitative stage and develop paralysis directly, as in the third phase, the paralytic phase. This stage results from damage to motor neurons. Incoordination is seen, owing to rear limb paralysis, and drooling and difficulty swallowing are caused by paralysis of facial and throat muscles. Death is usually caused by respiratory arrest.
Biology and health sciences
Infectious disease
null
414958
https://en.wikipedia.org/wiki/Paleobotany
Paleobotany
Paleobotany, also spelled as palaeobotany, is the branch of botany dealing with the recovery and identification of plant remains from geological contexts, and their use for the biological reconstruction of past environments (paleogeography), and the evolutionary history of plants, with a bearing upon the evolution of life in general. A synonym is paleophytology. It is a component of paleontology and paleobiology. The prefix palaeo- or paleo- means "ancient, old", and is derived from the Greek adjective , . Paleobotany includes the study of terrestrial plant fossils, as well as the study of prehistoric marine photoautotrophs, such as photosynthetic algae, seaweeds or kelp. A closely related field is palynology, which is the study of fossilized and extant spores and pollen. Paleobotany is important in the reconstruction of ancient ecological systems and climate, known as paleoecology and paleoclimatology respectively. It is fundamental to the study of green plant development and evolution. Paleobotany is a historical science, much like its parent discipline, paleontology. Because of the understanding it gives archeologists, paleobotany has also become important to the field of archaeology as a whole, primarily for the use of phytoliths in relative dating and in paleoethnobotany. The study and discipline of paleobotany emerged as early as the 19th century. French botanist Adolphe-Théodore Brongniart, known as the “Father of Paleobotany”, was a seminal figure in this emergence, noted for his work on the relationships between living and extinct plant life. This work advanced not only paleobotany but also the understanding of the earth's true age and of the organic life that has existed over its history. Paleobotany also advanced in the hands of the German paleontologist Ernst Friedrich von Schlotheim and the Czech nobleman and scholar Kaspar Maria von Sternberg. Related Sciences Paleoecology While paleobotany focuses on fossilized plant life and the environments in which it thrived, paleoecology is the study of all once-living organisms and their interactions with the environments in which they existed before becoming extinct. Paleoecology is similar to paleontology, but it draws its methodology more from the biological and geological sciences than from the anthropological standpoint of paleontologists. Paleopalynology Paleopalynology, more commonly known as palynology, is the science and study of ancient palynomorphs: particles sized between 5 and 500 micrometers, including pollen, spores, and other micro-organic matter. Paleopalynology is essentially paleobotany on a much smaller scale, and the two are closely associated. As with paleobotany, these particles reveal a great deal about the environment and biome in which they were produced. They also help geologists identify and date sedimentary rock strata, and palynology is used to locate natural oil and gas within these rock layers for extraction. Besides documenting past environmental conditions, palynology can shed light on animal diets and the history of human allergies, and it can provide evidence in criminal cases. Overview of the paleobotanical record Macroscopic remains of true vascular plants are first found in the fossil record during the Silurian Period of the Paleozoic era. 
Some dispersed, fragmentary fossils of disputed affinity, primarily spores and cuticles, have been found in rocks from the Ordovician Period in Oman, and are thought to derive from liverwort- or moss-grade fossil plants. An important early land plant fossil locality is the Rhynie chert, found outside the village of Rhynie in Scotland. The Rhynie chert is an Early Devonian sinter (hot spring) deposit composed primarily of silica. It is exceptional due to its preservation of several different clades of plants, from mosses and lycophytes to more unusual, problematic forms. Many fossil animals, including arthropods and arachnids, are also found in the Rhynie chert, and it offers a unique window into the history of early terrestrial life. Plant-derived macrofossils become abundant in the Late Devonian including tree trunks, fronds, and roots. The earliest tree was once thought to be Archaeopteris, which bears simple, fern-like leaves spirally arranged on branches atop a conifer-like trunk, although it is now known to be the recently discovered Wattieza. Widespread coal swamp deposits across North America and Europe during the Carboniferous Period contain a wealth of fossils containing arborescent lycopods up to 30 m tall, abundant seed plants, such as conifers and seed ferns, and countless smaller, herbaceous plants. Angiosperms (flowering plants) evolved during the Mesozoic, and flowering plant pollen and leaves first appeared during the Early Cretaceous, approximately 130 million years ago. Plant fossils A plant fossil is any preserved part of a plant that has long since died. Such fossils may be prehistoric impressions that are many millions of years old, or bits of charcoal that are only a few hundred years old. Prehistoric plants are various groups of plants that lived before recorded history (before about 3500 BC). Preservation of plant fossils Plant fossils can be preserved in a variety of ways, each of which can give different types of information about the original parent plant. These modes of preservation may be summarised in a paleobotanical context as follows. Adpressions (compressions – impressions). These are the most commonly found type of plant fossil. They provide good morphological detail, especially of dorsiventral (flattened) plant parts such as leaves. If the cuticle is preserved, they can also yield fine anatomical detail of the epidermis. Little other detail of cellular anatomy is normally preserved. Petrifactions (permineralisations or anatomically preserved fossils). These provide fine detail of the cell anatomy of the plant tissue. Morphological detail can also be determined by serial sectioning, but this is both time consuming and difficult. Moulds and casts. These only tend to preserve the more robust plant parts such as seeds or woody stems. They can provide information about the three-dimensional form of the plant, and in the case of casts of tree stumps can provide evidence of the density of the original vegetation. However, they rarely preserve any fine morphological detail or cell anatomy. A subset of such fossils are pith casts, where the centre of a stem is either hollow or has delicate pith. After death, sediment enters and forms a cast of the central cavity of the stem. The best known examples of pith casts are in the Carboniferous Sphenophyta (Calamites) and cordaites (Artisia). Authigenic mineralisations. 
These can provide very fine, three-dimensional morphological detail, and have proved especially important in the study of reproductive structures that can be severely distorted in adpressions. However, as they are formed in mineral nodules, such fossils can rarely be of large size. Fusain. Fire normally destroys plant tissue but sometimes charcoalified remains can preserve fine morphological detail that is lost in other modes of preservation; some of the best evidence of early flowers has been preserved in fusain. Fusain fossils are delicate and often small, but because of their buoyancy can often drift for long distances and can thus provide evidence of vegetation away from areas of sedimentation. Fossil-taxa Plant fossils almost always represent disarticulated parts of plants; even small herbaceous plants are rarely preserved whole. The few examples of plant fossils that appear to be the remains of whole plants are in fact incomplete, as the internal cellular tissue and fine micromorphological detail are normally lost during fossilization. Plant remains can be preserved in a variety of ways, each revealing different features of the original parent plant. Because of this, paleobotanists usually assign different taxonomic names to different parts of the plant in different modes of preservation. For instance, in the subarborescent Palaeozoic sphenophytes, an impression of a leaf might be assigned to the genus Annularia, a compression of a cone assigned to Palaeostachya, and the stem assigned to either Calamites or Arthroxylon depending on whether it is preserved as a cast or a petrifaction. All of these fossils may have originated from the same parent plant but they are each given their own taxonomic name. This approach to naming plant fossils originated with the work of Adolphe-Théodore Brongniart. For many years this approach was accepted by paleobotanists but not formalised within the International Rules of Botanical Nomenclature. Eventually, a set of formal provisions was proposed, the essence of which was introduced into the 1952 International Code of Botanical Nomenclature. These early provisions allowed fossils representing particular parts of plants in a particular state of preservation to be placed in organ-genera. In addition, a small subset of organ-genera, to be known as form-genera, were recognised based on the artificial taxa introduced by Brongniart mainly for foliage fossils. The concepts and regulations surrounding organ- and form-genera were modified within successive codes of nomenclature, reflecting a failure of the paleobotanical community to agree on how this aspect of plant taxonomic nomenclature should work (a history reviewed by Cleal and Thomas in 2020). The use of organ- and form-genera was abandoned with the St Louis Code, and replaced by "morphotaxa". The situation in the Vienna Code of 2005 was that any plant taxon whose type is a fossil, except diatoms, can be described as a morphotaxon, a particular part of a plant preserved in a particular way. Although the name is always fixed to the type specimen, the circumscription (i.e. the range of specimens that may be included within the taxon) is defined by the taxonomist who uses the name. Such a change in circumscription could result in an expansion of the range of plant parts or preservation states that could be incorporated within the taxon. 
For instance, a fossil-genus originally based on compressions of ovules could be used to include the multi-ovulate cupules within which the ovules were originally borne. A complication can arise if, in this case, there was an already named fossil-genus for these cupules. If paleobotanists were confident that the type of the ovule fossil-genus and of the cupule fossil-genus could be included in the same genus, then the two names would compete as to being the correct one for the newly emended genus. In general, there would be competing priority whenever plant parts that had been given different names were discovered to belong to the same species. It appeared that morphotaxa offered no real advantage to paleobotanists over normal fossil-taxa and the concept was abandoned with the 2011 botanical congress and the 2012 International Code of Nomenclature for algae, fungi, and plants. Fossil groups of plants Some plants have remained almost unchanged throughout earth's geological time scale. Horsetails had evolved by the Late Devonian, early ferns had evolved by the Mississippian, conifers by the Pennsylvanian. Some plants of prehistory are the same ones around today and are thus living fossils, such as Ginkgo biloba and Sciadopitys verticillata. Other plants have changed radically, or became extinct. Examples of prehistoric plants are: Araucaria mirabilis Archaeopteris Calamites Dillhoffia Glossopteris Hymenaea protera Nelumbo aureavallis Pachypteris Palaeoraphe Peltandra primaeva Protosalvinia Trochodendron nastae Notable paleobotanists Edward W. Berry (1875–1945), paleoecology and phytogeography William Gilbert Chaloner (1928–2016) Isabel Cookson (1893–1973), early vascular plants, palynology Margaret Bryan Davis, (1931-), American paleoecologist and palynologist Dianne Edwards (1942–), colonization of land by early terrestrial floras Constantin von Ettingshausen (1826–1897), Tertiary floras Thomas Maxwell Harris (1903–1983), Mesozoic plants of Jameson Land (Greenland) and Yorkshire. Robert Kidston (1852–1924), early land plants, Devonian and Carboniferous floras, and their use in stratigraphy Ana María Ragonese (1928–1999), fossil wood morphology, spermatophytes Ethel Ida Sanborn (1883–1952), extinct flora of Oregon and the Western United States Birbal Sahni (1891–1949), Revision of Indian Gondwana Plants Dunkinfield Henry Scott (1854–1934), analysis of the structures of fossil plants Kaspar Maria von Sternberg (1761–1838), the "father of paleobotany" Franz Unger (1800–1870), pioneer in plant physiology, phytotomy and soil science Jack A. Wolfe (1936–2005), Tertiary paleoclimate of western North America Gilbert Arthur Leisman (1924–1996), known for work on Carboniferous lycophytes of central North America.
Biology and health sciences
Paleontology
Biology
415070
https://en.wikipedia.org/wiki/Chinese%20herbology
Chinese herbology
Chinese herbology () is the theory of traditional Chinese herbal therapy, which accounts for the majority of treatments in traditional Chinese medicine (TCM). A Nature editorial described TCM as "fraught with pseudoscience", and said that the most obvious reason why it has not delivered many cures is that the majority of its treatments have no logical mechanism of action. The term herbology is misleading in the sense that, while plant elements are by far the most commonly used substances, animal, human, and mineral products are also used, some of which are poisonous. In the they are referred to as () which means toxin, poison, or medicine. Paul U. Unschuld points out that this is similar etymology to the Greek and so he uses the term pharmaceutic. Thus, the term medicinal (instead of herb) is usually preferred as a translation for (). Research into the effectiveness of traditional Chinese herbal therapy is of poor quality and often tainted by bias, with little or no rigorous evidence of efficacy. There are concerns over a number of potentially toxic Chinese herbs. History Chinese herbs have been used for centuries. Among the earliest literature are lists of prescriptions for specific ailments, exemplified by the manuscript Recipes for 52 Ailments, found in the Mawangdui which were sealed in 168BCE. The first traditionally recognized herbalist is Shénnóng (, ), a mythical god-like figure, who is said to have lived around 2800BCE. He allegedly tasted hundreds of herbs and imparted his knowledge of medicinal and poisonous plants to farmers. His (, Shennong's Materia Medica) is considered as the oldest book on Chinese herbal medicine. It classifies 365 species of roots, grass, woods, furs, animals and stones into three categories of herbal medicine: The "superior" category, which includes herbs effective for multiple diseases and are mostly responsible for maintaining and restoring the body balance. They have almost no unfavorable side-effects. A category comprising tonics and boosters, whose consumption must not be prolonged. A category of substances which must usually be taken in small doses, and for the treatment of specific diseases only. The original text of Shennong's Materia Medica has been lost; however, there are extant translations. The true date of origin is believed to fall into the late Western Han dynasty (i.e., the first century BCE). The Treatise on Cold Damage Disorders and Miscellaneous Illnesses was collated by Zhang Zhongjing, also sometime at the end of the Han dynasty, between 196 and 220 CE. Focusing on drug prescriptions, it was the first medical work to combine Yinyang and the Five Phases with drug therapy. This formulary was also the earliest Chinese medical text to group symptoms into clinically useful "patterns" (, ) that could serve as targets for therapy. Having gone through numerous changes over time, it now circulates as two distinct books: the Treatise on Cold Damage Disorders and the Essential Prescriptions of the Golden Casket, which were edited separately in the eleventh century, under the Song dynasty. Succeeding generations augmented these works, as in the (), a 7th-century Tang dynasty Chinese treatise on herbal medicine. There was a shift in emphasis in treatment over several centuries. A section of the Huangdi Neijing Suwen including Chapter 74 was added by Wang Bing in his 765 edition. 
In which it says: "Ruler of disease it called Sovereign, aid to Sovereign it called Minister, comply with Minister it called Envoy (Assistant), not upper lower three classes (qualities) it called." The last part is interpreted as stating that these three rulers are not the three classes of Shénnóng mentioned previously. This chapter in particular outlines a more forceful approach. Later on Zhang Zihe ( Zhang Cong-zhen, 1156–1228) is credited with founding the 'Attacking School' which criticized the overuse of tonics. Arguably the most important of these later works is the Compendium of Materia Medica (, ) compiled during the Ming dynasty by Li Shizhen, which is still used today for consultation and reference. The use of Chinese herbs was popular during the medieval age in western Asian and Islamic countries. They were traded through the Silk Road from the East to the West. Cinnamon, ginger, rhubarb, nutmeg and cubeb are mentioned as Chinese herbs by medieval Islamic medical scholars Such as Rhazes (854–925 CE), Haly Abbas (930–994 CE) and Avicenna (980–1037 CE). There were also multiple similarities between the clinical uses of these herbs in Chinese and Islamic medicine. Raw materials There are roughly 13,000 medicinals used in China and over 100,000 medicinal recipes recorded in the ancient literature. Plant elements and extracts are by far the most common elements used. In the classic Handbook of Traditional Drugs from 1941, 517 drugs were listed – out of these, only 45 were animal parts, and 30 were minerals. For many plants used as medicinals, detailed instructions have been handed down not only regarding the locations and areas where they grow best, but also regarding the best timing of planting and harvesting them. Some animal parts used as medicinals can be considered rather strange such as cows' gallstones. Furthermore, the classic materia medica describes the use of 35 traditional Chinese medicines derived from the human body, including bones, fingernail, hairs, dandruff, earwax, impurities on the teeth, feces, urine, sweat, and organs, but most are no longer in use. Preparation Decoction Typically, one batch of medicinals is prepared as a decoction of about 9 to 18 substances. Some of these are considered as main herbs, some as ancillary herbs; within the ancillary herbs, up to three categories can be distinguished. Some ingredients are added to cancel out toxicity or side-effects of the main ingredients; on top of that, some medicinals require the use of other substances as catalysts. Chinese patent medicine Chinese patent medicine () is a kind of traditional Chinese medicine. They are standardized herbal formulas. From ancient times, pills were formed by combining several herbs and other ingredients, which were dried and ground into a powder. They were then mixed with a binder and formed into pills by hand. The binder was traditionally honey. Modern teapills, however, are extracted in stainless steel extractors to create either a water decoction or water-alcohol decoction, depending on the herbs used. They are extracted at a low temperature (below ) to preserve essential ingredients. The extracted liquid is then further condensed, and some raw herb powder from one of the herbal ingredients is mixed in to form a herbal dough. This dough is then machine cut into tiny pieces, a small amount of excipients are added for a smooth and consistent exterior, and they are spun into pills. These medicines are not patented in the traditional sense of the word. 
No one has exclusive rights to the formula. Instead, "patent" refers to the standardization of the formula. In China, all Chinese patent medicines of the same name will have the same proportions of ingredients, and manufactured in accordance with the PRC Pharmacopoeia, which is mandated by law. However, in western countries there may be variations in the proportions of ingredients in patent medicines of the same name, and even different ingredients altogether. Several producers of Chinese herbal medicines are pursuing FDA clinical trials to market their products as drugs in U.S. and European markets. Chinese herbal extracts Chinese herbal extracts are herbal decoctions that have been condensed into a granular or powdered form. Herbal extracts, similar to patent medicines, are easier and more convenient for patients to take. The industry extraction standard is 5:1, meaning for every five pounds of raw materials, one pound of herbal extract is derived. Categorization There are several different methods to classify traditional Chinese medicinals: The Four Natures () The Five Flavors () The meridians () The specific function. Four Natures The Four Natures are: hot (), warm (), cool (), cold () or neutral (). Hot and warm herbs are used to treat cold diseases, while cool and cold herbs are used to treat hot diseases. Five Flavors The Five Flavors, sometimes also translated as Five Tastes, are: acrid/pungent (), sweet (), bitter (), sour (), and salty (). Substances may also have more than one flavor, or none (i.e., a bland () flavor). Each of the Five Flavors corresponds to one of the zàng organs, which in turn corresponds to one of the Five Phases: A flavor implies certain properties and presumed therapeutic "actions" of a substance: saltiness "drains downward and softens hard masses"; sweetness is "supplementing, harmonizing, and moistening"; pungent substances are thought to induce sweat and act on qi and blood; sourness tends to be astringent () in nature; bitterness "drains heat, purges the bowels, and eliminates dampness". Specific function These categories mainly include: exterior-releasing or exterior-resolving heat-clearing downward-draining or precipitating wind-damp-dispelling dampness-transforming promoting the movement of water and percolating dampness or dampness-percolating interior-warming qi-regulating or qi-rectifying dispersing food accumulation or food-dispersing worm-expelling stopping bleeding or blood-stanching quickening the Blood and dispelling stasis or blood-quickening or blood-moving. transforming phlegm, stopping coughing and calming wheezing or phlegm-transforming and cough- and panting-suppressing Spirit-quieting or Shen-calming. calming the Liver and expelling wind or liver-calming and wind-extinguishing orifice-opening supplementing or tonifying: this includes qi-supplementing, blood-nourishing, yin-enriching, and yang-fortifying. astriction-promoting or securing and astringing vomiting-inducing substances for external application Nomenclature Many herbs earn their names from their unique physical appearance. Examples of such names include (Radix cyathulae seu achyranthis), 'cow's knees,' which has big joints that might look like cow knees; (Fructificatio tremellae fuciformis), 'white wood ear', which is white and resembles an ear; (Rhizoma cibotii), 'dog spine,' which resembles the spine of a dog. Color Color is not only a valuable means of identifying herbs, but in many cases also provides information about the therapeutic attributes of the herb. 
For example, yellow herbs are referred to as (yellow) or (gold). (Cortex Phellodendri) means 'yellow fir," and (Flos Lonicerae) has the label 'golden silver flower." Smell and taste Unique flavors define specific names for some substances. means 'sweet,' so (Radix glycyrrhizae) is 'sweet herb,' an adequate description for the licorice root. means 'bitter', thus (Sophorae flavescentis) translates as 'bitter herb.' Geographic location The locations or provinces in which herbs are grown often figure into herb names. For example, (Radix glehniae) is grown and harvested in northern China, whereas (Radix adenophorae) originated in southern China. And the Chinese words for north and south are respectively and . (Bulbus fritillariae cirrhosae) and (Radix cyathulae) are both found in Sichuan province, as the character indicates in their names. Function Some herbs, like (Radix Saposhnikoviae), literally 'prevent wind,' preventing or treating wind-related illnesses. (Radix Dipsaci), literally 'restore the broken,' treating torn soft tissues and broken bones. Country of origin Many herbs indigenous to other countries have been incorporated into the Chinese materia medica. (Radix panacis quinquefolii), imported from North American crops, translates as 'western ginseng,' while (Radix ginseng Japonica), grown in and imported from North Asian countries, is 'eastern ginseng.' Toxicity From the earliest records regarding the use of medicinals to today, the toxicity of certain substances has been described in all Chinese materia medica. Since TCM has become more popular in the Western world, there are increasing concerns about the potential toxicity of many traditional Chinese medicinals including plants, animal parts and minerals. For most medicinals, efficacy and toxicity testing are based on traditional knowledge rather than laboratory analysis. The toxicity in some cases could be confirmed by modern research (i.e., in scorpion); in some cases it could not (i.e., in Curculigo). Further, ingredients may have different names in different locales or in historical texts, and different preparations may have similar names for the same reason, which can create inconsistencies and confusion in the creation of medicinals, with the possible danger of poisoning. Edzard Ernst "concluded that adverse effects of herbal medicines are an important albeit neglected subject in dermatology, which deserves further systematic investigation." Research suggests that the toxic heavy metals and undeclared drugs found in Chinese herbal medicines might be a serious health issue. Substances known to be potentially dangerous include aconite, secretions from the Asiatic toad, powdered centipede, the Chinese beetle (Mylabris phalerata, Ban mao), and certain fungi. There are health problems associated with Aristolochia. Toxic effects are also frequent with Aconitum. To avoid its toxic adverse effects Xanthium sibiricum must be processed. Hepatotoxicity has been reported with products containing Reynoutria multiflora (synonym Polygonum multiflorum), glycyrrhizin, Senecio and Symphytum. The evidence suggests that hepatotoxic herbs also include Dictamnus dasycarpus, Astragalus membranaceus, and Paeonia lactiflora; although there is no evidence that they cause liver damage. Contrary to popular belief, Ganoderma lucidum mushroom extract, as an adjuvant for cancer immunotherapy, appears to have the potential for toxicity. 
Also, adulteration of some herbal medicine preparations with conventional drugs which may cause serious adverse effects, such as corticosteroids, phenylbutazone, phenytoin, and glibenclamide, has been reported. However, many adverse reactions are due to misuse or abuse of Chinese medicine. For example, the misuse of the dietary supplement Ephedra (containing ephedrine) can lead to adverse events including gastrointestinal problems as well as sudden death from cardiomyopathy. Products adulterated with pharmaceuticals for weight loss or erectile dysfunction are one of the main concerns. Chinese herbal medicine has been a major cause of acute liver failure in China. Most Chinese herbs are safe but some have shown not to be. Reports have shown products being contaminated with drugs, toxins, or false reporting of ingredients. Some herbs used in TCM may also react with drugs, have side effects, or be dangerous to people with certain medical conditions. Efficacy Only a few trials exist that are considered to have adequate methodology by scientific standards. Proof of effectiveness is poorly documented or absent. A 2016 Cochrane review found "insufficient evidence that Chinese Herbal Medicines were any more or less effective than placebo or hormonal therapy" for the relief of menopause related symptoms. A 2012 Cochrane review found no difference in decreased mortality for SARS patients when Chinese herbs were used alongside Western medicine versus Western medicine exclusively. A 2010 Cochrane review found there is not enough robust evidence to support the effectiveness of traditional Chinese medicine herbs to stop the bleeding from haemorrhoids. A 2008 Cochrane review found promising evidence for the use of Chinese herbal medicine in relieving painful menstruation, compared to conventional medicine such as NSAIDs and the oral contraceptive pill, but the findings are of low methodological quality. A 2012 Cochrane review found weak evidence suggesting that some Chinese medicinal herbs have a similar effect at preventing and treating influenza as antiviral medication. Due to the poor quality of these medical studies, there is insufficient evidence to support or dismiss the use of Chinese medicinal herbs for the treatment of influenza. There is a need for larger and higher quality randomized clinical trials to determine how effective Chinese herbal medicine is for treating people with influenza. A 2005 Cochrane review found that although the evidence was weak for the use of any single herb, there was low quality evidence that some Chinese medicinal herbs may be effective for the treatment of acute pancreatitis. Ecological impacts The traditional practice of using now-endangered species is controversial within TCM. Modern Materia Medicas such as Bensky, Clavey and Stoger's comprehensive Chinese herbal text discuss substances derived from endangered species in an appendix, emphasizing alternatives. Parts of endangered species used as TCM drugs include tiger bones and rhinoceros horn. Poachers supply the black market with such substances, and the black market in rhinoceros horn, for example, has reduced the world's rhino population by more than 90 percent over the past 40 years. Concerns have also arisen over the use of turtle plastron and seahorses. TCM recognizes bear bile as a medicinal. In 1988, the Chinese Ministry of Health started controlling bile production, which previously used bears killed before winter. 
Now bears are fitted with a sort of permanent catheter, which is more profitable than killing the bears. More than 12,000 asiatic black bears are held in "bear farms", where they suffer cruel conditions while being held in tiny cages. The catheter leads through a permanent hole in the abdomen directly to the gall bladder, which can cause severe pain. Increased international attention has mostly stopped the use of bile outside of China; gallbladders from butchered cattle () are recommended as a substitute for this ingredient. Collecting American ginseng to assist the Asian traditional medicine trade has made ginseng the most harvested wild plant in North America for the last two centuries, which eventually led to a listing on CITES Appendix II. Chinese medicinal plant materials (CMPMs) release chemicals that attracts the Drugstore beetle, leading to the accumulation of this pest and further infestation and damage to these plants. Herbs in use Chinese herbology is a pseudoscientific practice with potentially unreliable product quality, safety hazards or misleading health advice. There are regulatory bodies, such as China GMP (Good Manufacturing Process) of herbal products. However, there have been notable cases of an absence of quality control during herbal product preparation. There is a lack of high-quality scientific research on herbology practices and product effectiveness for anti-disease activity. In the herbal sources listed below, there is little or no evidence for efficacy or proof of safety across consumer age groups and disease conditions for which they are intended. There are over 300 herbs in common use. Some of the most commonly used herbs are Ginseng (), wolfberry ( (Angelica sinensis, ), astragalus (), atractylodes (), bupleurum (), cinnamon (cinnamon twigs () and cinnamon bark ()), coptis (), ginger (), hoelen (), licorice (), ephedra sinica (), peony (white: and reddish: ), rehmannia (), rhubarb (), and salvia (). 50 fundamental herbs In Chinese herbology, there are 50 "fundamental" herbs, as given in the reference text, although these herbs are not universally recognized as such in other texts. The herbs are: Other Chinese herbs In addition to the above, many other Chinese herbs and other substances are in common use, and these include: Akebia quinata () Arisaema heterophyllum () Chenpi (sun-dried tangerine (mandarin) peel) () Clematis () Concretio silicea bambusae () Cordyceps sinensis () Curcuma () Dalbergia odorifera () Myrrh () Frankincense () Persicaria () Patchouli' () Polygonum () Sparganium () Zedoary (Curcuma zedoaria) () Herbal Formulas Types of Formulas Traditional Chinese herbs are used either standalone, or in a grouping, jointly with other herbs. When several herbs are used together, this amalgamation is called a 'herbal formula'. There are, generally speaking, three types of herbal formulas used in TCM: 1. Classic Formulas - these are formulas which TCM practitioners believe have withstood the test of time over the centuries, and are mentioned in classical texts, such as the Shanghan Lun. 2. Patent Formulas - these are either classic formulas, or newer commonly-used formulas created in recent decades. The patent formulas stand out in that their usage is common enough, that they are frequently mass-produced by large companies, in China, the United States, and elsewhere. 3. Custom-Made Formulas - these formulas are composed by a TCM Practitioner, to match the specific diagnosis and medical condition of a patient. 
These formulas are often partially based on the older, classic formulas. Formula Hierarchy The prescription of TCM formulas is based on a four-tier hierarchy. The four tiers are: Jun (君), Chen (臣), Zuo (佐) and Shi (使). These four tiers are often translated as: Sovereign, Minister, Assistant, Courier; or Monarch, Minister, Assistant, Envoy (also: 'Guide'). This feudal-like hierarchy denotes the power and role of each herb in a given formula. The Jun is the herb which usually has the highest relative dosage and leads the main action of the formula. In the majority of formulas, there is only one Jun (Monarch) herb, although a formula may sometimes feature two or three Jun herbs, or lack a dominant Jun herb altogether. The Chen support the Jun in its actions and provide additional uses for the medical purpose of the formula. The Zuo assist the Jun and Chen but are given at a much lower relative dosage to de-emphasize their influence, for various reasons. The Shi's main role is to guide the formula to the bodily areas or organ systems in which it is meant to act. The Shi are also sometimes used "to harmonize the properties of other herbs in the formula". Most herbs can serve as Jun (Monarch), Chen (Minister) or Zuo (Assistant), the first three tiers in the herbal hierarchy, but only certain herbs are considered fit to serve as Shi, because only some herbs are believed to have the ability to guide other herbs into a given bodily area or organ system. Matching and Contrasting Herbs Within TCM formulas, there are also strict rules about which herbs pair well together (Dui Yao), and which are contradictory, incompatible, or may cause a reaction with each other or with Western pharmaceutical drugs. For example: Gan Cao (licorice) is incompatible with the herbs Yuan hua, Jing Da Ji, Hai Zao and Gan Sui. It may also alter the therapeutic effects of corticosteroids. Notable people Ji Desheng (1898–1981), Chinese herbalist from Nantong. Li Ching-Yuen (died 1933), Chinese herbalist, martial artist and tactical advisor. Aw Chu Kin (died 1908), Burmese Chinese herbalist, inventor of Tiger Balm. Ing Hay (1862–1952), migrated to the United States in 1887 and practiced traditional Chinese medicine in Oregon.
Biology and health sciences
Alternative and traditional medicine
Health
415088
https://en.wikipedia.org/wiki/River%20source
River source
The headwater of a river or stream is the farthest point on each of its tributaries upstream from its mouth or estuary into a lake, sea, or confluence with another river. Each headwater is considered one of the river's sources, as it is the place where surface runoffs from rainwater, meltwater, or spring water begin accumulating into a more substantial and consistent flow that becomes a first-order tributary of that river. The tributary with the longest course downstream of the headwaters is regarded as the main stem. Definition The United States Geological Survey (USGS) states that a river's "length may be considered to be the distance from the mouth to the most distant headwater source (irrespective of stream name), or from the mouth to the headwaters of the stream commonly known as the source stream". As an example of the second definition above, the USGS at times considers the Missouri River as a tributary of the Mississippi River. But it also follows the first definition above (along with virtually all other geographic authorities and publications) in using the combined Missouri—lower Mississippi length figure in lists of lengths of rivers around the world. Most rivers have numerous tributaries and change names often; it is customary to regard the longest tributary or stem as the source, regardless of what name that watercourse may carry on local maps and in local usage. This most commonly identified definition of a river source specifically uses the most distant point (along watercourses from the river mouth) in the drainage basin from which water runs year-round (perennially), or, alternatively, the furthest point from which water could possibly flow ephemerally. The latter definition includes sometimes-dry channels and removes any possible definitions that would have the river source "move around" from month to month depending on precipitation or ground water levels. This definition, from geographer Andrew Johnston of the Smithsonian Institution, is also used by the National Geographic Society when pinpointing the source of rivers such as the Amazon or Nile. A definition given by the state of Montana agrees, stating that a river source is never a confluence but is "in a location that is the farthest, along water miles, from where that river ends." Under this definition, neither a lake (excepting lakes with no inflows) nor a confluence of tributaries can be a true river source, though both often provide the starting point for the portion of a river carrying a single name. For example, National Geographic and virtually every other geographic authority and atlas define the source of the Nile River not as Lake Victoria's outlet where the name "Nile" first appears, which would reduce the Nile's length by over (dropping it to fourth or fifth on the list of world's rivers), but instead use the source of the largest river flowing into the lake, the Kagera River. Likewise, the source of the Amazon River has been determined this way, even though the river changes names numerous times along its course. However, the source of the Thames in England is traditionally reckoned according to the named river Thames rather than its longer tributary, the Churn — although not without contention. When not listing river lengths, however, alternative definitions may be used. 
The Missouri River's source is named by some USGS and other federal and state agency sources, following Lewis and Clark's naming convention, as the confluence of the Madison and Jefferson rivers, rather than the source of its longest tributary (the Jefferson). This contradicts the most common definition, which is, according to a US Army Corps of Engineers official on a USGS site, that "[geographers] generally follow the longest tributary to identify the source of rivers and streams." In the case of the Missouri River, this would have the source be well upstream from Lewis and Clark's confluence, "following the Jefferson River to the Beaverhead River to Red Rock River, then Red Rock Creek to Hell Roaring Creek." Characteristics Sometimes the source of the most remote tributary may be in an area that is more marsh-like, in which the "uppermost" or most remote section of the marsh would be the true source. For example, the source of the River Tees is marshland. The furthest stream is also often called the head stream. Headwaters are often small streams with cool waters because of shade and recently melted ice or snow. They may also be glacial headwaters, waters formed by the melting of glacial ice. Headwater areas are the upstream areas of a watershed, as opposed to the outflow or discharge of a watershed. The river source is often but not always on or quite near the edge of the watershed, or watershed divide. For example, the source of the Colorado River is at the Continental Divide separating the Atlantic Ocean and Pacific Ocean watersheds of North America. Example A river is considered a linear geographic feature, with only one mouth and one source. For example, the Mississippi River and the Missouri River each have an officially defined source and length. Related usages The verb "rise" can be used to express the general region of a river's source, and is often qualified with an adverbial expression of place. For example: The River Thames rises in Gloucestershire. The White Nile rises in the Great Lakes region of central Africa. The word "source", when applied to lakes rather than rivers or streams, refers to the lake's inflow.
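The "farthest point along water miles" definition described above lends itself to a simple computation when a river network is modeled as a tree rooted at the mouth. The following Python sketch is purely illustrative: the Reach class, the function name, and all lengths are invented for this example and are not taken from any hydrological dataset. It walks each tributary and returns the headwater farthest from the mouth along watercourses, mirroring the "follow the longest tributary" rule quoted above.

```python
# Illustrative sketch (not from any hydrology library): finding a river's "source"
# under the most-distant-point definition. The network is modeled as a tree rooted
# at the mouth; each reach lists its upstream tributaries and the channel length
# (in "water miles") separating them.
from dataclasses import dataclass, field

@dataclass
class Reach:
    name: str
    tributaries: list = field(default_factory=list)  # list of (Reach, length_upstream) pairs

def farthest_headwater(mouth: Reach) -> tuple[str, float]:
    """Return the headwater name and its distance from the mouth along watercourses."""
    best_name, best_dist = mouth.name, 0.0
    for upstream, length in mouth.tributaries:
        name, dist = farthest_headwater(upstream)
        if dist + length > best_dist:
            best_name, best_dist = name, dist + length
    return best_name, best_dist

# Toy network loosely echoing the Missouri example in the text; all lengths are made up.
hell_roaring = Reach("Hell Roaring Creek")
red_rock_creek = Reach("Red Rock Creek", [(hell_roaring, 20.0)])
red_rock_river = Reach("Red Rock River", [(red_rock_creek, 60.0)])
beaverhead = Reach("Beaverhead River", [(red_rock_river, 70.0)])
jefferson = Reach("Jefferson River", [(beaverhead, 80.0)])
madison = Reach("Madison River")
missouri = Reach("Missouri River", [(jefferson, 100.0), (madison, 180.0)])

print(farthest_headwater(missouri))  # -> ('Hell Roaring Creek', 330.0)
```

Under these made-up numbers the Jefferson branch is longer overall, so the computed source lies at the head of Hell Roaring Creek rather than at the Madison–Jefferson confluence, which is the distinction the text draws between the two naming conventions.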
Physical sciences
Hydrology
Earth science
415513
https://en.wikipedia.org/wiki/Net%20force
Net force
In mechanics, the net force is the sum of all the forces acting on an object. For example, if two forces are acting upon an object in opposite directions, and one force is greater than the other, the forces can be replaced with a single force that is the difference of the greater and smaller force. That force is the net force. When forces act upon an object, they change its acceleration. The net force is the combined effect of all the forces on the object's acceleration, as described by Newton's second law of motion. When the net force is applied at a specific point on an object, the associated torque can be calculated. The sum of the net force and torque is called the resultant force, which causes the object to rotate in the same way as all the forces acting upon it would if they were applied individually. It is possible for all the forces acting upon an object to produce no torque at all. This happens when the net force is applied along the line of action. In some texts, the terms resultant force and net force are used as if they mean the same thing. This is not always true, especially in complex topics like the motion of spinning objects or situations where everything is perfectly balanced, known as static equilibrium. In these cases, it is important to understand that "net force" and "resultant force" can have distinct meanings. Concept In physics, a force is considered a vector quantity. This means that it not only has a size (or magnitude) but also a direction in which it acts. We typically represent force with the symbol F in boldface, or sometimes, we place an arrow over the symbol to indicate its vector nature, like this: . When we need to visually represent a force, we draw a line segment. This segment starts at a point A, where the force is applied, and ends at another point B. This line not only gives us the direction of the force (from A to B) but also its magnitude: the longer the line, the stronger the force. One of the essential concepts in physics is that forces can be added together, which is the basis of vector addition. This concept has been central to physics since the times of Galileo and Newton, forming the cornerstone of Vector calculus, which came into its own in the late 1800s and early 1900s. The picture to the right shows how to add two forces using the "tip-to-tail" method. This method involves drawing forces , and from the tip of the first force. The resulting force, or "total" force, , is then drawn from the start of the first force (the tail) to the end of the second force (the tip). Grasping this concept is fundamental to understanding how forces interact and combine to influence the motion and equilibrium of objects. When forces are applied to an extended body (a body that's not a single point), they can be applied at different points. Such forces are called 'bound vectors'. It's important to remember that to add these forces together, they need to be considered at the same point. The concept of "net force" comes into play when you look at the total effect of all of these forces on the body. However, the net force alone may not necessarily preserve the motion of the body. This is because, besides the net force, the 'torque' or rotational effect associated with these forces also matters. The net force must be applied at the right point, and with the right associated torque, to replicate the effect of the original forces. 
When the net force and the appropriate torque are applied at a single point, they together constitute what is known as the resultant force. This resultant force-and-torque combination will have the same effect on the body as all the original forces and their associated torques. Parallelogram rule for the addition of forces A force is known as a bound vector—which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a line segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the force vector applied at A is given by F = B − A. The length of this vector defines the magnitude of the force and is given by |F| = √((Bx − Ax)² + (By − Ay)² + (Bz − Az)²). The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them. Let F1 = B − A and F2 = D − A; then the sum of these two vectors is F1 + F2 = (B − A) + (D − A), which can be written as F1 + F2 = 2(E − A), where E is the midpoint of the segment BD that joins the points B and D. Thus, the sum of the forces F1 and F2 is twice the segment joining A to the midpoint E of the segment joining the endpoints B and D of the two forces. The doubling of this length is easily achieved by defining segments BC and DC parallel to AD and AB, respectively, to complete the parallelogram ABCD. The diagonal AC of this parallelogram is the sum of the two force vectors. This is known as the parallelogram rule for the addition of forces. Translation and rotation due to a force Point forces When a force acts on a particle, it is applied to a single point (the particle volume is negligible): this is a point force and the particle is its application point. But an external force on an extended body (object) can be applied to a number of its constituent particles, i.e. can be "spread" over some volume or surface of the body. However, determining its rotational effect on the body requires that we specify its point of application (actually, the line of application, as explained below). The problem is usually resolved in the following ways: Often, the volume or surface on which the force acts is relatively small compared to the size of the body, so that it can be approximated by a point. It is usually not difficult to determine whether the error caused by such an approximation is acceptable. If it is not acceptable (as, for example, in the case of gravitational force), such a "volume/surface" force should be described as a system of forces (components), each acting on a single particle, and then the calculation should be done for each of them separately. Such a calculation is typically simplified by the use of differential elements of the body volume/surface and integral calculus. In a number of cases, though, it can be shown that such a system of forces may be replaced by a single point force without the actual calculation (as in the case of uniform gravitational force). In any case, the analysis of rigid body motion begins with the point force model. And when a force acting on a body is shown graphically, the oriented line segment representing the force is usually drawn so as to "begin" (or "end") at the application point. Rigid bodies In the example shown in the diagram opposite, a single force F acts at the application point H on a free rigid body. The body has the mass m and its center of mass is the point C. In the constant mass approximation, the force causes changes in the body motion described by the following expressions: a = F/m is the center of mass acceleration, and α = τ/I is the angular acceleration of the body. 
In the second expression, τ is the torque or moment of force, whereas I is the moment of inertia of the body. A torque caused by a force F is a vector quantity defined with respect to some reference point: τ = r × F is the torque vector, and its magnitude τ = Fk is the amount of torque, where k is the length of the lever arm. The vector r is the position vector of the force application point, and in this example it is drawn from the center of mass as the reference point (see diagram). The straight line segment k is the lever arm of the force F with respect to the center of mass. As the illustration suggests, the torque does not change (the same lever arm) if the application point is moved along the line of the application of the force (dotted black line). More formally, this follows from the properties of the vector product, and shows that the rotational effect of the force depends only on the position of its line of application, and not on the particular choice of the point of application along that line. The torque vector is perpendicular to the plane defined by the force F and the vector r, and in this example it is directed towards the observer; the angular acceleration vector α has the same direction. The right-hand rule relates this direction to the clockwise or counterclockwise rotation in the plane of the drawing. The moment of inertia I is calculated with respect to the axis through the center of mass that is parallel with the torque. If the body shown in the illustration is a homogeneous disc, this moment of inertia is I = mr²/2. If the disc has the mass 0.5 kg and the radius 0.8 m, the moment of inertia is 0.16 kg·m². If the amount of force is 2 N and the lever arm is 0.6 m, the amount of torque is 1.2 N·m. At the instant shown, the force gives the disc the angular acceleration α = τ/I = 7.5 rad/s², and gives its center of mass the linear acceleration a = F/m = 4 m/s². Resultant force Resultant force and torque replace the effects of a system of forces acting on the movement of a rigid body. An interesting special case is a torque-free resultant, which can be found as follows: Vector addition is used to find the net force; then the equation R × FR = Σ ri × Fi is used to determine the point of application with zero additional torque, where FR is the net force, R locates its application point, and the individual forces are Fi with application points ri. It may be that there is no point of application that yields a torque-free resultant. The diagram opposite illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems: Lines of application of the actual forces F1 and F2 in the leftmost illustration intersect. After vector addition is performed "at the location of F1", the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force FR is equal to the sum of the torques of the actual forces. The illustration in the middle of the diagram shows two parallel actual forces. After vector addition "at the location of F1", the net force is translated to the appropriate line of application, where it becomes the resultant force FR. The procedure is based on decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships. 
The rightmost illustration shows a couple, two equal but opposite forces for which the amount of the net force is zero, but which produce a net torque τ = Fd, where F is the common magnitude of the two forces and d is the distance between their lines of application. Since there is no resultant force, this torque can be described as a "pure" torque. Usage In general, a system of forces acting on a rigid body can always be replaced by one force plus one pure (see previous section) torque. The force is the net force, but to calculate the additional torque, the net force must be assigned a line of action. The line of action can be selected arbitrarily, but the additional pure torque depends on this choice. In a special case, it is possible to find a line of action for which this additional torque is zero. The resultant force and torque can be determined for any configuration of forces. However, an interesting special case is a torque-free resultant. This is useful, both conceptually and practically, because the body then moves without rotating, as if it were a particle. Some authors do not distinguish the resultant force from the net force and use the terms as synonyms.
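As a quick numerical check of the relations used above (I = mr²/2, τ = Fk, α = τ/I, a = F/m) and of the idea of a torque-free resultant, the following Python sketch reproduces the figures from the disc example and then locates the application point of the resultant for a small, made-up planar system of parallel forces. The forces and points in the second part are illustrative values, not taken from the article.

```python
import numpy as np

# Numerical check of the disc example: m = 0.5 kg, r = 0.8 m, F = 2 N, lever arm k = 0.6 m.
m, r, F, k = 0.5, 0.8, 2.0, 0.6
I = 0.5 * m * r**2            # moment of inertia of a homogeneous disc about its centre
tau = F * k                   # amount of torque: force times lever arm
alpha = tau / I               # angular acceleration
a = F / m                     # linear acceleration of the centre of mass
print(I, tau, alpha, a)       # ~0.16 kg*m^2, 1.2 N*m, 7.5 rad/s^2, 4.0 m/s^2

# Torque-free resultant for a made-up planar system of two parallel forces (z = 0 plane).
forces = np.array([[0.0, 3.0], [0.0, 1.0]])   # both forces point along +y
points = np.array([[0.0, 0.0], [2.0, 0.0]])   # their application points on the x-axis
F_net = forces.sum(axis=0)                    # net force, here (0, 4)
# z-component of the total torque about the origin: sum of (x*Fy - y*Fx)
tau_net = np.sum(points[:, 0] * forces[:, 1] - points[:, 1] * forces[:, 0])
x_R = tau_net / F_net[1]                      # application point on the x-axis giving zero extra torque
print(F_net, tau_net, x_R)                    # [0. 4.]  2.0  0.5 -> 4 N applied at x = 0.5 m
```

Applying the 4 N net force at x = 0.5 m reproduces the 2 N·m of torque of the original pair, so no additional pure torque is needed, which is exactly the torque-free resultant described above.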
Physical sciences
Classical mechanics
Physics
415691
https://en.wikipedia.org/wiki/Pith
Pith
Pith, or medulla, is a tissue in the stems of vascular plants. Pith is composed of soft, spongy parenchyma cells, which in some cases can store starch. In eudicotyledons, pith is located in the center of the stem. In monocotyledons, it extends only into roots. The pith is encircled by a ring of xylem; the xylem, in turn, is encircled by a ring of phloem. While new pith growth is usually white or pale in color, as the tissue ages it commonly darkens to a deeper brown color. In trees pith is generally present in young growth, but in the trunk and older branches the pith often gets replaced – in great part – by xylem. In some plants, the pith in the middle of the stem may dry out and disintegrate, resulting in a hollow stem. A few plants, such as walnuts, have distinctive chambered pith with numerous short cavities (see image at middle right). The cells in the peripheral parts of the pith may, in some plants, develop to be different from cells in the rest of the pith. This layer of cells is then called the perimedullary region of the pithamus. An example of this can be observed in Hedera helix, a species of ivy. The term pith is also used to refer to the pale, spongy inner layer of the rind, more properly called mesocarp or albedo, of citrus fruits (such as oranges) and other hesperidia. The word comes from the Old English word piþa, meaning substance, akin to Middle Dutch pitte (modern Dutch pit), meaning the pit of a fruit. Uses Food The pith of the sago palm, although highly toxic to animals in its raw form, is an important human food source in Melanesia and Micronesia by virtue of its starch content and its availability. There is a simple process of starch extraction from sago pith that leaches away a sufficient amount of the toxins and thus only the starch component is consumed. Current processes for starch extraction are generally only about 50% efficient, however, with the other half remaining in residual pith waste. The form of the starch after processing is similar to tapioca. Other foods sometimes mistakenly called piths include heart of palm (actually the core of the bud) and banana piths (actually the rolled up young leaves). Pith helmets The spongy wood of the pith wood plant or other similar species, often mistakenly called pith, was once used to make pith helmets. Watch cleaning Pith wood is a cleaning tool used in watchmaking to clean watch parts and tools. It is used to remove oil from the tips of tools to prevent the contamination of watch movements. A pith wood consists of a piece of pith (such as elder or mullein). Light Dried pith (which is actually the center of the leaf) of certain rush plants soaked in fat or grease, held using a rushlight, was used as home lighting. Beginning in the 17th century, it would continue to be used in this method until the mid-20th century. It saw a brief revival during World War 2.
Biology and health sciences
Plant stem
Biology
415810
https://en.wikipedia.org/wiki/Hector%27s%20dolphin
Hector's dolphin
Hector's dolphin (Cephalorhynchus hectori) is one of four dolphin species belonging to the genus Cephalorhynchus. Hector's dolphin is the only cetacean endemic to New Zealand, and comprises two subspecies: C. h. hectori, the more numerous subspecies, also referred to as South Island Hector's dolphin; and the critically endangered Māui dolphin (C. h. maui), found off the West Coast of the North Island. Etymology Hector's dolphin was named after Sir James Hector (1834–1907), who was the curator of the Colonial Museum in Wellington (now the Museum of New Zealand Te Papa Tongarewa). He examined the first specimen of the dolphin found by cephologists. The species was scientifically described by Belgian zoologist Pierre-Joseph van Beneden in 1881. Māori names for Hector's and Māui dolphin include tutumairekurai, tupoupou and popoto. Description Hector's dolphin is the smallest dolphin species. Mature adults have a total length of and weigh . The species is sexually dimorphic, with females being about 5–7% longer than males. The body shape is stocky, with no discernible beak. The most distinctive feature is the rounded dorsal fin, with a convex trailing edge and undercut rear margin. The overall coloration appearance is pale grey, but closer inspection reveals a complex and elegant combination of colours. The back and sides are predominantly light grey, while the dorsal fin, flippers, and flukes are black. The eyes are surrounded by a black mask, which extends forward to the tip of the rostrum and back to the base of the flipper. A subtly shaded, crescent-shaped black band crosses the head just behind the blowhole. The throat and belly are creamy white, separated by dark-grey bands meeting between the flippers. A white stripe extends from the belly onto each flank below the dorsal fin. At birth, Hector's dolphin calves have a total length of and weigh . Their coloration is the almost same as adults, although the grey has a darker hue. Newborn Hector's dolphins have distinct fetal fold marks on their flanks that cause a change in coloration pattern of the skin. These changes are visible for approximately six months and consist of four to six vertical light grey stripes against darker grey skin. Life history Data from field studies, beachcast individuals, and dolphins caught in fishing nets have provided information on their life history and reproductive parameters. Photo-ID based observations at Banks Peninsula from 1984 to 2006 show that individuals can reach at least 22 years of age. Males attain sexual maturity between 6 and 9 years old and females begin calving between 7 and 9 years old. Females will continue to calve every 2–3 years, resulting in a maximum of 4–7 calves in one female's lifetime. Calving occurs during the spring and summer. Calves are assumed to be weaned at around one year of age, and the mortality rate in the first 6 months was estimated to be around 36%. These combined life-history characteristics mean that, like many other cetaceans, Hector's dolphins are only capable of slow population growth. Their maximum population growth rate was previously estimated to be 1.8–4.9% per year, based on old demographic information, which was then updated to 3–7% per year, based on updated demographic information and a life history invariant observed across all vertebrates Ecology Habitat The species' range includes murky coastal waters out to depth, though almost all sightings are in waters shallower than . 
Hector's dolphins display a seasonal inshore-offshore movement; favouring shallow coastal waters during spring and summer, and moving offshore into deeper waters during autumn and winter. They have also been shown to return to the same location during consecutive summers, displaying high foraging site fidelity. The inshore-offshore movement of Hector's dolphins are thought to relate to seasonal patterns of turbidity and the inshore movements of prey species during spring and summer. Diet Hector's dolphins are generalist feeders, with prey selection based on size (mostly under 10 cm in length) rather than species, although spiny species also appear to be avoided. The largest prey item recovered from a Hector's dolphin stomach was an undigested red cod weighing 500 g with a standard length of 35 cm. The stomach contents of dissected dolphins include a mixture of surface-schooling fish, midwater fish, squid, and a variety of benthic species. The main prey species in terms of mass contribution is red cod, and other important prey include Peltorhamphus flatfish, ahuru, New Zealand sprat, Nototodarus arrow squid, and juvenile giant stargazer. Predators The remains of Hector's dolphins have been found in the stomachs of broadnose sevengill shark (considered to be their main predator), great white shark and blue shark. Unconfirmed predators of Hector's and Māui dolphins include killer whales (orca), mako sharks and bronze whaler shark. Behaviour Group dynamics Hector's dolphins preferentially form groups of less than 5 individuals, with a mean of 3.8 individuals, that are highly segregated by sex. The majority of these small groups are single sex. Groups of greater than 5 individuals are formed much less frequently. These larger groups, >5, are usually mixed sex and have been shown to form only to forage or participate in sexual behaviour. Nursery groups can also be observed and are usually all female groups of less than 7 mothers and young. This species has been found to show a high level of fluidity with weak inter-individual associations, meaning they do not form strong bonds with other individuals. Three types of small preferential groups have been found: nursery groups; immature and subadult groups; and adult male/female groups. All of these small groups show a high level of sex segregation. Hector's dolphins display a sex-age population group composition, meaning they group by biological sex and age. Sexual behaviour Males of the species have extremely large testes in proportion to body size, with the highest relative weight in one study being 2.9% of body weight. Large testes in combination with males' smaller overall body size suggests a promiscuous mating system. This type of reproductive system would involve a male attempting to fertilize as many females as possible and little male-male aggression. The amount of sexual behaviour per individual in the species is observed most when small single sex groups form large mixed sex groups. Sexual behaviour in the species is usually non-aggressive. Echolocation Similar to the hourglass dolphin, Hector's dolphins use high-frequency echolocation clicks. However, the Hector's dolphin produces lower source-level clicks than hourglass dolphins due to their crowded environment. This means they can only spot prey at half the distance compared to an hourglass dolphin. The species has a very simple repertoire with few types of clicks, as well as little audible signals in addition to these. More complex clicks could be observed in large groups. 
Distribution and population size Hector's and Māui dolphins are endemic to the coastal regions of New Zealand. The Hector's dolphin sub-species is most abundant in discontinuous regions of high turbidity around the South Island. They are most abundant off the East Coast and West Coast, most notably around Banks Peninsula, with smaller, more isolated populations off the North Coast and South Coast (notably at Te Waewae Bay). Smaller populations are scattered around the South Island, including: Cook Strait, Kaikōura, Catlins (e.g., Porpoise Bay, Curio Bay), and Otago coasts (e.g.Karitane, Oamaru, Moeraki, Otago Harbour, and Blueskin Bay). Māui dolphin are typically found on the west coast of the North Island between Maunganui Bluff and Whanganui. An aerial survey of South Island Hector's dolphin abundance—which was commissioned by the Ministry for Primary Industries, carried out the Cawthron Institute, and endorsed by the International Whaling Commission—estimated a total population size of 14,849 dolphins (95% confidence interval = 11,923–18,492). This was almost twice the previous, published estimate from earlier surveys (7,300; 95% CI 5,303–9,966). This difference was primarily due to a much larger estimated population along East Coast, which was distributed further offshore than previously thought. The latest estimate of the Māui dolphin subspecies 2020–2021 is 54 individuals aged 1 year or older (1+) (95% confidence interval (CI) = 48–66). Mixing of sub-species Occasionally, South Island Hector's dolphins (determined from genetics) are found around the North Island, up to Bay of Plenty or Hawke's Bay. In 2012, a genetic analysis of tissue samples from dolphins in the core Maui dolphin range, including historical samples, revealed the presence of at least three South Island Hector's dolphins off the West Coast of the North Island (two of them alive), along with another five South Island Hector's dolphins sampled between Wellington and Oakura from 1967 to 2012. Previously, the deep waters of the Cook Strait were considered to be an effective barrier to mixing between the South Island Hector's and North Island Māui sub-species for around 15,000 and 16,000 years. This is coincident with the separation of the North and South Islands of New Zealand at the end of the last ice age. To date, there is no evidence of interbreeding between South Island Hector's dolphin and Māui dolphin, but it is likely they could given their close genetic composition. Threats Fishing Hector's and Māui dolphin deaths occur as a direct result of commercial and recreational fishing due to entanglement or capture in gillnets or trawls. Death is ultimately caused by suffocation, although injury and sub-lethal effects can also result from the mechanical abrasion of fins resulting from entanglement. Since the 1970s, gillnets have been made from lightweight monofilament, which is difficult for dolphins to detect. Hector's dolphins are actively attracted to trawling vessels and can frequently be seen following trawlers and diving down to the net, which could result in the unwanted bycatch. Deaths in fishing nets were previously considered to be the most serious threat (responsible for more than 95% of the human-caused deaths in Māui dolphins), with currently lower level threats including tourism, disease, and marine mining. Research of decreases in mitochondrial DNA diversity among hector's dolphin populations has suggested that the number of gill-net entanglement deaths likely far surpasses that reported by fisheries. 
Population simulations estimated that the current population is 30% of the 1970 population size estimate of 50,000 dolphins, based on their estimated capture rate in commercial gillnet fisheries. The latest government-approved estimates of annual deaths in commercial gillnets (for the period from 2014/15 to 2016/17) was 19–93 South Island Hector's dolphins and 0.0–0.3 Māui dolphins annually. The low estimate for Māui dolphin deaths in gillnets is consistent with the lack of any observed captures in commercial setnets off the West Coast of the North Island since late-2012, despite 100% observer coverage in this fishery across this time period. Annual deaths in commercial trawls were estimated to be 0.2–26.6 Hector's dolphins and 0.00–0.05 Maui dolphins (from 2014/15 to 2016/17). Based on these levels of mortality, the increased abundance of Hector's dolphins and faster population growth potential than previously thought, the commercial fishery threat (alone) would be unlikely to prevent population recovery to at least 80% of unaffected levels, for either Hector's or Māui dolphins. However the threat from commercial fishing was estimated to be higher for some regional populations relative to others, e.g., East Coast South Island, and may have a greater effect on certain smaller populations, e.g., Hector's dolphins along the Kaikoura Coast. Fishing restrictions The first marine protected area (MPA) for Hector's dolphin was designated in 1988 at Banks Peninsula, where commercial gill-netting was effectively prohibited out to offshore and recreational gill-netting was subject to seasonal restrictions. A second MPA was designated on the west coast of the North Island in 2003. Populations continued to decline due to by-catch outside the MPAs. Additional protection was introduced in 2008, banning gill-netting within 4 nautical miles of the majority of the South Island's east and south coasts, out to 2 nautical miles (3.7 km) offshore off the South Island's west coast and extending the gillnet ban on the North Island's west coast to offshore. Also, restrictions were placed on trawling in some of these areas. For further details on these regulations, see the Ministry of Fisheries website. Five marine mammal sanctuaries were designated in 2008 to manage nonfishing-related threats to Hector's and Māui dolphins. Their regulations include restrictions on mining and seismic acoustic surveys. Further restrictions were introduced into Taranaki waters in 2012 and 2013 to protect Māui dolphins. The Banks Peninsula Marine Mammal Sanctuary was expanded in 2020, with restrictions introduced on seismic surveying and seabed mining. The sanctuary stretches from the Jed River south to the Waitaki River, and extends 20 nautical miles out to sea, a total area of about 14,310 km2. The Scientific Committee of the International Whaling Commission has recommended extending protection for Māui dolphin further south to Whanganui and further offshore to 20 nautical miles from the coastline. The IUCN has recommended protecting Hector's and Māui dolphins from gill-net and trawl fisheries, from the shoreline to the 100 m depth contour. Infectious diseases The unicellular parasiteToxoplasma gondii is considered to be the main non-fishery cause of death. A 2013 study found that seven of 28 beachcast or bycaught Hector's and Māui dolphins died as a result of toxoplasmosis, which had necrotising and haemorrhagic lesions in the lung (n = 7), lymph nodes (n = 6), liver (n = 4) and adrenals (n = 3). 
The same study found that approximately two-thirds of dolphins had previously been infected with the toxoplasma parasite. An update to this study found that toxoplasmosis had killed nine out of 38 post-weaning age Hector's and Māui dolphins found washed up or floating at-sea, and that were not too autolised to determine a cause of death. Of these nine, six were reproductive females, tentatively indicating that this demographic may be more susceptible to infection. In New Zealand, the domestic house cat is the only known definitive host for toxoplasma, and Hector's and Maui dolphins are thought to become infected as a result of their preference for turbid coastal waters near river mouths, where toxoplasma oocyst densities are likely to be relatively high. Brucellosis is a notable bacterial disease of Hector's and Māui dolphins that can cause late pregnancy abortion in terrestrial mammals, and has been found in a range of cetacean species elsewhere. Brucellosis has been determined from necropsies to have killed both Hector's and Māui dolphins and to have caused reproductive disease, indicating that it may affect the reproductive success of both sub-species. Loss of genetic diversity and population decline The high levels of sex segregation and fragmentation of different populations in Hector's dolphin have been discussed as contributing to the overall population decline, as it becomes more difficult for males to find a female and copulate. The Allee effect begins to occur when a low-density population has low reproductive rates leading to increased population decline. In addition, low gene flow between populations may result from this species' high foraging site fidelity. Hector's dolphins have not been found to participate in alongshore migrations, which may also contribute to their lack of genetic diversity. Samples from 1870 to today have provided a historical timeline for the species' population decline. Lack of neighboring populations due to fishery-related mortality has decreased gene flow and contributed to an overall loss in mitochondrial DNA diversity. As a result, the populations have become fragmented and isolated, leading to inbreeding. The geographical range has been lessened to the point where gene flow and immigration may no longer be possible between Māui dolphin and Hector's dolphin. Potential interbreeding between Hector's and Māui dolphins could increase the numbers of dolphins in the Māui range and reduce the risk of inbreeding depression, but such interbreeding could eventually result in a hybridisation of the Māui back into the Hector's species and lead to a reclassification of Māui as again the North Island Hector's. Hybridisation in this manner threatens the Otago black stilt and the Chatham Islands' Forbes parakeet and has eliminated the South Island brown teal as a subspecies. Researchers have also identified potential interbreeding as threatening the Māui with hybrid breakdown and outbreeding depression.
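The growth-rate and abundance figures given earlier in this article (a maximum growth rate of roughly 3–7% per year, and a current population put at about 30% of the 1970 estimate of 50,000) can be made concrete with a simple compound-growth projection. The sketch below is purely illustrative: it assumes a constant exponential growth rate, which real dolphin populations subject to bycatch, disease, and density dependence do not follow, and the starting figure is only of the order of the 2016 South Island abundance estimate.

```python
# Illustrative only: project a population forward under a constant annual growth rate.
def project(pop0, annual_rate, years):
    """Compound growth: population size after `years` at a fixed annual rate."""
    return pop0 * (1 + annual_rate) ** years

current = 15_000              # of the order of the 2016 South Island abundance estimate
for rate in (0.03, 0.07):     # the 3-7% per year maximum growth rates cited above
    print(f"{rate:.0%}/yr -> {project(current, rate, 25):,.0f} dolphins after 25 years")
# At 3%/yr a population roughly doubles in ~23 years; at 7%/yr in ~10 years,
# which is why the species is described as capable of only slow recovery.
```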
Biology and health sciences
Toothed whale
Animals
415883
https://en.wikipedia.org/wiki/Hydrogen-alpha
Hydrogen-alpha
Hydrogen-alpha, typically shortened to H-alpha or Hα, is a deep-red visible spectral line of the hydrogen atom with a wavelength of 656.28 nm in air and 656.46 nm in vacuum. It is the first spectral line in the Balmer series and is emitted when an electron falls from a hydrogen atom's third- to second-lowest energy level. H-alpha has applications in astronomy where its emission can be observed from emission nebulae and from features in the Sun's atmosphere, including solar prominences and the chromosphere. Balmer series According to the Bohr model of the atom, electrons exist in quantized energy levels surrounding the atom's nucleus. These energy levels are described by the principal quantum number n = 1, 2, 3, ... . Electrons may only exist in these states, and may only transit between these states. The set of transitions from n ≥ 3 to n = 2 is called the Balmer series and its members are named sequentially by Greek letters: n = 3 to n = 2 is called Balmer-alpha or H-alpha, n = 4 to n = 2 is called Balmer-beta or H-beta, n = 5 to n = 2 is called Balmer-gamma or H-gamma, etc. For the Lyman series the naming convention is: n = 2 to n = 1 is called Lyman-alpha, n = 3 to n = 1 is called Lyman-beta, etc. H-alpha has a wavelength of 656.281 nm, is visible in the red part of the electromagnetic spectrum, and is the easiest way for astronomers to trace the ionized hydrogen content of gas clouds. Since it takes nearly as much energy to excite the hydrogen atom's electron from n = 1 to n = 3 (12.1 eV, via the Rydberg formula) as it does to ionize the hydrogen atom (13.6 eV), ionization is far more probable than excitation to the n = 3 level. After ionization, the electron and proton recombine to form a new hydrogen atom. In the new atom, the electron may begin in any energy level, and subsequently cascades to the ground state (n = 1), emitting photons with each transition. Approximately half the time, this cascade will include the n = 3 to n = 2 transition and the atom will emit H-alpha light. Therefore, the H-alpha line occurs where hydrogen is being ionized. The H-alpha line saturates (self-absorbs) relatively easily because hydrogen is the primary component of nebulae, so while it can indicate the shape and extent of the cloud, it cannot be used to accurately determine the cloud's mass. Instead, molecules such as carbon dioxide, carbon monoxide, formaldehyde, ammonia, or acetonitrile are typically used to determine the mass of a cloud. Filter An H-alpha filter is an optical filter designed to transmit a narrow bandwidth of light generally centred on the H-alpha wavelength. These filters can be dichroic filters manufactured by multiple (~50) vacuum-deposited layers. These layers are selected to produce interference effects that filter out any wavelengths except at the requisite band. Taken in isolation, H-alpha dichroic filters are useful in astrophotography and for reducing the effects of light pollution. They do not have narrow enough bandwidth for observing the Sun's atmosphere. For observing the Sun, a much narrower band filter can be made from three parts: an "energy rejection filter" which is usually a piece of red glass that absorbs most of the unwanted wavelengths, a Fabry–Pérot etalon which transmits several wavelengths including one centred on the H-alpha emission line, and a "blocking filter" -a dichroic filter which transmits the H-alpha line while stopping those other wavelengths that passed through the etalon. 
This combination will pass only a narrow (<0.1 nm) range of wavelengths of light centred on the H-alpha emission line. The physics of the etalon and of the dichroic interference filter is essentially the same (both rely on constructive/destructive interference of light reflecting between surfaces), but the implementation is different (a dichroic interference filter relies on the interference of internal reflections, while the etalon has a relatively large air gap). Due to the high velocities sometimes associated with features visible in H-alpha light (such as fast-moving prominences and ejections), solar H-alpha etalons can often be tuned (by tilting or changing the temperature or air density) to cope with the associated Doppler effect. Commercially available H-alpha filters for amateur solar observing usually state bandwidths in Angstrom units and are typically 0.7 Å (0.07 nm). By using a second etalon, this can be reduced to 0.5 Å, leading to improved contrast in details observed on the Sun's disc. An even narrower band filter can be made using a Lyot filter.
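The wavelengths and energies quoted above follow directly from the Rydberg formula, 1/λ = R_H (1/n₁² − 1/n₂²). The short Python script below is an illustrative check using a rounded value of the hydrogen Rydberg constant; it reproduces the roughly 656.5 nm vacuum wavelength of H-alpha and the roughly 12.1 eV quoted for the n = 1 → 3 excitation.

```python
# Illustrative check of the Balmer-series numbers quoted in the article.
R_H = 1.0967758e7      # Rydberg constant for hydrogen, in 1/m (rounded)
E_ion = 13.6           # hydrogen ionization energy from the ground state, in eV

def balmer_wavelength_nm(n_upper):
    """Vacuum wavelength of the transition n_upper -> 2 (Balmer series)."""
    inv_lambda = R_H * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

for n, name in [(3, "H-alpha"), (4, "H-beta"), (5, "H-gamma")]:
    print(f"{name}: {balmer_wavelength_nm(n):.1f} nm")
# H-alpha: ~656.5 nm (vacuum), H-beta: ~486.3 nm, H-gamma: ~434.2 nm

# Energy needed to excite the ground-state electron to n = 3,
# compared with the 13.6 eV needed to ionize the atom:
E_1_to_3 = E_ion * (1 - 1 / 3**2)
print(f"n = 1 -> 3 excitation: {E_1_to_3:.2f} eV")   # ~12.09 eV
```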
Physical sciences
Atomic physics
Physics
416103
https://en.wikipedia.org/wiki/Helsinki%20Metro
Helsinki Metro
The Helsinki Metro (, ) is a rapid transit system serving the Helsinki capital region, Finland. It is the world's northernmost metro system. It was opened to the general public on 2 August 1982 after 27 years of planning. It is operated by Helsinki City Transport and Metropolitan Area Transport Ltd for Helsinki Regional Transport Authority and carries 92.6 million passengers per year. The system consists of 2 lines, serving a total of 30 stations. It has a total length of . It is the predominant rail link between the suburbs of East Helsinki and the western suburbs in the city of Espoo and downtown Helsinki. The line passes under Helsinki Central Station, allowing passengers to transfer to and from the Helsinki commuter rail network, including trains on the Ring Rail Line to Helsinki Airport. History 1955–67: Light rail plan The initial motion for building a metropolitan railway system in Helsinki was made in September 1955, though during the five decades beforehand, the idea of a tunneled urban railway for Helsinki had surfaced several times. A suburban traffic committee () was formed under the leadership of (1908–1981), and in late 1955, the committee set to work on the issue of whether or not there was truly a need for a tunneled public transport system in Helsinki. After nearly four years of work, the committee presented its findings to the city council. The findings of the committee were clear: Helsinki needed a metro system built on separate right-of-way. This was the first time the term "metro" was used to describe the planned system. At the time the committee did not yet elaborate on what kind of vehicles should be used on the metro: trams, heavier rail vehicles, buses or trolleybuses were all alternatives. The city council's reaction to the committee's presentation was largely apathetic, with several council members stating to the press that they did not understand anything about Castrén's presentation. Despite the lacklustre reception, Castrén's committee was asked to continue its work, now as the metro committee, although very little funding was provided. In spring 1963 the committee presented its proposal for the Helsinki Metro system. On a technical level this proposal was very different from the system that was finally realised. In the 1963 proposal the metro was planned as a light rail system, running in tunnels a maximum of below the surface (compared to in the finalized system), and with stations placed at shorter intervals (for instance, the committee's presentation shows ten stations between Sörnäinen and Ruoholahti, compared to the six in the realized system). The Castrén Committee proposed for the system to be built in five phases, with the first complete by 1969 and the final by 2000, by which time the system would have a total length of with 108 stations. This was rejected after lengthy discussions as too extensive. In 1964 the city commissioned experts from Hamburg, Stockholm and Copenhagen to evaluate the metro proposal. Their opinions were unanimous: a metro was needed and the first sections should be built by 1970. Although no official decision to build a system along the lines proposed by Castrén was ever made, several provisions for a light rail metro system were made during the 1950s–1960s, including separate lanes on the Kulosaari and Naurissaari bridges, and space for a metro station in the 1964 extension of Munkkivuori shopping center. 
The RM 1, HM V and RM 3 trams built for the Helsinki tram system in the late 1950s were also equipped to be usable on the possible light rail metro lines. 1967–69: Heavy rail plan In late 1967, Reino Castrén departed Helsinki for Calcutta, where he had been invited as an expert in public transport. Prior to his departure Castrén indicated he planned to return to Helsinki in six months and continue his work as leader of the metro committee. For the duration of Castrén's absence, (1929–1989) was appointed as the leader of the committee. However, by the time Castrén returned, Valtanen's position had been made permanent. Following his appointment Valtanen informed the other members of the committee that the plans made under Castrén's leadership were outdated, and now the metro would be planned as a heavy rail system in deep tunnels mined into bedrock. Following two more years of planning, the Valtanen-led committee's proposal for an initial metro line from Kamppi to Puotila in the east of the city was approved after hours of debate in the city council on the early morning hours of 8 May 1969. The initial section was to be opened for service in 1977. 1969–82: Construction Construction of a testing track from the depot in Roihupelto to Herttoniemi was begun in 1969 and finished in 1971. The first prototype train, units M1 and M2, arrived from the Valmet factory in Tampere on 10 November 1971, with further four units (M3–M6) arriving the following year. Car M1 burned in the metro depot in 1973. Excavating the metro tunnels under central Helsinki had begun in June 1971. Most of the tunneling work had been completed by 1976, excluding the Kluuvi bruise (), a wedge of clay and pieces of rock in the bedrock, discovered during the excavation process. To build a tunnel through the bruise an unusual solution was developed: the bruise was turned into a giant freezer, with pipes filled with Freon 22 pushed through the clay. The frozen clay was then carefully blasted away, with cast iron tubes installed to create a durable tunnel. Construction of the first stations, Kulosaari and Hakaniemi begun in 1974. The Kulosaari station was the first to be completed, in 1976, but construction of the other stations took longer. As the case with many underground structures in Helsinki, the underground metro stations were designed to also serve as bomb shelters. In summer 1976, Teuvo Aura, the mayor of Helsinki, signed an agreement with Valmet and Strömberg to purchase the trains required for the metro from them. In doing so Aura bypassed the city council completely, reportedly because he feared the council would decide to buy the rolling stock from manufacturers in the Soviet Union instead. By this time the direct current–based technology of the M1 series trains had become outdated. In 1977 prototypes for the M100 train series (referred to as "nokkajuna", , to differentiate from the M1 prototypes) were delivered. In these units the direct current from the power rail was converted to alternating current powering induction motors. The M100 trains were the first metro trains in the world to be equipped with such technology. Aura's bypassing the city council in acquiring the rolling stock was not the only questionable part of the construction process of the Metro. On 3 June 1982, two days after the Metro had been opened for provisional traffic, Unto Valtanen came under investigation for taking bribes. 
Subsequently, several members of the metro committee and Helsinki municipal executive committee in addition to Valtanen were charged with taking bribes. In the end it was found that charges against all the accused except Valtanen had expired. Valtanen was convicted for having taken bribes from Siemens. 1982 onwards: In service On 1 June 1982, the test drives were opened to the general public. Trains ran with passengers during the morning and afternoon rush hours between Itäkeskus and Hakaniemi (the Sörnäinen station was not yet opened at this time). On 1 July the provisional service was extended to Rautatientori. President of the Republic of Finland Mauno Koivisto officially opened the Metro for traffic on 2 August 1982 – 27 years after the initial motion to the city assembly had been made. The Metro did not immediately win the approval from inhabitants of eastern Helsinki, whose direct bus links to the city centre had now been turned into feeder lines for the Metro. Within six months of the Metro's official opening, a petition signed by 11,000 people demanded the restoration of direct bus links. Subsequently, the timetables of the feeder services were adjusted and opposition to the Metro mostly died down. On 1 March 1983, the Metro was extended in the west to Kamppi. The Sörnäinen station, between Hakaniemi and Kulosaari, was opened on 1 September 1984. The Metro was extended eastwards in the late 1980s, with the Kontula and Myllypuro stations opened in 1986, and the Mellunmäki station following in 1989. The construction of a westwards expansion begun in 1987 with tunneling works from Kamppi towards Ruoholahti. The Ruoholahti metro station was opened on 16 August 1993. Another new station followed: the Kaisaniemi station, between Rautatientori and Hakaniemi, was opened on 1 March 1995. Its construction had, in fact, been decided on in 1971, and the station cavern had been carved out of the rock during the original tunneling works, but a lack of funds had pushed back the station's completion. On 31 August 1998, after four years of construction, the final section of the original plan was completed, with the opening of a three-station fork from Itäkeskus to Vuosaari. The second generation of Metro trains to be used in passenger service (the M200s) were delivered in 2000 and 2001 by Bombardier. These trains are based on Deutsche Bahn's Class 481 EMUs used on the Berlin S-Bahn network. On 25 September 2006, the city council of Espoo approved, after decades of debate, planning, and controversy, the construction of a western extension of the Metro. Metro trains began to run to Matinkylä in late 2017. (See section The future below.) On 1 January 2007, Kalasatama station, between the Sörnäinen and Kulosaari stations, was opened. It serves the new "Sörnäistenranta-Hermanninranta" (Eastern Harbour) area, a former port facility redeveloped as its functions were relocated to the new Port of Vuosaari in the east of the city. On 8 November 2009, the Rautatientori station, under the Central Railway Station, was closed due to flooding caused by a burst water main. After renovations, the station reopened for public use on 15 February 2010. The lifts were fully replaced; the new ones opened on 21 June 2010. On 23 August 2019, heavy rain caused the Rautatientori station to close once again due to flooding. The station reopened in a matter of days, but the lifts again took many months to fix, finally reopening on 17 March 2020. 
2006 onwards: The western extension The construction of the Western extension from Ruoholahti to Matinkylä in Espoo was approved by the Espoo city council in 2006. Construction began in 2009 and the extension was opened on 18 November 2017. This first stage of the extension was long, with eight new stations, two in Helsinki and six in Espoo and was built entirely in a tunnel excavated in bedrock. After first stage of the Western extension opened, the bus lines in Southern Espoo were reconfigured as feeder lines to either Matinkylä or Tapiola metro stations instead of terminating at Kamppi in the centre of Helsinki. After much outcry, four new peak-time lines began running into Kamppi on 22 August 2018. Before the extension of the metro, trains could be a maximum length of three units (each unit being two cars) but the new stations west of Ruoholahti were built shorter than the existing stations because it was originally planned to introduce driverless operation. The driverless project was cancelled in 2015, but the shorter new stations mean that the maximum train length is reduced to two units, shorter than on the original sections of the metro. To increase capacity, the automatic train protection system theoretically permits headway as short as 90 seconds, if required in the future. The decision to fund the construction of the second stage, from Matinkylä to Kivenlahti, was taken by the Espoo city council and the state of Finland in 2014. Construction began in late 2014. This stage of extension is long and includes five new stations and a new depot in Sammalvuori. All of the track, including the depot, was built in tunnels. The line opened for passenger traffic on the 3rd of December 2022. As with the first phase to Matinkylä, the feeder lines that used ro run to Matinkylä bus terminal were changed to run to Espoonlahti bus terminal in Lippulaiva shopping centre. Also in common with the first phase, many people were unhappy with the reorganisation of bus lines. Those living in Kivenlahti and Saunalahti, especially, were annoyed at direct bus lines into Kamppi, taking 25-30 minutes, being replaced with feeder lines to Espoonlahti, a transfer to the metro and a half-hour metro ride into the city centre. Network The Helsinki metro system consists of 30 stations. The stations are located along a Y shape, where the main part runs from the Matinkylä through the center of the city towards the eastern suburbs. The line forks at the Itäkeskus metro station. 22 of the network's stations are located below ground; all eight of those stations located above ground are in Helsinki. Trains are generally operated as Kivenlahti–Vuosaari or Tapiola–Mellunmäki with some services running Kivenlahti–Mellunmäki in the early mornings and evenings. The rush-hour frequency of 24tph in the central section between Tapiola and Itäkeskus was reduced to 20tph from August 2022, due to a lack of drivers and rolling stock. All services stop at every station, and the names of the stations are announced in both Finnish and Swedish (with the exceptions of Central Railway Station, University of Helsinki and Aalto University, which are also announced in English). The metro is designed as a core transport route, which means that extensive feeder bus transport links are provided between the stations and the surrounding districts. Taking a feeder bus to the metro is often the only option to get to the city centre from some districts. 
For example, since the construction of the metro, all daytime bus routes from the islands of Laajasalo terminate at the Herttoniemi metro station with no through routes from Laajasalo to the centre of Helsinki. Lines The Helsinki Metro is operated as two lines called M1 and M2, although these designations are not universally applied. List of stations (), below surface (), below surface (), below surface (), below surface (), below surface (), below surface (), below surface (), below surface (), below surface ( / ), below surface (), below surface (), below sea (), below surface (), below surface (), below surface ( / ), below surface University of Helsinki ( / ), formerly (), below surface (), below surface (), below surface (), above surface (), above surface (), below surface (), above surface (), below surface (), above surface (), above surface (), above surface (), below surface (), above surface (), above surface Accessibility Some stations are located above ground level, making the metro system more friendly to passengers with mobility problems. Sub-surface stations have no stairs from the ticket hall to the platform, and one can access them from the street level via escalators or lifts. The trains themselves have no steps, and the floors of the trains are level with the platforms, with the gap between the two being just a couple of centimetres. Ticketing The ticketing scheme on the Metro is consistent with other forms of transport inside the city of Helsinki, managed by the Helsinki Regional Transport Authority (HSL) agency. The HSL travel card (matkakortti) is the most commonly used ticket, which can be paid either per journey or for a period of two weeks to one year. The metro stations between Koivusaari and Kulosaari lie within zone A. The stations between Keilaniemi and Matinkylä and from Herttoniemi to Mellunmäki or Vuosaari lie within the zone B, and from Finnoo to Kivenlahti in zone C, so an ABC ticket covers the entire system. Single tickets can be bought from ticket machines at the stations (except for the stations between Finnoo and Kivenlahti, which have no ticket machines) or via the HSL mobile app. A single ticket can be used to change to any other form of transport inside the HSL area with the validity time based on the number of zones purchased. There are no gates to the platforms; a proof-of-payment system is used instead. Safety Passenger safety instructions are inside train carriages above the doors and stations at ticket hall and platforms. These instructions direct passengers to use emergency phones and also include an emergency phone number to traffic center. Emergency stop handles at platforms discharge traction current and set nearby signals to danger. There are emergency brake handles inside the carriage next to the door. Especially for people with visual impairments, all platforms have a yellow line marking the safe area on platform. Additionally, there are fire extinguishers on trains and in stations. Rolling stock The 750V DC current is drawn from a bottom-contact third rail alongside the running rails. Since the opening of the Länsimetro extension, trains are always formed with 4 carriages. There are three different types of rolling stock in service on the system as of . The first trains adopted on the system consisted of the M100 series that was built by Strömberg in the late 1970s to the early 1980s. The newer M200 series was built by Bombardier and has been in service since 2000; each set is composed of two cars connected by an open gangway. 
The latest version, the M300 series, entered service in 2016, built by CAF. A further 5 M300 units were built in 2022 for the extension to Kivenlahti. Unlike the first two series, the M300 trains operate as 4-car sets with open gangways and were designed to run without drivers, though since the cancellation of the automation project, they retain their temporary cabs. Line speed of the system is inside the tunnels and on the open portion of the network. Points have a maximum speed of , with some sets near termini having a maximum speed of . Technically the M200 and M100 series have a maximum speed of and , respectively, but they are electrically limited to . Depots and facilities The original maintenance and storage depot for the metro system is at , between the stations of Siilitie and Itäkeskus. The depot is connected to the metro line from both directions, with a third, central, platform at Itäkeskus used for empty services and during times of disruption. Both warm and cold storage is provided at the depot, to avoid having to pre-heat trains before service in the cold winters. Behind the Roihupelto depot is the metro test track, allowing testing at speeds of up to ; the far end of this test-track was until 2012 connected via the non-electrified long and then to the VR main line at Oulunkylä railway station. Both the metro and railways share interoperable gauges. The old access line was mostly along the first two-thirds of the old Herttoniemi harbour railway. Through the area of Viikki, this single line had street running since 2002. In 2012 the old depot link was closed and partially removed when a new metro link line was built from the then present end at Vuosaari metro station, to the electrified long in the new Vuosaari harbour. From 2019 the route of the old link line was redeveloped to form part of the Jokeri light rail line which was opened on 21 October 2023. The new underground located between Kivenlahti and Espoonlahti stations, opened along with the second stage of Länsimetro on 3 December 2022. Future Eastern extension In 2018, a new zoning plan for the Östersundom area east of Helsinki, was confirmed. New homes are due to be built on the condition that the metro is extended eastwards to serve this area. The eastward extension of the metro has been named Itämetro (English: Eastern Metro, Swedish: Östmetron) as a counterpart to the western extension. The current plan is for the line to continue from Mellunmäki, briefly cross into Vantaa through Länsisalmi and then back into Helsinki through Itäsalmi, before continuing onwards over the municipal border to Majvik in Sipoo. Construction of the metro line is tentatively slated to begin in the 2030s at the earliest. Proposals also exist for the line to be extended even further east into central Sipoo, possibly as far as to Sibbesborg, to an envisioned new city centre there. Other A second Metro line from Laajasalo via Kamppi to Pasila north of the city centre, and possibly onwards to Helsinki-Vantaa Airport, is also in the planning stages. This is being taken into consideration in city plans and has been discussed by the city assembly, but does not look likely to be seriously planned before the mid-2030s at the earliest. To prepare for this eventuality, a platform level for a crossing line was already excavated during the original construction of the Kamppi station. The Ring Rail Line, which connects the airport to the rail network, began service in 2015. 
The current plans commissioned by the city recommend the extension of the tram network, instead of the metro, to Laajasalo. Thus construction of a second metro line along the Laajasalo–Kamppi–Airport route appears unlikely. On 17 May 2006 the Helsinki city council decided that the current, manually driven metro trains would be replaced by automatic ones, operated without drivers. The project was cancelled in 2015, but the western extension had been planned with driverless operation in mind and its stations were built shorter than the existing ones, which meant that the maximum train length for the whole system had to be reduced when the western extension opened in 2017. The system is planned to be automated eventually, as the old M100 trains are approaching the end of their effective service lifespan. There is a plan to extend the Vuosaari section of the line to the new Vuosaari harbour (see the Depots and facilities section above). A new station is being planned in Roihupelto, between Siilitie and Itäkeskus, to serve a possible future suburb. Unused stations In addition to the metro stations already in operation, forward-looking design has led to a number of extra facilities being constructed in case they are needed in the future. Kamppi The current metro station lies in an east-west direction, but there is a second metro station beneath it that was excavated during the original construction in 1981. This second station is perpendicular (north-south) to the first one and has platforms in length, slightly shorter than those above. Tunnels designed to eventually connect the two sets of lines curve off from the western end of Kamppi.
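The zone structure described in the Ticketing section above lends itself to a simple lookup. The sketch below is illustrative only: the station-to-zone mapping covers just the stations named in that section, the function name is an arbitrary choice, and real HSL fare rules (including the minimum of two adjacent zones per ticket) should be taken from official sources.

```python
# Minimal sketch of HSL-style zone selection for a metro journey.
# Zone assignments reflect only the ranges quoted above; real HSL data
# is more detailed and may differ.
STATION_ZONE = {
    "Koivusaari": "A", "Kulosaari": "A",                      # central section
    "Keilaniemi": "B", "Matinkylä": "B",                      # western section
    "Herttoniemi": "B", "Mellunmäki": "B", "Vuosaari": "B",   # eastern branches
    "Finnoo": "C", "Kivenlahti": "C",                         # westernmost section
}

ZONE_ORDER = "ABCD"  # HSL zones form contiguous rings around the city centre


def zones_for_journey(origin: str, destination: str) -> str:
    """Return the contiguous span of zones between the two endpoint stations."""
    i, j = sorted(ZONE_ORDER.index(STATION_ZONE[s]) for s in (origin, destination))
    return ZONE_ORDER[i:j + 1]


# A journey from Kivenlahti (zone C) to Kulosaari (zone A) spans A, B and C,
# which is why an ABC ticket covers the entire metro line.
print(zones_for_journey("Kivenlahti", "Kulosaari"))  # -> "ABC"
```

Note that this endpoint-based lookup is only a sketch: a journey between the two zone B sections of the line also passes through zone A in central Helsinki, so a real fare calculation would consider the zones traversed along the route, not just the endpoints.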
Technology
Scandinavia
null
416246
https://en.wikipedia.org/wiki/Loratadine
Loratadine
Loratadine, sold under the brand name Claritin among others, is a medication used to treat allergies. This includes allergic rhinitis (hay fever) and hives. It is also available in drug combinations such as loratadine/pseudoephedrine, in which it is combined with pseudoephedrine, a nasal decongestant. It is taken orally. Common side effects include sleepiness, dry mouth, and headache. Serious side effects are rare and include allergic reactions, seizures, and liver problems. Use during pregnancy appears to be safe but has not been well studied. It is not recommended in children less than two years old. It is in the second-generation antihistamine family of medications. Loratadine was patented in 1980 and came to market in 1988. It is on the World Health Organization's List of Essential Medicines. Loratadine is available as a generic medication. In the United States, it is available over the counter. In 2022, it was the 72nd most commonly prescribed medication in the United States, with more than 9million prescriptions. In 2022, the combination with pseudoephedrine was the 289th most commonly prescribed medication in the United States, with more than 500,000 prescriptions. Medical uses Loratadine is indicated for the symptomatic relief of allergies such as hay fever (allergic rhinitis), urticaria (hives), chronic idiopathic urticaria, and other skin allergies. For allergic rhinitis, loratadine is indicated for both nasal and eye symptoms including sneezing, runny nose, and itchy or burning eyes. Similarly to cetirizine, loratadine attenuates the itching associated with Kimura's disease. Combination drugs Loratadine/pseudoephedrine is a fixed dose combination of the drug with pseudoephedrine, a nasal decongestant. Dosage forms The medication is available in many different forms, including tablets, oral suspension, and syrups. Also available are quick-dissolving tablets. Contraindications Loratadine is usually compatible with breastfeeding (classified category L-2 - probably compatible, by the American Academy of Pediatrics). In the U.S., it is classified as category B in pregnancy, meaning animal reproduction studies have failed to demonstrate a risk to the fetus, but no adequate and well-controlled studies in pregnant women have been conducted. Adverse effects As a "non-sedating" antihistamine, loratadine causes less (but still significant, in some cases) sedation and psychomotor retardation than the older antihistamines, because it penetrates the blood/brain barrier less. Headache is also a possible side effect. Unlike earlier-generation antihistamines, loratadine is considered largely free of antimuscarinic effects (urinary retention, dry mouth, blurred vision). Interactions Substances that act as inhibitors of the CYP3A4 enzyme such as ketoconazole, erythromycin, cimetidine, and furanocoumarin derivatives (found in grapefruit) lead to increased plasma levels of loratadine — that is, more of the drug was present in the bloodstream than typical for a dose. This had clinically significant effects in controlled trials of 10 mg loratadine treatment. Antihistamines should be discontinued 48 hours before skin allergy tests, since these drugs may prevent or diminish otherwise positive reactions to dermal activity indicators. Pharmacology Pharmacodynamics Loratadine is a tricyclic antihistamine, which acts as a selective inverse agonist of peripheral histamine H1 receptors. 
The potency of second generation histamine antagonists is (from strongest to weakest) desloratadine (Ki 0.4 nM) > levocetirizine (Ki 3 nM) > cetirizine (Ki 6 nM) > fexofenadine (Ki 10 nM) > terfenadine > loratadine. However, the onset of action varies significantly and clinical efficacy is not always directly related to only the H1 receptor potency, as the concentration of free drug at the receptor must also be considered. Loratadine also shows anti-inflammatory properties independent of H1 receptors. The effect is exhibited through suppression of the NF-κB pathway, and by regulating the release of cytokines and chemokines, thereby regulating the recruitment of inflammatory cells. Pharmacokinetics Loratadine is given orally, is well absorbed from the gastrointestinal tract, and has rapid first-pass hepatic metabolism; it is metabolized by isoenzymes of the cytochrome P450 system, including CYP3A4, CYP2D6, and, to a lesser extent, several others. Loratadine is almost totally (97–99%) bound to plasma proteins. Its metabolite desloratadine, which is largely responsible for the antihistaminergic effects, binds to plasma proteins by 73–76%. Loratadine's peak effect occurs after 1–2 hours, and its biological half life is on average eight hours (range 3 to 20 hours) with desloratadine's half-life being 27 hours (range 9 to 92 hours), accounting for its long-lasting effect. About 40% is excreted as conjugated metabolites into the urine, and a similar amount is excreted into the feces. Traces of unmetabolised loratadine can be found in the urine. In structure, it is closely related to tricyclic antidepressants, such as imipramine, and is distantly related to the atypical antipsychotic quetiapine. History Schering-Plough developed loratadine as part of a quest for a potential blockbuster drug: a nonsedating antihistamine. By the time Schering submitted the drug to the U.S. Food and Drug Administration (FDA) for approval, the agency had already approved a competitor's nonsedating antihistamine, terfenadine (trade name Seldane), and, therefore, put loratadine on a lower priority. However, terfenadine had to be removed from the U.S. market by the manufacturer in late 1997 after reports of serious ventricular arrhythmias among those taking the drug. Loratadine was approved by the FDA in 1993. The drug continued to be available only by prescription in the U.S. until it went off patent in 2002. It was then subsequently approved for over-the-counter sales. Once it became an unpatented over-the-counter drug, the price dropped significantly. Schering also developed desloratadine (Clarinex/Aerius), which is an active metabolite of loratadine. Society and culture Over the counter In 1998, in an unprecedented action in the United States, an American insurance company, Anthem Inc., petitioned the federal Food and Drug Administration to allow loratadine and two other antihistamines to be made available over the counter (OTC) while they were still protected by patents; the administration granted the request, which was not binding on manufacturers. In the United States, Schering-Plough made loratadine available over the counter in 2002. By 2015, loratadine was available over the counter in many countries. Brands In 2017, loratadine was available under many brand names and in many forms worldwide, including several combination drug formulations with pseudoephedrine, paracetamol, betamethasone, ambroxol, salbutamol, phenylephrine, and dexamethasone. 
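The long duration of action noted in the pharmacokinetics discussion above is a consequence of first-order elimination: after each half-life, half of the remaining drug is cleared. The following is a minimal sketch of that arithmetic, using the average half-lives quoted above (about 8 hours for loratadine and 27 hours for desloratadine); it is an illustration of the formula only, not dosing guidance.

```python
# Fraction of a single dose remaining after t hours of first-order elimination:
#   C(t) / C0 = 0.5 ** (t / t_half)

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    return 0.5 ** (t_hours / half_life_hours)

# 24 hours after a dose, using the average half-lives cited above:
loratadine = fraction_remaining(24, 8)       # 0.5**3 = 0.125 (about 12% remains)
desloratadine = fraction_remaining(24, 27)   # roughly 0.54 (over half remains)

print(f"loratadine: {loratadine:.3f}, desloratadine: {desloratadine:.3f}")
```

The persistence of the active metabolite desloratadine across a full day is what makes the parent drug's effect long-lasting despite loratadine's own comparatively short half-life.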
Marketing The marketing of the Claritin brand is important in the history of direct-to-consumer advertising of drugs. The first television commercial for a prescription drug was broadcast in the United States in 1983, by Boots. It caused controversy. The federal Food and Drug Administration responded with strong regulations requiring disclosure of side effects and other information. These rules made pharmaceutical manufacturers balk at spending money on ads that had to highlight negative aspects. In the mid-1990s, the marketing team for Claritin at Schering-Plough found a way around these rules. They created brand awareness commercials that never actually said what the drug was for, but instead showed sunny images, and the voiceover said such things as "At last, a clear day is here" and "It's time for Claritin" and repeatedly told viewers "Ask your doctor [about Claritin]." The first ads made people aware of the brand and increased prescriptions, which led Schering-Plough and others to aggressively pursue the advertising strategy. In 1998, a 12-page one-shot comic based on the Batman: The Animated Series was given away to advertise Claritin. The book, written by PRIEST, penciled by Joe Staton, and inked by Mike DeCarlo, sees Tim Drake unable to perform his crime-fighting duties because hay fever and antihistamines make him drowsy. After being given a prescription for Claritin, he saved Batman from Poison Ivy. This trend, along with advice from the Food and Drug Administration's attorneys that it could not win a First Amendment case on the issue, prompted the administration to issue new rules for television commercials in 1997. Instead of including the "brief summary" that took up a full page in magazine ads and would take too long to explain in a short television advertisement, drug makers were allowed to refer viewers to print ads, informative telephone lines, and websites, and to urge people to talk to their doctors if they wanted additional information. Schering-Plough invested million in Claritin direct-to-consumer advertising in 1998 and 1999, far more than any other brand. Spending on direct-to-consumer advertising by the pharmaceutical industry rose from million in 1995 to billion in 1998, and by 2006, was billion.
Biology and health sciences
Antihistamines
Health
416369
https://en.wikipedia.org/wiki/Picea%20mariana
Picea mariana
Picea mariana, the black spruce, is a North American species of spruce tree in the pine family. It is widespread across Canada, found in all 10 provinces and all 3 territories. It is the official tree of Newfoundland and Labrador and is that province's most abundant tree. Its range extends into northern parts of the United States: in Alaska, the Great Lakes region, and the upper Northeast. It is a frequent part of the biome known as taiga or boreal forest. The Latin specific epithet mariana means "of the Virgin Mary". Description P. mariana is a slow-growing, small upright evergreen coniferous tree (rarely a shrub), having a straight trunk with little taper, a scruffy habit, and a narrow, pointed crown of short, compact, drooping branches with upturned tips. Through much of its range it averages tall with a trunk diameter at maturity, though occasional specimens can reach tall and diameter. The bark is thin, scaly, and greyish brown. The leaves are needle-like, long, stiff, four-sided, dark bluish green on the upper sides, paler glaucous green below. The cones are the smallest of all of the spruces, long and broad, spindle-shaped to nearly round, dark purple ripening red-brown, produced in dense clusters in the upper crown, opening at maturity but persisting for several years. Natural hybridization occurs regularly with the closely related P. rubens (red spruce) and very rarely with P. glauca (white spruce). It differs from P. glauca in having a dense cover of small hairs on the bark of young branch tips, an often darker reddish-brown bark, shorter needles, smaller and rounder cones, and a preference for wetter lowland areas. Numerous differences in details of its needle and pollen morphology also exist but require careful microscopic examination to detect. From true firs, such as Abies balsamea (balsam fir), it differs in having pendulous cones, persistent woody leaf-bases, and four-angled needles, arranged all round the shoots. Due to the large difference between heartwood and sapwood moisture content, it is easy to distinguish these two wood characteristics in ultrasound images, which are widely used as a nondestructive technique to assess the internal condition of the tree and avoid useless log breakdown. Older taxonomic synonyms include A. mariana, P. brevifolia, or P. nigra. Ecology Growth varies with site quality. In swamp and muskeg it shows progressively slower growth rates from the edges toward the centre. The roots are shallow and wide spreading, resulting in susceptibility to windthrow. In the northern part of its range, ice pruned asymmetric black spruce are often seen with diminished foliage on the windward side. Tilted trees colloquially called "drunken trees" are associated with thawing of permafrost. In the southern portion of its range it is found primarily on wet organic soils, but farther north its abundance on uplands increases. In the Great Lakes region it is most abundant in peat bogs and swamps, also on transitional sites between peatlands and uplands. In these areas it is rare on uplands, except in isolated areas of northern Minnesota and the Upper Peninsula of Michigan. Most stands are even-aged due to frequent fire intervals in black spruce forests. It commonly grows in pure stands on organic soils and in mixed stands on mineral soils. It is tolerant of nutrient-poor soils and is commonly found on poorly drained acidic peatlands. 
It is considered a climax species over most of its range; however, some ecologists question whether black spruce forests truly attain climax because fires usually occur at 50 to 150 year intervals, while "stable" conditions may not be attained for several hundred years. The frequent fire return interval, a natural fire ecology, perpetuates numerous successional communities. Throughout boreal North America, Betula papyrifera (paper birch) and Populus tremuloides (quaking aspen) are successional hardwoods that frequently invade burns in black spruce. Black spruce typically seeds in promptly after fire and with the continued absence of fire eventually dominates the hardwoods. Black spruce is a pioneer that invades the sphagnum mat in filled-lake bogs, though often preceded slightly by Larix laricina (tamarack). Black spruce frequently out-competes shade-intolerant tamarack in the course of bog succession. However, as the peat soil is gradually elevated by the accumulation of organic matter and the fertility of the site improves, balsam fir and northern white cedar (Thuja occidentalis) eventually replace black spruce and tamarack. On drier sites following fires, black spruce can take over stands of faster growing jack pine (Pinus banksiana) by virtue of its ability to grow in partially shaded conditions which inhibit pine seedlings. But black spruce seedlings are intolerant to the low light and low moisture conditions under mature spruce stands. Balsam fir and northern white cedar, both more understory-tolerant species with deeper taproots, survive and eventually succeed the spruce in the absence of fire. The spruce budworm, a moth larva, causes defoliation which kills trees if it occurs several years in a row, though black spruce is less susceptible than white spruce or balsam fir. Trees most at risk are those growing along with balsam fir and white spruce. Cultivation Numerous cultivars have been selected for use in parks and gardens. The cultivar P. mariana 'Nana' is a dwarf form which has gained the Royal Horticultural Society's Award of Garden Merit. Picea mariana is known to hybridize with Serbian spruce, Picea omorika. The hybrid is Picea machala, and hybrids with Sitka spruce are known as well. Uses and symbolism Black spruce is the provincial tree of Newfoundland and Labrador. The timber is of low value due to the small size of the trees, but it is an important source of pulpwood and the primary source of it in Canada. Fast-food chopsticks are often made from black spruce. It is increasingly being used for making cross laminated timber by companies such as Nordic Structures, which allows the high strength due to the tight growth rings to be assembled into larger timbers. Along with red spruce, it has also been used to make spruce gum and spruce beer. Gallery
Biology and health sciences
Pinaceae
Plants
416499
https://en.wikipedia.org/wiki/Mass%20Rapid%20Transit%20%28Singapore%29
Mass Rapid Transit (Singapore)
The Mass Rapid Transit system, locally known by the initialism MRT, is a rapid transit system in Singapore and the island country's principal mode of railway transportation. After two decades of planning the system commenced operations in November 1987 with an initial stretch consisting of five stations. The network has since grown to span the length and breadth of the country's main island – with the exception of the forested core and the rural northwestern region – in accordance with Singapore's aim of developing a comprehensive rail network as the backbone of the country's public transportation system, averaging a daily ridership of 3.45 million in 2023. The MRT network encompasses approximately of grade-separated route on standard gauge. As of 2024, there are currently 142 operational stations dispersed across six operational lines arrayed in a circle-radial topology. Two more lines and 45 stations are currently under construction, in addition to ongoing extension works on existing lines. In total, this will schedule the network to double in length to about by 2040. Further studies are ongoing on potential new alignments and lines, as well as infill stations in the Land Transport Authority's (LTA) Land Transport Masterplan 2040. The island-wide heavy rail network interchanges with a series of automated guideway transit networks localised to select suburban towns — collectively known as the Light Rail Transit (LRT) system — which, along with public buses, complement the mainline by providing a last mile link between MRT stations and HDB public housing estates. The MRT is the oldest, busiest, and most comprehensive metro system in Southeast Asia. Capital expenditure on its rail infrastructure reached a cumulative S$150 billion in 2021, making the network one of the world's costliest on both a per-kilometre and absolute basis. The system is managed in conformity with a semi-nationalised hybrid regulatory framework; construction and procurement fall under the purview of the Land Transport Authority (LTA), a statutory board of the government that allocates operating concessions to the for-profit private corporations SMRT and SBS Transit. These operators are responsible for asset maintenance on their respective lines, and also run bus services, facilitating operational synchronicity and the horizontal integration of the broader public transportation network. The MRT is fully automated and has an extensive driverless rapid transit system. Asset renewal works are periodically carried out to modernise the network and ensure its continued reliability; all stations feature platform screen doors, Wi-Fi connectivity, lifts, climate control, and accessibility provisions, among others. Much of the early network is elevated above ground on concrete viaducts, with a small portion running at-grade; newer lines are largely subterranean, incorporating several of the lengthiest continuous subway tunnel sections in the world. A number of underground stations double as purpose-built air raid shelters under the operational authority of the Singapore Civil Defence Force (SCDF); these stations incorporate deep-level station boxes cast with hardened concrete and blast doors fashioned out of reinforced steel to withstand conventional aerial and chemical ordnance. History Planning and inception The origins of the Mass Rapid Transit (MRT) were derived from a forecast by the country's planners back in 1967 which stated the need for a rail-based urban transport system by 1992. 
In 1972, a study was conducted by the American firms Wilbur Smith and Associates, Parsons Brinckerhoff (now WSP USA), Tudor, and Bechtel, administered by the World Bank on behalf of the United Nations Development Programme. The study ran for eight years, with further phases completed in 1974 and 1977. In 1979, the British firm Halcrow was appointed to prepare the third phase of the study and to design the system; this third phase was published in 1981. However, opposition to the feasibility of the MRT from prominent ministers, among them Finance Minister Goh Keng Swee and Trade and Industry Minister Tony Tan, nearly shuttered the programme on financial grounds and over concerns of job saturation in the construction industry. Dr Goh instead endorsed the idea of an all-bus system recommended by Harvard University specialists, who argued this would reduce the cost by 50% compared to the proposed MRT system. Public opinion was split on the matter: some expressed concerns about the high cost while others were more focused on increasing the standard of living. Following a debate on whether a bus-only system would be more cost-effective, Communications Minister Ong Teng Cheong came to the conclusion that an all-bus system would be inadequate, as it would have to compete for road space in a land-scarce country. Ong was an architect and town planner by training, and through his perseverance and dedication became the main figure behind the initial construction of the system. An MRT System Design Options Study was also conducted to refine the technical details and recommend measures for the MRT system. Its recommendations included adopting third-rail power supply, standard among many metros worldwide, rather than the overhead wires used on most mainline railways and on the MTR, and making platform screen doors mandatory for safety and ventilation reasons, starting with underground stations and later extending to elevated and surface stations. Parsons Brinckerhoff and SOFRETU, a French firm, undertook the design options study. Construction begins Singapore's MRT infrastructure is built, operated, and managed in accordance with a hybridised, quasi-nationalised regulatory framework called the New Rail Financing Framework (NRFF), in which the lines are constructed and the assets owned by the Land Transport Authority, a statutory board of the Government of Singapore. The network was planned to be constructed and opened in stages, even though plans had already settled on two main arterial lines. The North–South Line was given priority because it passed through the Central Area, which has a high demand for public transport. De Leuw Cather was appointed in November 1982 to undertake a two-year consultancy contract. The Mass Rapid Transit Corporation (MRTC)—later renamed SMRT Corporation—was established on 14 October 1983 and took over the roles and responsibilities of the former provisional Mass Rapid Transit Authority. On 7 November 1987, the first section of the North–South Line started operations, consisting of five stations over six kilometres. Within a year, 20 more stations had been added to the network, and a direct service existed between Yishun and Lakeside stations, linking up Central Singapore to Jurong in the west by the end of 1988. The direct service was eventually split into the North–South and East–West lines after the latter's completion of the eastern sector to Tanah Merah station. 
By the end of 1990, the Branch line had further linked Choa Chu Kang to the network, while the inauguration of Boon Lay station on 6 July 1990 marked the completion of the initial system two years ahead of schedule. Subsequent expansions The MRT has been continuously expanded ever since. On 10 February 1996, a S$1.2 billion expansion of the North–South Line into Woodlands was completed, merging the Branch Line into the North–South Line and joining Yishun and Choa Chu Kang stations. The concept of having rail lines that bring people almost directly to their homes led to the introduction of the Light Rail Transit (LRT) lines connecting with the MRT network. On 6 November 1999, the first LRT trains on the Bukit Panjang LRT went into operation. The Expo and Changi Airport stations were opened on 10 January 2001 and 8 February 2002 respectively. Dover station, the very first infill station of the MRT network to be built on an existing line, opened on 18 October 2001. The North East Line, the first line operated by SBS Transit and one of the first fully automated heavy rail lines in the world, opened on 20 June 2003. On 15 January 2006, after two and a half years of intense lobbying by the public, Buangkok station was opened, followed by Woodleigh station much later on 20 June 2011. The line's extension to Punggol Coast was opened on 10 December 2024. The Boon Lay Extension of the East–West Line, consisting of Pioneer and Joo Koon stations, opened on 28 February 2009. The Circle Line opened in stages, with Stage 3 on 28 May 2009, Stages 1 and 2 on 17 April 2010, Stages 4 and 5 on 8 October 2011, and the Marina Bay Extension on 14 January 2012. Stage 1 of the Downtown Line opened on 22 December 2013, having been officially opened on 21 December 2013 by Prime Minister Lee Hsien Loong. Stage 2 opened on 27 December 2015, after being officially opened on 26 December by Prime Minister Lee. The Tuas West Extension of the East–West Line, consisting of Gul Circle, Tuas Crescent, Tuas West Road, and Tuas Link stations, opened on 18 June 2017. Stage 3, the final stage of the Downtown Line, opened on 21 October 2017, having been officially opened on 20 October 2017 by Coordinating Minister for Infrastructure and Minister for Transport Khaw Boon Wan. The second infill station, Canberra, opened on 2 November 2019. Stage 1 of the Thomson–East Coast Line opened on 31 January 2020. Stage 2 of the Thomson–East Coast Line opened on 28 August 2021, extending the line from Woodlands South to Caldecott. Stage 3 of the Thomson–East Coast Line opened on 13 November 2022, extending the line from Caldecott to Gardens by the Bay. On 23 June 2024, the line was extended eastwards, terminating at Bayshore. Network and infrastructure A map of the network can be found on the Land Transport Authority's website. Line names The lines are named based on their directions and/or locations. The names were envisioned to be user-friendly, as shown in a survey in which 70% of the respondents expressed such a preference. In June 2007, the Land Transport Authority (LTA) had considered other naming methods, whether by name, colour or number. After the survey, however, the naming scheme was retained and used for subsequent MRT lines. Facilities and services Except for the partly at-grade Bishan MRT station (North–South Line), the entirety of the MRT is either elevated or underground. Most below-ground stations are deep and hardened enough to withstand conventional aerial bomb attacks and to serve as bomb shelters. 
Mobile phone, 3G, 4G and 5G services are available in every part of the network. Underground stations and trains are air-conditioned, while above-ground stations have ceiling fans installed. Every station is equipped with Top-Up Kiosks (TUKs), a Passenger Service Centre and LED or plasma displays that show train service information and announcements. All stations are equipped with restrooms and payphones; some restrooms are located at street level. Some stations, especially the major ones, have additional amenities and services, such as retail shops and kiosks, supermarkets, convenience stores, automatic teller machines, and self-service automated kiosks for a variety of services. Most heavy-duty escalators at stations carry passengers up or down at a rate of 0.75 m/s, which is 50% faster than conventional escalators. The Land Transport Authority (LTA) announced a plan to introduce dual speeds to escalators along the North–South and East–West lines, to make it safer for senior citizens to use them. As a result, all escalators on the two lines, through a refurbishment programme, will be able to operate at a lower speed of 0.5 m/s during off-peak hours, with completion targeted for 2022. All stations constructed before 2001 initially lacked barrier-free facilities for the elderly and disabled, such as lifts, ramps, tactile guidance systems and wider AFC faregates. A retrofitting programme was completed in 2006, with every station provided with at least one barrier-free access route. Over the years, additional barrier-free facilities have been constructed in stations. Since 2020, newer MRT stations have been fitted with a minimum of two lifts. Safety Operators and authorities have stated that numerous measures have been taken to ensure the safety of passengers, and SBS Transit publicised the safety precautions on the driverless North East Line before and after its opening. Safety campaign posters are highly visible in trains and stations, and the operators frequently broadcast safety announcements to passengers and to commuters waiting for trains. Fire safety standards are consistent with the guidelines of the National Fire Protection Association in the United States. Full-height platform screen doors, supplied by Westinghouse, have been installed in underground stations since 1987. There were calls for platform screen doors to be installed at elevated stations after several incidents in which passengers were killed by oncoming trains when they fell onto the railway tracks at elevated stations. The authorities initially rejected such calls, casting doubts over their functionality and citing concerns about the high installation costs. Nevertheless, on 25 January 2008 the LTA reversed its decision and made plans to install half-height platform screen doors at all elevated stations. The first platform screen doors by ST Electronics were installed at Jurong East, Pasir Ris, and Yishun stations in 2009 under trials to test their feasibility. By 14 March 2012, all elevated stations had been retrofitted with the doors, which are now operational. These doors prevent suicides and unauthorised access to restricted areas. There have been a few major incidents in the history of the MRT, which opened in 1987. On 5 August 1993, two trains collided at Clementi station because of an oil spillage on the track, which resulted in 132 injuries. During the construction of the Circle Line on 20 April 2004, a tunnel being constructed under Nicoll Highway collapsed and led to the deaths of four workers. 
On 15 November 2017, two trains, one of them empty, collided at low speed at Joo Koon station due to a malfunction of the communications-based train control (CBTC) system. Prior to the 2020 circuit breaker measures during the early stages of the COVID-19 pandemic, the public transport operators and the LTA were criticised by some commuters for their delayed action on crowd control and the enforcement of social distancing on public transport. In response, the LTA rolled out a series of precautionary measures, such as social distancing rules and making the wearing of masks on public transport mandatory. Social distancing markers, which commuters were required to adhere to, were progressively implemented in MRT trains and stations and enforced by auxiliary officers and transport ambassadors. The significant reduction in commuter numbers as remote work increased led the transport operators to reduce train frequencies and close stations earlier from 17 April. However, train frequencies were soon reverted to normal after review and feedback from the public. Since June 2020, the MRT system has resumed pre-circuit breaker operations. Regulations for social distancing on public transport are no longer applicable by law, and social distancing stickers on seats have been removed. Hours of operation MRT lines operate from 5:30 am to 1:00 am daily, except during selected periods such as New Year's Eve, Chinese New Year, Deepavali, Hari Raya, Christmas, the eves of public holidays and special occasions such as the state funeral of Lee Kuan Yew in 2015, when most of the lines stay open throughout the night or close later (a practice in place before the COVID-19 pandemic began in 2020). Additionally, some stretches of the network open later, end service earlier, or close on a few days of the weekend. The nightly closures are used for maintenance. During the COVID-19 pandemic, train services ended earlier from 7 April 2020 to 1 June 2020, and service extensions on the eves of public holidays, other than New Year's Eve, ceased from 7 April 2020 until 28 September 2024, when they were reinstated as they had been before the pandemic. Train frequencies are two to three minutes during peak hours and five to six minutes during off-peak hours. If Christmas and New Year's Eve fall on a weekday, train frequencies remain the same as on weekdays only during the morning peak; they then run at five to six minutes until 3 pm and are standardised at five minutes from then until the last train. Architecture and art Early stages of the MRT's construction paid scant attention to station design, with an emphasis on functionality over aesthetics. This is particularly evident in the first few stages of the North–South and East–West lines that opened between 1987 and 1988 from Yio Chu Kang to Clementi. An exception to this was Orchard, chosen by its designers to be a "showpiece" of the system and built initially with a domed roof. Architectural themes became more important only in subsequent stages, and resulted in such designs as the cylindrical station shapes on all stations between Kallang and Pasir Ris except Eunos, and west of Boon Lay, and the perched roofs at Boon Lay, Lakeside, Chinese Garden, Bukit Batok, Bukit Gombak, Choa Chu Kang, Khatib, Yishun, and Eunos stations. Expo station, located on the Changi Airport branch of the East–West Line, is adjacent to the 100,000-square-metre Singapore Expo exhibition facility. 
Designed by Foster and Partners and completed in January 2001, the station features a large, pillarless, titanium-clad roof in an elliptical shape that sheathes the length of the station platform. This complements a smaller 40-metre reflective stainless-steel disc overlapping the titanium ellipse and visually floats over a glass elevator shaft and the main entrance. The other station with similar architecture is Dover. Changi Airport station, the easternmost station on the MRT network, has the widest platform in any underground MRT station in Singapore. In 2011, it was rated 10 out of 15 most beautiful subway stops in the world by BootsnAll. Various features have been incorporated into the design to make the station aesthetically pleasing to travellers. The station is designed by architectural firm Skidmore, Owings and Merrill, featuring a large interior space and an illuminated link bridge spanning over the island platform. Two Circle Line stations—Bras Basah and Stadium—were commissioned through the Marina Line Architectural Design Competition, which was jointly organised by the Land Transport Authority and the Singapore Institute of Architects. The competition did not require any prior architectural experience from competitors and is acknowledged by the industry as one of the most impartial competitions held in Singapore to date. The winner of both stations was WOHA. In 2009, "Best Transport Building" was awarded to the designers at WOHA Architects at the World Architecture Festival for their design of Bras Basah station. Many MRT stations have specially commissioned artworks in a wide variety of art styles and mediums, including sculptures, murals and mosaics. With over 300 art pieces across 80 stations, it is Singapore's largest public art programme. In the early stages of the MRT, artworks were seldom included; primarily consisting of a few paintings or sculptures representing the recent past of Singapore, mounted in major stations. The opening of the Woodlands Extension introduced bolder pieces of artwork, such as a 4,000 kg sculpture in Woodlands. With the opening of the North East Line in 2003, a series of artworks under a programme called "Art in Transit" were commissioned by the Land Transport Authority (LTA). Created by 19 local artists and integrated into the stations' interior architecture, these works aim to promote the appreciation of public art in high-traffic environments. The artwork for each station is designed to suit the station's identity. Subsequently, all stations on the North East, Circle and Downtown lines have taken part in this programme during their construction, with additional artworks installed at stations on other MRT lines. Rolling stock and signalling Rolling stock Signalling A key component of the signalling system on the MRT is the automatic train control (ATC) system, which in turn is made up of two sub-systems: the automatic train operation (ATO) and automatic train protection (ATP). The ATC has trackside and trainborne components working together to provide safe train separation by using train detection, localisation, and end of authority protection. It also provides safe train operation and movement by using train speed determination, monitoring, over-speed protection and emergency braking. The safety of alighting and departing passengers will also be provided by using a station interlocking system. 
The ATO drives the train in automatic mode, providing the traction and braking control demands to the train rolling stock system, adjusts its speed upon approaching the station, and provides the control of opening and closing of train and platform screen doors once the train has stopped at the station. The ATP ensures safe train separation by using the ATP track circuit status and by location determination, monitors the speed of the train to maintain safe braking distance, and initiate emergency braking in the event of overspeed. The MRT also uses an automatic train supervision system to supervise the overall operation of the train service according to a prescribed timetable or train interval. The oldest lines, the North–South Line and East–West Line, were the only lines running with fixed block signalling. The North–South Line was upgraded to moving block/CBTC in 2017, and the East–West line upgraded in 2018. As of 27 May 2018, all MRT lines use the CBTC/moving block system in normal daily operations and from 2 January 2019, the old signalling system ceased operations. In comparison to the original fixed block system, the CBTC can reduce train intervals from 120 seconds to 100 seconds, allowing for a 20% increase in capacity and is able to support bidirectional train operations on a single track, enabling trains to be diverted onto another track in the event of a fault on one track. The CBTC system also permits for improved braking performance in wet weather as compared to the original fixed-block ATC. All new MRT lines built since the North East Line in 2003 were equipped with CBTC from the outset, and have the capability to be completely driverless and automated, requiring no on-board staffing. Operations are monitored remotely from the operations control centre of the respective lines. Trains are equipped with intercoms to allow passengers to communicate with staff during emergencies. Depots SMRT Corporation has six train depots: Bishan Depot is the central maintenance depot for the North–South Line with train overhaul facilities, while Changi Depot and Ulu Pandan Depot inspect and house trains overnight. The newer Tuas Depot, opened in 2017, provides the East–West Line with its own maintenance facility, while Mandai Depot services trains for the Thomson–East Coast line. The underground Kim Chuan Depot houses trains for the Circle and Downtown lines, now jointly managed by the two MRT operators. SBS Transit has three depots: Sengkang Depot houses trains for the North East line, the Sengkang LRT line, and the Punggol LRT line. Tai Seng Facility Building, connected to and located east of Kim Chuan Depot, is currently used for the Downtown line. While major operations were shifted to the main Gali Batu Depot in 2015, the Tai Seng Facility Building resumed stabling operations with the extension of the Downtown line in 2017. It currently operates independently from Kim Chuan Depot. Gali Batu Depot is the first MRT depot in Singapore to achieve the certification of Building and Construction Authority (BCA) Green Mark Gold. In August 2014, plans for the East Coast Integrated Depot, the world's first four-in-one train and bus depot were announced. It will be built at Tanah Merah beside the original Changi Depot site to serve the East–West, Downtown, and Thomson–East Coast lines. The new 36 ha depot can house about 220 trains and 550 buses and integrating the depot for both buses and trains will help save close to , or 60 football fields of land. 
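Returning to the signalling figures above: the 20% capacity increase attributed to CBTC follows directly from the shorter headway, since the number of trains a line can carry per hour is inversely proportional to the interval between them. A minimal worked sketch of that arithmetic, using only the 120-second and 100-second figures quoted earlier:

```python
# Line throughput scales with 1/headway, so cutting the interval from
# 120 s (fixed block) to 100 s (CBTC) gives 120/100 = 1.2, i.e. +20%.

def trains_per_hour(headway_seconds: float) -> float:
    return 3600 / headway_seconds

fixed_block = trains_per_hour(120)   # 30 trains per hour
cbtc = trains_per_hour(100)          # 36 trains per hour
gain = cbtc / fixed_block - 1        # 0.20

print(f"{fixed_block:.0f} -> {cbtc:.0f} trains/hour ({gain:.0%} increase)")
```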
The Tengah Depot for the Jurong Region Line will be situated at the western perimeter of Tengah, and an additional depot facility will be added near Peng Kang Hill station to support the operations of the JRL. Rolling stock for the Jurong Region Line will be stabled at both facilities. Tengah Depot will house the JRL Operations Control Centre and have a bus depot integrated with it to optimise land use. The Changi East Depot will serve the future Cross Island Line, and the depot is to be placed at the eastern end of the line. A Singapore Rail Test Centre (formerly known as Integrated Train Testing Centre) with several test tracks for different situations and workshops for maintenance and refurbishment is also to be built at Tuas by 2022, with the main function being to test trains and integrated systems robustly before they are deployed on operational lines. Future expansion Infrastructure The following table lists the upcoming lines and stations that have been officially announced: The MRT system relied on its two main lines, the North–South and East–West lines, for more than a decade until the opening of the North East Line in 2003. While plans for these lines as well as those currently under construction were formulated long before, the Land Transport Authority's publication of a White Paper titled "A World Class Land Transport System" in 1996 galvanised the government's intentions to greatly expand the system. It called for the expansion of the 67 kilometres of track in 1995 to 360 in 2030. It was expected that daily ridership in 2030 would grow to 6.0 million from the 1.4 million passengers at that time. New lines and extensions are mostly announced as part of the Land Transport Master Plan, which is announced every five years and outlines the government's intentions for the future of the transport network in Singapore. The latest plan, the Land Transport Master Plan 2040, was announced on 25 May 2019, and provides for line extensions to the Downtown and Thomson–East Coast lines, a new MRT line under study, and 2 new stations on the North–South Line. Downtown Line Hume is an infill station between Hillview and Beauty World slated to open on 28 February 2025. An extension from Expo is planned to begin operations in 2026, adding an additional and 2 stations to the line, terminating at Sungei Bedok and interchanging with the Thomson–East Coast Line. Upon opening, the entire line will be long and have 37 stations in total. On 6 January 2025, an extension to the future Sungei Kadut station was announced, including an additional unnamed station between Sungei Kadut and Bukit Panjang, slated to begin operation in 2035. Thomson–East Coast Line Stage 5 from Bedok South to Sungei Bedok is planned to be operational by 2026. The northern terminus of Woodlands North is expected to interchange with the Johor Bahru–Singapore Rapid Transit System for greater connectivity between Johor Bahru and Woodlands, while Founders' Memorial station is an infill station along Stage 4, scheduled to open in tandem with the Founders' Memorial in 2028. In addition, this line and Canberra MRT station were the first to use top-up kiosks (TUK) that only allows cashless payments, while GTMs were retained for traditional modes of payment. 
Line extension to Changi Airport In addition to the previously announced alignment of the Thomson–East Coast Line, an extension has been proposed to connect it to Changi Airport, with the line passing through Terminal 5 and eventually absorbing the existing Changi Airport branch of the East–West Line. With such an extension, there would be a direct connection between Changi Airport and the city. This extension is expected to start operating by 2040. Tunnelling works are being tendered out by the Land Transport Authority (LTA), and Shanghai Tunnel Engineering Corporation was awarded the tunnelling package for Terminal 5. Jurong Region Line First proposed as an LRT line when originally announced in 2001, the Jurong Region Line has since been upgraded to a medium-capacity line after the project was revived in 2013. The new configuration encompasses West Coast, Tengah, Choa Chu Kang and Jurong. West Coast extension Besides the originally announced alignment of the line, a West Coast extension from the Jurong Region Line to the Circle Line is currently under study, linking the West Coast region directly to Haw Par Villa and allowing commuters on the Jurong Region Line easy access to the central area of the city. If feasible, the extension would be ready by 2030. Cross Island Line The Cross Island Line is expected to span the island of Singapore, passing through Tuas, Jurong, Sin Ming, Ang Mo Kio, Hougang, Punggol, Pasir Ris, and Changi. The new line provides commuters with another alternative for east–west travel to the current East–West Line and Downtown Line. Connected to all the other major lines, it is designed to serve as a key transfer line, complementing the role currently fulfilled by the orbital Circle Line. Stage 1 of the line was announced in 2019, consists of and 12 stations, and is planned to be completed in 2030. Relative to its short rail length from Aviation Park (Changi) to Bright Hill (Bishan), the project, slated to begin construction in 2022, costs S$13.3 billion, making it one of the most expensive rail projects globally. In addition, the extension to Punggol announced in 2020 consists of three stations spanning , and is planned to be completed by 2032. Completion of the full line, originally targeted for 2030, is expected to take an even longer timeframe owing to the environmental studies involved. Circle Line stage 6 The Stage 6 extension from Marina Bay through Keppel, ending at HarbourFront, effectively completes the circle and links the current ends of the line, allowing through service to the future Southern Waterfront City without the need to change to other lines. Stage 6 comprises the Keppel, Cantonment, and Prince Edward Road stations. It is slated to commence operations in the first half of 2026. Brickland and Sungei Kadut MRT stations Two new stations are planned along the existing North–South Line. Brickland station is expected to be built between Bukit Gombak and Choa Chu Kang stations, while Sungei Kadut station is expected to be built between Yew Tee and Kranji stations. Both MRT stations are expected to be completed by the mid-2030s. Proposed ninth line along the North–East Corridor As part of the Land Transport Master Plan 2040, feasibility studies are ongoing for a possible ninth MRT line to link the north and northeastern regions of Singapore to the south of the island. The new line is proposed to run from Woodlands North via Sembawang, Sengkang, Seletar, Serangoon North, Whampoa, Kallang and Marina East towards the Greater Southern Waterfront. 
The official alignment has yet to be confirmed. Fares and ticketing Stations are divided into two areas, paid and unpaid, which allow the rail operators to collect fares by restricting entry only through the fare gates, also known as access control gates. These gates, connected to a computer network, can read and update electronic tickets capable of storing data, and can store information such as the initial and destination stations and the duration for each trip. The ticketing system currently utilises a mixture of Account-Based Ticketing (ABT), or SimplyGo, and legacy (non-ABT) card-based options. The station machines allow the customer to buy additional value for stored value smartcards. Such smartcards require a minimum amount of stored credit. As the fare system has been integrated by TransitLink, commuters need to pay only one fare and pass through two fare gates (once on entry, once on exit) for an entire journey for most interchange stations, even when transferring between lines operated by different companies. Commuters can choose to extend a trip mid-journey, and pay the difference when they exit their destination station. Fares Because the rail operators are government-assisted, profit-based corporations, fares on the MRT system are pitched to at least break-even level. The operators collect these fares by selling electronic data-storing tickets, the prices of which are calculated based on the distance between the start and destination stations. These prices increase in fixed stages for standard non-discounted travel. Fares are calculated in increments based on approximate distances between stations, in contrast to the use of fare zones in other subway systems, such as the London Underground. Although operated by private companies, the system's fare structure is regulated by the Public Transport Council (PTC), to which the operators submit requests for changes in fares. Fares are kept affordable by pegging them approximately to distance-related bus fares, thus encouraging commuters to use the network and reduce heavy reliance on the bus system. Fare increases have caused public concern. Historically, fares on the fully underground North East, Circle, and Downtown lines had been higher than those of the North–South and East–West lines (NSEWL), a disparity that was justified by citing higher costs of operation and maintenance on a completely underground line. However, the Public Transport Council (PTC) announced in 2016 that fares for the three underground lines would be reduced to match those on the NSEWL, which took effect along with the yearly-applied fare changes, on 30 December 2016. After the opening of Downtown line Stage 3, Transport Minister Khaw Boon Wan announced that public transport fare rules will be reviewed to allow for transfers across MRT lines at different stations due to the increasing density of the rail network. At the time, commuters were charged a second time when they made such transfers. He added that the PTC would review distance-based fare transfer rules to ensure they continue to facilitate "fast, seamless" public transport journeys. The review of distance-based fare rules on MRT lines was completed, and a waiver on the second boarding fee incurred when making such transfers was announced on 22 March 2018. The scheme was implemented on 29 December of the same year. Ticketing The SimplyGo ABT system, accepts bank cards, mobile wallets and proprietary cards issued by EZ-Link and NETS. 
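As described above, MRT fares rise in fixed increments with the distance travelled rather than by fare zone, and a mid-journey transfer between lines no longer incurs a second boarding fee. The sketch below shows what such a banded lookup looks like; the distance bands and dollar amounts are invented for illustration and are not the Public Transport Council's actual fares.

```python
# Hypothetical distance bands (upper bound in km) and stored-value card fares
# in S$, purely for illustration; real fares are set by the Public Transport
# Council and differ from these numbers.
FARE_BANDS = [
    (3.2, 1.09),
    (4.2, 1.19),
    (5.2, 1.29),
    (6.2, 1.40),
    (7.2, 1.50),
]
MAX_FARE = 2.37  # assumed cap for journeys longer than the last band


def card_fare(distance_km: float) -> float:
    """Return the fare for a journey of the given distance, using banded pricing."""
    for upper_km, fare in FARE_BANDS:
        if distance_km <= upper_km:
            return fare
    return MAX_FARE


# Only the total journey distance matters; transferring between lines at an
# interchange does not restart the fare calculation.
print(card_fare(5.0))  # falls in the third band -> 1.29
```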
The legacy card-based system, that utilises the EZ-Link and NETS flashpay cards, on the Symphony for e-Payments (SeP), remains usable beyond 1 June 2024, after the government agreed to spend an extra $40 million for their continued use. The EZ-Link and NETS flashpay cards had entered into service in 2009, and replaced the FeliCa EZ-Link card. The FeliCa EZ-Link card, had in turn replaced the magnetic Transitlink farecard in 2002. ABT using bank cards and mobile wallets, has eliminated the need for top-ups. The stored value cards using card-based, or cloud-based accounts, and issued by NETS and EZ-Link, may be purchased at the ticketing offices or merchant outlets, for immediate use. The stored value cards could be topped up from the user's primary accounts (such as bank deposits or credit facilities), via their respective mobile applications, or other options under the terms of use. Additional credit of a predetermined value may also be automatically credited into the card when the card value runs low via an automatic recharge service provided by Interbank GIRO or credit card. An Adult Monthly Travel Card for unlimited travel on MRT, LRT, and buses may also be purchased and is non-transferable. In 2017, TransitLink became the first public transport provider in Southeast Asia to accept contactless bank cards and the use of mobile wallets such as Apple Pay, Google Pay and Samsung Pay. The system, named SimplyGo, allows commuters to tap their contactless debit or credit cards, or smartphones/smart watches to pay for fares on the MRT, LRT and Bus network. The SimplyGo and NETS Prepaid cards were added to the system and made available to the public since 2021. The Standard Ticket contactless smart card for single or return journeys, has been phased out completely since March 2022. It was subject to a system of deposits and surcharges: A S$0.10 deposit was levied on top of the fare to be paid. The deposit would be automatically refunded through an offset of the fare to be paid for the third journey on the same ticket while an additional discount of S$0.10 would be given for the sixth journey on the same ticket. No refund of the deposit would be provided if the card was used for fewer than 3 journeys. The ticket could be used for the purchase of single or return journeys to and from pre-selected stations up to a maximum of six journeys over 30 days. Fares for the Standard Ticket were always higher than those charged for the stored-valued CEPAS (EZ-Link and NETS FlashPay) cards for the same distance traveled. The ticket could be retained by the user after each journey and does not need to be returned. For tourists, a Singapore Tourist Pass contactless smartcard may be purchased for use on the public transport network. The card may be bought at selected TransitLink ticket offices and Singapore Visitors Centres. Performance The MRT system did not experience any major performance issues during its first quarter-century of operations. However, there were occasional disruptions around the period from 2011 to 2018, the cause of which was often attributed to the system aging coupled with increased ridership due to population growth. Beginning with the train disruptions in 2011, this incident led to a committee of inquiry which uncovered serious shortcomings in SMRT's maintenance regime. 
For the December 2011 disruptions, the Land Transport Authority imposed a maximum penalty of S$2 million (approximately US$1.526 million) on SMRT for the two train disruptions along the North–South line on 15 and 17 December 2011. A Committee of Inquiry discovered shortcomings in the maintenance regime and checks, prompting then-CEO Saw Phaik Hwa to resign. A much larger power-related incident than the December 2011 event occurred on 7 July 2015, when train services on both the North–South and East–West lines were shut down in both directions following a major power trip. The disruption lasted for more than 3 hours, affecting 413,000 commuters. This was considered the worst disruption to the MRT network since it first began operations in 1987 – surpassing the December 2011 event. Independent experts from Sweden and Japan were hired to conduct an investigation into the cause of the disruption. The cause was identified as damage to a third rail insulator due to a water leak at Tanjong Pagar station. Consequently, a programme was implemented to replace insulators liable to similar failure. For the July 2015 disruption, the LTA imposed a higher penalty of S$5.4 million on SMRT. On 22 March 2016, a fatal accident occurred near Pasir Ris station. Two of SMRT's trainee track-maintenance staff were fatally struck by an approaching C151 train at a signalling box at the station. They were part of a technical team of 15 staff led by a supervisor and had been asked to go down to the tracks to investigate an alarm triggered by a possible signalling equipment fault. The operator said the team had permission to access the tracks, but did not coordinate with the signalling unit in the station control room to ensure that train captains in the area where the team was working exercised caution while pulling into Pasir Ris station. This incident resulted in a 2.5-hour service disruption between Tanah Merah and Pasir Ris stations, affecting at least 10,000 commuters. On 7 October 2017, a dilapidated float and pump system at Bishan station caused a tunnel flood after torrential rainstorms. It was the worst train disruption since 2011 and the first ever flooding incident in the history of the MRT. This once again drew criticism of the public transport operators from Singaporeans, and prompted a heated debate about the "high-ranking" officials who manage the system, with calls being made for the resignation of then Transport Minister Khaw Boon Wan. Urban transport expert Park Byung Joon of the Singapore University of Social Sciences said that the negligence displayed by SMRT in this regard was tantamount to a criminal offence, and an internal investigation found that the maintenance crew responsible for the Bishan station pump system had submitted maintenance records for nearly a year without actually carrying out the work. On 25 September 2024, a major train disruption occurred when an eastbound train on the East-West Line (EWL) suffered a fault near Clementi station. After disembarking its passengers and upon reaching Ulu Pandan Depot, it started to smoke and caused a power trip. The resulting incident shut down all EWL train services between Boon Lay and Queenstown, with the LTA and SMRT delaying the reopening of services by two days, making it the longest MRT train disruption in Singapore's history. Normal train service was expected to resume on the following Monday, 30 September 2024. 
Responses The December 2011 disruptions brought the state of public transportation as a whole to national prominence among Singaporeans, who had previously considered the system reliable and robust since its inception in 1987. LTA also noted a marked increase in dissatisfaction with public transport in the 2012 Public Transport Customer Satisfaction Survey, and promised government action to deal with issues relating to system disruptions. The government reviewed the penalties for train disruptions, and made travel free on all bus services passing MRT stations affected during any train disruption; exit through the fare gates at affected stations was also made free. In addition, to increase satisfaction with public transport, free off-peak morning travel, later changed to a discount, was introduced, with further improvements continuing to be discussed. Since 2018, efforts in both maintenance and renewal have started to pay off, with the MRT system clocking an average of 690,000 km between delays in 2018 – a 3.8-fold improvement over 2017. The North–South line, which was hit by the tunnel flood in 2017, in particular saw its train-km between delays increase roughly tenfold, from 89,000 km in 2017 to 894,000 km in 2018. By July 2019, the Mean Kilometres Between Failure (MKBF) for the North–South and East–West lines had jumped to 700,000 km and 1,400,000 km respectively. The new challenge for the government was keeping the funding required for such renewals sustainable in the decades ahead. Security Security concerns related to crime and terrorism were not high on the agenda of the system's planners at its inception. After the foiled plot to bomb Yishun MRT station in 2001 and the Madrid train bombings in 2004, the operators deployed private, unarmed guards to patrol station platforms and conduct checks on the belongings of commuters, especially those carrying bulky items. Recorded announcements are frequently made to remind passengers to report suspicious activity and not to leave their belongings unattended, and, since 2023, to warn against offences such as molestation and the taking of upskirt photos. Digital closed-circuit television (CCTV) cameras at all stations and on trains operated by SMRT Corporation have been upgraded with recording capability. Trash bins and mail boxes have been relocated from station platforms and concourse levels to station entrances, to eliminate the risk of bombs being planted in them. While photography and filming are allowed in all public areas (except train depots, which are gazetted as restricted areas by law), station staff may conduct checks and interviews to ensure that such activities are not intended for criminal purposes such as the taking of upskirt photos, and staff and police reserve the right to stop these activities. In 2005, the Singapore Police Force announced plans to step up rail security by establishing a specialised security unit for public transport, then known as the Police MRT Unit; it was expanded in 2009 to become the Public Transport Security Command (TRANSCOM). These armed officers began overt patrols on the MRT and LRT systems on 15 August 2005, patrolling in pairs at random in and around stations and on trains. They are trained and authorised to use their firearms at their discretion, including deadly force if deemed necessary. The unit has over time gone on to handle other crimes committed on the MRT network, such as theft and molestation.
On its tenth anniversary in 2019, TRANSCOM formally evolved into a hybrid, community-based force and launched an initiative to get commuters to aid its officers. Since then, 26,000 people have volunteered, far above the target of 3,000. Civil exercises are regularly conducted to maintain preparedness for contingencies. In January 2006, Exercise Northstar V involved over 2,000 personnel from 22 government agencies responding to simulated bombings and chemical attacks at Dhoby Ghaut, Toa Payoh, Raffles Place and Marina Bay stations. In August 2013, Exercise Greyhound tested the response of SBS Transit's Operations Control Centre and the implementation of its contingency plans for bus bridging, free bus services and the deployment of goodwill ambassadors (GAs) during a simulated prolonged train service disruption. About 300 personnel, including representatives from LTA, SBST, SMRT, the Singapore Police Force's Transport Command (TransCom), the Traffic Police and the Singapore Civil Defence Force (SCDF), participated in the exercise. Security concerns were raised by the public when two incidents of vandalism at train depots occurred within two years. In both incidents, graffiti on the affected trains was discovered only after they had entered revenue service. The first incident, on 17 May 2010, involved a breach in the perimeter fence of Changi Depot and resulted in the imprisonment and caning of a Swiss citizen, and an Interpol arrest warrant for his accomplice. SMRT Corporation was fined S$50,000 by the Land Transport Authority for the first security breach. Measures were put in place by the Public Transport Security Committee to enhance depot security in light of the first incident, but the works had yet to be completed by SMRT Corporation when the second incident occurred at Bishan Depot on 17 August 2011. Regulations Under the Rapid Transit Systems Act, acts such as smoking; the consumption of any food or drink, including sweets and plain water, in stations and trains; misuse of emergency equipment; unauthorised photography or filming of railway assets; and trespassing onto railway tracks or into train depots are illegal, with penalties ranging from fines to imprisonment and possibly caning. Some commentators have suggested that SMRT's strict enforcement of the total ban on the consumption of any food or drink, including sweets and plain water, especially during hot weather or against persons with legitimate needs (such as where consumption of food or drink is needed for medical reasons), is disproportionate and unnecessary. Priority seats There are generally a number of seats in each MRT carriage, located near the train doors and designated as 'priority seats', which are intended for the elderly, pregnant women, parents with infants and others with mobility problems. The use of such seats by persons who do not fit the foregoing description, or who do not outwardly appear to be in need of a seat, has repeatedly been the subject of public debate in Singapore. In 2019, the LTA launched the "May I have a seat please?" initiative, under which, upon request, LTA provides commuters who have non-visible health conditions or disabilities with a lanyard, and those with short-term or temporary conditions (such as being on medical leave) with a sticker, each reading "May I have a seat please?".
Technology
Asia_2
null
416549
https://en.wikipedia.org/wiki/Morpho%20%28genus%29
Morpho (genus)
The morpho butterflies comprise many species of Neotropical butterfly under the genus Morpho. This genus includes more than 29 accepted species and 147 accepted subspecies, found mostly in South America, Mexico, and Central America. Morpho wingspans range from for M. rhodopteron to for M. hecuba, the imposing sunset morpho. The name morpho, meaning "changed" or "modified", is also an epithet. Blue morphos are severely threatened by the deforestation of tropical forests and habitat fragmentation. Humans provide a direct threat to this genus because their beauty attracts artists and collectors from all over the globe who wish to capture and display them. Aside from humans, birds like the jacamar and flycatcher are the adult butterfly’s natural predators. Taxonomy and nomenclature Many names attach to the genus Morpho. The genus has also been divided into subgenera. Hundreds of form, variety, and aberration names are used among Morpho species and subspecies. One lepidopterist includes all such species within a single genus, and synonymized many names in a limited number of species. Two other lepidopterists use a phylogenetic analysis with different nomenclature. Other authorities accept many more species. Etymology The genus name Morpho comes from an Ancient Greek epithet , roughly "the shapely one", for Aphrodite, goddess of love and beauty. Species This list is arranged alphabetically within species groups. Subgenus Iphimedeia Species group hercules Morpho amphitryon Staudinger, 1887 Morpho hercules (Dalman, 1823) – Hercules morpho Morpho richardus Fruhstorfer, 1898 – Richard's morpho Species group hecuba Morpho cisseis C. Felder & R. Felder, 1860 – Cisseis morpho Morpho hecuba (Linnaeus, 1771) – sunset morpho Species group telemachus Morpho telemachus (Linnaeus, 1758) Morpho theseus Deyrolle, 1860 – Theseus morpho Subgenus Iphixibia Morpho anaxibia (Esper, 1801) Subgenus Cytheritis Species group sulkowskyi Morpho sulkowskyi – Sulkowsky's morpho Species group lympharis Morpho lympharis Butler, 1873 – Lympharis morpho Species group rhodopteron Morpho rhodopteron Godman & Salvin, 1880 Species group portis Morpho portis (Hübner, [1821]) Morpho thamyris C. Felder & R. Felder, 1867 – Thamyris morpho – or as a subspecies of M. portis Species group zephyritis Morpho zephyritis Butler, 1873 – Zephyritis morpho Species group aega Morpho aega (Hübner, [1822]) – Aega morpho Species group adonis Morpho eugenia Deyrolle, 1860 – Empress Eugénie morpho Morpho marcus (Cramer, 1775) Morpho uraneis Bates, 1865 Subgenus Balachowskyna Morpho aurora – Aurora morpho Subgenus Cypritis Species group cypris Morpho cypris Westwood, 1851 – Cypris morpho Species group rhetenor Morpho helena Staudinger, 1890 – Helena blue morpho Morpho rhetenor (Cramer, [1775]) – Rhetenor blue morpho Subgenus Pessonia Species group polyphemus Morpho luna Butler, 1869 or as subspecies Morpho polyphemus luna Morpho polyphemus Westwood, [1850] – (Polyphemus) white morpho Species group catenaria Morpho catenarius Perry, 1811 or as a subspecies of M. epistrophus Morpho epistrophus (Fabricius, 1796) – Epistrophus white morpho Morpho laertes (Drury, 1782) may be a synonym of M. epistrophus Subgenus Crasseia Species group menelaus Morpho amathonte (Deyrolle, 1860) or as a subspecies of M. menelaus Morpho didius Hopffer, 1874 – giant blue morpho – or as a subspecies of M. menelaus Morpho godarti (Guérin-Méneville, 1844) – Godart's morpho – or as a subspecies of M. 
menelaus Morpho menelaus (Linnaeus, 1758) – Menelaus blue morpho Subgenus Morpho Species group deidamia Morpho deidamia (Hübner, [1819]) – Deidamia morpho Morpho granadensis Felder and Felder, 1867 – Granada morpho – or as a subspecies of M. deidamia Species group helenor Morpho helenor (Cramer, 1776) – Helenor blue morpho or common blue morpho Morpho peleides Kollar, 1850 – Peleides blue morpho, common morpho, or the emperor Species group achilles Morpho achilles (Linnaeus, 1758) – Achilles morpho Ungrouped: Morpho absoloni May, 1924 Morpho athena Otero, 1966 Morpho niepelti Röber, 1927 Coloration Many morpho butterflies are colored in metallic, shimmering shades of blues and greens. These colors are not a result of pigmentation, but are an example of iridescence through structural coloration. Specifically, the microscopic scales covering the morpho's wings reflect incident light repeatedly at successive layers, leading to interference effects that depend on both wavelength and angle of incidence/observance. Thus, the colors appear to vary with viewing angle, but they are surprisingly uniform, perhaps due to the tetrahedral (diamond-like) structural arrangement of the scales or diffraction from overlying cell layers. The wide-angle blue reflection property can be explained by exploring the nanostructures in the scales of the morpho butterfly wings. These optically active structures integrate three design principles leading to the wide-angle reflection: Christmas tree-like shaped ridges, alternating lamellae layers (or "branches"), and a small height offset between neighboring ridges. The reflection spectrum is found to be broad (about 90 nm) for alternating layers and can be controlled by varying the design pattern. The Christmas tree-like pattern helps to reduce the directionality of the reflectance by creating an impedance matching for blue wavelengths. In addition, the height offset between neighboring ridges increases the intensity of reflection for a wide range of angles. This structure may be likened to a photonic crystal. The lamellate structure of their wing scales has been studied as a model in the development of biomimetic fabrics, dye-free paints, and anticounterfeit technology used in currency. The iridescent lamellae are only present on the dorsal sides of their wings, leaving the ventral sides brown. The ventral side is decorated with ocelli (eyespots). In some species, such as M. godarti, the dorsal lamellae are so thin that ventral ocelli can peek through. While not all morphos have iridescent coloration, they all have ocelli. In most species, only the males are colorful, supporting the theory that the coloration is used for intrasexual communication between males. The lamellae reflect up to 70% of light falling on them, including any ultraviolet. The eyes of morpho butterflies are thought to be highly sensitive to UV light, so the males are able to see each other from great distances. Some South American species are reportedly visible to the human eye up to one kilometer away. Also, a number of other species exist which are tawny orange or dark brown (for instance M. hecuba and M. telemachus). Some species are white, principal among these being M. catenarius and M. laertes. An unusual species, fundamentally white in coloration, but which exhibits a stunning pearlescent purple and teal iridescence when viewed at certain angles, is the rare M. sulkowskyi. Some Andean species are small and delicate (M. lympharis). Among the metallic blue Morpho species, M. 
rhetenor stands out as the most iridescent of all, with M. cypris a close second. Indeed, M. cypris is notable in that specimens mounted in entomological collections exhibit color differences across the wings if they are not 'set' perfectly flat. Many species, like M. cypris and M. rhetenor helena have a white stripe pattern on their colored blue wings as well. Celebrated author and lepidopterist Vladimir Nabokov described their appearance as "shimmering light-blue mirrors". Sexual dimorphism The blue morpho species exhibit sexual dimorphism. In some species (for instance M.adonis, M. eugenia, M. aega, M. cypris, and M. rhetenor), only the males are iridescent blue; the females are disruptively colored brown and yellow. In other species (for instance M. anaxibia, M. godarti, M. didius, M. amathonte, and M. deidamia), the females are partially iridescent, but less blue than the males. Habitat Morpho butterflies inhabit the primary forests of the Amazon and Atlantic. They also adapted to breed in a wide variety of other forested habitats – for instance, the dry deciduous woodlands of Nicaragua and secondary forests. Morphos are found at altitudes between sea level and about . Biology Morphos are diurnal, as males spend the mornings patrolling along the courses of forest streams and rivers. They are territorial and chase any rivals. Morphos typically live alone, excluding in the mating season. The genus Morpho is palatable, but some species (such as M. amathonte) are very strong fliers; birds—even species which are specialized for catching butterflies on the wing—find it very hard to catch them. The conspicuous blue coloration shared by most Morpho species may be a case of Müllerian mimicry, or may be 'pursuit aposematism'. The eyespots on the undersides of the wings of both males and females may be a form of automimicry in which a spot on the body of an animal resembles an eye of a different animal to deceive potential predator or prey species, to draw a predator's attention away from the most vulnerable body parts, or to appear as an inedible or even dangerous animal. Predators include royal flycatchers, jacamars and other insectivorous birds, frogs, and lizards. Behavior Morphos have a very distinctive, slow, bouncy flight pattern due to the wing area being enormous relative to the body size. Life cycle The entire life cycle of the morpho butterfly, from egg to death, is about 115 days. The larvae hatch from pale-green, dewdrop-like eggs. The caterpillars have reddish-brown bodies with bright lime-green or yellow patches on their backs. Its hairs are irritating to human skin, and when disturbed it secretes a fluid that smells like rancid butter from eversible glands on the thorax. The strong odor is a defense against predators. They feed on a variety of plants. The caterpillar molts five times before entering the pupal stage. The bulbous chrysalis is pale green or jade green and emits a repulsive, ultrasonic sound when touched. It is suspended from a stem or leaf of the food plant. The adults live for about two to three weeks. They feed on the fluids of fermenting fruit, decomposing animals, tree sap, fungi, and nutrient-rich mud. They are poisonous to predators due to toxins they sequestered from plants on which they fed as caterpillars. The more common blue morphos are reared en masse in commercial breeding programs. The iridescent wings are used in the manufacture of jewelry and as inlay in woodworking. 
Papered specimens are sold with the abdomen removed to prevent its oily contents from staining the wings. Significant numbers of live specimens are exported as pupae from several Neotropical countries for exhibition in butterfly houses. Unfortunately, due to their irregular flight pattern and size, their wings are frequently damaged in captivity. Host plants Morpho larvae, variously according to species and region, feed on Leguminosae, Gramineae, Canellaceae, Guttiferae, Erythroxylaceae, Myrtaceae, Moraceae, Lauraceae, Sapindaceae, Rhamnaceae, Euphorbiaceae, Musaceae, Palmae, Menispermaceae, Tiliaceae, and Bignoniaceae. According to Penz and DeVries, the ancestral diet of larval Satyrinae is Poaceae or other monocots. Many morphos have switched to dicots on several occasions during their evolutionary history, but basal species have retained the monocot diet. Collectors Morpho butterflies, often very expensive, have long been prized by wealthy collectors. Famous collections include those of the London jeweler Dru Drury, the Dutch merchant Pieter Teyler van der Hulst, the Paris diplomat Georges Rousseau-Decelle, the financier Walter Rothschild, the Romanov Grand Duke Nicholas Mikhailovich of Russia, and the English and German businessmen James John Joicey and Curt Eisner respectively. In earlier years, morphos graced cabinets of curiosities ("Kunstkamera") and royal cabinets of natural history, notably those of Tsar Peter the Great of Russia, the Austrian empress Maria Theresa, and Ulrika Eleonora, Queen of Sweden. More famous still is Maria Sibylla Merian, who was not wealthy. The people along the Rio Negro in Brazil once exploited the territorial habits of the blue morpho (M. menelaus) by luring the butterflies into clearings with bright blue decoys. The collected wings were used as embellishment for ceremonial masks. Adult morpho butterflies feed on the juices of fermenting fruit, with which they may also be lured. The butterflies wobble in flight and are easy to catch. Gallery Illustrations
Biology and health sciences
Lepidoptera
Animals
416651
https://en.wikipedia.org/wiki/Continental%20shelf
Continental shelf
A continental shelf is a portion of a continent that is submerged under an area of relatively shallow water, known as a shelf sea. Much of these shelves were exposed by drops in sea level during glacial periods. The shelf surrounding an island is known as an "insular shelf." The continental margin, between the continental shelf and the abyssal plain, comprises a steep continental slope, surrounded by the flatter continental rise, in which sediment from the continent above cascades down the slope and accumulates as a pile of sediment at the base of the slope. Extending as far as 500 km (310 mi) from the slope, it consists of thick sediments deposited by turbidity currents from the shelf and slope. The continental rise's gradient is intermediate between the gradients of the slope and the shelf. Under the United Nations Convention on the Law of the Sea, the name continental shelf was given a legal definition as the stretch of the seabed adjacent to the shores of a particular country to which it belongs. Topography The shelf usually ends at a point of increasing slope (called the shelf break). The sea floor below the break is the continental slope. Below the slope is the continental rise, which finally merges into the deep ocean floor, the abyssal plain. The continental shelf and the slope are part of the continental margin. The shelf area is commonly subdivided into the inner continental shelf, mid continental shelf, and outer continental shelf, each with their specific geomorphology and marine biology. The character of the shelf changes dramatically at the shelf break, where the continental slope begins. With a few exceptions, the shelf break is located at a remarkably uniform depth of roughly ; this is likely a hallmark of past ice ages, when sea level was lower than it is now. The continental slope is much steeper than the shelf; the average angle is 3°, but it can be as low as 1° or as high as 10°. The slope is often cut with submarine canyons. The physical mechanisms involved in forming these canyons were not well understood until the 1960s. Geographical distribution Continental shelves cover an area of about 27 million km2 (10 million sq mi), equal to about 7% of the surface area of the oceans. The width of the continental shelf varies considerably—it is not uncommon for an area to have virtually no shelf at all, particularly where the forward edge of an advancing oceanic plate dives beneath continental crust in an offshore subduction zone such as off the coast of Chile or the west coast of Sumatra. The largest shelf—the Siberian Shelf in the Arctic Ocean—stretches to in width. The South China Sea lies over another extensive area of continental shelf, the Sunda Shelf, which joins Borneo, Sumatra, and Java to the Asian mainland. Other familiar bodies of water that overlie continental shelves are the North Sea and the Persian Gulf. The average width of continental shelves is about . The depth of the shelf also varies, but is generally limited to water shallower than . The slope of the shelf is usually quite low, on the order of 0.5°; vertical relief is also minimal, at less than . Though the continental shelf is treated as a physiographic province of the ocean, it is not part of the deep ocean basin proper, but the flooded margins of the continent. Passive continental margins such as most of the Atlantic coasts have wide and shallow shelves, made of thick sedimentary wedges derived from long erosion of a neighboring continent. 
Active continental margins have narrow, relatively steep shelves, due to frequent earthquakes that move sediment to the deep sea. Sediments The continental shelves are covered by terrigenous sediments; that is, those derived from erosion of the continents. However, little of the sediment is from current rivers; some 60–70% of the sediment on the world's shelves is relict sediment, deposited during the last ice age, when sea level was 100–120 m lower than it is now. Sediments usually become increasingly fine with distance from the coast; sand is limited to shallow, wave-agitated waters, while silt and clays are deposited in quieter, deep water far offshore. These shelf sediments accumulate far more rapidly than deep-sea pelagic sediments. Shelf seas "Shelf seas" are the ocean waters on the continental shelf. Their motion is controlled by the combined influences of the tides, wind forcing and brackish water formed from river inflows (Regions of Freshwater Influence). These regions can often be biologically highly productive because of the mixing caused by the shallower waters and the enhanced current speeds. Despite covering only about 8% of Earth's ocean surface area, shelf seas support 15–20% of global primary productivity. In temperate continental shelf seas, three distinct oceanographic regimes are found, as a consequence of the interplay between surface heating, lateral buoyancy gradients (due to river inflow), and turbulent mixing by the tides and, to a lesser extent, the wind. In shallower water with stronger tides and away from river mouths, tidal turbulence overcomes the stratifying influence of surface heating, and the water column remains well mixed for the entire seasonal cycle. In contrast, in deeper water, surface heating wins out in summer, producing seasonal stratification with a warm surface layer overlying the isolated deep water. (The well-mixed and seasonally stratifying regimes are separated by persistent features called tidal mixing fronts.) A third regime, Regions of Freshwater Influence (ROFIs), links estuaries to shelf seas and is found where estuaries enter shelf seas, for example in the Liverpool Bay area of the Irish Sea and the Rhine outflow region of the North Sea. Here, stratification can vary on timescales from the semidiurnal tidal cycle through to the spring–neap tidal cycle, owing to a process known as "tidal straining". While the North Sea and Irish Sea are two of the better-studied shelf seas, they are not necessarily representative of all shelf seas, as there is a wide variety of behaviours to be found: Indian Ocean shelf seas are dominated by major river systems, including the Ganges and Indus rivers. The shelf seas around New Zealand are complicated because the submerged continent of Zealandia creates wide plateaus. Shelf seas around Antarctica and the shores of the Arctic Ocean are influenced by sea-ice production and polynyas. There is evidence that changing wind, rainfall, and regional ocean currents in a warming ocean are having an effect on some shelf seas. Improved data collection via Integrated Ocean Observing Systems in shelf sea regions is making identification of these changes possible. Biota Continental shelves teem with life because of the sunlight available in shallow waters, in contrast to the biotic desert of the oceans' abyssal plain. The pelagic (water column) environment of the continental shelf constitutes the neritic zone, and the benthic (sea floor) province of the shelf is the sublittoral zone.
The shelves make up less than 10% of the ocean, and a rough estimate suggests that only about 30% of the continental shelf sea floor receives enough sunlight to allow benthic photosynthesis. Though the shelves are usually fertile, if anoxic conditions prevail during sedimentation, the deposits may over geologic time become sources for fossil fuels. Economic significance The continental shelf is the best understood part of the ocean floor, as it is relatively accessible. Most commercial exploitation of the sea, such as extraction of metallic ore, non-metallic ore, and hydrocarbons, takes place on the continental shelf. Sovereign rights over their continental shelves down to a depth of or to a distance where the depth of waters admitted of resource exploitation were claimed by the marine nations that signed the Convention on the Continental Shelf drawn up by the UN's International Law Commission in 1958. This was partly superseded by the 1982 United Nations Convention on the Law of the Sea (UNCLOS). The 1982 convention created the exclusive economic zone, plus continental shelf rights for states with physical continental shelves that extend beyond that distance. The legal definition of a continental shelf differs significantly from the geological definition. UNCLOS states that the shelf extends to the limit of the continental margin, but no less than and no more than from the baseline. Thus inhabited volcanic islands such as the Canaries, which have no actual continental shelf, nonetheless have a legal continental shelf, whereas uninhabitable islands have no shelf.
Physical sciences
Oceanic and coastal landforms
null
416681
https://en.wikipedia.org/wiki/Reversible%20process%20%28thermodynamics%29
Reversible process (thermodynamics)
In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature. Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation. To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and air to settle into a new, matching temperature before the next small change can occur. While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible. Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all time, and there must be no dissipative effects, such as friction, for a process to be considered reversible. Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in corresponding real processes. Other applications exploit that entropy and internal energy are state functions whose change depends only on the initial and final states of the system, not on how the process occurred. Therefore, the entropy and internal-energy change in a real process can be calculated quite easily by analyzing a reversible process connecting the real initial and final system states. In addition, reversibility defines the thermodynamic condition for chemical equilibrium. Overview Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. An ideal thermodynamically reversible process is free of dissipative losses and therefore the magnitude of work performed by or on the system would be maximized. The incomplete conversion of heat to work in a cyclic process, however, applies to both reversible and irreversible cycles. The dependence of work on the path of the thermodynamic process is also unrelated to reversibility, since expansion work, which can be visualized on a pressure–volume diagram as the area beneath the equilibrium curve, is different for different reversible expansion processes (e.g. adiabatic, then isothermal; vs. isothermal, then adiabatic) connecting the same initial and final states. Irreversibility In an irreversible process, finite changes are made; therefore the system is not at equilibrium throughout the process. 
In a cyclic process, the difference between the reversible work and the actual work performed is the lost work, which is proportional to the entropy generated by the process: W_rev − W_actual = T_surr ΔS_total ≥ 0, where T_surr is the temperature of the surroundings and ΔS_total is the total entropy generated in the system and its surroundings. Boundaries and states Simple reversible processes change the state of a system in such a way that the net change in the combined entropy of the system and its surroundings is zero. (The entropy of the system alone is conserved only in reversible adiabatic processes.) Nevertheless, the Carnot cycle demonstrates that the state of the surroundings may change in a reversible process as the system returns to its initial state. Reversible processes define the boundaries of how efficient heat engines can be in thermodynamics and engineering: a reversible process is one where the machine has maximum efficiency (see Carnot cycle). In some cases, it may be important to distinguish between reversible and quasistatic processes. Reversible processes are always quasistatic, but the converse is not always true. For example, an infinitesimal compression of a gas in a cylinder where there is friction between the piston and the cylinder is a quasistatic but not reversible process. Although the system has been driven from its equilibrium state by only an infinitesimal amount, energy has been irreversibly lost to waste heat due to friction, and cannot be recovered by simply moving the piston in the opposite direction by the same infinitesimal amount. Engineering archaisms Historically, the term Tesla principle was used to describe (among other things) certain reversible processes invented by Nikola Tesla. However, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner. It was developed during Tesla's research on alternating currents, in which the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved and machinery fastened to the shaft was operated by the engine. If the turbine's operation was reversed, the disks acted as a pump.
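As a simple illustration of the maximum-work property noted in the overview above, the isothermal expansion of an ideal gas can be compared along a reversible path and along an irreversible path between the same two states. This is a standard textbook sketch using the usual ideal-gas notation, not a derivation specific to this article:

```latex
% Isothermal expansion of n moles of an ideal gas from V_1 to V_2 at temperature T.
% Reversible path: the external pressure matches the gas pressure at every instant.
W_{\mathrm{rev}} = \int_{V_1}^{V_2} P\,\mathrm{d}V = nRT \ln\frac{V_2}{V_1}
% Irreversible path: sudden expansion against a constant external pressure P_2 = nRT/V_2.
W_{\mathrm{irrev}} = P_2\,(V_2 - V_1) = nRT\left(1 - \frac{V_1}{V_2}\right)
% Since \ln x \ge 1 - 1/x for x \ge 1, the reversible path extracts the greater work:
W_{\mathrm{rev}} \ge W_{\mathrm{irrev}}
```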
Physical sciences
Thermodynamics
Physics
416765
https://en.wikipedia.org/wiki/Application%20checkpointing
Application checkpointing
Checkpointing is a technique that provides fault tolerance for computing systems. It involves saving a snapshot of an application's state, so that the application can restart from that point in case of failure. This is particularly important for long-running applications that are executed on failure-prone computing systems. Checkpointing in distributed systems In the distributed computing environment, checkpointing is a technique that helps tolerate failures that would otherwise force a long-running application to restart from the beginning. The most basic way to implement checkpointing is to stop the application, copy all the required data from memory to reliable storage (e.g., a parallel file system), and then continue with execution. In the case of failure, when the application restarts, it does not need to start from scratch. Rather, it reads the latest state ("the checkpoint") from the stable storage and executes from that point. While there is ongoing debate on whether checkpointing is the dominant I/O workload on distributed computing systems, the general consensus is that it is one of the major I/O workloads. There are two main approaches to checkpointing in distributed computing systems: coordinated checkpointing and uncoordinated checkpointing. In the coordinated approach, processes must ensure that their checkpoints are consistent; this is usually achieved by some kind of two-phase commit protocol. In uncoordinated checkpointing, each process checkpoints its own state independently. It must be stressed that simply forcing processes to checkpoint their state at fixed time intervals is not sufficient to ensure global consistency. The need to establish a consistent state (i.e., no missing or duplicated messages) may force other processes to roll back to their checkpoints, which in turn may cause yet other processes to roll back to even earlier checkpoints; in the most extreme case, the only consistent state found is the initial state (the so-called domino effect). Implementations for applications Save State One of the original, and now most common, means of application checkpointing was a "save state" feature in interactive applications, in which the user could save the state of all variables and other data, and either continue working or exit the application and later restart it, restoring the saved state. This was implemented through a "save" command or menu option in the application. In many cases, it became standard practice to ask users who had unsaved work when exiting an application whether they wanted to save it before doing so. This functionality became extremely important for usability in applications in which a particular task could not be completed in one sitting (such as playing a video game expected to take dozens of hours) or in which work was done over a long period of time (such as entering rows of data into a spreadsheet). The problem with save state is that it requires the operator of a program to request the save. For non-interactive programs, including automated or batch-processed workloads, the ability to checkpoint also had to be automated.
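The basic stop, save and resume cycle described above can be sketched in a few lines. The following Python example is purely illustrative: the file name, the layout of the saved state and the checkpoint interval are assumptions made for this sketch, not features of any particular checkpointing system.

```python
import json
import os

CHECKPOINT_FILE = "checkpoint.json"  # stand-in for reliable/stable storage

def load_checkpoint():
    """Return the last saved state, or a fresh initial state if none exists."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)
    return {"next_item": 0, "partial_result": 0}

def save_checkpoint(state):
    """Write the state atomically so a crash mid-write cannot corrupt it."""
    tmp = CHECKPOINT_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT_FILE)  # atomic rename

def run(total_items=1_000_000, checkpoint_every=10_000):
    state = load_checkpoint()           # resume from the latest checkpoint, if any
    for i in range(state["next_item"], total_items):
        state["partial_result"] += i    # placeholder for the real work
        state["next_item"] = i + 1
        if state["next_item"] % checkpoint_every == 0:
            save_checkpoint(state)      # periodic checkpoint to stable storage
    save_checkpoint(state)
    return state["partial_result"]

if __name__ == "__main__":
    print(run())
```

The checkpoint_every parameter embodies the cost trade-off discussed under Checkpoint/Restart below: checkpointing more often spends more time on I/O, while checkpointing rarely increases the amount of work that must be redone after a failure.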
Checkpoint/Restart As batch applications began to handle tens to hundreds of thousands of transactions, where each transaction might process one record from one file against several different files, the need for the application to be restartable at some point without rerunning the entire job from scratch became imperative. Thus the "checkpoint/restart" capability was born, in which, after a number of transactions had been processed, a "snapshot" or "checkpoint" of the state of the application could be taken. If the application failed before the next checkpoint, it could be restarted by giving it the checkpoint information and the last place in the transaction file where a transaction had successfully completed, and it would then resume processing from that point. Checkpointing tends to be expensive, so it was generally not done with every record, but at some reasonable compromise between the cost of a checkpoint and the value of the computer time needed to reprocess a batch of records. Thus the number of records processed between checkpoints might range from 25 to 200, depending on cost factors, the relative complexity of the application and the resources needed to successfully restart the application. Fault Tolerance Interface (FTI) FTI is a library that aims to provide computational scientists with an easy way to perform checkpoint/restart in a scalable fashion. FTI leverages local storage plus multiple replication and erasure techniques to provide several levels of reliability and performance. FTI provides application-level checkpointing that allows users to select which data needs to be protected, in order to improve efficiency and avoid waste of space, time and energy. It offers a direct data interface so that users do not need to deal with files and/or directory names. All metadata is managed by FTI in a way that is transparent to the user. If desired, users can dedicate one process per node to overlap the fault-tolerance workload with the scientific computation, so that post-checkpoint tasks are executed asynchronously. Berkeley Lab Checkpoint/Restart (BLCR) The Future Technologies Group at Lawrence Berkeley National Laboratory has developed a hybrid kernel/user implementation of checkpoint/restart called BLCR. Its goal is to provide a robust, production-quality implementation that checkpoints a wide range of applications without requiring changes to application code. BLCR focuses on checkpointing parallel applications that communicate through MPI, and on compatibility with the software suite produced by the SciDAC Scalable Systems Software ISIC. Its work is broken down into four main areas: Checkpoint/Restart for Linux (CR), Checkpointable MPI Libraries, Resource Management Interface to Checkpoint/Restart and Development of Process Management Interfaces. DMTCP DMTCP (Distributed MultiThreaded Checkpointing) is a tool for transparently checkpointing the state of an arbitrary group of programs spread across many machines and connected by sockets. It does not modify the user's program or the operating system. Among the applications supported by DMTCP are Open MPI, Python, Perl, and many other programming and shell-scripting languages. With the use of TightVNC, it can also checkpoint and restart X Window applications, as long as they do not use extensions (e.g. no OpenGL or video).
Among the Linux features supported by DMTCP are open file descriptors, pipes, sockets, signal handlers, process-id and thread-id virtualization (ensuring that old pids and tids continue to work upon restart), ptys, fifos, process group ids, session ids, terminal attributes, and mmap/mprotect (including mmap-based shared memory). DMTCP supports the OFED API for InfiniBand on an experimental basis. Collaborative checkpointing Some recent protocols perform collaborative checkpointing by storing fragments of the checkpoint on nearby nodes. This is helpful because it avoids the cost of writing to a parallel file system (which often becomes a bottleneck for large-scale systems) and uses storage that is closer to the computation. It has found use particularly in large-scale supercomputing clusters. The challenge is to ensure that, when a checkpoint is needed to recover from a failure, the nearby nodes holding fragments of that checkpoint are still available. Docker Docker and its underlying technology contain a checkpoint and restore mechanism. CRIU CRIU (Checkpoint/Restore In Userspace) is a user-space checkpoint and restore tool. Implementation for embedded and ASIC devices Mementos Mementos is a software system that transforms general-purpose tasks into interruptible programs for platforms with frequent interruptions such as power outages. It was designed for batteryless embedded devices such as RFID tags and smart cards, which rely on harvesting energy from ambient background sources. Mementos frequently senses the available energy in the system and decides whether to checkpoint the program because of impending power loss or to continue computation. If a checkpoint is taken, the data is stored in non-volatile memory. When the energy again becomes sufficient for reboot, the data is retrieved from non-volatile memory and the program continues from the stored state. Mementos has been implemented on the MSP430 family of microcontrollers. Mementos is named after Christopher Nolan's film Memento. Idetic Idetic is a set of automated tools that helps application-specific integrated circuit (ASIC) developers automatically embed checkpoints in their designs. It targets high-level synthesis tools and adds the checkpoints at the register-transfer level (Verilog code). It uses a dynamic programming approach to locate low-overhead points in the state machine of the design. Since checkpointing at the hardware level involves writing the data of the dependent registers to non-volatile memory, the optimum points are those that require the minimum number of registers to be stored. Idetic has been deployed and evaluated on an energy-harvesting RFID tag device.
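The coordinated checkpointing approach described earlier for distributed systems can also be sketched in miniature. The Python example below is an editorial illustration of the two-phase idea only; the Worker class, its methods and the simulated failure are assumptions of the sketch, not the interface of any real checkpointing library.

```python
import random

class Worker:
    """Toy process that can tentatively checkpoint, then commit or discard it."""
    def __init__(self, name):
        self.name = name
        self.committed = None     # last globally agreed checkpoint
        self.tentative = None     # checkpoint awaiting global agreement

    def prepare(self, step):
        # Phase 1: take a tentative checkpoint; pretend it can occasionally fail.
        if random.random() < 0.1:
            return False
        self.tentative = {"step": step, "state": f"{self.name}-state-{step}"}
        return True

    def commit(self):
        # Phase 2a: every worker prepared successfully, so keep the new checkpoint.
        self.committed = self.tentative
        self.tentative = None

    def abort(self):
        # Phase 2b: someone failed to prepare, so discard the tentative copy.
        self.tentative = None

def coordinated_checkpoint(workers, step):
    """Commit the checkpoint only if every worker managed to prepare one."""
    if all(w.prepare(step) for w in workers):
        for w in workers:
            w.commit()
        return True
    for w in workers:
        w.abort()
    return False

workers = [Worker(f"p{i}") for i in range(4)]
print("checkpoint committed:", coordinated_checkpoint(workers, step=1))
```

A real implementation would exchange the prepare and commit messages over the network and would also have to account for messages in flight, so that the saved states of all processes are mutually consistent.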
Technology
Software development: General
null
416954
https://en.wikipedia.org/wiki/Viral%20evolution
Viral evolution
Viral evolution is a subfield of evolutionary biology and virology that is specifically concerned with the evolution of viruses. Viruses have short generation times, and many, in particular RNA viruses, have relatively high mutation rates (on the order of one point mutation or more per genome per round of replication). Although most viral mutations confer no benefit, and often even prove deleterious to the virus, the rapid rate of viral mutation combined with natural selection allows viruses to quickly adapt to changes in their host environment. In addition, because viruses typically produce many copies in an infected host, mutated genes can be passed on to many offspring quickly. Although the likelihood of mutation varies with the type of virus (e.g., double-stranded DNA, double-stranded RNA, or single-stranded DNA), viruses overall mutate readily. Viral evolution is an important aspect of the epidemiology of viral diseases such as influenza (influenza virus), AIDS (HIV), and hepatitis (e.g. HCV). The rapidity of viral mutation also causes problems in the development of successful vaccines and antiviral drugs, as resistant mutants often appear within weeks or months after the beginning of a treatment. One of the main theoretical models applied to viral evolution is the quasispecies model, which defines a viral quasispecies as a group of closely related viral strains competing within an environment. Origins Three classical hypotheses Viruses are ancient. Studies at the molecular level have revealed relationships between viruses infecting organisms from each of the three domains of life, suggesting that some viral proteins pre-date the divergence of life and that viruses may therefore have infected the last universal common ancestor. This indicates that some viruses emerged early in the evolution of life, and that they have probably arisen multiple times. It has been suggested that new groups of viruses have repeatedly emerged at all stages of evolution, often through the displacement of ancestral structural and genome-replication genes. There are three main hypotheses that aim to explain the origins of viruses: Regressive hypothesis Viruses may have once been small cells that parasitised larger cells. Over time, genes not required by their parasitism were lost. The bacteria Rickettsia and Chlamydia are living cells that, like viruses, can reproduce only inside host cells. They lend support to this hypothesis, as their dependence on parasitism is likely to have caused the loss of genes that enabled them to survive outside a cell. This is also called the "degeneracy hypothesis", or "reduction hypothesis". Cellular origin hypothesis Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids (pieces of naked DNA that can move between cells) or transposons (molecules of DNA that replicate and move around to different positions within the genes of the cell). Once called "jumping genes", transposons are examples of mobile genetic elements and could be the origin of some viruses. They were discovered in maize by Barbara McClintock in 1950. This is sometimes called the "vagrancy hypothesis", or the "escape hypothesis". Co-evolution hypothesis This is also called the "virus-first hypothesis" and proposes that viruses may have evolved from complex molecules of protein and nucleic acid at the same time that cells first appeared on Earth, and would have been dependent on cellular life for billions of years.
Viroids are molecules of RNA that are not classified as viruses because they lack a protein coat. They have characteristics that are common to several viruses and are often called subviral agents. Viroids are important pathogens of plants. They do not code for proteins but interact with the host cell and use the host machinery for their replication. The hepatitis delta virus of humans has an RNA genome similar to viroids but has a protein coat derived from hepatitis B virus and cannot produce one of its own. It is, therefore, a defective virus. Although hepatitis delta virus genome may replicate independently once inside a host cell, it requires the help of hepatitis B virus to provide a protein coat so that it can be transmitted to new cells. In similar manner, the sputnik virophage is dependent on mimivirus, which infects the protozoan Acanthamoeba castellanii. These viruses, which are dependent on the presence of other virus species in the host cell, are called "satellites" and may represent evolutionary intermediates of viroids and viruses. Later hypotheses Chimeric-origins hypothesis: Based on the analyses of the evolution of the replicative and structural modules of viruses, a chimeric scenario for the origin of viruses was proposed in 2019. According to this hypothesis, the replication modules of viruses originated from the primordial genetic pool, although the long course of their subsequent evolution involved many displacements by replicative genes from their cellular hosts. By contrast, the genes encoding major structural proteins evolved from functionally diverse host proteins throughout the evolution of the virosphere. This scenario is distinct from each of the three traditional scenarios but combines features of the Virus-first and Escape hypotheses. One of the problems for studying viral origins and evolution is the high rate of viral mutation, particularly the case in RNA retroviruses like HIV/AIDS. A recent study based on comparisons of viral protein folding structures, however, is offering some new evidence. Fold Super Families (FSFs) are proteins that show similar folding structures independent of the actual sequence of amino acids, and have been found to show evidence of viral phylogeny. The proteome of a virus, the viral proteome, still contains traces of ancient evolutionary history that can be studied today. The study of protein FSFs suggests the existence of ancient cellular lineages common to both cells and viruses before the appearance of the 'last universal cellular ancestor' that gave rise to modern cells. Evolutionary pressure to reduce genome and particle size may have eventually reduced viro-cells into modern viruses, whereas other coexisting cellular lineages eventually evolved into modern cells. Furthermore, the long genetic distance between RNA and DNA FSFs suggests that the RNA world hypothesis may have new experimental evidence, with a long intermediary period in the evolution of cellular life. Definitive exclusion of a hypothesis on the origin of viruses is difficult to make on Earth given the ubiquitous interactions between viruses and cells, and the lack of availability of rocks that are old enough to reveal traces of the earliest viruses on the planet. 
From an astrobiological perspective, it has therefore been proposed that on celestial bodies such as Mars not only cells but also traces of former virions or viroids should be actively searched for: possible findings of traces of virions in the apparent absence of cells could provide support for the virus-first hypothesis. Evolution Viruses do not form fossils in the traditional sense, because they are much smaller than the finest colloidal fragments forming sedimentary rocks that fossilize plants and animals. However, the genomes of many organisms contain endogenous viral elements (EVEs). These DNA sequences are the remnants of ancient virus genes and genomes that ancestrally 'invaded' the host germline. For example, the genomes of most vertebrate species contain hundreds to thousands of sequences derived from ancient retroviruses. These sequences are a valuable source of retrospective evidence about the evolutionary history of viruses, and have given birth to the science of paleovirology. The evolutionary history of viruses can to some extent be inferred from analysis of contemporary viral genomes. The mutation rates for many viruses have been measured, and application of a molecular clock allows dates of divergence to be inferred. Viruses evolve through changes in their RNA (or DNA), some quite rapidly, and the best adapted mutants quickly outnumber their less fit counterparts. In this sense their evolution is Darwinian. The way viruses reproduce in their host cells makes them particularly susceptible to the genetic changes that help to drive their evolution. The RNA viruses are especially prone to mutations. In host cells there are mechanisms for correcting mistakes when DNA replicates and these kick in whenever cells divide. These important mechanisms prevent potentially lethal mutations from being passed on to offspring. But these mechanisms do not work for RNA and when an RNA virus replicates in its host cell, changes in their genes are occasionally introduced in error, some of which are lethal. One virus particle can produce millions of progeny viruses in just one cycle of replication, therefore the production of a few "dud" viruses is not a problem. Most mutations are "silent" and do not result in any obvious changes to the progeny viruses, but others confer advantages that increase the fitness of the viruses in the environment. These could be changes to the virus particles that disguise them so they are not identified by the cells of the immune system or changes that make antiviral drugs less effective. Both of these changes occur frequently with HIV. Many viruses (for example, influenza A virus) can "shuffle" their genes with other viruses when two similar strains infect the same cell. This phenomenon is called genetic shift, and is often the cause of new and more virulent strains appearing. Other viruses change more slowly as mutations in their genes gradually accumulate over time, a process known as antigenic drift. Through these mechanisms new viruses are constantly emerging and present a continuing challenge in attempts to control the diseases they cause. Most species of viruses are now known to have common ancestors, and although the "virus first" hypothesis has yet to gain full acceptance, there is little doubt that the thousands of species of modern viruses have evolved from less numerous ancient ones. The morbilliviruses, for example, are a group of closely related, but distinct viruses that infect a broad range of animals. 
The group includes measles virus, which infects humans and primates; canine distemper virus, which infects many animals including dogs, cats, bears, weasels and hyaenas; rinderpest, which infected cattle and buffalo; and other viruses of seals, porpoises and dolphins. Although it is not possible to prove which of these rapidly evolving viruses is the earliest, for such a closely related group of viruses to be found in such diverse hosts suggests the possibility that their common ancestor is ancient. Bacteriophage Escherichia virus T4 (phage T4) is a species of bacteriophage that infects Escherichia coli bacteria. It is a double-stranded DNA virus in the family Myoviridae. Phage T4 is an obligate intracellular parasite that reproduces within the host bacterial cell and its progeny are released when the host is destroyed by lysis. The complete genome sequence of phage T4 encodes about 300 gene products. These virulent viruses are among the largest, most complex viruses that are known and one of the best studied model organisms. They have played a key role in the development of virology and molecular biology. The numbers of reported genetic homologies between phage T4 and bacteria and between phage T4 and eukaryotes are similar suggesting that phage T4 shares ancestry with both bacteria and eukaryotes and has about equal similarity to each. Phage T4 may have diverged in evolution from a common ancestor of bacteria and eukaryotes or from an early evolved member of either lineage. Most of the phage genes showing homology with bacteria and eukaryotes encode enzymes acting in the ubiquitous processes of DNA replication, DNA repair, recombination and nucleotide synthesis. These processes likely evolved very early. The adaptive features of the enzymes catalyzing these early processes may have been maintained in the phage T4, bacterial, and eukaryotic lineages because they were established well-tested solutions to basic functional problems by the time these lineages diverged. Transmission Viruses have been able to continue their infectious existence due to evolution. Their rapid mutation rates and natural selection has given viruses the advantage to continue to spread. One way that viruses have been able to spread is with the evolution of virus transmission. The virus can find a new host through: Droplet transmission- passed on through body fluids (sneezing on someone) An example is the influenza virus Airborne transmission- passed on through the air (brought in by breathing) An example would be how viral meningitis is passed on Vector transmission- picked up by a carrier and brought to a new host An example is viral encephalitis Waterborne transmission- leaving a host, infecting the water, and being consumed in a new host Poliovirus is an example for this Sit-and-wait-transmission- the virus is living outside a host for long periods of time The smallpox virus is also an example for this Virulence, or the harm that the virus does on its host, depends on various factors. In particular, the method of transmission tends to affect how the level of virulence will change over time. Viruses that transmit through vertical transmission (transmission to the offspring of the host) will evolve to have lower levels of virulence. Viruses that transmit through horizontal transmission (transmission between members of the same species that don't have a parent-child relationship) will usually evolve to have a higher virulence.
Biology and health sciences
Basics_4
Biology
417014
https://en.wikipedia.org/wiki/Passive%20transport
Passive transport
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and osmosis. Diffusion Diffusion is the net movement of material from an area of high concentration to an area of lower concentration. The difference in concentration between the two areas is often termed the concentration gradient, and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport, which often moves material from an area of low concentration to an area of higher concentration, and is therefore referred to as moving the material "against the concentration gradient"). However, in many cases (e.g. passive drug transport) the driving force of passive transport cannot be simplified to the concentration gradient. If there are different solutions at the two sides of the membrane with different equilibrium solubility of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. The same is true for supersaturated solutions, which are increasingly important owing to the widespread application of amorphous solid dispersions for drug bioavailability enhancement. Simple diffusion and osmosis are in some ways similar. Simple diffusion is the passive movement of solute from a high concentration to a lower concentration until the concentration of the solute is uniform throughout and reaches equilibrium. Osmosis is much like simple diffusion, but it specifically describes the movement of water (not the solute) across a selectively permeable membrane until there is an equal concentration of water and solute on both sides of the membrane. Simple diffusion and osmosis are both forms of passive transport and require none of the cell's ATP energy. Speed of diffusion For passive diffusion, the law of diffusion states that the mean squared displacement after a time t is 2dDt, with d being the number of dimensions and D the diffusion coefficient. So to diffuse a distance L takes a time of roughly t = L²/(2dD), and the "average speed" L/t = 2dD/L falls as the distance grows. This means that in the same physical environment, diffusion is fast when the distance is small, but slow when the distance is large. This can be seen in material transport within the cell. Prokaryotes typically have small bodies, allowing diffusion to suffice for material transport within the cell. Larger cells, such as eukaryotic cells, must either have a very low metabolic rate to accommodate the slowness of diffusion, or invest in complex cellular machinery to allow active transport within the cell, such as kinesin walking along microtubules.
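The scaling above can be made concrete with a small calculation. The following sketch is illustrative only: the diffusion coefficient is an assumed order-of-magnitude value for a small solute in cytoplasm, not a figure taken from the text.

```python
# Time to diffuse a distance L in three dimensions: t = L**2 / (2 * d * D).
# D is an assumed order-of-magnitude diffusion coefficient for a small solute
# in cytoplasm (illustrative only).

D = 1e-10   # diffusion coefficient, m^2/s (assumed)
d = 3       # number of spatial dimensions

def diffusion_time(L: float) -> float:
    """Seconds needed for the mean squared displacement to reach L**2."""
    return L ** 2 / (2 * d * D)

print(diffusion_time(1e-6))   # ~1 micrometre (bacterial scale): about 0.002 s
print(diffusion_time(1e-4))   # ~100 micrometres (large-cell scale): about 17 s
```

Because the time grows with the square of the distance, a hundredfold increase in distance means a ten-thousandfold increase in diffusion time, which is why larger cells rely on active intracellular transport.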
Example of diffusion: gas exchange A biological example of diffusion is the gas exchange that occurs during respiration within the human body. Upon inhalation, oxygen is brought into the lungs and quickly diffuses across the membranes of the alveoli, entering the circulatory system by diffusing across the membranes of the pulmonary capillaries. Simultaneously, carbon dioxide moves in the opposite direction, diffusing across the membranes of the capillaries and entering the alveoli, where it can be exhaled. The process of moving oxygen into the cells, and carbon dioxide out, occurs because of the concentration gradient of these substances, each moving away from its respective area of higher concentration toward an area of lower concentration. Cellular respiration is the cause of the low concentration of oxygen and high concentration of carbon dioxide within the blood, which creates the concentration gradient. Because the gases are small and uncharged, they are able to pass directly through the cell membrane without any special membrane proteins. No energy is required because the movement of the gases follows Fick's first law and the second law of thermodynamics. Facilitated diffusion Facilitated diffusion, also called carrier-mediated osmosis, is the movement of molecules across the cell membrane via special transport proteins that are embedded in the plasma membrane. Through facilitated diffusion, no cellular energy is required for molecules to pass through the cell membrane. Active transport of protons by H+ ATPases alters membrane potential, allowing for facilitated passive transport of particular ions such as potassium down their charge gradient through high-affinity transporters and channels. Example of facilitated diffusion: GLUT2 An example of facilitated diffusion is when glucose is absorbed into cells through Glucose transporter 2 (GLUT2) in the human body. There are many other types of glucose transport proteins, some of which do require energy and are therefore not examples of passive transport. Since glucose is a large molecule, it requires a specific channel to facilitate its entry across plasma membranes and into cells. When diffusing into a cell through GLUT2, the driving force that moves glucose into the cell is the concentration gradient. The main difference between simple diffusion and facilitated diffusion is that facilitated diffusion requires a transport protein to 'facilitate' or assist the substance through the membrane. After a meal, the cells lining the intestines, called enterocytes, are signaled to move GLUT2 into their membranes. With GLUT2 in place after a meal and the relatively high concentration of glucose outside these cells as compared to within them, the concentration gradient drives glucose across the cell membrane through GLUT2. Filtration Filtration is the movement of water and solute molecules across the cell membrane due to hydrostatic pressure generated by the cardiovascular system. Depending on the size of the membrane pores, only solutes of a certain size may pass through. For example, the membrane pores of the Bowman's capsule in the kidneys are very small, and only albumins, the smallest of the proteins, have any chance of being filtered through. On the other hand, the membrane pores of liver cells are extremely large, allowing a variety of solutes to pass through and be metabolized.
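Returning to the gas-exchange example above, the flux described by Fick's first law can be sketched numerically. All parameter values below are hypothetical placeholders chosen for illustration, not physiological measurements of the alveolar wall.

```python
# Fick's first law: flux J = -D * (dC/dx), so net movement is proportional to the
# concentration gradient and runs from the high-concentration side to the low side.
# Every number here is a hypothetical placeholder.

def fick_flux(D: float, c_high: float, c_low: float, thickness: float) -> float:
    """Steady-state flux (mol per m^2 per s) across a membrane of given thickness (m)."""
    gradient = (c_low - c_high) / thickness   # dC/dx is negative when c_high > c_low
    return -D * gradient                      # flux is positive toward the low side

# Assumed values: D = 2e-9 m^2/s, concentrations 0.2 and 0.05 mol/m^3, 1 micrometre membrane.
print(fick_flux(2e-9, 0.2, 0.05, 1e-6))   # ~3e-4 mol per m^2 per s toward the low side
```

Halving the membrane thickness or doubling the concentration difference doubles the flux, which is why thin alveolar walls and the steep gradients maintained by cellular respiration make gas exchange efficient.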
Osmosis Osmosis is the net movement of water molecules across a selectively permeable membrane from an area of high water potential to an area of low water potential. A cell with a more negative (lower) water potential than its surroundings will draw in water, but this also depends on other factors, such as solute potential (the effect of dissolved solute molecules) and pressure potential (physical pressure, for example from the cell wall). There are three types of solution relative to the cell: isotonic, hypotonic, and hypertonic. An isotonic solution is one in which the extracellular solute concentration is balanced with the concentration inside the cell. In an isotonic solution, water molecules still move between the two solutions, but the rates are the same in both directions, so there is no net water movement between the inside and the outside of the cell. A hypotonic solution is one in which the solute concentration outside the cell is lower than the concentration inside the cell. In hypotonic solutions, water moves into the cell, down its concentration gradient (from higher to lower water concentration). This can cause the cell to swell; cells that lack a cell wall, such as animal cells, may burst in this solution. A hypertonic solution is one in which the solute concentration outside the cell is higher than the concentration inside the cell. In a hypertonic solution, water moves out of the cell, causing it to shrink.
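The three cases above map directly onto a comparison of solute concentrations inside and outside the cell. The short sketch below encodes that comparison; the concentration values in the example calls are invented for illustration.

```python
# Classify a surrounding solution relative to a cell and predict net water movement.
# Concentrations are total solute concentrations; the example values are invented.

def tonicity(outside_solute: float, inside_solute: float) -> str:
    if outside_solute > inside_solute:
        return "hypertonic: net water movement out of the cell (cell shrinks)"
    if outside_solute < inside_solute:
        return "hypotonic: net water movement into the cell (cell swells, may burst)"
    return "isotonic: water moves equally in both directions (no net change)"

print(tonicity(0.30, 0.10))   # hypertonic
print(tonicity(0.05, 0.10))   # hypotonic
print(tonicity(0.10, 0.10))   # isotonic
```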
Biology and health sciences
Cell processes
Biology
417048
https://en.wikipedia.org/wiki/Ground%20beetle
Ground beetle
Ground beetles are a large, cosmopolitan family of beetles, the Carabidae, with more than 40,000 species worldwide, around 2,000 of which are found in North America and 2,700 in Europe. As of 2015, it is one of the 10 most species-rich animal families. They belong to the Adephaga. Members of the family are primarily carnivorous, but some members are herbivorous or omnivorous. Description and ecology Although their body shapes and coloring vary somewhat, most are shiny black or metallic and have ridged wing covers (elytra). The elytra are fused in some species, particularly the large Carabinae, rendering the beetles unable to fly. The species Mormolyce phyllodes is known as violin beetle due to their peculiarly shaped elytra. All carabids except the quite primitive flanged bombardier beetles (Paussinae) have a groove on their fore leg tibiae bearing a comb of hairs used for cleaning their antennae. Defensive secretions Typical for the ancient beetle suborder Adephaga to which they belong, they have paired pygidial glands in the lower back of the abdomen. These are well developed in ground beetles, and produce noxious or even caustic secretions used to deter would-be predators. In some, commonly known as bombardier beetles, these secretions are mixed with volatile compounds and ejected by a small combustion, producing a loud popping sound and a cloud of hot and acrid gas that can injure small mammals, such as shrews, and is liable to kill invertebrate predators outright. To humans, getting "bombed" by a bombardier beetle is a decidedly unpleasant experience. This ability has evolved independently twice, as it seems, in the flanged bombardier beetles (Paussinae), which are among the most ancient ground beetles, and in the typical bombardier beetles (Brachininae), which are part of a more "modern" lineage. The Anthiini, though, can mechanically squirt their defensive secretions for considerable distances and are able to aim with a startling degree of accuracy; in Afrikaans, they are known as ("eye-pissers"). In one of the very few known cases of a vertebrate mimicking an arthropod, juvenile Heliobolus lugubris lizards are similar in color to the aposematic oogpister beetles, and move in a way that makes them look surprisingly similar to the insects at a casual glance. A folk story claims that Charles Darwin once found himself on the receiving end of a bombardier beetle's attack, based on a passage in his autobiography. Darwin stated in a letter to Leonard Jenyns that a beetle had attacked him on that occasion, but he did not know what kind: A Cychrus rostratus once squirted into my eye & gave me extreme pain; & I must tell you what happened to me on the banks of the Cam in my early entomological days; under a piece of bark I found two carabi (I forget which) & caught one in each hand, when lo & behold I saw a sacred Panagæus crux major; I could not bear to give up either of my Carabi, & to lose Panagæus was out of the question, so that in despair I gently seized one of the carabi between my teeth, when to my unspeakable disgust & pain the little inconsiderate beast squirted his acid down my throat & I lost both Carabi & Panagæus! Ecology Common habitats are under the bark of trees, under logs, or among rocks or sand by the edge of ponds and rivers. Most species are carnivorous and actively hunt for any invertebrate prey they can overpower. 
Some run swiftly to catch their prey; tiger beetles (Cicindelinae) can sustain speeds of – in relation to their body length they are among the fastest land animals on Earth. Unlike most Carabidae, which are nocturnal, the tiger beetles are active diurnal hunters and often brightly coloured; they have large eyes and hunt by sight. Ground beetles of the genus Promecognathus are specialised predators of the cyanide millipedes Harpaphe haydeniana and Xystocheir dissecta, countering the hydrogen cyanide that makes these millipedes poisonous to most carnivores. Relationship with humans As predators of invertebrates, including many pests, most ground beetles are considered beneficial organisms. The caterpillar hunters (Calosoma) are famous for their habit of devouring prey in quantity, eagerly feeding on tussock moth (Lymantriinae) caterpillars, processionary caterpillars (Thaumetopoeinae) and woolly worms (Arctiinae), which, due to their urticating hairs, are avoided by most insectivores. Large numbers of the forest caterpillar hunter (C. sycophanta), native to Europe, were shipped to New England for biological control of the gypsy moth (Lymantria dispar) as early as 1905. A few species are nuisance pests. Zabrus is one of the few herbivorous ground beetle genera, and on rare occasions Zabrus tenebrioides, for example, occurs abundantly enough to cause some damage to grain crops. Large species, usually the Carabinae, can become a nuisance if present in large numbers, particularly during outdoor activities such as camping; they void their defensive secretions when threatened, and in hiding among provisions, their presence may spoil food. Since ground beetles are generally reluctant or even unable to fly, mechanically blocking their potential routes of entry is usually easy. The use of insecticides specifically for carabid intrusion may lead to unfortunate side effects, such as the release of their secretions, so it generally is not a good idea unless the same applications are intended to exclude ants, parasites or other crawling pests. Especially in the 19th century and to a lesser extent today, their large size and conspicuous coloration, as well as the odd morphology of some (e.g. the Lebiini), made many ground beetles a popular object of collection and study for professional and amateur coleopterologists. High prices were paid for rare and exotic specimens, and in the early to mid-19th century, a veritable "beetle craze" occurred in England. As mentioned above, Charles Darwin was an ardent collector of beetles when he was about 20 years old, to the extent that he would rather scour the countryside for rare specimens with William Darwin Fox, John Stevens Henslow, and Henry Thompson than to study theology as his father wanted him to do. In his autobiography, he fondly recalled his experiences with Licinus and Panagaeus, and wrote: No poet ever felt more delight at seeing his first poem published than I did at seeing in Stephen's Illustrations of British Insects the magic words, "captured by C. Darwin, Esq." Evolution and systematics The Adephaga are documented since the end of the Permian, about (Mya). Ground beetles evolved in the latter Triassic, having separated from their closest relatives by 200 Mya. The family diversified throughout the Jurassic, and the more advanced lineages, such as the Harpalinae, underwent a vigorous radiation starting in the Cretaceous. 
The closest living relatives of the ground beetles are the false ground beetles (Trachypachidae) and the tiger beetles (Cicindelidae). They are sometimes even included in the Carabidae as subfamilies or as tribes incertae sedis, but preferably they are united with the ground beetles in the superfamily Caraboidea, or Geadephaga. Much research has been done on elucidating the phylogeny of the ground beetles and adjusting systematics and taxonomy accordingly. While no completely firm consensus exists, a few points are generally accepted: the ground beetles seemingly consist of a number of more basal lineages and the extremely diverse Harpalinae, which contain over half the described species and into which several formerly independent families had to be subsumed. Subfamilies The taxonomy used here is primarily based on the Catalogue of Life and the Carabcat Database. Other classifications, while generally agreeing with the division into a basal radiation of more primitive lineages and the more advanced group informally called "Carabidae Conjunctae", differ in details. For example, the system used by the Tree of Life Web Project makes little use of subfamilies, listing most tribes as incertae sedis as to subfamily. Fauna Europaea, though, splits rather than lumps the Harpalinae, restricting them to what in the system used here is the tribe Harpalini. The exclusion of Trachypachidae as a separate family is now amply supported, as is the inclusion of Rhysodidae as a subfamily, closely related to Paussinae and Siagoninae. The exclusive Harpalinae is presented here, because the majority of authors presently use this system, following the Carabidae of the World, the Catalogue of Palaearctic Coleoptera, or the Carabcat Database (which is reflected in the Catalogue of Life). Tiger beetles have historically been treated as a subfamily of Carabidae under the name Cicindelinae, but several studies since 2020 have indicated that they should be treated as a family, Cicindelidae, a sister group to Carabidae.
Anthiinae Bonelli, 1813 Tribe Anthiini Bonelli, 1813 Tribe Helluonini Hope, 1838 Tribe Physocrotaphini Chaudoir, 1863 Apotominae LeConte, 1853 Brachininae Bonelli, 1810 Tribe Brachinini Bonelli, 1810 Tribe Crepidogastrini Jeannel, 1949 Broscinae Hope, 1838 Tribe Broscini Hope, 1838 Carabinae Linnaeus, 1802 Tribe Carabini Linnaeus, 1802 Tribe Cychrini Perty, 1830 Ctenodactylinae Laporte, 1834 Tribe Ctenodactylini Laporte, 1834 Tribe Hexagoniini G.Horn, 1881 Dryptinae Bonelli, 1810 Tribe Dryptini Bonelli, 1810 Tribe Galeritini Kirby, 1825 Tribe Zuphiini Bonelli, 1810 Elaphrinae Latreille, 1802 Gineminae Ball & Shpeley, 2002 Harpalinae Bonelli, 1810 Tribe Anisodactylini Lacordaire, 1854 Tribe Harpalini Bonelli, 1810 Tribe Pelmatellini Bates, 1882 Tribe Stenolophini Kirby, 1837 Hiletinae Schiödte, 1847 Lebiinae Bonelli, 1810 Tribe Cyclosomini Laporte, 1834 Tribe Lachnophorini LeConte, 1853 Tribe Lebiini Bonelli, 1810 Tribe Odacanthini Laporte, 1834 Tribe Perigonini G.Horn, 1881 Licininae Bonelli, 1810 Tribe Chaetogenyini Emden, 1958 Tribe Chlaeniini Brullé, 1834 Tribe Licinini Bonelli, 1810 Tribe Oodini LaFerté-Sénectère, 1851 Loricerinae Bonelli, 1810 Melaeninae Csiki, 1933 Migadopinae Chaudoir, 1861 Tribe Amarotypini Erwin, 1985 Tribe Migadopini Chaudoir, 1861 Nebriinae Laporte, 1834 Tribe Cicindini Csiki, 1927 Tribe Nebriini Laporte, 1834 Tribe Notiokasiini Kavanaugh & Nègre, 1983 Tribe Notiophilini Motschulsky, 1850 Tribe Opisthiini Dupuis, 1912 Tribe Pelophilini Kavanaugh, 1996 Nototylinae Bänninger, 1927 Omophroninae Bonelli, 1810 Orthogoniinae Schaum, 1857 Tribe Amorphomerini Sloane, 1923 Tribe Idiomorphini Bates, 1891 Tribe Orthogoniini Schaum, 1857 Panagaeinae Bonelli, 1810 Tribe Brachygnathini Basilewsky, 1946 Tribe Panagaeini Bonelli, 1810 Tribe Peleciini Chaudoir, 1880 Patrobinae Kirby, 1837 Tribe Lissopogonini Zamotajlov, 2000 Tribe Patrobini Kirby, 1837 Paussinae Latreille, 1806 Tribe Metriini LeConte, 1853 Tribe Ozaenini Hope, 1838 Tribe Paussini Latreille, 1806 Tribe Protopaussini Gestro, 1892 Platyninae Bonelli, 1810 Tribe Omphreini Ganglbauer, 1891 Tribe Platynini Bonelli, 1810 Tribe Sphodrini Laporte, 1834 Promecognathinae LeConte, 1853 Tribe Axinidiini Basilewsky, 1963 Tribe Dalyatini Mateu, 2002 Tribe Promecognathini LeConte, 1853 Tribe †Palaeoaxinidiini McKay, 1991 Pseudomorphinae Hope, 1838 Psydrinae LeConte, 1853 Tribe Gehringiini Darlington, 1933 Tribe Moriomorphini Sloane, 1890 Tribe Psydrini LeConte, 1853 Pterostichinae Bonelli, 1810 Tribe Chaetodactylini Tschitscherine, 1903 Tribe Cnemalobini Germain, 1911 Tribe Cratocerini Lacordaire, 1854 Tribe Microcheilini Jeannel, 1948 Tribe Morionini Brullé, 1837 Tribe Pterostichini Bonelli, 1810 Tribe Zabrini Bonelli, 1810 Rhysodinae Laporte, 1840 Tribe Clinidiini R.T. & J.R.Bell, 1978 Tribe Dhysorini R.T. & J.R.Bell, 1978 Tribe Leoglymmiini R.T. & J.R.Bell, 1978 Tribe Medisorini R.T. & J.R.Bell, 1987 Tribe Omoglymmiini R.T. & J.R.Bell, 1978 Tribe Rhysodini Laporte, 1840 Tribe Sloanoglymmiini R.T. 
& J.R.Bell, 1991 Scaritinae Bonelli, 1810 Tribe Clivinini Rafinesque, 1815 Tribe Corintascarini Basilewsky, 1973 Tribe Dyschiriini Kolbe, 1880 Tribe Salcediini Alluaud, 1930 Tribe Scaritini Bonelli, 1810 Siagoninae Bonelli, 1813 Tribe Enceladini G.Horn, 1881 Tribe Siagonini Bonelli, 1813 Trechinae Bonelli, 1810 Tribe Bembidarenini Maddison et al., 2019 Tribe Bembidiini Stephens, 1827 Tribe Pogonini Laporte, 1834 Tribe Sinozolini Deuve, 1997 Tribe Trechini Bonelli, 1810 Tribe Zolini Sharp, 1886 Xenaroswellianinae Erwin, 2007 †Conjunctiinae Ponomarenko, 1977 †Protorabinae Ponomarenko, 1977 Unassigned, extinct genera: †Agatoides Motschulsky, 1856 †Amphoxyne Bode, 1953 †Carabites Heer, 1852 †Cavicarabus Hong, 1991 †Conexicoxa Lin, 1986 †Cymatopterus Lomnicki, 1894 †Fangshania Hong, 1981 †Glenopterus Heer, 1847 †Hebeicarabus Hong, 1983 †Megacarabus Hong, 1983 †Meileyingia Hong, 1987 †Miocarabus Hong, 1983 †Neothanes Scudder, 1890 †Procarabus Oppenheim, 1888 †Prosynactus Bode, 1953 †Shanwangicarabus Hong, 1985 †Sinis Heer, 1862 †Sinocalosoma Hong & Wang, 1986 †Sinocaralosoma Hong, 1984 †Sunocarabus Hong, 1987 †Tauredon Handlirsch, 1910 †Wuchangicarabus Hong, 1991 †Xishanocarabus Hong, 1984 †Yunnanocarabus Lin, 1977
Biology and health sciences
Beetles (Coleoptera)
Animals
417068
https://en.wikipedia.org/wiki/Anesthesiology
Anesthesiology
Anesthesiology, anaesthesiology or anaesthesia is the medical specialty concerned with the total perioperative care of patients before, during and after surgery. It encompasses anesthesia, intensive care medicine, critical emergency medicine, and pain medicine. A physician specialized in anesthesiology is called an anesthesiologist, anaesthesiologist, or anaesthetist, depending on the country. In some countries, the terms are synonymous, while in other countries, they refer to different positions and anesthetist is only used for non-physicians, such as nurse anesthetists. The core element of the specialty is the prevention and mitigation of pain and distress using various anesthetic agents, as well as the monitoring and maintenance of a patient's vital functions throughout the perioperative period. Since the 19th century, anesthesiology has developed from an experimental area with non-specialist practitioners using novel, untested drugs and techniques into what is now a highly refined, safe and effective field of medicine. In some countries anesthesiologists comprise the largest single cohort of doctors in hospitals, and their role can extend far beyond the traditional role of anesthesia care in the operating room, including fields such as providing pre-hospital emergency medicine, running intensive care units, transporting critically ill patients between facilities, management of hospice and palliative care units, and prehabilitation programs to optimize patients for surgery. Scope As a specialty, the core element of anesthesiology is the practice of anesthesia. This comprises the use of various injected and inhaled medications to produce a loss of sensation in patients, making it possible to carry out procedures that would otherwise cause intolerable pain or be technically unfeasible. Safe anesthesia requires in-depth knowledge of various invasive and non-invasive organ support techniques that are used to control patients' vital functions while under the effects of anaesthetic drugs; these include advanced airway management, invasive and non-invasive hemodynamic monitors, and diagnostic techniques like ultrasonography and echocardiography. Anesthesiologists are expected to have expert knowledge of human physiology, medical physics, and pharmacology as well as a broad general knowledge of all areas of medicine and surgery in all ages of patients, with a particular focus on those aspects which may impact on a surgical procedure. In recent decades, the role of anesthesiologists has broadened to focus not just on administering anesthetics during the surgical procedure itself, but also beforehand in order to identify high-risk patients and optimize their fitness, during the procedure to maintain situational awareness of the surgery itself so as to improve safety, and afterwards to promote and enhance recovery. This has been termed "perioperative medicine". The concept of intensive care medicine arose in the 1950s and 1960s, with anesthesiologists taking organ support techniques that had traditionally been used only for short periods during surgical procedures (such as positive pressure ventilation) and applying these therapies to patients with organ failure, who might require vital function support for extended periods until the effects of the illness could be reversed. The first intensive care unit was opened by Bjørn Aage Ibsen in Copenhagen in 1953, prompted by a polio epidemic during which many patients required prolonged artificial ventilation. 
In many countries, intensive care medicine is considered to be a subspecialty of anesthesiology, and anesthesiologists often rotate between duties in the operating room and the intensive care unit. This allows continuity of care when patients are admitted to the ICU after their surgery, and it also means that anesthesiologists can maintain their expertise at invasive procedures and vital function support in the controlled setting of the operating room, while then applying those skills in the more dangerous setting of the critically ill patient. In other countries, intensive care medicine has evolved further to become a separate medical specialty in its own right, or has become a "supra-specialty" which may be practiced by doctors from various base specialties such as anesthesiology, emergency medicine, general medicine, surgery or neurology. Anesthesiologists have key roles in major trauma, resuscitation, airway management, and caring for other patients outside the operating theatre who have critical emergencies that pose an immediate threat to life, again reflecting transferable skills from the operating room, and allowing continuity of care when patients are brought for surgery or intensive care. This branch of anesthesiology is collectively termed critical emergency medicine, and includes provision of pre-hospital emergency medicine as part of air ambulance or emergency medical services, as well as safe transfer of critically ill patients from one part of a hospital to another, or between healthcare facilities. Anesthesiologists commonly form part of cardiac arrest teams and rapid response teams composed of senior clinicians that are immediately summoned when a patient's heart stops beating, or when they deteriorate acutely while in hospital. Different models for emergency medicine exist internationally: in the Anglo-American model, the patient is rapidly transported by non-physician providers to definitive care such as an emergency department in a hospital. Conversely, the Franco-German approach has a physician, often an anesthesiologist, come to the patient and provide stabilizing care in the field. The patient is then triaged directly to the appropriate department of a hospital. The role of anesthesiologists in ensuring adequate pain relief for patients in the immediate postoperative period as well as their expertise in regional anesthesia and nerve blocks has led to the development of pain medicine as a subspecialty in its own right. The field comprises individualized strategies for all forms of analgesia, including pain management during childbirth, neuromodulatory technological methods such as transcutaneous electrical nerve stimulation or implanted spinal cord stimulators, and specialized pharmacological regimens. Anesthesiologists often perform interhospital transfers of critically ill patients, both on short range helicopter or ground based missions, as well as longer range national transports to specialized centra or international missions to retrieve citizens injured abroad. Ambulance services employ units staffed by anesthesiologists that can be called out to provide advanced airway management, blood transfusion, thoracotomy, ECMO, and ultrasound capabilities outside the hospital. Anesthesiologists often (along with general surgeons and orthopedic surgeons) make up part of military medical teams to provide anesthesia and intensive care to trauma victims during armed conflicts. 
Terminology Various names and spellings are used to describe this specialty and the individuals who practice it in different parts of the world. In North America, the specialty is referred to as anesthesiology (omitting the "ae" diphthong), and a physician of that specialty is therefore called an anesthesiologist. In these countries, the term anesthetist is used to refer to non-physician providers of anesthesia services such as certified registered nurse anesthetists (CRNAs) and anesthesiologist assistants (AAs). In other countries – such as the United Kingdom, Australia, New Zealand, and South Africa – the medical specialty is referred to as anaesthesia or anaesthetics, with the "ae" diphthong. In contrast to the terminology in North America, anaesthetist is used only to refer to a physician practicing in the field; non-physicians use other titles such as physician assistant. At this time, the spelling anaesthesiology is most commonly used in written English, and a physician practicing in the field is termed an anaesthesiologist. This is the spelling adopted by the World Federation of Societies of Anaesthesiologists and the European Society of Anaesthesiology, as well as the majority of their member societies. It is also the most commonly used spelling found in the titles of medical journals. In fact, many countries, such as Ireland and Hong Kong, which formerly used anaesthesia and anaesthetist, have now transitioned to anaesthesiology and anaesthesiologist. History Throughout human history, efforts have been made by almost every civilization to mitigate pain associated with surgical procedures, ranging from techniques such as acupuncture or phlebotomy to administration of substances such as mandrake, opium, or alcohol. However, by the mid-nineteenth century the study and administration of anesthesia had become far more complex as physicians began experimenting with compounds such as chloroform and nitrous oxide, albeit with mixed results. On October 16, 1846, a day that would thereafter be referred to as "Ether Day", in the Bulfinch auditorium at Massachusetts General Hospital, which would later be nicknamed the "Ether Dome", New England dentist William Morton successfully demonstrated the use of diethyl ether using an inhaler of his own design to induce general anesthesia for a patient undergoing removal of a neck tumor. Reportedly, following the quick procedure, operating surgeon John Collins Warren affirmed to the audience that had gathered to watch the exhibition, "Gentlemen, this is no humbug!", although this report has been disputed. The term anaesthesia was first used by the Greek physician Dioscorides, derived from the Ancient Greek roots an-, "not", and aísthēsis, "sensation", to describe the insensibility that accompanied the narcotic-like effect produced by the mandrake plant. However, following Morton's successful exhibition, Oliver Wendell Holmes Sr. sent a letter to Morton in which he first suggested anesthesia to denote the medically induced state of amnesia, insensibility, and stupor that enabled physicians to operate with minimal pain or trauma to the patient. The original term had simply been "etherization" because at the time ether was the only agent discovered that was capable of inducing such a state. Over the next one hundred-plus years the specialty of anesthesiology developed rapidly as further scientific advancements meant that physicians' means of controlling peri-operative pain and monitoring patients' vital functions grew more sophisticated.
With the isolation of cocaine in the mid-nineteenth century there began to be drugs available for local anesthesia. By the end of the nineteenth century, the number of pharmacological options had increased and had begun to be applied both peripherally and neuraxially. Then in the twentieth century neuromuscular blockade allowed the anesthesiologist to completely paralyze the patient pharmacologically and breathe for him or her via mechanical ventilation. With these new tools, the anesthetist could intensively manage the patient's physiology, bringing about critical care medicine, which, in many countries, is intimately connected to anesthesiology. Historically anesthesia providers were almost solely utilized during surgery to administer general anesthesia in which a person is placed in a pharmacologic coma. This is performed to permit surgery without the individual responding to pain (analgesia) during surgery or remembering (amnesia) the surgery. Investigations Effective practice of anesthesiology requires several areas of knowledge by the practitioner, some of which are: Pharmacology of commonly used drugs including inhalational anaesthetics, topical anesthetics, and vasopressors as well as numerous other drugs used in association with anesthetics (e.g., ondansetron, glycopyrrolate) Monitors: electrocardiography, electroencephalography, electromyography, entropy monitoring, neuromuscular monitoring, cortical stimulation mapping, and neuromorphology Mechanical ventilation Anatomical knowledge of the nervous system for nerve blocks, etc. Other areas of medicine (e.g., cardiology, pulmonology, obstetrics) to assess the risk of anesthesia to adequately have informed consent, and knowledge of anesthesia regarding how it affects certain age groups (neonates, pediatrics, geriatrics) Treatments Many procedures or diagnostic tests do not require "general anesthesia" and can be performed using various forms of sedation or regional anesthesia, which can be performed to induce analgesia in a region of the body. For example, epidural administration of a local anesthetic is commonly performed on the mother during childbirth to reduce labor pain while permitting the mother to be awake and active in labor and delivery. In the United States, anesthesiologists may also perform non-surgical pain management (termed pain medicine) and provide care for patients in intensive care units (termed critical care medicine). Training International standards for the safe practice of anesthesia, jointly endorsed by the World Health Organization and the World Federation of Societies of Anaesthesiologists, define anesthesiologist as a graduate of a medical school who has completed a nationally recognized specialist anesthesia training program. The length and format of anesthesiology training programs varies from country to country, as noted below. A candidate must first have completed medical school training to be awarded a medical degree, before embarking on a program of postgraduate specialist training or residency which can range from four to nine years. Anesthesiologists in training spend this time gaining experience in various different subspecialties of anesthesiology and undertake various advanced postgraduate examinations and skill assessments. These lead to the award of a specialist qualification at the end of their training indicating that they are an expert in the field and may be licensed to practice independently. 
Argentina In Argentina, specialized training in the field of anesthesiology is overseen by the Argentine Federation of Associations of Anaesthesia, Analgesia and Reanimation (in Spanish, or FAAAAR). Residency programs are four to five years long. Australia and New Zealand In Australia and New Zealand, the medical specialty is referred to as anaesthesia or anaesthetics; note the extra "a" (or diphthong). Specialist training is supervised by the Australian and New Zealand College of Anaesthetists (ANZCA), while anaesthetists are represented by the Australian Society of Anaesthetists and the New Zealand Society of Anaesthetists. The ANZCA-approved training course encompasses an initial two-year long Pre-vocational Medical Education and Training (PMET), which may include up to 12 months training in anaesthesia or ICU medicine, plus at least five years of supervised clinical training at approved training sites. Trainees must pass both the primary and final examinations which consist of both written (multiple choice questions and short-answer questions) and, if successful in the written exams, oral examinations (viva voce). In the final written examination, there are many questions of clinical scenarios (including interpretation of radiological exams, EKGs and other special investigations). There are also two cases of real patients with complex medical conditions – for clinical examination and a following discussion. The course has a program of 12 modules such as obstetric anaesthesia, pediatric anaesthesia, cardiothoracic and vascular anaesthesia, neurosurgical anaesthesia and pain management. Trainees also have to complete an advanced project, such as a research publication or paper. They also undergo an EMAC (Effective Management of Anaesthetic Crises) or EMST (Early Management of Severe Trauma) course. On completion of training, the trainees are awarded the Diploma of Fellowship and are entitled to use the qualification of FANZCA – Fellow of the Australian and New Zealand College of Anaesthetists. Brazil In Brazil, anesthesiology training is overseen by the National Commission for Medical Residency (CNRM) and the Brazilian Society of Anesthesiology (SBA). Approximately 650 physicians are admitted yearly to a three-year specialization program with a duty hour limit of 60 hours per week. The residency programs can take place at training centers in university hospitals. These training centers are accredited by the Brazilian Society of Anesthesiology (SBA), or other referral hospitals accredited by the Ministry of Health. Most of the residents are trained in different areas, including ICU, pain management, and anesthesiology sub-specialties, including transplants and pediatrics. Residents may elect to pursue further specialization via a fellowship post-residency, but this is optional and only offered at few training centers. In order to be a certified anesthesiologist in Brazil, the residents must undergo exams (conducted by the SBA) throughout the residency program and at the end of the program. In order to be an instructor of a residency program certified by the SBA, the anesthesiologists must have the superior title in anaesthesia, in which the specialist undergoes a multiple choice test followed by an oral examination conducted by a board assigned by the national society. Canada In Canada, training is supervised by 17 universities approved by the Royal College of Physicians and Surgeons of Canada. 
Residency programs are typically five years long, consisting of 1.5 years of general medicine training followed by 3.5 years of anesthesia specific training. Canada, like the United States, uses a competency-based curriculum along with an evaluation method called "Entrustable Professional Activities" or "EPA" in which a resident is assessed based on their ability to perform certain tasks that are specific to the field of anesthesiology. Upon completion of a residency program, the candidate is required to pass a comprehensive objective examination consisting of a written component (two three-hour papers: one featuring 'multiple choice' questions, and the other featuring 'short-answer' questions) and an oral component (a two-hour session relating to topics on the clinical aspects of anesthesiology). The examination of a patient is not required. Upon completion of training, the anaesthesia graduate is then entitled to become a "Fellow of the Royal College of Physicians of Canada" and to use the post-nominal letters "FRCPC". Germany In Germany, after earning the right to practice medicine (), German physicians who want to become anaesthesiologists must undergo five years of additional training as outlined by the German Society of Anaesthesiology and Intensive Care Medicine (Deutsche Gesellschaft für Anästhesiologie und Intensivmedizin, or DGAI). This specialist training consists of anaesthesiology, emergency medicine, intensive care and pain medicine, and also palliative care medicine. Similar to many other countries, the training includes rotations serving in the operation theatres to perform anaesthesia on a variety of patients being treated by various surgical subspecialties (e.g. general surgery, neurosurgery, invasive urological and gynecological procedures), followed by a rotation through various intensive-care units. Many German anaesthesiologists choose to complete an additional curriculum in emergency medicine, which once completed, enables them to be referred to as , an emergency physician working pre-clinically with the emergency medical service. In pre-clinical settings the emergency physician is assisted by paramedics. Netherlands In the Netherlands, anaesthesiologists must complete medical school training, which takes six years. After successfully completing medical school training, they start a five-year residency training in anaesthesiology. In their fifth year they can choose to spend the year doing research, or to specialize in a certain area, including general anaesthesiology, critical care medicine, pain and palliative medicine, paediatric anaesthesiology, cardiothoracic anaesthesiology, neuroanaesthesiology or obstetric anaesthesiology. Guatemala In Guatemala, a student with a medical degree must complete a residency of six years. This consists of five years in residency and one year of practice with an expert anaesthetist. After residency, students take a board examination conducted by the college of medicine of Guatemala, the Universidad de San Carlos de Guatemala (Medicine Faculty Examination Board), and a chief physician who represents the health care ministry of the government of Guatemala. The examination includes a written section, an oral section, and a special examination of skills and knowledge relating to anaesthetic instruments, emergency treatment, pre-operative care, post-operative care, intensive care units, and pain medicine. 
After passing the examination, the college of medicine of Guatemala, the Universidad de San Carlos de Guatemala, and the health care ministry of the government of Guatemala grant the candidate a special license to practice anaesthesia, as well as a diploma issued by the Universidad de San Carlos de Guatemala granting the degree of physician with specialization in anaesthesia. Anaesthetists in Guatemala are also subject to yearly examinations and mandatory participation in yearly seminars on the latest developments in anaesthetic practice. Hong Kong To be qualified as an anesthesiologist in Hong Kong, medical practitioners must undergo a minimum of six years of postgraduate training and pass three professional examinations. Upon completion of training, the Fellowship of the Hong Kong College of Anesthesiologists and subsequently the Fellowship of the Hong Kong Academy of Medicine are awarded. Practicing anesthesiologists are required to register in the Specialist register of the Medical Council of Hong Kong and hence are under the regulation of the Medical Council. Italy In Italy, a medical school graduate must complete an accredited five-year residency in anesthesiology. Anesthesia training is overseen by the Italian Society of Anaesthesia, Analgesia, Resuscitation, and Intensive Care (SIAARTI). The Nordic countries In Denmark, Finland, Iceland, Norway, and Sweden, anesthesiologists' training is supervised by the respective national societies of anesthesiology as well as the Scandinavian Society of Anaesthesiology and Intensive Care Medicine (SSAI). In the Nordic countries, anesthesiology is the medical specialty that is engaged in the fields of anesthesia, intensive care medicine, pain control medicine, and pre-hospital and in-hospital emergency medicine. Medical school graduates must complete a twelve-month internship, followed by a five-year residency program. The SSAI currently hosts six training programs for anesthesiologists in the Nordic countries: intensive care, pediatric anesthesiology and intensive care, advanced pain medicine, critical care medicine, critical emergency medicine, and advanced obstetric anesthesiology. Sweden In Sweden, one speciality entails both anesthesiology and intensive care, i.e. one cannot become an anesthetist without also becoming an intensivist and vice versa. The Swedish Board of Health and Welfare regulates specialization for medical doctors in the country and defines the speciality of anesthesiology and intensive care. A medical doctor can enter training as a resident in anesthesiology and intensive care after obtaining a license to practice medicine, following an 18–24 month internship. The residency program then lasts at least five years, not including the internship.
Biology and health sciences
Fields of medicine
Health
1752439
https://en.wikipedia.org/wiki/Commelinaceae
Commelinaceae
Commelinaceae is a family of flowering plants. In less formal contexts, the group is referred to as the dayflower family or spiderwort family. It is one of five families in the order Commelinales and by far the largest of these with about 731 known species in 41 genera. Well known genera include Commelina (dayflowers) and Tradescantia (spiderworts). The family is diverse in both the Old World tropics and the New World tropics, with some genera present in both. The variation in morphology, especially that of the flower and inflorescence, is considered to be exceptionally high amongst the angiosperms. The family has always been recognized by most taxonomists. The APG III system of 2009 (unchanged from the APG system of 1998), also recognizes this family, and assigns it to the order Commelinales in the clade commelinids in the monocots. The family counts several hundred species of herbaceous plants. Many are cultivated as ornamentals. The stems of these plants are generally well-developed, and often swollen at the nodes. Flowers are often short-lived, lasting for a day or less. The flowers of Commelinaceae are ephemeral, lack nectar, and offer only pollen as a reward to their pollinators. Most species are hermaphroditic, meaning each flower contains male and female organs, or andromonoecious, meaning that both bisexual and male flowers occur on the same plant. Floral dimorphism may be accompanied by variable pedicel length, filament length and/or curvature, or stamen number and/or position. Species tend to have specific flowering seasons, though local environmental factors tend to effect exact timing, sometimes considerably. Species tend to flower at a specific time of day as well, with these periods being well defined enough to presumably isolate different species reproductively. Furthermore, some species exhibit differential opening times for male and bisexual flowers. Commelinaceae flowers tend to deceive pollinators by appearing to offer a larger reward than is actually present. This is accomplished with various adaptations such as yellow hairs or broad anther connectives that mimic pollen, or staminodes that lack pollen but appear like fertile stamens. Description Plants in the Commelinaceae are usually perennials, but a smaller number of species are annuals. They are always terrestrial except for plants in the genus Cochliostema, which are epiphytes. Plants typically have an erect or scrambling but ascending habit, often spreading by rooting at the nodes or by stolons. Some have rhizomes, and the genera Streptolirion, Aetheolirion, and some species of Spatholirion are climbers. The roots are either fibrous or form tubers. Leaves form sheaths at their bases that surround the stem, much like the leaves of grasses, except that the sheaths are closed and do not have a ligule. The leaves alternate up the stem and may be two-ranked or spirally arranged. The leaf blades are simple and entire (that is, they lack any teeth or lobes), they sometimes narrow at the base, and they are often succulent. The way in which the leaves typically unfurl from bud is a distinctive feature of the family: it is termed involute, and means that the margins at the leaf base are rolled in when they first emerge. However, some groups are supervolute or convolute. The inflorescences occur either as a terminal shoot at the top of the plant, or as terminal and axillary shoots arising from lower nodes, or rarely as only axillary shoots that pierce through the leaf sheath such as in Coleotrype and Amischotolype. 
The inflorescence is classed as a thyrse, and each subunit is made up of cincinni; this basically means that flowers are grouped in scorpion's tail-like clusters along a central axis, although this basic ground plan can become highly modified or reduced. Inflorescences or their subunits are sometimes enclosed in a leaf-like bract often called a spathe. Flowers can have either one or many planes of symmetry; that is, they are either zygomorphic or actinomorphic. They remain open for only a few hours, after which they deliquesce. The flowers are usually all bisexual (hermaphrodite), but some species have both male and bisexual flowers (andromonoecious), the single species Callisia repens has bisexual and female flowers (gynomonoecious), and some have bisexual, male, and female flowers (polygamomonoecious). Nectaries are not found in any species within the family. There are always three sepals, although they may be equal or unequal, unfused or basally fused, petal-like or green. Likewise, there are always three petals, but these may be equal or in two forms, free or basally fused, white or coloured. The petals are sometimes clawed, meaning they narrow to a stalk at the base where they attach to the rest of the flower. There are almost always six stamens in two whorls, but these occur in a myriad of arrangements and forms. They may be all fertile and equal or unequal, but in many genera two to four are staminodes (i.e. infertile, non-pollen-producing stamens). Staminodes can alternate with the fertile stamens or they can all occur in the upper or lower hemisphere of the flower. The stalks of the stamens are bearded in many genera, although in some of these only some are bearded while others are hairless. Sometimes one to three stamens are absent altogether. Pollen is usually released from slits that open on the sides of the anthers from top to bottom, but some species have pores that open at the tips. Phylogeny The Commelinaceae is a well-supported monophyletic group according to the analysis of Burns et al. (2011). The following is a phylogeny, or evolutionary tree, of most of the genera in Commelinaceae based on DNA sequences from the plastid gene rbcL. All clades shown have 80% bootstrap support or better.
Biology and health sciences
Commelinales
Plants
1753667
https://en.wikipedia.org/wiki/Galvanic%20series
Galvanic series
The galvanic series (or electropotential series) determines the nobility of metals and semi-metals. When two metals are submerged in an electrolyte, while also electrically connected by some external conductor, the less noble (base) metal will experience galvanic corrosion. The rate of corrosion is determined by the electrolyte, the difference in nobility, and the relative areas of the anode and cathode exposed to the electrolyte. The difference can be measured as a difference in voltage potential: the less noble metal is the one with a lower (that is, more negative) electrode potential than the nobler one, and it functions as the anode (the electrode that attracts anions) within the galvanic cell formed as described above. Galvanic reaction is the principle upon which batteries are based. See the table of standard electrode potentials for more details. Galvanic series (most noble first) The following is the galvanic series for stagnant (that is, low oxygen content) seawater; the order may change in different environments: graphite, palladium, platinum, gold, silver, titanium, stainless steel 316 (passive), stainless steel 304 (passive), silicon bronze, stainless steel 316 (active), Monel 400, phosphor bronze, admiralty brass, cupronickel, molybdenum, red brass, brass plating, yellow brass, naval brass 464, uranium (8% Mo), niobium (1% Zr), tungsten, tin, lead, stainless steel 304 (active), tantalum, chromium plating, nickel (passive), copper, nickel (active), cast iron, steel, indium, aluminum, uranium (pure), cadmium, beryllium, zinc plating (see galvanization), and magnesium. Note that the same steel can occupy a markedly different position in the series depending on the electrolyte it is in: passive stainless steels in aerated seawater sit near the noble end, while the same alloys in acidic or stagnant water, where crevice corrosion occurs, behave as active and far less noble, which makes preventing corrosion more difficult.
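The rule that the less noble metal in a couple becomes the corroding anode can be expressed as a simple lookup. In the sketch below, the ordering is a hand-built excerpt of the stagnant-seawater series listed above, with smaller numbers meaning more noble; it is illustrative only, not reference electrode-potential data.

```python
# Identify which of two coupled metals corrodes preferentially (the anode).
# NOBILITY_RANK is a hand-built excerpt of the stagnant-seawater series above;
# lower rank = more noble. Illustrative only, not reference data.

NOBILITY_RANK = {
    "graphite": 1, "platinum": 2, "gold": 3, "silver": 4, "titanium": 5,
    "stainless steel 316 (passive)": 6, "copper": 10, "steel": 14,
    "aluminum": 16, "cadmium": 17, "zinc": 18, "magnesium": 20,
}

def galvanic_anode(metal_a: str, metal_b: str) -> str:
    """Return the less noble metal, which corrodes when the pair is coupled in an electrolyte."""
    return max((metal_a, metal_b), key=lambda m: NOBILITY_RANK[m])

print(galvanic_anode("copper", "zinc"))       # zinc corrodes, protecting the copper
print(galvanic_anode("steel", "magnesium"))   # magnesium acts as a sacrificial anode
```

This is the same reasoning behind sacrificial anodes: a deliberately less noble metal such as zinc or magnesium is attached so that it corrodes in place of the structural metal.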
Physical sciences
Electrochemistry
Chemistry
19166474
https://en.wikipedia.org/wiki/Hand
Hand
A hand is a prehensile, multi-fingered appendage located at the end of the forearm or forelimb of primates such as humans, chimpanzees, monkeys, and lemurs. A few other vertebrates such as the koala (which has two opposable thumbs on each "hand" and fingerprints extremely similar to human fingerprints) are often described as having "hands" instead of paws on their front limbs. The raccoon is usually described as having "hands" though opposable thumbs are lacking. Some evolutionary anatomists use the term hand to refer to the appendage of digits on the forelimb more generally—for example, in the context of whether the three digits of the bird hand involved the same homologous loss of two digits as in the dinosaur hand. The human hand usually has five digits: four fingers plus one thumb; these are often referred to collectively as five fingers, however, whereby the thumb is included as one of the fingers. It has 27 bones, not including the sesamoid bone, the number of which varies among people, 14 of which are the phalanges (proximal, intermediate and distal) of the fingers and thumb. The metacarpal bones connect the fingers and the carpal bones of the wrist. Each human hand has five metacarpals and eight carpal bones. Fingers contain some of the densest areas of nerve endings in the body, and are the richest source of tactile feedback. They also have the greatest positioning capability of the body; thus, the sense of touch is intimately associated with hands. Like other paired organs (eyes, feet, legs) each hand is dominantly controlled by the opposing brain hemisphere, so that handedness—the preferred hand choice for single-handed activities such as writing with a pencil—reflects individual brain functioning. Among humans, the hands play an important function in body language and sign language. Likewise, the ten digits of two hands and the twelve phalanges of four fingers (touchable by the thumb) have given rise to number systems and calculation techniques. Structure Many mammals and other animals have grasping appendages similar in form to a hand such as paws, claws, and talons, but these are not scientifically considered to be grasping hands. The scientific use of the term hand in this sense to distinguish the terminations of the front paws from the hind ones is an example of anthropomorphism. The only true grasping hands appear in the mammalian order of primates. Hands must also have opposable thumbs, as described later in the text. The hand is located at the distal end of each arm. Apes and monkeys are sometimes described as having four hands, because the toes are long and the hallux is opposable and looks more like a thumb, thus enabling the feet to be used as hands. The word "hand" is sometimes used by evolutionary anatomists to refer to the appendage of digits on the forelimb such as when researching the homology between the three digits of the bird hand and the dinosaur hand. An adult human male's hand weighs about a pound. Areas Areas of the human hand include: The palm (volar), which is the central region of the anterior part of the hand, located superficially to the metacarpus. The skin in this area contains dermal papillae to increase friction, such as are also present on the fingers and used for fingerprints. The opisthenar area (dorsal) is the corresponding area on the posterior part of the hand. The heel of the hand is the area anteriorly to the bases of the metacarpal bones, located in the proximal part of the palm. 
It is the area that sustains most pressure when using the palm of the hand for support, such as in a handstand. Its skeletal foundation is formed by the distal row of carpal bones (specifically the hamate, capitate, trapezoid, and trapezium) and the bases of the metacarpal bones. The skin is thick and tough, adapted for pressure and friction; a layer of subcutaneous fat and connective tissue provides cushioning, and the palmar fascia contributes to the palm's shape and stability. There are five digits attached to the hand, each notably bearing a nail at its tip in place of the usual claw. The four fingers can be folded over the palm, which allows the grasping of objects. Each finger, starting with the one closest to the thumb, has a colloquial name to distinguish it from the others: the index finger (also called the pointer finger or forefinger; 2nd digit), the middle finger (or long finger; 3rd digit), the ring finger (4th digit), and the little finger (also called the pinky, small finger, or baby finger; 5th digit). The thumb (connected to the first metacarpal bone and trapezium) is located on one of the sides, parallel to the arm. A reliable way of identifying human hands is from the presence of opposable thumbs. Opposable thumbs are identified by the ability to be brought opposite to the fingers, a muscle action known as opposition. Bones The skeleton of the human hand consists of 27 bones: the eight short carpal bones of the wrist are organized into a proximal row (scaphoid, lunate, triquetral and pisiform) which articulates with the bones of the forearm, and a distal row (trapezium, trapezoid, capitate and hamate), which articulates with the bases of the five metacarpal bones of the hand. The heads of the metacarpals each articulate in turn with the base of the proximal phalanx of a finger or the thumb. These articulations with the fingers are the metacarpophalangeal joints, known as the knuckles. At the palmar aspect of the first metacarpophalangeal joints are small, almost spherical bones called the sesamoid bones. The fourteen phalanges make up the fingers and thumb, and are numbered I-V (thumb to little finger) when the hand is viewed from an anatomical position (palm up). The four fingers each consist of three phalanx bones: proximal, middle, and distal. The thumb consists only of a proximal and a distal phalanx. Together with the phalanges of the fingers and thumb, these metacarpal bones form five rays or poly-articulated chains. Because supination and pronation (rotation about the axis of the forearm) are added to the two axes of movements of the wrist, the ulna and radius are sometimes considered part of the skeleton of the hand. There are numerous sesamoid bones in the hand, small ossified nodes embedded in tendons; the exact number varies between people: whereas a pair of sesamoid bones are found at virtually all thumb metacarpophalangeal joints, sesamoid bones are also common at the interphalangeal joint of the thumb (72.9%) and at the metacarpophalangeal joints of the little finger (82.5%) and the index finger (48%). In rare cases, sesamoid bones have been found in all the metacarpophalangeal joints and all distal interphalangeal joints except that of the long finger. The articulations are: the interphalangeal articulations of the hand (the hinge joints between the bones of the digits), the metacarpophalangeal joints (where the digits meet the palm), the intercarpal articulations (where the palm meets the wrist), and the wrist itself (which may also be viewed as belonging to the forearm). 
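As a quick check on the bone tally described above, the total is simple arithmetic: eight carpals, five metacarpals, and fourteen phalanges per hand. The following minimal Python sketch only restates the figures quoted in this section; the per-digit breakdown of phalanges is an illustration of those figures, not an additional source.
# Bones of one human hand, following the counts given in this section.
carpals = ["scaphoid", "lunate", "triquetral", "pisiform",   # proximal row
           "trapezium", "trapezoid", "capitate", "hamate"]   # distal row
metacarpals = 5  # one per digit
phalanges_per_digit = {"thumb": 2, "index": 3, "middle": 3, "ring": 3, "little": 3}
total = len(carpals) + metacarpals + sum(phalanges_per_digit.values())
print(total)  # 8 + 5 + 14 = 27, excluding the variable number of sesamoid bones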
Arches The fixed and mobile parts of the hand adapt to various everyday tasks by forming bony arches: longitudinal arches (the rays formed by the finger bones and their associated metacarpal bones), transverse arches (formed by the carpal bones and distal ends of the metacarpal bones), and oblique arches (between the thumb and four fingers). Of the longitudinal arches or rays of the hand, that of the thumb is the most mobile (and the least longitudinal). While the ray formed by the little finger and its associated metacarpal bone still offers some mobility, the remaining rays are firmly rigid. The phalangeal joints of the index finger, however, give that finger some independence, due to the arrangement of its flexor and extensor tendons. The carpal bones form two transverse rows, each forming an arch concave on the palmar side. Because the proximal arch simultaneously has to adapt to the articular surface of the radius and to the distal carpal row, it is by necessity flexible. In contrast, the capitate, the "keystone" of the distal arch, moves together with the metacarpal bones, and the distal arch is therefore rigid. The stability of these arches is more dependent on the ligaments and capsules of the wrist than on the interlocking shapes of the carpal bones, and the wrist is therefore more stable in flexion than in extension. The distal carpal arch affects the function of the carpometacarpal (CMC) joints and the hands, but not the function of the wrist or the proximal carpal arch. The ligaments that maintain the distal carpal arches are the transverse carpal ligament (TCL) and the intercarpal ligaments (also oriented transversely). These ligaments also form the carpal tunnel and contribute to the deep and superficial palmar arches. Several muscle tendons attaching to the TCL and the distal carpals also contribute to maintaining the carpal arch. Compared to the carpal arches, the arch formed by the distal ends of the metacarpal bones is flexible due to the mobility of the peripheral metacarpals (thumb and little finger). As these two metacarpals approach each other, the palmar gutter deepens. The central-most metacarpal (middle finger) is the most rigid. It and its two neighbors are tied to the carpus by the interlocking shapes of the metacarpal bones. The thumb metacarpal only articulates with the trapezium and is therefore completely independent, while the fifth metacarpal (little finger) is semi-independent, with the fourth metacarpal (ring finger) forming a transitional element to it. Together with the thumb, the four fingers form four oblique arches, of which the arch of the index finger is functionally the most important, especially for precision grip, while the arch of the little finger contributes an important locking mechanism for power grip. The thumb is undoubtedly the "master digit" of the hand, giving value to all the other fingers. Together with the index and middle finger, it forms the dynamic tridactyl configuration responsible for most grips not requiring force. The ring and little fingers are more static, a reserve ready to interact with the palm when great force is needed. Muscles The muscles acting on the hand can be subdivided into two groups: the extrinsic and intrinsic muscle groups. The extrinsic muscle groups are the long flexors and extensors. They are called extrinsic because the muscle belly is located on the forearm. 
Intrinsic The intrinsic muscle groups are the thenar (thumb) and hypothenar (little finger) muscles; the interosseous muscles (four dorsally and three volarly) originating between the metacarpal bones; and the lumbrical muscles, which arise from the deep flexor (and are special because they have no bony origin) to insert on the dorsal extensor hood mechanism. Extrinsic The fingers have two long flexors, located on the underside of the forearm. They insert by tendons to the phalanges of the fingers. The deep flexor attaches to the distal phalanx, and the superficial flexor attaches to the middle phalanx. The flexors allow for the actual bending of the fingers. The thumb has one long flexor and a short flexor in the thenar muscle group. The human thumb also has other muscles in the thenar group (opponens and abductor brevis muscle), moving the thumb in opposition, making grasping possible. The extensors are located on the back of the forearm and are connected in a more complex way than the flexors to the dorsum of the fingers. The tendons unite with the interosseous and lumbrical muscles to form the extensor hood mechanism. The primary function of the extensors is to straighten out the digits. The thumb has two extensors in the forearm; the tendons of these form the anatomical snuff box. Also, the index finger and the little finger have an extra extensor used, for instance, for pointing. The extensors are situated within six separate compartments. The first four compartments are located in the grooves on the dorsum of the distal radius, while the fifth compartment lies between the radius and the ulna. The sixth compartment is in the groove on the dorsum of the distal ulna. Nerve supply The hand is innervated by the radial, median, and ulnar nerves. Motor The radial nerve supplies the finger extensors and the thumb abductor, thus the muscles that extend the wrist and metacarpophalangeal joints (knuckles), and that abduct and extend the thumb. The median nerve supplies the flexors of the wrist and digits, the abductors and opponens of the thumb, and the first and second lumbricals. The ulnar nerve supplies the remaining intrinsic muscles of the hand. All muscles of the hand are innervated by the brachial plexus (C5–T1) and can be classified by their innervation. Sensory The radial nerve supplies the skin on the back of the hand from the thumb to the ring finger and the dorsal aspects of the index, middle, and half ring fingers as far as the proximal interphalangeal joints. The median nerve supplies the palmar side of the thumb, index, middle, and half ring fingers; its dorsal branches innervate the distal phalanges of the index, middle, and half ring fingers. The ulnar nerve supplies the ulnar third of the hand, both at the palm and the back of the hand, and the little and half ring fingers. There is considerable variation in this general pattern, except for the little finger and the volar surface of the index finger. For example, in some individuals the ulnar nerve supplies the entire ring finger and the ulnar side of the middle finger, whilst in others the median nerve supplies the entire ring finger. Blood supply The hand is supplied with blood from two arteries, the ulnar artery and the radial artery. These arteries form three arches over the dorsal and palmar aspects of the hand: the dorsal carpal arch (across the back of the hand), the deep palmar arch, and the superficial palmar arch. 
Together these three arches and their anastomoses provide oxygenated blood to the palm, the fingers, and the thumb. The hand is drained by the dorsal venous network of the hand, with deoxygenated blood leaving the hand via the cephalic vein and the basilic vein. Skin The glabrous (hairless) skin on the front of the hand, the palm, is relatively thick and can be bent along the hand's flexure lines where the skin is tightly bound to the underlying tissue and bones. Compared to the rest of the body's skin, the skin of the palms (as well as the soles of the feet) is usually lighter, and even much lighter in dark-skinned individuals, than that of the other side of the hand. Indeed, genes specifically expressed in the dermis of palmoplantar skin inhibit melanin production and thus the ability to tan, and promote the thickening of the stratum lucidum and stratum corneum layers of the epidermis. All parts of the skin involved in grasping are covered by papillary ridges (fingerprints) acting as friction pads. In contrast, the hairy skin on the dorsal side is thin, soft, and pliable, so that the skin can recoil when the fingers are stretched. On the dorsal side, the skin can be moved across the hand up to ; an important input to the cutaneous mechanoreceptors. The web of the hand is a "fold of skin which connects the digits". These webs, located between each set of digits, are known as skin folds (interdigital folds or plica interdigitalis). They are defined as "one of the folds of skin, or rudimentary web, between the fingers and toes". Variation The ratio of the length of the index finger to the length of the ring finger in adults is affected by the level of the embryo's exposure to male sex hormones in utero. This digit ratio is below 1 for both sexes, but it is lower in males than in females on average (a short worked example appears below). Clinical significance A number of genetic disorders affect the hand. Polydactyly is the presence of more than the usual number of fingers. One of the disorders that can cause this is Catel-Manzke syndrome. The fingers may be fused in a disorder known as syndactyly, or there may be an absence of one or more central fingers, a condition known as ectrodactyly. Additionally, some people are born without one or both hands (amelia). Hereditary multiple exostoses of the forearm, also known as hereditary multiple osteochondromas, is another cause of hand and forearm deformity in children and adults. There are several cutaneous conditions that can affect the hand, including the nails. The autoimmune disease rheumatoid arthritis can affect the hand, particularly the joints of the fingers. Some conditions can be treated by hand surgery. These include carpal tunnel syndrome, a painful condition of the hand and fingers caused by compression of the median nerve, and Dupuytren's contracture, a condition in which fingers bend towards the palm and cannot be straightened. Similarly, injury to the ulnar nerve may result in a condition in which some of the fingers cannot be flexed. A common fracture of the hand is a scaphoid fracture, a fracture of the scaphoid bone, one of the carpal bones. This is the commonest carpal bone fracture and can be slow to heal due to a limited blood flow to the bone. There are various types of fracture to the base of the thumb; these are known as Rolando fractures, Bennett's fracture, and Gamekeeper's thumb. Another common fracture, known as Boxer's fracture, is to the neck of a metacarpal. One can also have a broken finger. 
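To make the digit ratio from the Variation subsection above concrete, it is simply the length of the index finger divided by the length of the ring finger (the so-called 2D:4D ratio). A minimal Python sketch, using made-up lengths purely for illustration rather than measured values:
# Hypothetical finger lengths in millimetres, for illustration only.
index_length_mm = 72.0  # 2nd digit (index finger)
ring_length_mm = 75.5   # 4th digit (ring finger)
digit_ratio = index_length_mm / ring_length_mm  # the 2D:4D ratio
print(round(digit_ratio, 3))  # 0.954, below 1, as is typical for both sexes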
Evolution The prehensile hands and feet of primates evolved from the mobile hands of semi-arboreal tree shrews that lived about . This development has been accompanied by important changes in the brain and the relocation of the eyes to the front of the face, together allowing the muscle control and stereoscopic vision necessary for controlled grasping. This grasping, also known as power grip, is supplemented by the precision grip between the thumb and the distal finger pads made possible by the opposable thumbs. Hominidae (great apes including humans) acquired an erect bipedal posture about , which freed the hands from the task of locomotion and paved the way for the precision and range of motion in human hands. Functional analyses of the features unique to the hand of modern humans have shown that they are consistent with the stresses and requirements associated with the effective use of paleolithic stone tools. It is possible that the refinement of the bipedal posture in the earliest hominids evolved to facilitate the use of the trunk as leverage in accelerating the hand. While the human hand has unique anatomical features, including a longer thumb and fingers that can be controlled individually to a higher degree, the hands of other primates are anatomically similar, and the dexterity of the human hand cannot be explained by anatomical factors alone. The neural machinery underlying hand movements is a major contributing factor; primates have evolved direct connections between neurons in cortical motor areas and spinal motoneurons, giving the cerebral cortex monosynaptic control over the motoneurons of the hand muscles and, in effect, placing the hands "closer" to the brain. The recent evolution of the human hand is thus a direct result of the development of the central nervous system, and the hand is therefore a direct tool of our consciousness, the main source of differentiated tactile sensations, and a precise working organ enabling gestures, the expressions of our personalities. There are nevertheless several primitive features left in the human hand, including pentadactyly (having five fingers), the hairless skin of the palm and fingers, and the os centrale found in human embryos, prosimians, and apes. Furthermore, the precursors of the intrinsic muscles of the hand are present in the earliest fishes, reflecting that the hand evolved from the pectoral fin and thus is much older than the arm in evolutionary terms. The proportions of the human hand are plesiomorphic (shared by both ancestors and extant primate species); the elongated thumbs and short hands more closely resemble the hand proportions of Miocene apes than those of extant primates. Humans did not evolve from knuckle-walking apes, and chimpanzees and gorillas independently acquired elongated metacarpals as part of their adaptation to their modes of locomotion. Several primitive hand features most likely present in the chimpanzee–human last common ancestor (CHLCA) and absent in modern humans are still present in the hands of Australopithecus, Paranthropus, and Homo floresiensis. This suggests that the derived changes in modern humans and Neanderthals did not evolve until or after the appearance of the earliest Acheulian stone tools, and that these changes are associated with tool-related tasks beyond those observed in other hominins. 
The thumbs of Ardipithecus ramidus, an early hominin, are almost as robust as in humans, so this may be a primitive trait, while the palms of other extant higher primates are elongated to the extent that some of the thumb's original function has been lost (most notably in highly arboreal primates such as the spider monkey). In humans, the big toe is thus more derived than the thumb. There is a hypothesis suggesting the form of the modern human hand is especially conducive to the formation of a compact fist, presumably for fighting purposes. The fist is compact and thus effective as a weapon. It also provides protection for the fingers. However, this is not widely accepted to be one of the primary selective pressures acting on hand morphology throughout human evolution, with tool use and production being thought to be far more influential.
Biology and health sciences
Human anatomy
null
19167553
https://en.wikipedia.org/wiki/Goat
Goat
The goat or domestic goat (Capra hircus) is a species of goat-antelope that is mostly kept as livestock. It was domesticated from the wild goat (C. aegagrus) of Southwest Asia and Eastern Europe. The goat is a member of the family Bovidae, meaning it is closely related to the sheep. It was one of the first animals to be domesticated, in Iran around 10,000 years ago. Goats have been used for milk, meat, wool, and skins across much of the world. Milk from goats is often turned into cheese. In 2022, there were more than 1.1 billion goats living in the world, of which 150 million were in India. Goats feature in mythology, folklore, and religion in many parts of the world, including in the classical myth of Amalthea, in the goats that pulled the chariot of the Norse god Thor, in the Scandinavian Yule goat, and in Hinduism's goat-headed Daksha. In Christianity and Satanism, the devil is sometimes depicted as a goat. Etymology The Modern English word goat comes from Old English gāt "goat, she-goat", which in turn derives from Proto-Germanic *gaitaz (cf. Dutch/Frisian/Icelandic/Norwegian geit, German Geiß, and Gothic gaits), ultimately from Proto-Indo-European *ǵʰaidos meaning "young goat" (cf. Latin haedus "kid"). To refer to the male goat, Old English used bucca (cf. Dutch/Frisian bok, modern English buck) until it was ousted by hegote, hegoote ('he-goat') in the late 12th century. Nanny goat (adult female) originated in the 18th century, and billy goat (adult male) in the 19th century. Castrated males are called wethers. While the words hircine and caprine both refer to anything having a goat-like quality, hircine is used most often to emphasize the distinct smell of domestic goats. History Goats are among the earliest animals to have been domesticated by humans. A genetic analysis confirms the archaeological evidence that the wild bezoar ibex, found today in the Zagros Mountains but formerly widespread in Anatolia, is the likely original ancestor of all or most domestic goats today. Neolithic farmers began to herd wild goats primarily for easy access to milk and meat, as well as to their dung, which was used as fuel; their bones, hair, and sinew were used for clothing, building, and tools. The earliest remains of domesticated goats, dating to 10,000 years Before Present, are found at Ganj Dareh in Iran. Goat remains have been found at archaeological sites in Jericho, Choga Mami, Djeitun, and Çayönü, dating the domestication of goats in Western Asia to between 8,000 and 9,000 years ago. DNA evidence suggests that goats were domesticated around 10,000 years ago. Historically, goat hide has been used for water and wine bottles, both for traveling and for transporting wine for sale, and to produce parchment. Biology Description Each breed of goat has a specific weight range, which varies from more than for bucks of larger breeds such as the Boer, to for smaller does. Within each breed, different strains or bloodlines may have different recognized sizes. At the bottom of the size range are miniature breeds such as the African Pygmy, which stand at the shoulder as adults. Most goats naturally have two horns, their shape and size depending on the breed. There have been incidents of polycerate goats (having as many as eight horns), although this is a genetic rarity. Unlike cattle, goats have not been successfully bred to be reliably polled, as the genes determining sex and those determining horns are closely linked. 
Breeding together two genetically polled goats results in a high number of intersex individuals among the offspring, which are typically sterile. Their horns are made of living bone surrounded by keratin and other proteins, and are used for defense, dominance, territoriality, and thermoregulation. Both male and female goats may have beards, and many types of goat (most commonly dairy goats, dairy-cross Boers, and pygmy goats) may have wattles, one dangling from each side of the neck. Goats have horizontal, slit-shaped pupils, allowing them to see well by both night and day, and giving them a wide field of vision on either side to detect predators, while avoiding being dazzled by sunlight from above. Goats have no tear ducts. Goats are ruminants. They have a four-chambered stomach consisting of the rumen, the reticulum, the omasum, and the abomasum. As with other mammalian ruminants, they are even-toed ungulates. The females have an udder consisting of two teats, in contrast to cattle, which have four teats. An exception to this is the Boer goat, which sometimes may have up to eight teats. Goats are diploid with two sets of 30 chromosomes. Comparison with sheep Sheep and goats are closely related: both are in the subfamily Caprinae. However, they are separate species, so hybrids rarely occur and are always infertile. A hybrid of a ewe and a buck is called a sheep-goat hybrid. Visual differences between sheep and goats include the beard of goats and the divided upper lip of sheep. Sheep tails hang down, even when short or docked, while the short tails of goats are held upwards. Sheep breeds are often naturally polled (either in both sexes or just in the female), while naturally polled goats are rare (though many are polled artificially). Males of the two species differ in that buck goats acquire a unique and strong odor during the rut, whereas rams do not. Behavior and ecology Goats are naturally curious. They are agile and able to climb and balance in precarious places. This makes them the only ruminant to regularly climb trees. These behaviours have made them notorious for escaping their pens by testing fences and enclosures. If any of the fencing can be overcome, goats almost inevitably escape. Some studies suggest that goats are as intelligent as dogs. When handled as a group, goats display less herding behavior than sheep. When grazing undisturbed, they spread across the field or range, rather than feed side by side as do sheep. When nursing young, goats leave their kids separated ("lying out") rather than clumped, as do sheep. They generally turn and face an intruder, and bucks are more likely to charge or butt at humans than are rams. A 2016 study reports that goats try to communicate with people in the same way as domesticated animals such as dogs and horses, looking to a human for assistance when faced with a newly modified challenge. Reproduction Goats reach puberty between three and 15 months of age, depending on breed and nutritional status. Many breeders prefer to postpone breeding until the doe has reached 70% of the adult weight, but keeping does and bucks separated in this way is rarely possible in extensively managed, open-range herds. Bucks (uncastrated males) of Swiss and northern breeds come into rut in the fall, coinciding with the does' heat cycles. Bucks of equatorial breeds may show seasonally reduced fertility but, as with the does, are capable of breeding at all times. Rut is characterized by a decrease in appetite and an obsessive interest in the does. 
A buck in rut displays flehmen lip curling and urinates on his forelegs and face. Sebaceous scent glands at the base of the horns add to the male goat's odor, which is important to make him attractive to the female. Some does will not mate with a buck which has had its scent glands removed. Gestation length is approximately 150 days. Twins are the usual result, with single and triplet births also common. Less frequent are litters of quadruplet, quintuplet, and even sextuplet kids. Birthing, known as kidding, generally occurs uneventfully. Just before kidding, the doe will have a sunken area around the tail and hip, as well as heavy breathing. She may have a worried look, become restless and display great affection for her keeper. The mother often eats the placenta, which gives her much-needed nutrients, helps stanch her bleeding, and parallels the behavior of wild herbivores, such as deer, to reduce the lure of the birth scent for predators. Freshening (coming into milk production) usually occurs at kidding, although milk production is also relatively common in unbred doelings of dairy breeds. Milk production varies with the breed, age, quality, and diet of the doe; dairy goats generally produce between of milk per 305-day lactation. On average, a good quality dairy doe will give at least of milk per day while she is in milk. A first-time milker may produce less, or as much as , or more of milk in exceptional cases. After the lactation, the doe will "dry off", typically after she has been bred. Occasionally, goats that have not been bred and are continuously milked will continue lactation beyond the typical 305 days. Male lactation sometimes occurs in goats. Diet Goats are reputed to be willing to eat almost anything. They are browsing animals, not grazers like cattle and sheep, and (coupled with their highly curious nature) will chew on and taste anything resembling plant matter to decide whether it is good to eat, including cardboard, clothing and paper. The digestive physiology of a very young kid (like the young of other ruminants) is essentially the same as that of a monogastric animal. Milk digestion begins in the abomasum, the milk having bypassed the rumen via closure of the reticuloesophageal groove during suckling. At birth, the rumen is undeveloped, but as the kid begins to consume solid feed, the rumen soon increases in size and in its capacity to absorb nutrients. The adult size of a particular goat is a product of its breed (genetic potential) and its diet while growing (nutritional potential). As with all livestock, increased protein diets (10 to 14%) and sufficient calories during the prepuberty period yield higher growth rates and larger eventual size than lower protein rates and limited calories. Large-framed goats, with a greater skeletal size, reach mature weight at a later age (36 to 42 months) than small-framed goats (18 to 24 months) if both are fed to their full potential. Large-framed goats need more calories than small-framed goats for maintenance of daily functions. Diseases and life expectancy While goats are hardy animals and often need little medical care, they are subject to a number of diseases. Among the conditions affecting goats are respiratory diseases including pneumonia, foot rot, internal parasites, pregnancy toxicosis, and feed toxicity. Goats can become infected with various viral and bacterial diseases, such as foot-and-mouth disease, caprine arthritis encephalitis, caseous lymphadenitis, pinkeye, mastitis, and pseudorabies. 
They can transmit a number of zoonotic diseases to people, such as tuberculosis, brucellosis, Q fever, and rabies. Life expectancy for goats is between 15 and 18 years. An instance of a goat reaching the age of 24 has been reported. Several factors can reduce this average expectancy; problems during kidding can lower a doe's expected life span to 10 or 11 years, and the stresses of going into rut can lower a buck's expected life span to eight to ten years. Agriculture Husbandry Husbandry, or animal care and use, varies by region and culture. The minimal requirements for goats include a grazing area or the bringing of fodder to penned animals, with enough hayracks for all of them to feed simultaneously; fresh water; salt licks; space for the animals to exercise; and disposal of soiled bedding. In Africa and the Middle East, goats are typically run in flocks with sheep. This maximizes the production per acre, as goats and sheep prefer different food plants. Multiple types of goat-raising are found in Ethiopia, where four main types have been identified: pastured in annual crop systems, in perennial crop systems, with cattle, and in arid areas, under pastoral (nomadic) herding systems. In all four systems, however, goats were typically kept in extensive systems, with few purchased inputs. In Nigeria and in parts of Latin America, some goats are allowed to wander the homestead or village, while others are kept penned and fed in a 'cut-and-carry' system. This involves cutting grasses, maize or cane for feed rather than allowing the animal access to the field. The system is well suited for crops like maize that are sensitive to trampling. Worldwide population In 2022, there were more than 1,100 million goats living in the world, led by India with 150 million and China with 132 million, and followed by Nigeria with 88 million and Pakistan with 82.5 million. Over 93% of the world's goats live in Africa and Asia. The top producers of goat milk in 2022 were India (6.25 million metric tons), Bangladesh (0.91 million metric tons), and South Sudan (0.52 million metric tons). India slaughters 41% of its 124.4 million goats each year; the resulting 0.6 million metric tonnes of goat meat make up 8% of India's annual meat production. Approximately 440 million goats are slaughtered each year for meat worldwide, yielding 6.37 million metric tons of meat. Feral goats Goats readily revert to the wild (become feral) if given the opportunity. Feral goats have established themselves in many areas: they occur in Australia, New Zealand, Great Britain, the Galapagos and many other places. When feral goats reach large populations in habitats that provide an unlimited water supply and do not contain sufficient large predators, or that are otherwise vulnerable to goats' aggressive grazing habits, they may have serious effects, such as removing native scrub and trees. Feral goats are extremely common in Australia, with an estimated 2.6 million in the mid-1990s. Uses Goats are used to provide milk, specialty wools, meat, and goatskin. Some charities provide goats to impoverished people in poor countries, in the belief that providing a productive animal alleviates poverty more effectively than giving cash. The cost of obtaining and then distributing goats can, however, be high. Meat The taste of goat kid meat is similar to that of spring lamb meat; in fact, in the English-speaking islands of the Caribbean, and in South Asia, the word 'mutton' denotes both goat and sheep meat. 
However, some compare the taste of goat meat to veal or venison, depending on the age and condition of the goat. Its flavor is said to be primarily linked to the presence of 4-methyloctanoic and 4-methylnonanoic acid. The meat is made into dishes such as goat curry, mutton satay, and capra e fagioli. Milk, butter, and cheese Goats produce about 2% of the world's total annual milk supply. Dairy goats produce an average of of milk during an average 284-day lactation. The milk can contain between around 3.5% and 5% butterfat, according to breed. Goat milk is processed into products including cheese and dulce de leche. Mohair and cashmere wool Most goats have soft insulating hairs nearer the skin, and long guard hairs on the surface. The soft hairs are the ones valued by the textile industry; the material goes by names such as down, cashmere and pashmina. The guard hairs are of little value as they are too coarse and difficult to spin and to dye. The cashmere goat produces a commercial quantity of fine and soft cashmere wool, one of the most expensive natural fibers commercially produced. It is harvested once a year. The Angora breed of goats produces long, curling, lustrous locks of mohair. The entire body of the goat is covered with mohair and there are no guard hairs. The locks grow continuously to four inches or more in length. Angora crossbreeds, such as the pygora and the nigora, have been created to produce mohair and/or cashgora on a smaller, easier-to-manage animal. The wool is shorn twice a year, with an average yield of about . Land clearing Goats have been used by humans to clear unwanted vegetation for centuries. They have been described as "eating machines" and "biological control agents". There has been a resurgence of this in North America since 1990, when herds were used to clear dry brush from California hillsides thought to be endangered by potential wildfires. This form of using goats to clear land is sometimes known as conservation grazing. Since then, numerous public and private agencies have hired private herds from companies such as Rent A Goat to perform similar tasks. This may be expensive, and their smell may be a nuisance. This practice has become popular in the Pacific Northwest, where goats are used to remove invasive species not easily removed by humans, including (thorned) blackberry vines and poison oak. Chattanooga, TN and Spartanburg, SC have used goats to control kudzu, an invasive plant species prevalent in the southeastern United States. Medical training Some countries' militaries use goats to train combat medics. In the United States, goats have become the main animal species used for this purpose after the Pentagon phased out using dogs for medical training in the 1980s. While modern mannequins used in medical training are quite effective at simulating the behavior of a human body, trainees feel that "the goat exercise provide[s] a sense of urgency that only real life trauma can provide". The practice has elicited outcry from animal-rights groups. Pets Some people choose goats as pets because of their ability to form close bonds with their human guardians. Goats are social animals and usually prefer the company of other goats, but because of their herd mentality, they will follow their owner and form close bonds with them, hence their continuing popularity. Goats are similar to deer with regard to nutrition and need a wide range of food, including things like hay, grain feed or pelleted grain mix, and loose minerals. 
Goats generally either inherit certain feeding preferences or learn them after birth. In culture Mythology, folklore and astrology In classical myth, Amalthea is either a nymph who fed the infant god Jupiter with goat's milk, or the goat who suckled the infant. In another legend, the god broke one of the goat's horns, endowing it with the power to fill itself with whatever its owner wanted, making it the cornucopia or horn of plenty. The ancient city of Ebla in Syria contains a tomb with a throne decorated with bronze goat heads, now called "The Tomb of the Lord of the Goats". According to Norse mythology, the god of thunder, Thor, has a chariot that is pulled by the goats Tanngrisnir and Tanngnjóstr. At night, when he sets up camp, Thor eats the meat of the goats but takes care that all bones remain whole. Then he wraps the remains up, and in the morning, the goats always come back to life to pull the chariot. When a farmer's son who is invited to share the meal breaks one of the goats' leg bones to suck the marrow, the animal's leg remains broken in the morning, and the boy is forced to serve Thor as a servant to compensate for the damage. Possibly related, the Yule goat (Julbocken) is a Scandinavian Christmas tradition. It originally denoted the goat that was slaughtered around Yule; it now more often refers to a goat figure made out of straw. It is used for the custom of going door-to-door singing carols and getting food and drinks in return, often fruit, cakes and sweets. The Gävle goat is a giant version of the Yule goat, erected every year in the Swedish city of Gävle. In Finland the tradition of Nuutinpäivä (St. Knut's Day, January 13) involves young men dressed as goats (Finnish: Nuuttipukki) who visit houses. Usually the dress was an inverted fur jacket, a leather or birch bark mask, and horns. Unlike the analogous Santa Claus, Nuuttipukki was a scary character (cf. Krampus). The men dressed as Nuuttipukki wandered from house to house, came in, and typically demanded food from the household and especially leftover alcohol. In Finland the Nuuttipukki tradition is kept alive in areas of Satakunta, Southwest Finland and Ostrobothnia. Nowadays the character is usually played by children and the visit is a happy encounter. The goat is one of the twelve animals in the 12-year cycle of the Chinese zodiac. Several mythological hybrid creatures contain goat parts, including the Chimera. The sign of Capricorn in the Western zodiac is usually depicted as a goat with a fish's tail. Fauns and satyrs are mythological creatures with human bodies and goats' legs. The lustful Greek god Pan similarly has the upper body of a man and the horns and lower body of a goat. A goatee is a tuft of facial hair on a man's chin, named for its resemblance to a goat's beard. Religion In Hinduism, Daksha, one of the prajapati, is sometimes depicted with the head of a male goat. A legend states that Daksha failed to invite Shiva to a sacrifice; Shiva beheaded Daksha, but when asked by Vishnu, restored Daksha to life with the head of a goat. Goats are mentioned many times in the Bible. Their importance in ancient Israel is indicated by the seven different Hebrew and three Greek terms used in the Bible. A goat is considered a "clean" animal by Jewish dietary laws, and a kid was slaughtered for an honored guest. It was also acceptable for some kinds of sacrifices. Goat-hair curtains were used in the tent that contained the tabernacle (Exodus 25:4). Its horns can be used instead of a sheep's horn to make a shofar. 
On Yom Kippur, the festival of the Day of Atonement, two goats were chosen and lots were drawn for them. One was sacrificed and the other allowed to escape into the wilderness, symbolically carrying with it the sins of the community. From this comes the word "scapegoat". In Matthew 25:31–46, Jesus said that, like a shepherd, he will separate the nations, rewarding the sheep (those who have shown kindness) but punishing the goats. The devil is sometimes depicted, like Baphomet, as a goat, making the animal a significant symbol throughout Satanism. The inverted pentagram of Satanism is sometimes depicted with the goat's head of Baphomet, a symbol that originated with the Church of Satan.
Biology and health sciences
Artiodactyla
null
19167644
https://en.wikipedia.org/wiki/Toilet
Toilet
A toilet is a piece of sanitary hardware that collects human waste (urine and feces), and sometimes toilet paper, usually for disposal. Flush toilets use water, while dry or non-flush toilets do not. They can be designed for a sitting position, popular in Europe and North America, using a toilet seat (with additional considerations for those with disabilities), or for a squatting posture, more common in Asia, in which case they are known as squat toilets. In urban areas, flush toilets are usually connected to a sewer system; in isolated areas, to a septic tank. The waste is known as blackwater and the combined effluent, including other sources, is sewage. Dry toilets are connected to a pit, removable container, composting chamber, or other storage and treatment device, including urine diversion with a urine-diverting toilet. The technology used for modern toilets varies. Toilets are commonly made of ceramic (porcelain), concrete, plastic, or wood. Newer toilet technologies include dual flushing, low flushing, toilet seat warming, self-cleaning, female urinals and waterless urinals. Japan is known for its toilet technology. Airplane toilets are specially designed to operate in the air. The need to maintain anal hygiene after defecation is universally recognized, and toilet paper (often held by a toilet roll holder), which may also be used to wipe the vulva after urination, is widely used, as are bidets. In private homes, depending on the region and style, the toilet may exist in the same bathroom as the sink, bathtub, and shower. Another option is to have one room for body washing (also called "bathroom") and a separate one for the toilet and handwashing sink (toilet room). Public toilets (restrooms) consist of one or more toilets (and commonly single urinals or trough urinals) which are available for use by the general public. Products like urinal blocks and toilet blocks help maintain the smell and cleanliness of toilets. Toilet seat covers are sometimes used. Portable toilets (frequently chemical "porta johns") may be brought in for large and temporary gatherings. Historically, sanitation has been a concern from the earliest stages of human settlements. However, many poor households in developing countries use very basic, and often unhygienic, toilets – and nearly one billion people have no access to a toilet at all; they must defecate and urinate in the open. These issues can lead to the spread of diseases transmitted via the fecal-oral route, or the transmission of waterborne diseases such as cholera and dysentery. Therefore, United Nations Sustainable Development Goal 6 aims to "achieve access to adequate and equitable sanitation and hygiene for all and end open defecation". Overview The number of different types of toilets used worldwide is large, but they can be grouped by whether they use water, which seals in odor (flush toilet versus dry toilet); whether they are used in a sitting or squatting position (sitting toilet versus squat toilet); and whether they are located in a private household or in public (toilet room versus public toilet). Toilets can be designed to be used in a standing posture (for urination), or in a sitting or squatting posture (for defecation). Each type has its benefits. The "sitting toilet", however, is essential for those who are movement impaired. Sitting toilets are often referred to as "western-style toilets". Sitting toilets are more convenient than squat toilets for people with disabilities and the elderly. People use different toilet types based on the country that they are in. 
In developing countries, access to toilets is also related to people's socio-economic status. Poor people in low-income countries often have no toilets at all and resort to open defecation instead. This is part of the sanitation crisis which international initiatives (such as World Toilet Day) draw attention to. With water Flush toilet A typical flush toilet is a ceramic bowl (pan) connected on the "up" side to a cistern (tank) that enables rapid filling with water, and on the "down" side to a drain pipe that removes the effluent. When a toilet is flushed, the sewage should flow into a septic tank or into a system connected to a sewage treatment plant. However, in many developing countries, this treatment step does not take place. The water in the toilet bowl is connected to a pipe shaped like an upside-down U. One side of the U channel is arranged as a siphon tube longer than the water in the bowl is high. The siphon tube connects to the drain. The bottom of the drain pipe limits the height of the water in the bowl before it flows down the drain. The water in the bowl acts as a barrier to sewer gas entering the building. Sewer gas escapes through a vent pipe attached to the sewer line. The amount of water used by conventional flush toilets usually makes up a significant portion of personal daily water usage. However, modern low flush toilet designs allow the use of much less water per flush. Dual flush toilets allow the user to select between a flush for urine or feces, saving a significant amount of water over conventional units. One type of dual flush system allows the flush handle to be pushed up for one kind of flush and down for the other, whereas another design is to have two buttons, one for urination and the other for defecation. In some places, users are encouraged not to flush after urination. Flushing toilets can be plumbed to use greywater (water that was previously used for washing dishes, laundry, and bathing) rather than potable water (drinking water). Some modern toilets pressurize the water in the tank, which initiates flushing action with less water usage. Another variant is the pour-flush toilet. This type of flush toilet has no cistern but is flushed manually with a few liters of water from a small bucket. The flushing can use as little as . This type of toilet is common in many Asian countries. The toilet can be connected to one or two pits, in which case it is called a "pour flush pit latrine" or a "twin pit pour flush to pit latrine". It can also be connected to a septic tank. Flush toilets on ships are typically flushed with seawater. Twin pit designs Twin pit latrines use two pits alternately, switching when one pit becomes full after a few months or years. The pits are of an adequate size to accommodate the volume of waste generated over one or two years. This allows the contents of the full pit enough time to transform into a partially sanitized, soil-like material that can be manually excavated. There is a risk of groundwater pollution when pits are located in areas with a high or variable water table, and/or fissures or cracks in the bedrock. Vacuum toilet A vacuum toilet is a flush toilet that is connected to a vacuum sewer system, and removes waste by suction. They may use very little water (less than a quarter of a liter per flush) or none (as in waterless urinals). Some flush with coloured disinfectant solution rather than with water. 
They may be used to separate blackwater and greywater, and process them separately (for instance, the fairly dry blackwater can be used for biogas production, or in a composting toilet). Passenger train toilets, aircraft lavatories, bus toilets, and ships with plumbing often use vacuum toilets. The lower water usage saves weight, and avoids water slopping out of the toilet bowl in motion. Aboard vehicles, a portable collection chamber is used; if it is filled by positive pressure from an intermediate vacuum chamber, it need not be kept under vacuum. Floating toilet A floating toilet is essentially a toilet on a platform built above or floating on the water. Instead of excreta going into the ground, they are collected in a tank or barrel. To reduce the amount of excreta that needs to be hauled to shore, many use urine diversion. The floating toilet was developed for residents without quick access to land or a connection to a sewer system. It is also used in areas subject to prolonged flooding. The need for this type of toilet is high in areas like Cambodia. Without water Pit latrine Vault toilet A vault toilet is a non-flush toilet with a sealed container (or vault) buried in the ground to receive the excreta, all of which is contained underground until it is removed by pumping. A vault toilet is distinguished from a pit latrine because the waste accumulates in the vault instead of seeping into the underlying soil. Urine-diverting toilet Portable toilet Chemical toilet Toilet fed to animals The pig toilet, which consists of a toilet linked to a pigsty by a chute, is still in use to a limited extent. It was common in rural China, and was known in Japan, Korea, and India. The fish pond toilet depends on the same principle, of livestock (often carp) eating human excreta directly. "Flying toilet" Squat toilets Usage Urination There are cultural differences in socially accepted and preferred voiding positions for urination around the world: in the Middle East and Asia, the squatting position is more prevalent, while in the Western world the standing and sitting positions are more common. Anal cleansing habits In the Western world, the most common method of cleaning the anal area after defecation is by toilet paper or sometimes by using a bidet. In many Muslim countries, the facilities are designed to enable people to follow Islamic toilet etiquette. For example, a bidet shower may be plumbed in. The left hand is used for cleansing, for which reason that hand is considered impolite or polluted in many Asian countries. The use of water in many Christian countries is due in part to the biblical toilet etiquette which encourages washing after all instances of defecation. The bidet is common in predominantly Catholic countries where water is considered essential for anal cleansing, and in some traditionally Orthodox and Lutheran countries such as Greece and Finland respectively, where bidet showers are common. There are toilets on the market with seats having integrated spray mechanisms for anal and genital water sprays (see for example Toilets in Japan). This can be useful for the elderly or people with disabilities. Accessible toilets An accessible toilet is designed to accommodate people with physical disabilities, such as age-related limited mobility or an inability to walk due to impairments. Additional measures to improve toilet accessibility include providing more space and grab bars to ease transfer to and from the toilet seat, and enough room for a caregiver if necessary. 
Public toilets Communication through toilets In prisons, inmates may utilize toilets and the associated plumbing to communicate messages and pass products. The acoustic properties of communicating through the toilet bowl (known as toilet talk, potty talk, or the toilet telephone) are influenced by flush patterns and bowl water volumes. Prisoners may also send binary signals by ringing the sewage or water pipes. Toilet talk enables communication for those in solitary confinement. Toilets have been subject to wiretaps. Public health aspects To this day, 1 billion people in developing countries have no toilets in their homes and resort to open defecation instead. Therefore, it is one of the targets of Sustainable Development Goal 6 to provide toilets (sanitation services) to everyone by 2030. Toilets are one important element of a sanitation system, although other elements are also needed: transport, treatment, disposal, or reuse. Diseases, including cholera, which still affects some 3 million people each year, can be largely prevented when effective sanitation and water treatment prevent fecal matter from contaminating waterways, groundwater, and drinking water supplies. History Ancient history The fourth millennium BC saw the invention of clay pipes, sewers, and toilets in Mesopotamia, with the city of Uruk today exhibiting the earliest known internal pit toilet, from . The Neolithic village of Skara Brae contains examples, , of small internal rooms over a communal drain, rather than a pit. The Indus Valley Civilisation in northwestern India and Pakistan was home to the world's first known urban sanitation systems. In Mohenjo-Daro (), toilets were built into the outer walls of homes. These toilets had vertical chutes, via which waste was disposed of into cesspits or street drains. In the Indus city of Lothal (), houses belonging to the upper class had private toilets connected to a covered sewer network constructed of brickwork held together with a gypsum-based mortar that emptied either into the surrounding water bodies or alternatively into cesspits, the latter of which were regularly emptied and cleaned. Other very early toilets that used flowing water to remove the waste are found at Skara Brae in Orkney, Scotland, which was occupied from about 3100 BC until 2500 BC. Some of the houses there have a drain running directly beneath them, and some of these had a cubicle over the drain. Around the 18th century BC, toilets started to appear in Minoan Crete, Pharaonic Egypt, and ancient Persia. In 2012, archaeologists found what is believed to be Southeast Asia's earliest latrine during the excavation of a neolithic village at the Rạch Núi archaeological site, southern Vietnam. The toilet, dating back to 1500 BC, yielded important clues about early Southeast Asian society. More than 30 coprolites, containing fish and shattered animal bones, provided information on the diet of humans and dogs, and on the types of parasites each had to contend with. In Sri Lanka, the techniques of the construction of toilets and lavatories developed over several stages. A highly developed stage in this process is discernible in the constructions at the Abhayagiri complex in Anuradhapura, where toilets and baths dating from the 2nd century BC to the 3rd century CE are known; later forms of toilets, from the 5th century CE to the 13th century CE, in Polonnaruwa and Anuradhapura had elaborate decorative motifs carved around them. 
Several types of toilets were developed; these include lavatories with ring-well pits, underground terracotta pipes that lead to septic pits, and urinary pits with large bottomless clay pots of decreasing size placed one above the other. These pots under urinals contained "sand, lime and charcoal" through which urine filtered down to the earth in a somewhat purified form. In Roman civilization, latrines using flowing water were sometimes part of public bath houses. Roman latrines are commonly thought to have been used in the sitting position. The Roman toilets were probably elevated to raise them above open sewers which were periodically "flushed" with flowing water, rather than elevated for sitting. Romans and Greeks also used chamber pots, which they brought to meals and drinking sessions. Johan J. Mattelaer said, "Plinius has described how there were large receptacles in the streets of cities such as Rome and Pompeii into which chamber pots of urine were emptied. The urine was then collected by fullers." (Fulling was a vital step in textile manufacture.) The Han dynasty in China two thousand years ago used pig toilets. Post-classical history Garderobes were toilets used in the post-classical period, most commonly found in upper-class dwellings. Essentially, they were flat pieces of wood or stone spanning from one wall to the other, with one or more holes to sit on. These were above chutes or pipes that discharged outside the castle or manor house. Garderobes would be placed in areas away from bedrooms because of the smell, and also near kitchens or fireplaces to keep their enclosures warm. The other main way of handling toilet needs was the chamber pot, a receptacle, usually of ceramic or metal, into which one would excrete waste. This method was used for hundreds of years; shapes, sizes, and decorative variations changed throughout the centuries. Chamber pots were in common use in Europe from ancient times, even being taken to the Middle East by medieval pilgrims. Modern history By the Early Modern era, chamber pots were frequently made of china or copper and could include elaborate decoration. They were emptied into the gutter of the street nearest to the home. In pre-modern Denmark, people generally defecated on farmland or other places where the human waste could be collected as fertilizer. The Old Norse language had several terms for referring to outhouses, including garðhús (yard house), náð-/náða-hús (house of rest), and annat hús (the other house). In general, toilets were functionally non-existent in rural Denmark until the 18th century. By the 16th century, cesspits and cesspools were increasingly dug into the ground near houses in Europe as a means of collecting waste, as urban populations grew and street gutters became blocked with the larger volume of human waste. Rain was no longer sufficient to wash away waste from the gutters. A pipe connected the latrine to the cesspool, and sometimes a small amount of water washed waste through. Cesspools were cleaned out by tradesmen, known in English as gong farmers, who pumped out liquid waste, then shovelled out the solid waste and collected it during the night. This solid waste, euphemistically known as nightsoil, was sold as fertilizer for agricultural production (similarly to the closing-the-loop approach of ecological sanitation). From the early 19th century, public officials and public hygiene experts studied and debated sanitation for several decades. 
The construction of an underground network of pipes to carry away solid and liquid waste was only begun in the mid 19th-century, gradually replacing the cesspool system, although cesspools were still in use in some parts of Paris into the 20th century. Even London, at that time the world's largest city, did not require indoor toilets in its building codes until after the First World War. The water closet, with its origins in Tudor times, started to assume its currently known form, with an overhead cistern, s-bends, soil pipes and valves around 1770. This was the work of Alexander Cumming and Joseph Bramah. Water closets only started to be moved from outside to inside of the home around 1850. The integral water closet started to be built into middle-class homes in the 1860s and 1870s, firstly on the principal bedroom floor and in larger houses in the maids' accommodation, and by 1900 a further one in the hallway. A toilet would also be placed outside the back door of the kitchen for use by gardeners and other outside staff such as those working with the horses. The speed of introduction was varied, so that in 1906 the predominantly working-class town of Rochdale had 750 water closets for a population of 10,000. The working-class home had transitioned from the rural cottage, to the urban back-to-back terraces with external rows of privies, to the through terraced houses of the 1880 with their sculleries and individual external WC. It was the Tudor Walters Report of 1918 that recommended that semi-skilled workers should be housed in suburban cottages with kitchens and internal WC. As recommended floor standards waxed and waned in the building standards and codes, the bathroom with a water closet and later the low-level suite became more prominent in the home. Before the introduction of indoor toilets, it was common to use the chamber pot under one's bed at night and then to dispose of its contents in the morning. During the Victorian era, British housemaids collected all of the household's chamber pots and carried them to a room known as the housemaids' cupboard. This room contained a "slop sink", made of wood with a lead lining to prevent chipping china chamber pots, for washing the "bedroom ware" or "chamber utensils". Once running water and flush toilets were plumbed into British houses, servants were sometimes given their own lavatory downstairs, separate from the family lavatory. The practice of emptying one's own chamber pot, known as slopping out, continued in British prisons until as recently as 2014 and was still in use in 85 cells in Ireland in July 2017. With rare exceptions, chamber pots are no longer used. Modern related implements are bedpans and commodes, used in hospitals and the homes of invalids. Long-established sanitary wear manufacturers in the United Kingdom include Adamsez, founded in Newcastle-upon-Tyne in 1880, by M.J. and S.H. Adams, and Twyfords, founded in Hanley, Stoke-on-Trent in 1849, by Thomas Twyford and his son Thomas William Twyford. Development of dry earth closets Before the widespread adoption of the flush toilet, there were inventors, scientists, and public health officials who supported the use of "dry earth closets" – nowadays known either as dry toilets or composting toilets. Development of flush toilets Although a precursor to the flush toilet system which is widely used nowadays was designed in 1596 by John Harington, such systems did not come into widespread use until the late nineteenth century. 
With the onset of the Industrial Revolution and related advances in technology, the flush toilet began to emerge into its modern form. A crucial advance in plumbing, was the S-trap, invented by the Scottish mechanic Alexander Cummings in 1775, and still in use today. This device uses the standing water to seal the outlet of the bowl, preventing the escape of foul air from the sewer. It was only in the mid-19th century, with growing levels of urbanisation and industrial prosperity, that the flush toilet became a widely used and marketed invention. This period coincided with the dramatic growth in the sewage system, especially in London, which made the flush toilet particularly attractive for health and sanitation reasons. Flush toilets were also known as "water closets", as opposed to the earth closets described above. WCs first appeared in Britain in the 1880s, and soon spread to Continental Europe. In America, the chain-pull indoor toilet was introduced in the homes of the wealthy and in hotels in the 1890s. William Elvis Sloan invented the Flushometer in 1906, which used pressurized water directly from the supply line for faster recycle time between flushes. High-tech toilet "High-tech" toilets, which can be found in countries like Japan, include features such as automatic-flushing mechanisms; water jets or "bottom washers"; blow dryers, or artificial flush sounds to mask noises. Others include medical monitoring features such as urine and stool analysis and the checking of blood pressure, temperature, and blood sugar. Some toilets have automatic lid operation, heated seats, deodorizing fans, or automated replacement of paper toilet-seat-covers. Interactive urinals have been developed in several countries, allowing users to play video games. The "Toylet", produced by Sega, uses pressure sensors to detect the flow of urine and translates that into on-screen action. Astronauts on the International Space Station use a space toilet with urine diversion which can recover potable water. Names Etymology Toilet was originally a French loanword (first attested in 1540) that referred to the ("little cloth") draped over one's shoulders during hairdressing. During the late 17th century, the term came to be used by metonymy in both languages for the whole complex of grooming and body care that centered at a dressing table (also covered by a cloth) and for the equipment composing a toilet service, including a mirror, hairbrushes, and containers for powder and makeup. The time spent at such a table also came to be known as one's "toilet"; it came to be a period during which close friends or tradesmen were received as "toilet-calls". The use of "toilet" to describe a special room for grooming came much later (first attested in 1819), following the French . Similar to "powder room", "toilet" then came to be used as a euphemism for rooms dedicated to urination and defecation, particularly in the context of signs for public toilets, as on trains. Finally, it came to be used for the plumbing fixtures in such rooms (apparently first in the United States) as these replaced chamber pots, outhouses, and latrines. These two uses, the fixture and the room, completely supplanted the other senses of the word during the 20th century except in the form "toiletries". Contemporary use The word "toilet" was by etymology a euphemism, but is no longer understood as such. As old euphemisms have become the standard term, they have been progressively replaced by newer ones, an example of the euphemism treadmill at work. 
The choice of word relies not only on regional variation, but also on social situation and level of formality (register) or social class. American manufacturers show an uneasiness with the word and its class attributes: American Standard, the largest firm, sells them as "toilets", yet the higher-priced products of the Kohler Company, often installed in more expensive housing, are sold as commodes or closets, words which also carry other meanings. Confusingly, products imported from Japan such as TOTO are referred to as "toilets", even though they carry the cachet of higher cost and quality. Toto (an abbreviation of Tōyō Tōki, 東洋陶器, Oriental Ceramics) is used in Japanese comics to visually indicate toilets or other things that look like toilets (see Toilets in Japan). Regional variants Different dialects use "bathroom" and "restroom" (American English), "bathroom" and "washroom" (Canadian English), and "WC" (an initialism for "water closet"), "lavatory" and its abbreviation "lav" (British English). Euphemisms for the toilet that bear no direct reference to the activities of urination and defecation are ubiquitous in modern Western languages, reflecting a general attitude of unspeakability about such bodily function. These euphemistic practices appear to have become pronounced following the emergence of European colonial practices, which frequently denigrated colonial subjects in Africa, Asia and South America as 'unclean'. Euphemisms "Crapper" was already in use as a coarse name for a toilet, but it gained currency from the work of Thomas Crapper, who popularized flush toilets in England and held several patents on toilet improvements. "The Jacks" is Irish slang for toilet. It perhaps derives from "jacques" and "jakes", an old English term. "Loo" – The etymology of loo is obscure. The Oxford English Dictionary notes the 1922 appearance of "How much cost? Waterloo. Watercloset." in James Joyce's novel Ulysses and defers to Alan S. C. Ross's arguments that it derived in some fashion from the site of Napoleon's 1815 defeat. In the 1950s the use of the word "loo" was considered one of the markers of British upper-class speech, featuring in a famous essay, "U and non-U English". "Loo" may have derived from a corruption of French ("water"), – whence Scots gardy loo – ("mind the water", used in reference to emptying chamber pots into the street from an upper-story window), ("place"), ("place of ease", used euphemistically for a toilet), or ("English place", used from around 1770 to refer to English-style toilets installed for travelers). Other proposed etymologies include a supposed tendency to place toilets in room 100 (hence "loo") in English hotels, a sailors' dialectal corruption of the nautical term "lee" in reference to the shipboard need to urinate and defecate with the wind prior to the advent of head pumps, or the 17th-century preacher Louis Bourdaloue, whose long sermons at Paris's Saint-Paul-Saint-Louis prompted his parishioners to bring along chamber pots, and his surname was applied to the pots themselves. Gallery
Technology
Household appliances
null
19167679
https://en.wikipedia.org/wiki/Virus
Virus
A virus is a submicroscopic infectious agent that replicates only inside the living cells of an organism. Viruses infect all life forms, from animals and plants to microorganisms, including bacteria and archaea. Viruses are found in almost every ecosystem on Earth and are the most numerous type of biological entity. Since Dmitri Ivanovsky's 1892 article describing a non-bacterial pathogen infecting tobacco plants and the discovery of the tobacco mosaic virus by Martinus Beijerinck in 1898, more than 11,000 of the millions of virus species have been described in detail. The study of viruses is known as virology, a subspeciality of microbiology. When infected, a host cell is often forced to rapidly produce thousands of copies of the original virus. When not inside an infected cell or in the process of infecting a cell, viruses exist in the form of independent viral particles, or virions, consisting of (i) genetic material, i.e., long molecules of DNA or RNA that encode the structure of the proteins by which the virus acts; (ii) a protein coat, the capsid, which surrounds and protects the genetic material; and in some cases (iii) an outside envelope of lipids. The shapes of these virus particles range from simple helical and icosahedral forms to more complex structures. Most virus species have virions too small to be seen with an optical microscope and are one-hundredth the size of most bacteria. The origins of viruses in the evolutionary history of life are still unclear. Some viruses may have evolved from plasmids, which are pieces of DNA that can move between cells. Other viruses may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. Viruses are considered by some biologists to be a life form, because they carry genetic material, reproduce, and evolve through natural selection, although they lack some key characteristics, such as cell structure, that are generally considered necessary criteria for defining life. Because they possess some but not all such qualities, viruses have been described as "organisms at the edge of life" and as replicators. Viruses spread in many ways. One transmission pathway is through disease-bearing organisms known as vectors: for example, viruses are often transmitted from plant to plant by insects that feed on plant sap, such as aphids; and viruses in animals can be carried by blood-sucking insects. Many viruses spread in the air by coughing and sneezing, including influenza viruses, SARS-CoV-2, chickenpox, smallpox, and measles. Norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal–oral route, passed by hand-to-mouth contact or in food or water. The infectious dose of norovirus required to produce infection in humans is fewer than 100 particles. HIV is one of several viruses transmitted through sexual contact and by exposure to infected blood. The variety of host cells that a virus can infect is called its host range: this is narrow for viruses specialized to infect only a few species, or broad for viruses capable of infecting many. Viral infections in animals provoke an immune response that usually eliminates the infecting virus. Immune responses can also be produced by vaccines, which confer an artificially acquired immunity to the specific viral infection. 
Some viruses, including those that cause HIV/AIDS, HPV infection, and viral hepatitis, evade these immune responses and result in chronic infections. Several classes of antiviral drugs have been developed. Etymology The English word "virus" comes from the Latin , which refers to poison and other noxious liquids. comes from the same Indo-European root as Sanskrit , Avestan , and Ancient Greek (), which all mean "poison". The first attested use of "virus" in English appeared in 1398 in John Trevisa's translation of Bartholomeus Anglicus's De Proprietatibus Rerum. Virulent, from Latin virulentus ('poisonous'), dates to . A meaning of 'agent that causes infectious disease' is first recorded in 1728, long before the discovery of viruses by Dmitri Ivanovsky in 1892. The English plural is viruses (sometimes also vira), whereas the Latin word is a mass noun, which has no classically attested plural (vīra is used in Neo-Latin). The adjective viral dates to 1948. The term virion (plural virions), which dates from 1959, is also used to refer to a single viral particle that is released from the cell and is capable of infecting other cells of the same type. Origins Viruses are found wherever there is life and have probably existed since living cells first evolved. The origin of viruses is unclear because they do not form fossils, so molecular techniques are used to infer how they arose. In addition, viral genetic material occasionally integrates into the germline of the host organisms, by which they can be passed on vertically to the offspring of the host for many generations. This provides an invaluable source of information for paleovirologists to trace back ancient viruses that existed as far back as millions of years ago. There are three main hypotheses that aim to explain the origins of viruses: Regressive hypothesis Viruses may have once been small cells that parasitised larger cells. Over time, genes not required by their parasitism were lost. The bacteria rickettsia and chlamydia are living cells that, like viruses, can reproduce only inside host cells. They lend support to this hypothesis, as their dependence on parasitism is likely to have caused the loss of genes that enabled them to survive outside a cell. This is also called the "degeneracy hypothesis", or "reduction hypothesis". Cellular origin hypothesis Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids (pieces of naked DNA that can move between cells) or transposons (molecules of DNA that replicate and move around to different positions within the genes of the cell). Once called "jumping genes", transposons are examples of mobile genetic elements and could be the origin of some viruses. They were discovered in maize by Barbara McClintock in 1950. This is sometimes called the "vagrancy hypothesis", or the "escape hypothesis". Co-evolution hypothesis This is also called the "virus-first hypothesis" and proposes that viruses may have evolved from complex molecules of protein and nucleic acid at the same time that cells first appeared on Earth and would have been dependent on cellular life for billions of years. Viroids are molecules of RNA that are not classified as viruses because they lack a protein coat. They have characteristics that are common to several viruses and are often called subviral agents. Viroids are important pathogens of plants. 
They do not code for proteins but interact with the host cell and use the host machinery for their replication. The hepatitis delta virus of humans has an RNA genome similar to viroids but has a protein coat derived from hepatitis B virus and cannot produce one of its own. It is, therefore, a defective virus. Although hepatitis delta virus genome may replicate independently once inside a host cell, it requires the help of hepatitis B virus to provide a protein coat so that it can be transmitted to new cells. In similar manner, the sputnik virophage is dependent on mimivirus, which infects the protozoan Acanthamoeba castellanii. These viruses, which are dependent on the presence of other virus species in the host cell, are called "satellites" and may represent evolutionary intermediates of viroids and viruses. In the past, there were problems with all of these hypotheses: the regressive hypothesis did not explain why even the smallest of cellular parasites do not resemble viruses in any way. The escape hypothesis did not explain the complex capsids and other structures on virus particles. The virus-first hypothesis contravened the definition of viruses in that they require host cells. Viruses are now recognised as ancient and as having origins that pre-date the divergence of life into the three domains. This discovery has led modern virologists to reconsider and re-evaluate these three classical hypotheses. The evidence for an ancestral world of RNA cells and computer analysis of viral and host DNA sequences give a better understanding of the evolutionary relationships between different viruses and may help identify the ancestors of modern viruses. To date, such analyses have not proved which of these hypotheses is correct. It seems unlikely that all currently known viruses have a common ancestor, and viruses have probably arisen numerous times in the past by one or more mechanisms. Microbiology Discovery The first evidence of the existence of viruses came from experiments with filters that had pores small enough to retain bacteria. In 1892, Dmitri Ivanovsky used one of these filters to show that sap from a diseased tobacco plant remained infectious to healthy tobacco plants despite having been filtered. Martinus Beijerinck called the filtered, infectious substance a "virus" and this discovery is considered to be the beginning of virology. The subsequent discovery and partial characterization of bacteriophages by Frederick Twort and Félix d'Herelle further catalyzed the field, and by the early 20th century many viruses had been discovered. In 1926, Thomas Milton Rivers defined viruses as obligate parasites. Viruses were demonstrated to be particles, rather than a fluid, by Wendell Meredith Stanley, and the invention of the electron microscope in 1931 allowed their complex structures to be visualised. Life properties Scientific opinions differ on whether viruses are a form of life or organic structures that interact with living organisms. They have been described as "organisms at the edge of life", since they resemble organisms in that they possess genes, evolve by natural selection, and reproduce by creating multiple copies of themselves through self-assembly. Although they have genes, they do not have a cellular structure, which is often seen as the basic unit of life. Viruses do not have their own metabolism and require a host cell to make new products. 
They therefore cannot naturally reproduce outside a host cell—although some bacteria such as rickettsia and chlamydia are considered living organisms despite the same limitation. Accepted forms of life use cell division to reproduce, whereas viruses spontaneously assemble within cells. They differ from autonomous growth of crystals as they inherit genetic mutations while being subject to natural selection. Virus self-assembly within host cells has implications for the study of the origin of life, as it lends further credence to the hypothesis that life could have started as self-assembling organic molecules. The virocell model first proposed by Patrick Forterre considers the infected cell to be the "living form" of viruses and that virus particles (virions) are analogous to spores. Although the living versus non-living debate continues, the virocell model has gained some acceptance. Structure Viruses display a wide diversity of sizes and shapes, called 'morphologies'. In general, viruses are much smaller than bacteria and more than a thousand bacteriophage viruses would fit inside an Escherichia coli bacterium's cell. Many viruses that have been studied are spherical and have a diameter between 20 and 300 nanometres. Some filoviruses, which are filaments, have a total length of up to 1400 nm; their diameters are only about 80 nm. Most viruses cannot be seen with an optical microscope, so scanning and transmission electron microscopes are used to visualise them. To increase the contrast between viruses and the background, electron-dense "stains" are used. These are solutions of salts of heavy metals, such as tungsten, that scatter the electrons from regions covered with the stain. When virions are coated with stain (positive staining), fine detail is obscured. Negative staining overcomes this problem by staining the background only. A complete virus particle, known as a virion, consists of nucleic acid surrounded by a protective coat of protein called a capsid. These are formed from protein subunits called capsomeres. Viruses can have a lipid "envelope" derived from the host cell membrane. The capsid is made from proteins encoded by the viral genome and its shape serves as the basis for morphological distinction. Virally-coded protein subunits will self-assemble to form a capsid, in general requiring the presence of the virus genome. Complex viruses code for proteins that assist in the construction of their capsid. Proteins associated with nucleic acid are known as nucleoproteins, and the association of viral capsid proteins with viral nucleic acid is called a nucleocapsid. The capsid and entire virus structure can be mechanically (physically) probed through atomic force microscopy. In general, there are five main morphological virus types: Helical These viruses are composed of a single type of capsomere stacked around a central axis to form a helical structure, which may have a central cavity, or tube. This arrangement results in virions which can be short and highly rigid rods, or long and very flexible filaments. The genetic material (typically single-stranded RNA, but single-stranded DNA in some cases) is bound into the protein helix by interactions between the negatively charged nucleic acid and positive charges on the protein. Overall, the length of a helical capsid is related to the length of the nucleic acid contained within it, and the diameter is dependent on the size and arrangement of capsomeres. The well-studied tobacco mosaic virus and inovirus are examples of helical viruses. 
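The relationship between nucleic-acid length and helical capsid size can be made concrete with a back-of-envelope calculation. The figures below for tobacco mosaic virus (a genome of roughly 6,400 nucleotides, about three nucleotides bound per coat-protein subunit, and a particle length of about 300 nm) are commonly cited textbook values rather than numbers taken from this article, so the sketch should be read as illustrative only.

# Illustrative estimate of helical capsid dimensions for tobacco mosaic virus (TMV).
# All constants are commonly cited textbook values, not figures from this article.

genome_length_nt = 6400        # approximate TMV genome length in nucleotides (assumption)
nt_per_subunit = 3             # nucleotides bound per coat-protein subunit (assumption)
particle_length_nm = 300       # approximate TMV particle length in nm (assumption)

subunits = genome_length_nt / nt_per_subunit
rise_per_subunit_nm = particle_length_nm / subunits

print(f"Coat-protein subunits needed: ~{subunits:.0f}")
print(f"Axial rise contributed per subunit: ~{rise_per_subunit_nm:.3f} nm")

Because each subunit binds a fixed number of nucleotides, a longer genome translates directly into a longer rod, which is the relationship described above.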
Icosahedral Most animal viruses are icosahedral or near-spherical with chiral icosahedral symmetry. A regular icosahedron is the optimum way of forming a closed shell from identical subunits. The minimum number of capsomeres required for each triangular face is 3, which gives 60 for the icosahedron. Many viruses, such as rotavirus, have more than 60 capsomeres and appear spherical but they retain this symmetry. To achieve this, the capsomeres at the apices are surrounded by five other capsomeres and are called pentons. Capsomeres on the triangular faces are surrounded by six others and are called hexons. Hexons are in essence flat and pentons, which form the 12 vertices, are curved. The same protein may act as the subunit of both the pentamers and hexamers or they may be composed of different proteins. Prolate This is an icosahedron elongated along the fivefold axis and is a common arrangement of the heads of bacteriophages. This structure is composed of a cylinder with a cap at either end. Enveloped Some species of virus envelop themselves in a modified form of one of the cell membranes, either the outer membrane surrounding an infected host cell or internal membranes such as a nuclear membrane or endoplasmic reticulum, thus gaining an outer lipid bilayer known as a viral envelope. This membrane is studded with proteins coded for by the viral genome and host genome; the lipid membrane itself and any carbohydrates present originate entirely from the host. Influenza virus, HIV (which causes AIDS), and severe acute respiratory syndrome coronavirus 2 (which causes COVID-19) use this strategy. Most enveloped viruses are dependent on the envelope for their infectivity. Complex These viruses possess a capsid that is neither purely helical nor purely icosahedral, and that may possess extra structures such as protein tails or a complex outer wall. Some bacteriophages, such as Enterobacteria phage T4, have a complex structure consisting of an icosahedral head bound to a helical tail, which may have a hexagonal base plate with protruding protein tail fibres. This tail structure acts like a molecular syringe, attaching to the bacterial host and then injecting the viral genome into the cell. The poxviruses are large, complex viruses that have an unusual morphology. The viral genome is associated with proteins within a central disc structure known as a nucleoid. The nucleoid is surrounded by a membrane and two lateral bodies of unknown function. The virus has an outer envelope with a thick layer of protein studded over its surface. The whole virion is slightly pleomorphic, ranging from ovoid to brick-shaped. Giant viruses Mimivirus is one of the largest characterised viruses, with a capsid diameter of 400 nm. Protein filaments measuring 100 nm project from the surface. The capsid appears hexagonal under an electron microscope, therefore the capsid is probably icosahedral. In 2011, researchers discovered the largest then known virus in samples of water collected from the ocean floor off the coast of Las Cruces, Chile. Provisionally named Megavirus chilensis, it can be seen with a basic optical microscope. In 2013, the Pandoravirus genus was discovered in Chile and Australia, and has genomes about twice as large as Megavirus and Mimivirus. All giant viruses have dsDNA genomes and they are classified into several families: Mimiviridae, Pithoviridae, Pandoraviridae, Phycodnaviridae, and the Mollivirus genus. 
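The penton and hexon counts described earlier in this passage follow from standard capsid geometry. The sketch below uses the triangulation number T, a concept not introduced in this article, under the usual quasi-equivalence rules (60T subunits, 12 pentamers, and 10(T − 1) hexamers); it is a hedged illustration, and the T = 13 example reflects the commonly cited value for a rotavirus capsid layer rather than anything stated in the text.

# Capsomere counts for an icosahedral capsid with triangulation number T.
# Standard quasi-equivalence geometry (an assumption of this sketch, not stated in the article):
#   total protein subunits = 60 * T
#   pentons (5-fold capsomeres) = 12; hexons (6-fold capsomeres) = 10 * (T - 1)

def capsomere_counts(T: int) -> dict:
    return {
        "subunits": 60 * T,
        "pentons": 12,
        "hexons": 10 * (T - 1),
        "capsomeres": 12 + 10 * (T - 1),   # equals 10*T + 2
    }

print(capsomere_counts(1))    # simplest shell: 60 subunits, 12 capsomeres, no hexons
print(capsomere_counts(3))    # e.g. many small RNA viruses: 180 subunits, 32 capsomeres
print(capsomere_counts(13))   # commonly cited T=13 rotavirus layer: 132 capsomeres

For any T greater than 1 the count exceeds 60 capsomeres while the 12 curved pentons still sit at the vertices, which is why such particles look spherical yet keep icosahedral symmetry.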
Some viruses that infect Archaea have complex structures unrelated to any other form of virus, with a wide variety of unusual shapes, ranging from spindle-shaped structures to viruses that resemble hooked rods, teardrops or even bottles. Other archaeal viruses resemble the tailed bacteriophages, and can have multiple tail structures. Genome An enormous variety of genomic structures can be seen among viral species; as a group, they contain more structural genomic diversity than plants, animals, archaea, or bacteria. There are millions of different types of viruses, although fewer than 7,000 types have been described in detail. As of January 2021, the NCBI Virus genome database has more than 193,000 complete genome sequences, but there are doubtless many more to be discovered. A virus has either a DNA or an RNA genome and is called a DNA virus or an RNA virus, respectively. Most viruses have RNA genomes. Plant viruses tend to have single-stranded RNA genomes and bacteriophages tend to have double-stranded DNA genomes. Viral genomes are circular, as in the polyomaviruses, or linear, as in the adenoviruses. The type of nucleic acid is irrelevant to the shape of the genome. Among RNA viruses and certain DNA viruses, the genome is often divided into separate parts, in which case it is called segmented. For RNA viruses, each segment often codes for only one protein and they are usually found together in one capsid. Not all segments are required to be in the same virion for the virus to be infectious, as demonstrated by brome mosaic virus and several other plant viruses. A viral genome, irrespective of nucleic acid type, is almost always either single-stranded (ss) or double-stranded (ds). Single-stranded genomes consist of an unpaired nucleic acid, analogous to one-half of a ladder split down the middle. Double-stranded genomes consist of two complementary paired nucleic acids, analogous to a ladder. The virus particles of some virus families, such as those belonging to the Hepadnaviridae, contain a genome that is partially double-stranded and partially single-stranded. For most viruses with RNA genomes and some with single-stranded DNA (ssDNA) genomes, the single strands are said to be either positive-sense (called the 'plus-strand') or negative-sense (called the 'minus-strand'), depending on whether they are complementary to the viral messenger RNA (mRNA). Positive-sense viral RNA is in the same sense as viral mRNA and thus at least a part of it can be immediately translated by the host cell. Negative-sense viral RNA is complementary to mRNA and thus must be converted to positive-sense RNA by an RNA-dependent RNA polymerase before translation. DNA nomenclature for viruses with genomic ssDNA is similar to RNA nomenclature, in that positive-strand viral ssDNA is identical in sequence to the viral mRNA and is thus a coding strand, while negative-sense viral ssDNA is complementary to the viral mRNA and is thus a template strand. Several types of ssDNA and ssRNA viruses have genomes that are ambisense in that transcription can occur off both strands in a double-stranded replicative intermediate. Examples include geminiviruses, which are ssDNA plant viruses, and arenaviruses, which are ssRNA viruses of animals. Genome size Genome size varies greatly between species. The smallest—the ssDNA circoviruses, family Circoviridae—code for only two proteins and have a genome size of only two kilobases; the largest—the pandoraviruses—have genome sizes of around two megabases, which code for about 2,500 proteins. 
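The distinction between positive- and negative-sense strands described above is simply complementarity, which is easy to show in code. The short sequence below is invented purely for illustration; only the standard RNA base-pairing rules (A–U, G–C) are assumed.

# Convert a negative-sense RNA fragment into the corresponding mRNA-sense (positive) sequence.
# Standard RNA base pairing; the example sequence is made up for illustration.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def mrna_sense(negative_sense_rna: str) -> str:
    # The mRNA is complementary and antiparallel, so complement each base and reverse the strand.
    return "".join(RNA_COMPLEMENT[base] for base in reversed(negative_sense_rna))

negative_strand = "UUUGGCAUCA"          # hypothetical negative-sense fragment, written 5'->3'
print(mrna_sense(negative_strand))      # the positive-sense fragment a ribosome could translate

A positive-sense genome already reads like the output of this function, which is why it can be translated immediately, whereas a negative-sense genome must first be copied by an RNA-dependent RNA polymerase.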
Virus genes rarely have introns and often are arranged in the genome so that they overlap. In general, RNA viruses have smaller genome sizes than DNA viruses because of a higher error-rate when replicating, and have a maximum upper size limit. Beyond this, errors when replicating render the virus useless or uncompetitive. To compensate, RNA viruses often have segmented genomes—the genome is split into smaller molecules—thus reducing the chance that an error in a single-component genome will incapacitate the entire genome. In contrast, DNA viruses generally have larger genomes because of the high fidelity of their replication enzymes. Single-strand DNA viruses are an exception to this rule, as mutation rates for these genomes can approach the extreme of the ssRNA virus case. Genetic mutation and recombination Viruses undergo genetic change by several mechanisms. These include a process called antigenic drift where individual bases in the DNA or RNA mutate to other bases. Most of these point mutations are "silent"—they do not change the protein that the gene encodes—but others can confer evolutionary advantages such as resistance to antiviral drugs. Antigenic shift occurs when there is a major change in the genome of the virus. This can be a result of recombination or reassortment. The Influenza A virus is highly prone to reassortment; occasionally this has resulted in novel strains which have caused pandemics. RNA viruses often exist as quasispecies or swarms of viruses of the same species but with slightly different genome nucleoside sequences. Such quasispecies are a prime target for natural selection. Segmented genomes confer evolutionary advantages; different strains of a virus with a segmented genome can shuffle and combine genes and produce progeny viruses (or offspring) that have unique characteristics. This is called reassortment or 'viral sex'. Genetic recombination is a process by which a strand of DNA (or RNA) is broken and then joined to the end of a different DNA (or RNA) molecule. This can occur when viruses infect cells simultaneously and studies of viral evolution have shown that recombination has been rampant in the species studied. Recombination is common to both RNA and DNA viruses. Coronaviruses have a single-strand positive-sense RNA genome. Replication of the genome is catalyzed by an RNA-dependent RNA polymerase. The mechanism of recombination used by coronaviruses likely involves template switching by the polymerase during genome replication. This process appears to be an adaptation for coping with genome damage. Replication cycle Viral populations do not grow through cell division, because they are acellular. Instead, they use the machinery and metabolism of a host cell to produce multiple copies of themselves, and they assemble in the cell. When infected, the host cell is forced to rapidly produce thousands of copies of the original virus. Their life cycle differs greatly between species, but there are six basic stages in their life cycle: Attachment is a specific binding between viral capsid proteins and specific receptors on the host cellular surface. This specificity determines the host range and type of host cell of a virus. For example, HIV infects a limited range of human leucocytes. This is because its surface protein, gp120, specifically interacts with the CD4 molecule—a chemokine receptor—which is most commonly found on the surface of CD4+ T-Cells. 
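The trade-off between replication error rate and maximum genome size, described near the start of this passage, can be quantified with a simple probability model. The per-base error rates used below (about 10⁻⁴ for an RNA polymerase without proofreading and about 10⁻⁸ for a proofreading DNA polymerase) are rough, commonly quoted orders of magnitude rather than figures from this article, so the numbers are illustrative only.

# Probability that a genome is copied with no errors, for different error rates and genome sizes.
# The per-base error rates are rough order-of-magnitude assumptions, not values from the article.

def error_free_probability(genome_length: int, per_base_error_rate: float) -> float:
    return (1.0 - per_base_error_rate) ** genome_length

rna_rate = 1e-4    # assumed typical error rate of an RNA-dependent RNA polymerase (no proofreading)
dna_rate = 1e-8    # assumed typical error rate of a proofreading DNA polymerase

for length in (10_000, 100_000, 1_000_000):
    p_rna = error_free_probability(length, rna_rate)
    p_dna = error_free_probability(length, dna_rate)
    print(f"{length:>9} bases: error-free copy probability  RNA {p_rna:.3f}   DNA {p_dna:.3f}")

With an RNA-level error rate, the chance of copying even a 100-kilobase genome without a single mistake is negligible, which is one way to see the upper size limit on RNA genomes and the advantage of splitting a genome into smaller segments.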
This mechanism has evolved to favour those viruses that infect only cells in which they are capable of replication. Attachment to the receptor can induce the viral envelope protein to undergo changes that result in the fusion of viral and cellular membranes, or changes of non-enveloped virus surface proteins that allow the virus to enter. Penetration or viral entry follows attachment: Virions enter the host cell through receptor-mediated endocytosis or membrane fusion. The infection of plant and fungal cells is different from that of animal cells. Plants have a rigid cell wall made of cellulose, and fungi one of chitin, so most viruses can get inside these cells only after trauma to the cell wall. Nearly all plant viruses (such as tobacco mosaic virus) can also move directly from cell to cell, in the form of single-stranded nucleoprotein complexes, through pores called plasmodesmata. Bacteria, like plants, have strong cell walls that a virus must breach to infect the cell. Given that bacterial cell walls are much thinner than plant cell walls due to their much smaller size, some viruses have evolved mechanisms that inject their genome into the bacterial cell across the cell wall, while the viral capsid remains outside. Uncoating is a process in which the viral capsid is removed: This may be by degradation by viral enzymes or host enzymes or by simple dissociation; the end-result is the releasing of the viral genomic nucleic acid. Replication of viruses involves primarily multiplication of the genome. Replication involves the synthesis of viral messenger RNA (mRNA) from "early" genes (with exceptions for positive-sense RNA viruses), viral protein synthesis, possible assembly of viral proteins, then viral genome replication mediated by early or regulatory protein expression. This may be followed, for complex viruses with larger genomes, by one or more further rounds of mRNA synthesis: "late" gene expression is, in general, of structural or virion proteins. Assembly – Following the structure-mediated self-assembly of the virus particles, some modification of the proteins often occurs. In viruses such as HIV, this modification (sometimes called maturation) occurs after the virus has been released from the host cell. Release – Viruses can be released from the host cell by lysis, a process that kills the cell by bursting its membrane and cell wall if present: this is a feature of many bacterial and some animal viruses. Some viruses undergo a lysogenic cycle where the viral genome is incorporated by genetic recombination into a specific place in the host's chromosome. The viral genome is then known as a "provirus" or, in the case of bacteriophages a "prophage". Whenever the host divides, the viral genome is also replicated. The viral genome is mostly silent within the host. At some point, the provirus or prophage may give rise to the active virus, which may lyse the host cells. Enveloped viruses (e.g., HIV) typically are released from the host cell by budding. During this process, the virus acquires its envelope, which is a modified piece of the host's plasma or other, internal membrane. Genome replication The genetic material within virus particles, and the method by which the material is replicated, varies considerably between different types of viruses. DNA viruses The genome replication of most DNA viruses takes place in the cell's nucleus. 
If the cell has the appropriate receptor on its surface, these viruses enter the cell either by direct fusion with the cell membrane (e.g., herpesviruses) or—more usually—by receptor-mediated endocytosis. Most DNA viruses are entirely dependent on the host cell's DNA and RNA synthesising machinery and RNA processing machinery. Viruses with larger genomes may encode much of this machinery themselves. In eukaryotes, the viral genome must cross the cell's nuclear membrane to access this machinery, while in bacteria it need only enter the cell. RNA viruses Replication of RNA viruses usually takes place in the cytoplasm. RNA viruses can be placed into four different groups depending on their modes of replication. The polarity (whether or not it can be used directly by ribosomes to make proteins) of single-stranded RNA viruses largely determines the replicative mechanism; the other major criterion is whether the genetic material is single-stranded or double-stranded. All RNA viruses use their own RNA replicase enzymes to create copies of their genomes. Reverse transcribing viruses Reverse transcribing viruses have ssRNA (Retroviridae, Metaviridae, Pseudoviridae) or dsDNA (Caulimoviridae, and Hepadnaviridae) in their particles. Reverse transcribing viruses with RNA genomes (retroviruses) use a DNA intermediate to replicate, whereas those with DNA genomes (pararetroviruses) use an RNA intermediate during genome replication. Both types use a reverse transcriptase, or RNA-dependent DNA polymerase enzyme, to carry out the nucleic acid conversion. Retroviruses integrate the DNA produced by reverse transcription into the host genome as a provirus as a part of the replication process; pararetroviruses do not, although integrated genome copies of especially plant pararetroviruses can give rise to infectious virus. They are susceptible to antiviral drugs that inhibit the reverse transcriptase enzyme, e.g. zidovudine and lamivudine. An example of the first type is HIV, which is a retrovirus. Examples of the second type are the Hepadnaviridae, which includes Hepatitis B virus. Cytopathic effects on the host cell The range of structural and biochemical effects that viruses have on the host cell is extensive. These are called 'cytopathic effects'. Most virus infections eventually result in the death of the host cell. The causes of death include cell lysis, alterations to the cell's surface membrane and apoptosis. Often cell death is caused by cessation of its normal activities because of suppression by virus-specific proteins, not all of which are components of the virus particle. The distinction between cytopathic and harmless is gradual. Some viruses, such as Epstein–Barr virus, can cause cells to proliferate without causing malignancy, while others, such as papillomaviruses, are established causes of cancer. Dormant and latent infections Some viruses cause no apparent changes to the infected cell. Cells in which the virus is latent and inactive show few signs of infection and often function normally. This causes persistent infections and the virus is often dormant for many months or years. This is often the case with herpes viruses. Host range Viruses are by far the most abundant biological entities on Earth and they outnumber all the others put together. They infect all types of cellular life including animals, plants, bacteria and fungi. Different types of viruses can infect only a limited range of hosts and many are species-specific. 
Some, such as smallpox virus for example, can infect only one species—in this case humans, and are said to have a narrow host range. Other viruses, such as rabies virus, can infect different species of mammals and are said to have a broad range. The viruses that infect plants are harmless to animals, and most viruses that infect other animals are harmless to humans. The host range of some bacteriophages is limited to a single strain of bacteria and they can be used to trace the source of outbreaks of infections by a method called phage typing. The complete set of viruses in an organism or habitat is called the virome; for example, all human viruses constitute the human virome. Novel viruses A novel virus is one that has not previously been recorded. It can be a virus that is isolated from its natural reservoir or isolated as the result of spread to an animal or human host where the virus had not been identified before. It can be an emergent virus, one that represents a new virus, but it can also be an extant virus that has not been previously identified. The SARS-CoV-2 coronavirus that caused the COVID-19 pandemic is an example of a novel virus. Classification Classification seeks to describe the diversity of viruses by naming and grouping them on the basis of similarities. In 1962, André Lwoff, Robert Horne, and Paul Tournier were the first to develop a means of virus classification, based on the Linnaean hierarchical system. This system based classification on phylum, class, order, family, genus, and species. Viruses were grouped according to their shared properties (not those of their hosts) and the type of nucleic acid forming their genomes. In 1966, the International Committee on Taxonomy of Viruses (ICTV) was formed. The system proposed by Lwoff, Horne and Tournier was initially not accepted by the ICTV because the small genome size of viruses and their high rate of mutation made it difficult to determine their ancestry beyond order. As such, the Baltimore classification system has come to be used to supplement the more traditional hierarchy. Starting in 2018, the ICTV began to acknowledge deeper evolutionary relationships between viruses that have been discovered over time and adopted a 15-rank classification system ranging from realm to species. Additionally, some species within the same genus are grouped into a genogroup. ICTV classification The ICTV developed the current classification system and wrote guidelines that put a greater weight on certain virus properties to maintain family uniformity. A unified taxonomy (a universal system for classifying viruses) has been established. Only a small part of the total diversity of viruses has been studied. As of 2022, 6 realms, 10 kingdoms, 17 phyla, 2 subphyla, 40 classes, 72 orders, 8 suborders, 264 families, 182 subfamilies, 2,818 genera, 84 subgenera, and 11,273 species of viruses have been defined by the ICTV. The general taxonomic structure of taxon ranges and the suffixes used in taxonomic names are shown hereafter. As of 2022, the ranks of subrealm, subkingdom, and subclass are unused, whereas all other ranks are in use. Realm (-viria) Subrealm (-vira) Kingdom (-virae) Subkingdom (-virites) Phylum (-viricota) Subphylum (-viricotina) Class (-viricetes) Subclass (-viricetidae) Order (-virales) Suborder (-virineae) Family (-viridae) Subfamily (-virinae) Genus (-virus) Subgenus (-virus) Species Baltimore classification The Nobel Prize-winning biologist David Baltimore devised the Baltimore classification system. 
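The rank-specific name suffixes listed above lend themselves to a small lookup table. The sketch below simply encodes the suffixes given in the text; the helper function is illustrative and not part of any real taxonomy library, although the example names (Riboviria, Coronaviridae, Betacoronavirus) are real ICTV taxa.

# Suffixes used in ICTV taxon names, keyed by rank (taken from the list above).
# The infer_rank() helper is illustrative only.

ICTV_SUFFIXES = {
    "realm": "-viria", "subrealm": "-vira",
    "kingdom": "-virae", "subkingdom": "-virites",
    "phylum": "-viricota", "subphylum": "-viricotina",
    "class": "-viricetes", "subclass": "-viricetidae",
    "order": "-virales", "suborder": "-virineae",
    "family": "-viridae", "subfamily": "-virinae",
    "genus": "-virus", "subgenus": "-virus",
}

def infer_rank(taxon_name: str) -> list:
    """Return the ranks whose suffix matches the name (genus and subgenus share '-virus')."""
    name = taxon_name.lower()
    return [rank for rank, suffix in ICTV_SUFFIXES.items() if name.endswith(suffix.lstrip("-"))]

print(infer_rank("Riboviria"))         # ['realm']
print(infer_rank("Coronaviridae"))     # ['family']
print(infer_rank("Betacoronavirus"))   # ['genus', 'subgenus']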
The ICTV classification system is used in conjunction with the Baltimore classification system in modern virus classification. The Baltimore classification of viruses is based on the mechanism of mRNA production. Viruses must generate mRNAs from their genomes to produce proteins and replicate themselves, but different mechanisms are used to achieve this in each virus family. Viral genomes may be single-stranded (ss) or double-stranded (ds), RNA or DNA, and may or may not use reverse transcriptase (RT). In addition, ssRNA viruses may be either sense (+) or antisense (−). This classification places viruses into seven groups: I, dsDNA viruses (such as adenoviruses, herpesviruses, and poxviruses); II, ssDNA viruses (such as parvoviruses); III, dsRNA viruses (such as reoviruses); IV, positive-sense ssRNA viruses (such as picornaviruses and coronaviruses); V, negative-sense ssRNA viruses (such as orthomyxoviruses and rhabdoviruses); VI, ssRNA viruses that replicate through a DNA intermediate (the retroviruses, such as HIV); and VII, dsDNA viruses that replicate through an RNA intermediate (the pararetroviruses, such as hepatitis B virus). Role in human disease Examples of common human diseases caused by viruses include the common cold, influenza, chickenpox, and cold sores. Many serious diseases such as rabies, Ebola virus disease, AIDS (HIV), avian influenza, and SARS are caused by viruses. The relative ability of viruses to cause disease is described in terms of virulence. Other diseases are under investigation to discover whether they have a virus as the causative agent, such as the possible connection between human herpesvirus 6 (HHV6) and neurological diseases such as multiple sclerosis and chronic fatigue syndrome. There is controversy over whether the bornavirus, previously thought to cause neurological diseases in horses, could be responsible for psychiatric illnesses in humans. Viruses have different mechanisms by which they produce disease in an organism, which depends largely on the viral species. Mechanisms at the cellular level primarily include cell lysis, the breaking open and subsequent death of the cell. In multicellular organisms, if enough cells die, the whole organism will start to suffer the effects. Although viruses cause disruption of healthy homeostasis, resulting in disease, they may exist relatively harmlessly within an organism. An example is the ability of the herpes simplex virus, which causes cold sores, to remain in a dormant state within the human body. This is called latency and is a characteristic of the herpes viruses, including Epstein–Barr virus, which causes glandular fever, and varicella zoster virus, which causes chickenpox and shingles. Most people have been infected with at least one of these types of herpes virus. These latent viruses might sometimes be beneficial, as the presence of the virus can increase immunity against bacterial pathogens, such as Yersinia pestis. Some viruses can cause lifelong or chronic infections, where the viruses continue to replicate in the body despite the host's defence mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected are known as carriers, as they serve as reservoirs of infectious virus. In populations with a high proportion of carriers, the disease is said to be endemic. Epidemiology Viral epidemiology is the branch of medical science that deals with the transmission and control of virus infections in humans. Transmission of viruses can be vertical, which means from mother to child, or horizontal, which means from person to person. Examples of vertical transmission include hepatitis B virus and HIV, where the baby is born already infected with the virus. Another, rarer, example is the varicella zoster virus, which, although causing relatively mild infections in children and adults, can be fatal to the foetus and newborn baby. Horizontal transmission is the most common mechanism of spread of viruses in populations. 
Horizontal transmission can occur when body fluids are exchanged during sexual activity, by exchange of saliva or when contaminated food or water is ingested. It can also occur when aerosols containing viruses are inhaled or by insect vectors such as when infected mosquitoes penetrate the skin of a host. Most types of viruses are restricted to just one or two of these mechanisms and they are referred to as "respiratory viruses" or "enteric viruses" and so forth. The rate or speed of transmission of viral infections depends on factors that include population density, the number of susceptible individuals, (i.e., those not immune), the quality of healthcare and the weather. Epidemiology is used to break the chain of infection in populations during outbreaks of viral diseases. Control measures are used that are based on knowledge of how the virus is transmitted. It is important to find the source, or sources, of the outbreak and to identify the virus. Once the virus has been identified, the chain of transmission can sometimes be broken by vaccines. When vaccines are not available, sanitation and disinfection can be effective. Often, infected people are isolated from the rest of the community, and those that have been exposed to the virus are placed in quarantine. To control the outbreak of foot-and-mouth disease in cattle in Britain in 2001, thousands of cattle were slaughtered. Most viral infections of humans and other animals have incubation periods during which the infection causes no signs or symptoms. Incubation periods for viral diseases range from a few days to weeks, but are known for most infections. Somewhat overlapping, but mainly following the incubation period, there is a period of communicability—a time when an infected individual or animal is contagious and can infect another person or animal. This, too, is known for many viral infections, and knowledge of the length of both periods is important in the control of outbreaks. When outbreaks cause an unusually high proportion of cases in a population, community, or region, they are called epidemics. If outbreaks spread worldwide, they are called pandemics. Epidemics and pandemics A pandemic is a worldwide epidemic. The 1918 flu pandemic, which lasted until 1919, was a category 5 influenza pandemic caused by an unusually severe and deadly influenza A virus. The victims were often healthy young adults, in contrast to most influenza outbreaks, which predominantly affect juvenile, elderly, or otherwise-weakened patients. Older estimates say it killed 40–50 million people, while more recent research suggests that it may have killed as many as 100 million people, or 5% of the world's population in 1918. Although viral pandemics are rare events, HIV—which evolved from viruses found in monkeys and chimpanzees—has been pandemic since at least the 1980s. During the 20th century there were four pandemics caused by influenza virus and those that occurred in 1918, 1957 and 1968 were severe. Most researchers believe that HIV originated in sub-Saharan Africa during the 20th century; it is now a pandemic, with an estimated 37.9 million people now living with the disease worldwide. There were about 770,000 deaths from AIDS in 2018. The Joint United Nations Programme on HIV/AIDS (UNAIDS) and the World Health Organization (WHO) estimate that AIDS has killed more than 25 million people since it was first recognised on 5 June 1981, making it one of the most destructive epidemics in recorded history. 
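The dependence of spread on the number of susceptible individuals, noted earlier in this passage, is usually formalised with compartmental models. The sketch below is a minimal SIR (susceptible–infected–recovered) model with invented parameter values; it is a standard textbook construction, not something described in this article.

# Minimal SIR epidemic model integrated with simple Euler steps.
# beta (transmission rate) and gamma (recovery rate) are invented illustrative values.

def simulate_sir(population=1_000_000, initially_infected=10,
                 beta=0.3, gamma=0.1, days=160, dt=1.0):
    s, i, r = population - initially_infected, initially_infected, 0.0
    history = []
    for day in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

peak_day, *_ = max(simulate_sir(), key=lambda row: row[2])   # day with the most active infections
print(f"Infections peak around day {peak_day}, once the susceptible pool has been sufficiently depleted")

Vaccination, quarantine, and isolation all act by shrinking the effective susceptible or infectious pools in such a model, which is the quantitative counterpart of "breaking the chain of infection" described above.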
In 2007 there were 2.7 million new HIV infections and 2 million HIV-related deaths. Several highly lethal viral pathogens are members of the Filoviridae. Filoviruses are filament-like viruses that cause viral hemorrhagic fever, and include ebolaviruses and marburgviruses. Marburg virus, first discovered in 1967, attracted widespread press attention in April 2005 for an outbreak in Angola. Ebola virus disease has also caused intermittent outbreaks with high mortality rates since 1976 when it was first identified. The worst and most recent one is the 2013–2016 West Africa epidemic. Except for smallpox, most pandemics are caused by newly evolved viruses. These "emergent" viruses are usually mutants of less harmful viruses that have circulated previously either in humans or other animals. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are caused by new types of coronaviruses. Other coronaviruses are known to cause mild infections in humans, so the virulence and rapid spread of SARS infections—which by July 2003 had caused around 8,000 cases and 800 deaths—were unexpected and most countries were not prepared. A related coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), thought to have originated in bats, emerged in Wuhan, China, in November 2019 and spread rapidly around the world. Infections with the virus caused the COVID-19 pandemic that started in 2020. Unprecedented restrictions in peacetime were placed on international travel, and curfews were imposed in several major cities worldwide in response to the pandemic. Cancer Viruses are an established cause of cancer in humans and other species. Viral cancers occur only in a minority of infected persons (or animals). Cancer viruses come from a range of virus families, including both RNA and DNA viruses, and so there is no single type of "oncovirus" (an obsolete term originally used for acutely transforming retroviruses). The development of cancer is determined by a variety of factors such as host immunity and mutations in the host. Viruses accepted to cause human cancers include some genotypes of human papillomavirus, hepatitis B virus, hepatitis C virus, Epstein–Barr virus, Kaposi's sarcoma-associated herpesvirus and human T-lymphotropic virus. The most recently discovered human cancer virus is a polyomavirus (Merkel cell polyomavirus) that causes most cases of a rare form of skin cancer called Merkel cell carcinoma. Infection with hepatitis viruses can develop into a chronic infection that leads to liver cancer. Infection by human T-lymphotropic virus can lead to tropical spastic paraparesis and adult T-cell leukaemia. Human papillomaviruses are an established cause of cancers of the cervix, skin, anus, and penis. Within the Herpesviridae, Kaposi's sarcoma-associated herpesvirus causes Kaposi's sarcoma and body-cavity lymphoma, and Epstein–Barr virus causes Burkitt's lymphoma, Hodgkin's lymphoma, B lymphoproliferative disorder, and nasopharyngeal carcinoma. Merkel cell polyomavirus is closely related to SV40 and mouse polyomaviruses, which have been used as animal models for cancer viruses for over 50 years. Host defence mechanisms The body's first line of defence against viruses is the innate immune system. This comprises cells and other mechanisms that defend the host from infection in a non-specific manner. 
This means that the cells of the innate system recognise, and respond to, pathogens in a generic way, but, unlike the adaptive immune system, it does not confer long-lasting or protective immunity to the host. RNA interference is an important innate defence against viruses. Many viruses have a replication strategy that involves double-stranded RNA (dsRNA). When such a virus infects a cell, it releases its RNA molecule or molecules, which immediately bind to a protein complex called a dicer that cuts the RNA into smaller pieces. A biochemical pathway—the RISC complex—is activated, which ensures cell survival by degrading the viral mRNA. Rotaviruses have evolved to avoid this defence mechanism by not uncoating fully inside the cell, and releasing newly produced mRNA through pores in the particle's inner capsid. Their genomic dsRNA remains protected inside the core of the virion. When the adaptive immune system of a vertebrate encounters a virus, it produces specific antibodies that bind to the virus and often render it non-infectious. This is called humoral immunity. Two types of antibodies are important. The first, called IgM, is highly effective at neutralising viruses but is produced by the cells of the immune system only for a few weeks. The second, called IgG, is produced indefinitely. The presence of IgM in the blood of the host is used to test for acute infection, whereas IgG indicates an infection sometime in the past. IgG antibody is measured when tests for immunity are carried out. Antibodies can continue to be an effective defence mechanism even after viruses have managed to gain entry to the host cell. A protein that is in cells, called TRIM21, can attach to the antibodies on the surface of the virus particle. This primes the subsequent destruction of the virus by the enzymes of the cell's proteosome system. A second defence of vertebrates against viruses is called cell-mediated immunity and involves immune cells known as T cells. The body's cells constantly display short fragments of their proteins on the cell's surface, and, if a T cell recognises a suspicious viral fragment there, the host cell is destroyed by 'killer T' cells and the virus-specific T-cells proliferate. Cells such as the macrophage are specialists at this antigen presentation. The production of interferon is an important host defence mechanism. This is a hormone produced by the body when viruses are present. Its role in immunity is complex; it eventually stops the viruses from reproducing by killing the infected cell and its close neighbours. Not all virus infections produce a protective immune response in this way. HIV evades the immune system by constantly changing the amino acid sequence of the proteins on the surface of the virion. This is known as "escape mutation" as the viral epitopes escape recognition by the host immune response. These persistent viruses evade immune control by sequestration, blockade of antigen presentation, cytokine resistance, evasion of natural killer cell activities, escape from apoptosis, and antigenic shift. Other viruses, called 'neurotropic viruses', are disseminated by neural spread where the immune system may be unable to reach them due to immune privilege. Prevention and treatment Because viruses use vital metabolic pathways within host cells to replicate, they are difficult to eliminate without using drugs that cause toxic effects to host cells in general. 
The most effective medical approaches to viral diseases are vaccinations to provide immunity to infection, and antiviral drugs that selectively interfere with viral replication. Vaccines Vaccination is a cheap and effective way of preventing infections by viruses. Vaccines were used to prevent viral infections long before the discovery of the actual viruses. Their use has resulted in a dramatic decline in morbidity (illness) and mortality (death) associated with viral infections such as polio, measles, mumps and rubella. Smallpox infections have been eradicated. Vaccines are available to prevent over thirteen viral infections of humans, and more are used to prevent viral infections of animals. Vaccines can consist of live-attenuated or killed viruses, viral proteins (antigens), or RNA. Live vaccines contain weakened forms of the virus, which do not cause the disease but, nonetheless, confer immunity. Such viruses are called attenuated. Live vaccines can be dangerous when given to people with a weak immunity (who are described as immunocompromised), because in these people, the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce subunit vaccines. These vaccines use only the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. Subunit vaccines are safe for immunocompromised patients because they cannot cause the disease. The yellow fever virus vaccine, a live-attenuated strain called 17D, is probably the safest and most effective vaccine ever generated. Antiviral drugs Antiviral drugs are often nucleoside analogues (fake DNA building-blocks), which viruses mistakenly incorporate into their genomes during replication. The life-cycle of the virus is then halted because the newly synthesised DNA is inactive. This is because these analogues lack the hydroxyl groups, which, along with phosphorus atoms, link together to form the strong "backbone" of the DNA molecule. This is called DNA chain termination. Examples of nucleoside analogues are aciclovir for Herpes simplex virus infections and lamivudine for HIV and hepatitis B virus infections. Aciclovir is one of the oldest and most frequently prescribed antiviral drugs. Other antiviral drugs in use target different stages of the viral life cycle. HIV is dependent on a proteolytic enzyme called the HIV-1 protease for it to become fully infectious. There is a large class of drugs called protease inhibitors that inactivate this enzyme. There are around thirteen classes of antiviral drugs each targeting different viruses or stages of viral replication. Hepatitis C is caused by an RNA virus. In 80% of people infected, the disease is chronic, and without treatment, they are infected for the remainder of their lives. There are effective treatments that use direct-acting antivirals. The treatment of chronic carriers of the hepatitis B virus has also been developed by using similar strategies that include lamivudine and other anti-viral drugs. Infection in other species Viruses infect all cellular life and, although viruses occur universally, each cellular species has its own specific range that often infects only that species. Some viruses, called satellites, can replicate only within cells that have already been infected by another virus. Animal viruses Viruses are important pathogens of livestock. Diseases such as foot-and-mouth disease and bluetongue are caused by viruses. 
Companion animals such as cats, dogs, and horses, if not vaccinated, are susceptible to serious viral infections. Canine parvovirus disease is caused by a small DNA virus, and infections are often fatal in pups. Like all invertebrates, the honey bee is susceptible to many viral infections. Most viruses co-exist harmlessly in their host and cause no signs or symptoms of disease. Plant viruses There are many types of plant viruses, but often they cause only a loss of yield, and it is not economically viable to try to control them. Plant viruses are often spread from plant to plant by organisms known as vectors. These are usually insects, but some fungi, nematode worms, single-celled organisms, and parasitic plants are vectors. When control of plant virus infections is considered economical, for perennial fruits, for example, efforts are concentrated on killing the vectors and removing alternate hosts such as weeds. Plant viruses cannot infect humans and other animals because they can reproduce only in living plant cells. Originally from Peru, the potato has become a staple crop worldwide. The potato virus Y causes disease in potatoes and related species including tomatoes and peppers. In the 1980s, this virus acquired economic importance when it proved difficult to control in seed potato crops. Transmitted by aphids, this virus can reduce crop yields by up to 80 per cent, causing significant losses. Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. When they are infected, plants often produce natural disinfectants that kill viruses, such as salicylic acid, nitric oxide, and reactive oxygen molecules. Plant virus particles or virus-like particles (VLPs) have applications in both biotechnology and nanotechnology. The capsids of most plant viruses are simple and robust structures and can be produced in large quantities either by the infection of plants or by expression in a variety of heterologous systems. Plant virus particles can be modified genetically and chemically to encapsulate foreign material and can be incorporated into supramolecular structures for use in biotechnology. Bacterial viruses Bacteriophages are a common and diverse group of viruses and are the most abundant biological entity in aquatic environments—there are up to ten times more of these viruses in the oceans than there are bacteria, reaching levels of 250,000,000 bacteriophages per millilitre of seawater. These viruses infect specific bacteria by binding to surface receptor molecules and then entering the cell. Within a short amount of time, in some cases, just minutes, bacterial polymerase starts translating viral mRNA into protein. These proteins go on to become either new virions within the cell, helper proteins, which help assembly of new virions, or proteins involved in cell lysis. Viral enzymes aid in the breakdown of the cell membrane, and, in the case of the T4 phage, in just over twenty minutes after injection, over three hundred phages can be released. The major way bacteria defend themselves from bacteriophages is by producing enzymes that destroy foreign DNA. 
These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells. Bacteria also contain a system that uses CRISPR sequences to retain fragments of the genomes of viruses that the bacteria have come into contact with previously, which allows them to block the virus's replication through a form of RNA interference. This genetic system provides bacteria with acquired immunity to infection. Some bacteriophages are called "temperate" because they cause latent infections and do not immediately destroy their host cells. Instead, their DNA is incorporated with the host cell's as a prophage. These latent infections become productive when the prophage DNA is activated by stimuli such as changes in the environment. The intestines of animals, including humans, contain temperate bacteriophages, which are activated by various stimuli including changes in diet and antibiotics. Although first observed in bacteriophages, many other viruses, including HIV, are known to form proviruses. Archaeal viruses Some viruses replicate within archaea: these are DNA viruses with unusual and sometimes unique shapes. These viruses have been studied in most detail in the thermophilic archaea, particularly the orders Sulfolobales and Thermoproteales. Defences against these viruses involve RNA interference from repetitive DNA sequences within archaean genomes that are related to the genes of the viruses. Most archaea have CRISPR–Cas systems as an adaptive defence against viruses. These enable archaea to retain sections of viral DNA, which are then used to target and eliminate subsequent infections by the virus using a process similar to RNA interference. Role in aquatic ecosystems Viruses are the most abundant biological entity in aquatic environments. There are about ten million of them in a teaspoon of seawater. Most of these viruses are bacteriophages infecting heterotrophic bacteria and cyanophages infecting cyanobacteria, and they are essential to the regulation of saltwater and freshwater ecosystems. Bacteriophages are harmless to plants and animals, but viruses are important mortality agents of marine microbes, including the phytoplankton at the base of the food chain in aquatic environments. They infect and destroy bacteria in aquatic microbial communities, and are one of the most important mechanisms of carbon recycling and nutrient cycling in marine environments. The organic molecules released from the dead bacterial cells stimulate fresh bacterial and algal growth, in a process known as the viral shunt. In particular, lysis of bacteria by viruses has been shown to enhance nitrogen cycling and stimulate phytoplankton growth. Viral activity may also affect the biological pump, the process whereby carbon is sequestered in the deep ocean. Microorganisms constitute more than 90% of the biomass in the sea. It is estimated that viruses kill approximately 20% of this biomass each day and that there are 10 to 15 times as many viruses in the oceans as there are bacteria and archaea. Viruses are also major agents responsible for the destruction of phytoplankton, including harmful algal blooms. The number of viruses in the oceans decreases further offshore and deeper into the water, where there are fewer host organisms. 
In January 2018, scientists reported that 800 million viruses, mainly of marine origin, are deposited daily from the Earth atmosphere onto every square meter of the planet's surface, as the result of a global atmospheric stream of viruses, circulating above the weather system but below the altitude of usual airline travel, distributing viruses around the planet. Like any organism, marine mammals are susceptible to viral infections. In 1988 and 2002, thousands of harbour seals were killed in Europe by phocine distemper virus. Many other viruses, including caliciviruses, herpesviruses, adenoviruses and parvoviruses, circulate in marine mammal populations. In December 2022, scientists reported the first observation of virovory via an experiment on pond water containing chlorovirus, which commonly infects green algae in freshwater environments. When all other microbial food sources were removed from the water, the ciliate Halteria was observed to have increased in number due to the active consumption of chlorovirus as a food source instead of its typical bacterivore diet. Role in evolution Viruses are an important natural means of transferring genes between different species, which increases genetic diversity and drives evolution. It is thought that viruses played a central role in early evolution, before the diversification of the last universal common ancestor into bacteria, archaea and eukaryotes. Viruses are still one of the largest reservoirs of unexplored genetic diversity on Earth. Applications Life sciences and medicine Viruses are important to the study of molecular and cell biology as they provide simple systems that can be used to manipulate and investigate the functions of cells. The study and use of viruses have provided valuable information about aspects of cell biology. For example, viruses have been useful in the study of genetics and helped our understanding of the basic mechanisms of molecular genetics, such as DNA replication, transcription, RNA processing, translation, protein transport, and immunology. Geneticists often use viruses as vectors to introduce genes into cells that they are studying. This is useful for making the cell produce a foreign substance, or to study the effect of introducing a new gene into the genome. Similarly, virotherapy uses viruses as vectors to treat various diseases, as they can specifically target cells and DNA. It shows promising use in the treatment of cancer and in gene therapy. Eastern European scientists have used phage therapy as an alternative to antibiotics for some time, and interest in this approach is increasing, because of the high level of antibiotic resistance now found in some pathogenic bacteria. The expression of heterologous proteins by viruses is the basis of several manufacturing processes that are currently being used for the production of various proteins such as vaccine antigens and antibodies. Industrial processes have been recently developed using viral vectors and several pharmaceutical proteins are currently in pre-clinical and clinical trials. Virotherapy Virotherapy involves the use of genetically modified viruses to treat diseases. Viruses have been modified by scientists to reproduce in cancer cells and destroy them but not infect healthy cells. Talimogene laherparepvec (T-VEC), for example, is a modified herpes simplex virus that has had a gene, which is required for viruses to replicate in healthy cells, deleted and replaced with a human gene (GM-CSF) that stimulates immunity. 
When this virus infects cancer cells, it destroys them and, in doing so, the presence of the GM-CSF gene attracts dendritic cells from the surrounding tissues of the body. The dendritic cells process the dead cancer cells and present components of them to other cells of the immune system. Having completed successful clinical trials, the virus gained approval for the treatment of melanoma in late 2015. Viruses that have been reprogrammed to kill cancer cells are called oncolytic viruses. Materials science and nanotechnology From the viewpoint of a materials scientist, viruses can be regarded as organic nanoparticles. Their surface carries specific tools that enable them to cross the barriers of their host cells. The size and shape of viruses and the number and nature of the functional groups on their surface are precisely defined. As such, viruses are commonly used in materials science as scaffolds for covalently linked surface modifications. A particular quality of viruses is that they can be tailored by directed evolution. The powerful techniques developed by life sciences are becoming the basis of engineering approaches towards nanomaterials, opening a wide range of applications far beyond biology and medicine. Because of their size, shape, and well-defined chemical structures, viruses have been used as templates for organising materials on the nanoscale. Examples include the work at the Naval Research Laboratory in Washington, D.C., using Cowpea mosaic virus (CPMV) particles to amplify signals in DNA microarray-based sensors. In this application, the virus particles separate the fluorescent dyes used for signalling to prevent the formation of non-fluorescent dimers that act as quenchers. Another example is the use of CPMV as a nanoscale breadboard for molecular electronics. Synthetic viruses Many viruses can be synthesised de novo ("from scratch"). The first synthetic virus was created in 2002. Strictly speaking, it is not the actual virus that is synthesised, but rather its DNA genome (in the case of a DNA virus), or a cDNA copy of its genome (in the case of RNA viruses). For many virus families the naked synthetic DNA or RNA (once enzymatically converted back from the synthetic cDNA) is infectious when introduced into a cell. That is, it contains all the information necessary to produce new viruses. This technology is now being used to investigate novel vaccine strategies. The ability to synthesise viruses has far-reaching consequences, since viruses can no longer be regarded as extinct, as long as the information of their genome sequence is known and permissive cells are available. As of June 2021, the full-length genome sequences of 11,464 different viruses, including smallpox, are publicly available in an online database maintained by the National Institutes of Health. Weapons The ability of viruses to cause devastating epidemics in human societies has led to the concern that viruses could be weaponised for biological warfare. Further concern was raised by the successful recreation of the infamous 1918 influenza virus in a laboratory. The smallpox virus devastated numerous societies throughout history before its eradication. There are only two centres in the world authorised by the WHO to keep stocks of smallpox virus: the State Research Center of Virology and Biotechnology VECTOR in Russia and the Centers for Disease Control and Prevention in the United States. 
The virus could be used as a weapon: because the smallpox vaccine sometimes had severe side-effects, it is no longer used routinely in any country. Thus, much of the modern human population has almost no established resistance to smallpox and would be vulnerable to the virus.
Chronology of the universe
The chronology of the universe describes the history and future of the universe according to Big Bang cosmology. Research published in 2015 estimates the earliest stages of the universe's existence as taking place 13.8 billion years ago, with an uncertainty of around 21 million years at the 68% confidence level. Overview For the purposes of this summary, it is convenient to divide the chronology of the universe since it originated into five parts. It is generally considered meaningless or unclear whether time existed before this chronology. Very early universe The first picosecond (10−12 seconds) of cosmic time includes the Planck epoch, during which currently established laws of physics may not have applied; the emergence in stages of the four known fundamental interactions or forces—first gravitation, and later the electromagnetic, weak and strong interactions; and the accelerated expansion of the universe due to cosmic inflation. Tiny ripples in the universe at this stage are believed to be the basis of large-scale structures that formed much later. Different stages of the very early universe are understood to different extents. The earlier parts are beyond the grasp of practical experiments in particle physics but can be explored through the extrapolation of known physical laws to extremely high temperatures. Early universe This period lasted around 380,000 years. Initially, various kinds of subatomic particles are formed in stages. These particles include almost equal amounts of matter and antimatter, so most of it quickly annihilates, leaving a small excess of matter in the universe. At about one second, neutrinos decouple; these neutrinos form the cosmic neutrino background (CνB). If primordial black holes exist, they are also formed at about one second of cosmic time. Composite subatomic particles emerge—including protons and neutrons—and from about 2 minutes, conditions are suitable for nucleosynthesis: around 25% of the protons and all the neutrons fuse into heavier elements, initially deuterium, which itself quickly fuses into mainly helium-4. By 20 minutes, the universe is no longer hot enough for nuclear fusion, but far too hot for neutral atoms to exist or photons to travel far. It is therefore an opaque plasma. The recombination epoch begins at around 18,000 years, as electrons are combining with helium nuclei to form He+. At around 47,000 years, as the universe cools, its behavior begins to be dominated by matter rather than radiation. At around 100,000 years, after neutral helium atoms form, helium hydride becomes the first molecule. Much later, hydrogen and helium hydride react to form molecular hydrogen (H2), the fuel needed for the first stars. At about 370,000 years, neutral hydrogen atoms finish forming ("recombination"), greatly reducing the Thomson scattering of photons. No longer scattered by free electrons, the photons were "decoupled" and propagated freely. This vast collection of photons from the earliest times of the universe can still be detected today as the cosmic microwave background (CMB). This is the oldest direct observation we currently have of the universe. Gravity builds cosmic structure This period spans from 380,000 years until about 1 billion years. Even before recombination and decoupling, matter began to accumulate around clumps of dark matter. Clouds of hydrogen collapsed very slowly to form stars and galaxies, so there were few sources of light and the emission from these sources was immediately absorbed by hydrogen atoms. 
The only photons (electromagnetic radiation, or "light") in the universe were those released during decoupling (visible today as the cosmic microwave background) and 21 cm radio emissions occasionally emitted by hydrogen atoms. This period is known as the cosmic Dark Ages. At some point around 200 to 500 million years, the earliest generations of stars and galaxies form (exact timings are still being researched), and early large structures gradually emerge, drawn to the foam-like dark matter filaments which have already begun to draw together throughout the universe. The earliest generations of stars have not yet been observed astronomically. They may have been very massive (100–300 solar masses) and non-metallic, with very short lifetimes compared to most stars we see today, so they commonly finish burning their hydrogen fuel and explode as highly energetic pair-instability supernovae after mere millions of years. Other theories suggest that they may have included small stars, some perhaps still burning today. In either case, these early generations of supernovae created most of the everyday elements we see around us today, and seeded the universe with them. Galaxy clusters and superclusters emerge over time. At some point, high-energy photons from the earliest stars, dwarf galaxies and perhaps quasars lead to a period of reionization that commences gradually between about 250–500 million years and finishes by about 1 billion years (exact timings still being researched). The Dark Ages only fully came to an end at about 1 billion years as the universe gradually transitioned into the universe we see around us today, but denser, hotter, more intense in star formation, and richer in smaller (particularly unbarred) spiral and irregular galaxies, as opposed to giant elliptical galaxies. The earliest galaxies that have been observed, dating from around 330 million years after the Big Bang, or 13.4 billion years ago (a redshift of z=13.2), have few elements heavier than hydrogen (metal poor) and show spectroscopic evidence of being surrounded by neutral hydrogen as expected. Other analyses suggest these galaxies formed rapidly in an environment of intense radiation. Universe as it appears today From 1 billion years, and for about 12.8 billion years, the universe has looked much as it does today and it will continue to appear very similar for many billions of years into the future. The thin disk of our galaxy began to form when the universe was about 5 billion years old (8.8 Gya). The Solar System formed at about 9.2 billion years (4.6 Gya), with the earliest evidence of life on Earth emerging by about 10 billion years (3.8 Gya). The thinning of matter over time reduces the ability of gravity to decelerate the expansion of the universe; in contrast, dark energy (believed to be a constant scalar field throughout the visible universe) is a constant factor tending to accelerate the expansion of the universe. The universe's expansion passed an inflection point about five or six billion years ago, when the universe entered the modern "dark-energy-dominated era" where the universe's expansion is now accelerating rather than decelerating. The present-day universe is quite well understood, but beyond about 100 billion years of cosmic time (about 86 billion years in the future), we are less sure which path the universe will take. 
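The "five or six billion years" quoted above for the inflection point can be reproduced with a short calculation. The Python sketch below is a minimal illustration, assuming round present-day parameters (H0 ≈ 70 km/s/Mpc, Ωm ≈ 0.3, ΩΛ ≈ 0.7) that are not quoted in this article; it finds the redshift at which the expansion stopped decelerating and the corresponding lookback time.

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat Lambda-CDM parameters (illustrative round values, not from this article)
H0 = 70.0                      # Hubble constant, km/s/Mpc
omega_m, omega_l = 0.3, 0.7    # matter and dark-energy density fractions today

# Acceleration begins when the deceleration parameter is zero, i.e. when
# omega_m * (1+z)^3 = 2 * omega_l (matter's gravity balanced by dark energy).
z_accel = (2 * omega_l / omega_m) ** (1.0 / 3.0) - 1.0

# Lookback time to that redshift: integrate dt = dz / ((1+z) * H(z)).
H0_per_gyr = H0 * 1.022e-3     # convert km/s/Mpc to 1/Gyr (1 km/s/Mpc ≈ 1.022e-3 /Gyr)

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat Lambda-CDM."""
    return np.sqrt(omega_m * (1 + z) ** 3 + omega_l)

t_lookback, _ = quad(lambda z: 1.0 / ((1 + z) * E(z)), 0.0, z_accel)
t_lookback /= H0_per_gyr

print(f"acceleration began at z ≈ {z_accel:.2f}")         # ≈ 0.67
print(f"lookback time ≈ {t_lookback:.1f} billion years")  # ≈ 6, i.e. "five or six billion years"
```

The result depends only weakly on the exact parameter values chosen, which is why the article can quote the figure loosely as "five or six billion years".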
Far future and ultimate fate At some time, the Stelliferous Era will end as stars are no longer being born, and the expansion of the universe will mean that the observable universe becomes limited to local galaxies. There are various scenarios for the far future and ultimate fate of the universe. More exact knowledge of the present-day universe may allow these to be better understood. Tabular summary Note: The radiation temperature in the table below refers to the cosmic microwave background radiation and is given by 2.725 K·(1 + z), where z is the redshift. Big Bang The concordance model of cosmology, called the Lambda-CDM model, is based on a model of spacetime that starts with the assumption that the density of mass is homogeneous and isotropic. These assumptions lead to the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, a measure of distance between objects. With this metric, the Einstein field equations reduce to a simpler form called the Friedmann equations, which can be solved by treating the contents of spacetime as a perfect fluid characterized only by pressure and density. The Lambda-CDM model closely matches high-precision data across many kinds of astrophysical measurements, leading to the widespread acceptance of the Big Bang model. Very early universe Planck epoch Times shorter than 10−43 seconds (Planck time) Since the standard model of cosmology predicts expansion of the universe from a very hot time in the distant past, it can be followed back to smaller and smaller scales. However, it cannot be followed back to zero space. Below a distance known as the Planck length, the basis for the equations breaks down. The energy of particles at this time is so large that quantum effects take over from Einstein's equations for gravity. The Planck time, 10−43 seconds, is therefore the beginning time for the Big Bang model of cosmology. Grand unification epoch Between 10−43 seconds and 10−36 seconds after the Big Bang After the Planck era, the universe could in principle be modeled by extensions of the Standard Model of particle physics, for example, those called grand unified theories. Many such theories have been proposed, but none has yet succeeded in producing quantitative agreement with the results of modern astrophysical observations. Nevertheless, the time between 10−43 and 10−36 seconds has been called the grand unification epoch. As the universe expanded and cooled, it crossed transition temperatures at which forces separated from each other. These cosmological phase transitions can be visualized as similar to condensation and freezing phase transitions of ordinary matter. At certain temperatures/energies, water molecules change their behavior and structure, and they will behave completely differently. Like steam turning to water, the fields which define the universe's fundamental forces and particles also completely change their behaviors and structures when the temperature/energy falls below a certain point. This is not apparent in everyday life, because it only happens at far higher temperatures than usually seen in the present-day universe. These phase transitions in the universe's fundamental forces are believed to be caused by a phenomenon of quantum fields called "symmetry breaking". In everyday terms, as the universe cools, it becomes possible for the quantum fields that create the forces and particles around us to settle at lower energy levels and with higher levels of stability. In doing so, they completely shift how they interact. 
Forces and interactions arise due to these fields, so the universe can behave very differently above and below a phase transition. For example, in a later epoch, a side effect of one phase transition is that suddenly, many particles that had no mass at all acquire a mass (they begin to interact differently with the Higgs field), and a single force begins to manifest as two separate forces. Assuming that nature is described by a so-called Grand Unified Theory (GUT), the grand unification epoch began with a phase transition of this kind, when gravitation separated from the universal combined gauge force. This left two forces in existence: gravity, and an electrostrong interaction. There is no hard evidence yet that such a combined force existed, but many physicists believe it did. The physics of this electrostrong interaction would be described by a Grand Unified Theory. The grand unification epoch ended with a second phase transition, as the electrostrong interaction in turn separated, and began to manifest as two separate interactions, called the strong and the electroweak interactions. Electroweak epoch Between 10−36 seconds (or the end of inflation) and 10−32 seconds after the Big Bang Depending on how epochs are defined, and the model being followed, the electroweak epoch may be considered to start before or after the inflationary epoch. In some models, it is described as including the inflationary epoch. In other models, the electroweak epoch is said to begin after the inflationary epoch ended, at roughly 10−32 seconds. According to traditional Big Bang cosmology, the electroweak epoch began 10−36 seconds after the Big Bang, when the temperature of the universe was low enough (10^28 K) for the electronuclear force to begin to manifest as two separate interactions, the strong and the electroweak interactions. (The electroweak interaction will also separate later, dividing into the electromagnetic and weak interactions.) The exact point where electrostrong symmetry was broken is not certain, owing to speculative and as yet incomplete theoretical knowledge. Inflationary epoch and the rapid expansion of space Before c. 10−32 seconds after the Big Bang At this point of the very early universe, the universe is thought to have expanded by a factor of at least 10^78 in volume. This is equivalent to a linear increase of at least 10^26 times in every spatial dimension, equivalent to an object 1 nanometre (10−9 m, about half the width of a molecule of DNA) in length expanding to one approximately 10.6 light-years long in a tiny fraction of a second. This phase of the cosmic expansion history is known as inflation. The mechanism that drove inflation remains unknown, although many models have been put forward. In several of the more prominent models, it is thought to have been triggered by the separation of the strong and electroweak interactions which ended the grand unification epoch. One of the theoretical products of this phase transition was a scalar field called the inflaton field. As this field settled into its lowest energy state throughout the universe, it generated an enormous repulsive force that led to a rapid expansion of the universe. Inflation explains several observed properties of the current universe that are otherwise difficult to account for, including explaining how today's universe has ended up so exceedingly homogeneous (spatially uniform) on a very large scale, even though it was highly disordered in its earliest stages. 
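As a quick check of the figures just quoted, the volume and length factors are simple arithmetic consequences of the 10^26 linear stretch. The short Python sketch below shows the calculation; the metres-per-light-year conversion is a standard value and is not taken from this article.

```python
# Arithmetic check of the inflation figures quoted above.
linear_factor = 1e26                        # linear stretch in every spatial dimension
volume_factor = linear_factor ** 3          # 1e78, the quoted increase in volume

start_length_m = 1e-9                       # 1 nanometre
end_length_m = start_length_m * linear_factor   # 1e17 metres after the stretch

metres_per_light_year = 9.461e15            # standard conversion, not from the article
print(f"volume factor: {volume_factor:.0e}")                                      # 1e+78
print(f"final length: {end_length_m / metres_per_light_year:.1f} light-years")    # ≈ 10.6
```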
It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10−33 and 10−32 seconds after the Big Bang. The rapid expansion of space meant that elementary particles remaining from the grand unification epoch were now distributed very thinly across the universe. However, the huge potential energy of the inflaton field was released at the end of the inflationary epoch, as the inflaton field decayed into other particles, known as "reheating". This heating effect led to the universe being repopulated with a dense, hot mixture of quarks, anti-quarks and gluons. In other models, reheating is often considered to mark the start of the electroweak epoch, and some theories, such as warm inflation, avoid a reheating phase entirely. In non-traditional versions of Big Bang theory (known as "inflationary" models), inflation ended at a temperature corresponding to roughly 10−32 seconds after the Big Bang, but this does not imply that the inflationary era lasted less than 10−32 seconds. To explain the observed homogeneity of the universe, the duration in these models must be longer than 10−32 seconds. Therefore, in inflationary cosmology, the earliest meaningful time "after the Big Bang" is the time of the end of inflation. After inflation ended, the universe continued to expand, but at a decelerating rate. About 4 billion years ago the expansion gradually began to speed up again. This is believed to be due to dark energy becoming dominant in the universe's large-scale behavior. It is still expanding (and accelerating) today. On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which was interpreted as clear experimental evidence for the theory of inflation. However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported, and finally, on 2 February 2015, a joint analysis of data from BICEP2/Keck and the European Space Agency's Planck microwave space telescope concluded that the statistical "significance [of the data] is too low to be interpreted as a detection of primordial B-modes" and can be attributed mainly to polarized dust in the Milky Way. Supersymmetry breaking (speculative) If supersymmetry is a property of the universe, then it must be broken at an energy that is no lower than 1 TeV, the electroweak scale. The masses of particles and their superpartners would then no longer be equal. This very high energy could explain why no superpartners of known particles have ever been observed. Early universe After cosmic inflation ends, the universe is filled with a hot quark–gluon plasma, the remains of reheating. From this point onwards the physics of the early universe is much better understood, and the energies involved in the Quark epoch are directly accessible in particle physics experiments and other detectors. Electroweak epoch and early thermalization Starting anywhere between 10−22 and 10−15 seconds after the Big Bang, until 10−12 seconds after the Big Bang Sometime after inflation, the created particles went through thermalization, in which mutual interactions led to thermal equilibrium. Before the electroweak symmetry breaking, at a temperature of around 10^15 K, approximately 10−15 seconds after the Big Bang, the electromagnetic and weak interactions have not yet separated, and the gauge bosons and fermions have not yet gained mass through the Higgs mechanism. 
This epoch ended with electroweak symmetry breaking, potentially through a phase transition. In some extensions of the Standard Model of particle physics, baryogenesis also happened at this stage, creating an imbalance between matter and anti-matter (though in some extensions this may have happened earlier). Little is known about the details of these processes. Thermalization The number density of each particle species was, by an analysis similar to the Stefan–Boltzmann law, roughly n ≈ (k_B T/ħc)^3, up to a numerical factor of order unity. Since the interaction was strong, the cross-section was approximately the particle wavelength squared, which is roughly σ ≈ (ħc/k_B T)^2. The rate of collisions per particle species can thus be calculated from the mean free path, giving approximately nσc ≈ k_B T/ħ. For comparison, since the cosmological constant was negligible at this stage, the Hubble parameter was H ≈ √(8πGρ/3), with an energy density ρc^2 ≈ x·(k_B T)^4/(ħc)^3, where x ~ 10^2 was the number of available particle species. Thus H is orders of magnitude lower than the rate of collisions per particle species. This means there was plenty of time for thermalization at this stage. At this epoch, the collision rate is proportional to the third root of the number density, and thus to a^−1, where a is the scale parameter. The Hubble parameter, however, is proportional to a^−2. Going back in time and higher in energy, and assuming no new physics at these energies, a careful estimate gives that thermalization first became possible at approximately 10−22 seconds after the Big Bang. Electroweak symmetry breaking 10−12 seconds after the Big Bang As the universe's temperature continued to fall below 159.5±1.5 GeV, electroweak symmetry breaking happened. So far as we know, it was the penultimate symmetry breaking event in the formation of the universe, the final one being chiral symmetry breaking in the quark sector. This has two related effects: Via the Higgs mechanism, all elementary particles interacting with the Higgs field become massive, having been massless at higher energy levels. As a side-effect, the weak nuclear force and electromagnetic force, and their respective bosons (the W and Z bosons and photon) now begin to manifest differently in the present universe. Before electroweak symmetry breaking these bosons were all massless particles and interacted over long distances, but at this point the W and Z bosons abruptly become massive particles only interacting over distances smaller than the size of an atom, while the photon remains massless and continues to mediate a long-range interaction. After electroweak symmetry breaking, the fundamental interactions we know of—gravitation, electromagnetic, weak and strong interactions—have all taken their present forms, and fundamental particles have their expected masses, but the temperature of the universe is still too high to allow the stable formation of many particles we now see in the universe, so there are no protons or neutrons, and therefore no atoms, atomic nuclei, or molecules. (More exactly, any composite particles that form by chance almost immediately break up again due to the extreme energies.) Quark epoch Between 10−12 seconds and 10−5 seconds after the Big Bang The quark epoch began approximately 10−12 seconds after the Big Bang. 
This was the period in the evolution of the early universe immediately after electroweak symmetry breaking when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. During the quark epoch the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10−5 seconds old, when the average energy of particle interactions had fallen below the mass of the lightest hadron, the pion. Baryogenesis Perhaps by 10−11 seconds Baryons are subatomic particles such as protons and neutrons, that are composed of three quarks. It would be expected that both baryons, and particles known as antibaryons would have formed in equal numbers. However, this does not seem to be what happened—as far as we know, the universe was left with far more baryons than antibaryons. In fact, almost no antibaryons are observed in nature. It is not clear how this came about. Any explanation for this phenomenon must allow the Sakharov conditions related to baryogenesis to have been satisfied at some time after the end of cosmological inflation. Current particle physics suggests asymmetries under which these conditions would be met, but these asymmetries appear to be too small to account for the observed baryon-antibaryon asymmetry of the universe. Hadron epoch Between 10−5 second and 1 second after the Big Bang The quark–gluon plasma that composes the universe cools until hadrons, including baryons such as protons and neutrons, can form. Initially, hadron/anti-hadron pairs could form, so matter and antimatter were in thermal equilibrium. However, as the temperature of the universe continued to fall, new hadron/anti-hadron pairs were no longer produced, and most of the newly formed hadrons and anti-hadrons annihilated each other, giving rise to pairs of high-energy photons. A comparatively small residue of hadrons remained at about 1 second of cosmic time, when this epoch ended. Theory predicts that about 1 neutron remained for every 6 protons, with the ratio falling to 1:7 over time due to neutron decay. This is believed to be correct because, at a later stage, the neutrons and some of the protons fused, leaving hydrogen, a hydrogen isotope called deuterium, helium and other elements, which can be measured. A 1:7 ratio of hadrons would indeed produce the observed element ratios in the early and current universe. Neutrino decoupling and cosmic neutrino background (CνB) Around 1 second after the Big Bang At approximately 1 second after the Big Bang neutrinos decouple and begin travelling freely through space. As neutrinos rarely interact with matter, these neutrinos still exist today, analogous to the much later cosmic microwave background emitted during recombination, around 370,000 years after the Big Bang. The neutrinos from this event have a very low energy, around 10−10 times the amount of those observable with present-day direct detection. Even high-energy neutrinos are notoriously difficult to detect, so this cosmic neutrino background (CνB) may not be directly observed in detail for many years, if at all. 
However, Big Bang cosmology makes many predictions about the CνB, and there is very strong indirect evidence that the CνB exists, both from Big Bang nucleosynthesis predictions of the helium abundance, and from anisotropies in the cosmic microwave background (CMB). One of these predictions is that neutrinos will have left a subtle imprint on the CMB. It is well known that the CMB has irregularities. Some of the CMB fluctuations were roughly regularly spaced, because of the effect of baryonic acoustic oscillations. In theory, the decoupled neutrinos should have had a very slight effect on the phase of the various CMB fluctuations. In 2015, it was reported that such shifts had been detected in the CMB. Moreover, the fluctuations corresponded to neutrinos of almost exactly the temperature predicted by Big Bang theory, and to exactly three types of neutrino, the same number of neutrino flavors predicted by the Standard Model. Possible formation of primordial black holes May have occurred within about 1 second after the Big Bang Primordial black holes are a hypothetical type of black hole proposed in 1966 that may have formed during the so-called radiation-dominated era, due to the high densities and inhomogeneous conditions within the first second of cosmic time. Random fluctuations could lead to some regions becoming dense enough to undergo gravitational collapse, forming black holes. Any primordial black holes would have to have less mass than an asteroid to avoid detection. Lepton epoch Between 1 second and 10 seconds after the Big Bang The majority of hadrons and anti-hadrons annihilate each other at the end of the hadron epoch, leaving leptons (such as electrons, muons and certain neutrinos) and antileptons dominating the mass of the universe. The lepton epoch follows a similar path to the earlier hadron epoch. Initially leptons and antileptons are produced in pairs. About 10 seconds after the Big Bang the temperature of the universe falls to the point at which new lepton–antilepton pairs are no longer created, and most remaining leptons and antileptons quickly annihilate each other, giving rise to pairs of high-energy photons, and leaving a small residue of non-annihilated leptons. Photon epoch Between 10 seconds and 370,000 years after the Big Bang After most leptons and antileptons are annihilated at the end of the lepton epoch, most of the mass–energy in the universe is left in the form of photons. (Much of the rest of its mass–energy is in the form of neutrinos and other relativistic particles.) Therefore, the energy of the universe, and its overall behavior, is dominated by its photons. These photons continue to interact frequently with charged particles, i.e., electrons, protons and (eventually) nuclei. They continue to do so for about the next 370,000 years. Nucleosynthesis of light elements Between 2 minutes and 20 minutes after the Big Bang Between about 2 and 20 minutes after the Big Bang, the temperature and pressure of the universe allowed nuclear fusion to occur, giving rise to nuclei of a few light elements beyond hydrogen ("Big Bang nucleosynthesis"). About 25% of the protons, and all the neutrons, fuse to form deuterium, a hydrogen isotope, and most of the deuterium quickly fuses to form helium-4. Atomic nuclei will easily unbind (break apart) above a certain temperature, related to their binding energy. 
From about 2 minutes, the falling temperature means that deuterium no longer unbinds, and is stable, and starting from about 3 minutes, helium and other elements formed by the fusion of deuterium also no longer unbind and are stable. The short duration and falling temperature mean that only the simplest and fastest fusion processes can occur. Only tiny amounts of nuclei beyond helium are formed, because nucleosynthesis of heavier elements is difficult and requires thousands of years even in stars. Small amounts of tritium (another hydrogen isotope) and beryllium-7 and -8 are formed, but these are unstable and are quickly lost again. A small amount of deuterium is left unfused because of the very short duration. Therefore, the only stable nuclides created by the end of Big Bang nucleosynthesis are protium (single proton/hydrogen nucleus), deuterium, helium-3, helium-4, and lithium-7. By mass, the resulting matter is about 75% hydrogen nuclei and 25% helium nuclei, with perhaps 10−10 of the mass in lithium-7. The next most common stable isotopes produced are lithium-6, beryllium-9, boron-11, carbon, nitrogen and oxygen ("CNO"), but these have predicted abundances of between 5 and 30 parts in 10^15 by mass, making them essentially undetectable and negligible. The amounts of each light element in the early universe can be estimated from old galaxies, and are strong evidence for the Big Bang. For example, the Big Bang should produce about 1 neutron for every 7 protons, allowing for 25% of all nucleons to be fused into helium-4 (2 protons and 2 neutrons out of every 16 nucleons), and this is the amount we find today, and far more than can be easily explained by other processes. Similarly, deuterium fuses extremely easily; any alternative explanation must also explain how conditions existed for deuterium to form, but also left some of that deuterium unfused and not immediately fused again into helium. Any alternative must also explain the proportions of the various light elements and their isotopes. A few isotopes, such as lithium-7, were found to be present in amounts that differed from theory, but over time, these differences have been resolved by better observations. Matter domination 47,000 years after the Big Bang Until now, the universe's large-scale dynamics and behavior have been determined mainly by radiation—meaning, those constituents that move relativistically (at or near the speed of light), such as photons and neutrinos. As the universe cools, from around 47,000 years (redshift z = 3600), the universe's large-scale behavior becomes dominated by matter instead. This occurs because the energy density of matter begins to exceed both the energy density of radiation and the vacuum energy density. Around or shortly after 47,000 years, the densities of non-relativistic matter (atomic nuclei) and relativistic radiation (photons) become equal. The Jeans length, which determines the smallest structures that can form (due to competition between gravitational attraction and pressure effects), begins to fall, and perturbations, instead of being wiped out by free-streaming radiation, can begin to grow in amplitude. According to the Lambda-CDM model, by this stage, the matter in the universe is around 84.5% cold dark matter and 15.5% "ordinary" matter. There is overwhelming evidence that dark matter exists and dominates the universe, but since the exact nature of dark matter is still not understood, the Big Bang theory does not presently cover any stages in its formation. 
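The 25% helium-4 mass fraction quoted in the nucleosynthesis discussion above follows from simple counting, given a 1:7 neutron-to-proton ratio and the assumption that essentially every neutron ends up inside a helium-4 nucleus. A minimal sketch of that counting:

```python
# Counting check for the helium mass fraction quoted in the nucleosynthesis
# section above, assuming a 1:7 neutron-to-proton ratio and that essentially
# all neutrons end up bound in helium-4.
neutrons, protons = 1, 7                 # ratio at the end of the hadron epoch
nucleons = neutrons + protons            # 8 nucleons per "unit" (2 n and 14 p per 16)

# Each helium-4 nucleus uses 2 neutrons and 2 protons, so neutrons are the
# limiting ingredient: every neutron locks up one proton alongside it.
nucleons_in_helium = 2 * neutrons        # 2 of every 8 nucleons (4 of every 16)
helium_mass_fraction = nucleons_in_helium / nucleons

print(f"helium-4 mass fraction ≈ {helium_mass_fraction:.0%}")   # 25%
```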
From this point on, and for several billion years to come, the presence of dark matter accelerates the formation of structure in the universe. In the early universe, dark matter gradually gathers in huge filaments under the effects of gravity, collapsing faster than ordinary (baryonic) matter because its collapse is not slowed by radiation pressure. This amplifies the tiny inhomogeneities (irregularities) in the density of the universe which were left by cosmic inflation. Over time, slightly denser regions become denser and slightly rarefied (emptier) regions become more rarefied. Ordinary matter eventually gathers together faster than it would otherwise do, because of the presence of these concentrations of dark matter. The properties of dark matter that allow it to collapse quickly without radiation pressure also mean that it cannot lose energy by radiation either. Losing energy is necessary for particles to collapse into dense structures beyond a certain point. Therefore, dark matter collapses into huge but diffuse filaments and haloes, and not into stars or planets. Ordinary matter, which can lose energy by radiation, forms dense objects and also gas clouds when it collapses. Recombination, photon decoupling, and the cosmic microwave background (CMB) About 370,000 years after the Big Bang, two connected events occurred: the ending of recombination and photon decoupling. Recombination describes the ionized particles combining to form the first neutral atoms, and decoupling refers to the photons released ("decoupled") as the newly formed atoms settle into more stable energy states. Just before recombination, the baryonic matter in the universe was at a temperature where it formed a hot ionized plasma. Most of the photons in the universe interacted with electrons and protons, and could not travel significant distances without interacting with ionized particles. As a result, the universe was opaque or "foggy". Although there was light, it was not possible to see through the fog, nor can that light be observed through telescopes today. Starting around 18,000 years, the universe has cooled to a point where free electrons can combine with helium nuclei to form He+ ions. Neutral helium atoms then start to form at around 100,000 years, with neutral hydrogen formation peaking around 260,000 years. This process is known as recombination. The name is slightly inaccurate and is given for historical reasons: in fact the electrons and atomic nuclei were combining for the first time. At around 100,000 years, the universe had cooled enough for helium hydride, the first molecule, to form. In April 2019, this molecule was first announced to have been observed in interstellar space, in NGC 7027, a planetary nebula within our galaxy. (Much later, atomic hydrogen reacted with helium hydride to create molecular hydrogen, the fuel required for star formation.) Directly combining in a low energy state (ground state) is less efficient, so these hydrogen atoms generally form with the electrons still in a high-energy state, and once combined, the electrons quickly release energy in the form of one or more photons as they transition to a low energy state. This release of photons is known as photon decoupling. Some of these decoupled photons are captured by other hydrogen atoms; the remainder remain free. By the end of recombination, most of the protons in the universe have formed neutral atoms. 
This change from charged to neutral particles means that the mean free path photons can travel before capture in effect becomes infinite, so any decoupled photons that have not been captured can travel freely over long distances (see Thomson scattering). The universe has become transparent to visible light, radio waves and other electromagnetic radiation for the first time in its history. The photons released by these newly formed hydrogen atoms initially had a temperature/energy of around ~ 4000 K. This would have been visible to the eye as a pale yellow/orange tinted, or "soft", white color. Over billions of years since decoupling, as the universe has expanded, the photons have been red-shifted from visible light to radio waves (microwave radiation corresponding to a temperature of about 2.7 K). Red shifting describes the photons acquiring longer wavelengths and lower frequencies as the universe expanded over billions of years, so that they gradually changed from visible light to radio waves. These same photons can still be detected as radio waves today. They form the cosmic microwave background, and they provide crucial evidence of the early universe and how it developed. Around the same time as recombination, existing pressure waves within the electron-baryon plasma—known as baryon acoustic oscillations—became embedded in the distribution of matter as it condensed, giving rise to a very slight preference in distribution of large-scale objects. Therefore, the cosmic microwave background is a picture of the universe at the end of this epoch including the tiny fluctuations generated during inflation (see 9-year WMAP image), and the spread of objects such as galaxies in the universe is an indication of the scale and size of the universe as it developed over time. Gravity builds cosmic structure 370 thousand to about 1 billion years after the Big Bang Dark Ages After recombination and decoupling, the universe was transparent and had cooled enough to allow light to travel long distances, but there were no light-producing structures such as stars and galaxies. Stars and galaxies are formed when dense regions of gas form due to the action of gravity, and this takes a long time within a near-uniform density of gas and on the scale required, so it is estimated that stars did not exist for perhaps hundreds of millions of years after recombination. This period, known as the Dark Ages, began around 370,000 years after the Big Bang. During the Dark Ages, the temperature of the universe cooled from some 4000 K to about 60 K (3727 °C to about −213 °C), and only two sources of photons existed: the photons released during recombination/decoupling (as neutral hydrogen atoms formed), which we can still detect today as the cosmic microwave background (CMB), and photons occasionally released by neutral hydrogen atoms, known as the 21 cm spin line of neutral hydrogen. The hydrogen spin line is in the microwave range of frequencies, and within 3 million years, the CMB photons had redshifted out of visible light to infrared; from that time until the first stars, there were no visible light photons. Other than perhaps some rare statistical anomalies, the universe was truly dark. The first generation of stars, known as Population III stars, formed within a few hundred million years after the Big Bang. These stars were the first source of visible light in the universe after recombination. 
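The temperatures quoted in this part of the article are consistent with the scaling T = 2.725 K·(1 + z) given in the tabular summary note. The short sketch below evaluates that relation at a few illustrative redshifts; the round redshift values are chosen here for illustration and are not stated in the article (z ≈ 1090 is a standard estimate for decoupling).

```python
# The relic radiation temperature scales as T(z) = 2.725 K * (1 + z),
# the relation given in the tabular-summary note above.
T0 = 2.725  # present-day CMB temperature in kelvin

def cmb_temperature(z):
    """Radiation temperature at redshift z, in kelvin."""
    return T0 * (1 + z)

# Illustrative round redshifts (not quoted in the article):
print(f"z ≈ 1090: T ≈ {cmb_temperature(1090):.0f} K")  # ≈ 2970 K, a few thousand kelvin at decoupling
print(f"z ≈ 1500: T ≈ {cmb_temperature(1500):.0f} K")  # ≈ 4090 K, matching the ~4000 K quoted above
print(f"z ≈ 21:   T ≈ {cmb_temperature(21):.0f} K")    # ≈ 60 K, the end-of-Dark-Ages figure quoted above
```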
Structures may have begun to emerge from around 150 million years, and early galaxies emerged from around 180 to 700 million years. As they emerged, the Dark Ages gradually ended. Because this process was gradual, the Dark Ages only ended fully at around 1 billion years, as the universe took on its present appearance. Oldest observations of stars and galaxies At present, the oldest observations of stars and galaxies are from shortly after the start of reionization, with galaxies such as GN-z11 (Hubble Space Telescope, 2016) at about z≈11.1 (about 400 million years cosmic time). Hubble's successor, the James Webb Space Telescope, launched in December 2021, is designed to detect objects up to 100 times fainter than Hubble, and much earlier in the history of the universe, back to redshift z≈20 (about 180 million years cosmic time). This is believed to be earlier than the first galaxies, and around the era of the first stars. There is also an observational effort underway to detect the faint 21 cm spin line radiation, as it is in principle an even more powerful tool than the cosmic microwave background for studying the early universe. Earliest structures and stars emerge Around 150 million to 1 billion years after the Big Bang The matter in the universe is around 84.5% cold dark matter and 15.5% "ordinary" matter. Since the start of the matter-dominated era, dark matter has gradually been gathering in huge spread-out (diffuse) filaments under the effects of gravity. Ordinary matter eventually gathers together faster than it would otherwise do, because of the presence of these concentrations of dark matter. Ordinary matter is also slightly denser at regular intervals due to early baryon acoustic oscillations (BAO), which became embedded in the distribution of matter when photons decoupled. Unlike dark matter, ordinary matter can lose energy by many routes, which means that as it collapses, it can lose the energy which would otherwise hold it apart, and collapse more quickly, and into denser forms. Ordinary matter gathers where dark matter is denser, and in those places it collapses into clouds of mainly hydrogen gas. The first stars and galaxies form from these clouds. Where numerous galaxies have formed, galaxy clusters and superclusters will eventually arise. Large voids with few stars will develop between them, marking where dark matter became less common. The exact timings of the first stars, galaxies, supermassive black holes, and quasars, and the start and end timings and progression of the period known as reionization, are still being actively researched, with new findings published periodically. The earliest confirmed galaxies (for example GN-z11) date from around 380–400 million years, suggesting surprisingly fast gas cloud condensation and stellar birth rates, and observations of the Lyman-alpha forest and of other changes to the light from ancient objects allow the timing of reionization and its eventual end to be narrowed down. But these are all still areas of active research. Structure formation in the Big Bang model proceeds hierarchically, due to gravitational collapse, with smaller structures forming before larger ones. The earliest structures to form are the first stars (known as Population III stars), dwarf galaxies, and quasars (which are thought to be bright, early active galaxies containing a supermassive black hole surrounded by an inward-spiralling accretion disk of gas). 
Before this epoch, the evolution of the universe could be understood through linear cosmological perturbation theory: that is, all structures could be understood as small deviations from a perfectly homogeneous universe. This is computationally relatively easy to study. At this point non-linear structures begin to form, and the computational problem becomes much more difficult, involving, for example, N-body simulations with billions of particles. The Bolshoi cosmological simulation is a high precision simulation of this era. These Population III stars are also responsible for turning the few light elements that were formed in the Big Bang (hydrogen, helium and small amounts of lithium) into many heavier elements. They may have been enormous or perhaps small, and were non-metallic (containing no elements except hydrogen and helium). The larger stars have very short lifetimes compared to most Main Sequence stars we see today, so they commonly finish burning their hydrogen fuel and explode as supernovae after mere millions of years, seeding the universe with heavier elements over repeated generations. They mark the start of the Stelliferous Era. As yet, no Population III stars have been found, so the understanding of them is based on computational models of their formation and evolution. Fortunately, observations of the cosmic microwave background radiation can be used to date when star formation began in earnest. Analysis of such observations made by the Planck microwave space telescope in 2016 concluded that the first generation of stars may have formed from around 300 million years after the Big Bang. Quasars provide some additional evidence of early structure formation. Their light shows evidence of elements such as carbon, magnesium, iron and oxygen. This is evidence that by the time quasars formed, a massive phase of star formation had already taken place, including sufficient generations of Population III stars to give rise to these elements. Reionization As the first stars, dwarf galaxies and quasars gradually form, the intense radiation they emit reionizes much of the surrounding universe, splitting the neutral hydrogen atoms back into a plasma of free electrons and protons for the first time since recombination and decoupling. Reionization is evidenced by observations of quasars. Quasars are a form of active galaxy, and the most luminous objects observed in the universe. Electrons in neutral hydrogen have specific patterns of absorbing ultraviolet photons, related to electron energy levels and called the Lyman series. Ionized hydrogen does not have electron energy levels of this kind. Therefore, light travelling through ionized hydrogen and neutral hydrogen shows different absorption lines. Ionized hydrogen in the intergalactic medium (particularly electrons) can scatter light through Thomson scattering as it did before recombination, but the expansion of the universe and clumping of gas into galaxies resulted in a concentration too low to make the universe fully opaque by the time of reionization. Because of the immense distance travelled by light (billions of light years) to reach Earth from structures existing during reionization, any absorption by neutral hydrogen is redshifted by various amounts, rather than by one specific amount, indicating when the absorption of then-ultraviolet light happened. These features make it possible to study the state of ionization at many different times in the past.
Reionization began as "bubbles" of ionized hydrogen which became larger over time until the entire intergalactic medium was ionized, when absorption lines from neutral hydrogen became rare. The absorption was due to the general state of the universe (the intergalactic medium) and not due to passing through galaxies or other dense areas. Reionization might have started to happen as early as z = 16 (250 million years of cosmic time) and was mostly complete by around z = 9 or 10 (500 million years), with the remaining neutral hydrogen becoming fully ionized at z = 5 or 6 (1 billion years), when the Gunn-Peterson troughs that show the presence of large amounts of neutral hydrogen disappear. The intergalactic medium remains predominantly ionized to the present day, the exception being some remaining neutral hydrogen clouds, which cause Lyman-alpha forests to appear in spectra. These observations have narrowed down the period of time during which reionization took place, but the source of the photons that caused reionization is still not completely certain. To ionize neutral hydrogen, an energy larger than 13.6 eV is required, which corresponds to ultraviolet photons with a wavelength of 91.2 nm or shorter, implying that the sources must have produced significant amounts of ultraviolet and higher-energy radiation. Protons and electrons will recombine if energy is not continuously provided to keep them apart, which also sets limits on how numerous the sources were and their longevity. With these constraints, it is expected that quasars and first generation stars and galaxies were the main sources of energy. The leading candidates, from most to least significant, are currently believed to be Population III stars (the earliest stars; possibly 70%), dwarf galaxies (very early small high-energy galaxies; possibly 30%), and a contribution from quasars (a class of active galactic nuclei). However, by this time, matter had become far more spread out due to the ongoing expansion of the universe. Although the neutral hydrogen atoms were again ionized, the plasma was much thinner and more diffuse, and photons were much less likely to be scattered. Despite being reionized, the universe remained largely transparent during reionization due to how sparse the intergalactic medium was. Reionization gradually ended as the intergalactic medium became virtually completely ionized, although some regions of neutral hydrogen do exist, creating Lyman-alpha forests. In August 2023, images of black holes and related matter in the very early universe by the James Webb Space Telescope were reported and discussed. Galaxies, clusters and superclusters Matter continues to draw together under the influence of gravity, to form galaxies. The stars from this time period, known as Population II stars, are formed early on in this process, with more recent Population I stars formed later. Gravitational attraction also gradually pulls galaxies towards each other to form groups, clusters and superclusters. Hubble Ultra Deep Field observations have identified a number of small galaxies merging to form larger ones, at 800 million years of cosmic time (13 billion years ago). (This age estimate is now believed to be slightly overstated.) Present and future The universe has appeared much the same as it does now for many billions of years. It will continue to look similar for many more billions of years into the future.
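The reionization threshold quoted above (an energy of 13.6 eV corresponding to a wavelength of 91.2 nm) can be verified with λ = hc/E. The sketch below is only a check of that arithmetic, using the standard value hc ≈ 1239.84 eV·nm:

```python
# A quick arithmetic check of the figure quoted in the reionization discussion above:
# the photon wavelength corresponding to hydrogen's 13.6 eV ionization energy,
# from lambda = h*c / E.

HC_EV_NM = 1239.84        # Planck constant times speed of light, in eV*nm
E_ionize_eV = 13.6        # energy needed to ionize neutral hydrogen

wavelength_nm = HC_EV_NM / E_ionize_eV
print(f"Maximum ionizing wavelength: {wavelength_nm:.1f} nm")  # ~91.2 nm, extreme ultraviolet
```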
The galactic disk of the Milky Way is estimated to have been formed 8.8 ± 1.7 billion years ago, but only the age of the Sun, 4.567 billion years, is known precisely. Dark energy-dominated era From about 9.8 billion years after the Big Bang From about 9.8 billion years of cosmic time, the universe's large-scale behavior is believed to have gradually changed for the third time in its history. Its behavior had originally been dominated by radiation (relativistic constituents such as photons and neutrinos) for the first 47,000 years, and since about 370,000 years of cosmic time, its behavior had been dominated by matter. During its matter-dominated era, the expansion of the universe had begun to slow down, as gravity reined in the initial outward expansion. But from about 9.8 billion years of cosmic time, observations show that the expansion of the universe slowly stops decelerating and gradually begins to accelerate instead. While the precise cause is not known, the observation is accepted as correct by the cosmological community. By far the most accepted understanding is that this is due to an unknown form of energy which has been given the name "dark energy". "Dark" in this context means that it is not directly observed, but its existence can be deduced by examining the gravitational effect it has on the universe. Research is ongoing to understand this dark energy. Dark energy is now believed to be the single largest component of the universe, as it constitutes about 68.3% of the entire mass–energy of the physical universe. Dark energy is believed to act like a cosmological constant—a scalar field that exists throughout space. Unlike gravity, the effects of such a field do not diminish (or only diminish slowly) as the universe grows. While matter and gravity have a greater effect initially, their effect quickly diminishes as the universe continues to expand. Objects in the universe, which are initially seen to be moving apart as the universe expands, continue to move apart, but their outward motion gradually slows down. This slowing effect becomes smaller as the universe becomes more spread out. Eventually, the outward and repulsive effect of dark energy begins to dominate over the inward pull of gravity. Instead of slowing down and perhaps beginning to move inward under the influence of gravity, from about 9.8 billion years of cosmic time, the expansion of space starts to slowly accelerate outward at a gradually increasing rate. Far future and ultimate fate There are several competing scenarios for the long-term evolution of the universe. Which of them will happen, if any, depends on the precise values of physical constants such as the cosmological constant, the possibility of proton decay, the energy of the vacuum (meaning, the energy of "empty" space itself), and the natural laws beyond the Standard Model. If the expansion of the universe continues and it stays in its present form, eventually all but the nearest galaxies will be carried away from us by the expansion of space at such a velocity that the observable universe will be limited to our own gravitationally bound local galaxy cluster. In the very long term (after many trillions—thousands of billions—of years, cosmic time), the Stelliferous Era will end, as stars cease to be born and even the longest-lived stars gradually die.
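A rough sketch of when dark energy overtakes matter can be made by noting that matter density dilutes as 1/a³ with the scale factor a, while a cosmological-constant-like dark energy stays constant. The fractions below assume a flat universe and use the 68.3% figure quoted above; this is an illustrative estimate, not a precise calculation:

```python
# Rough illustration, assuming a flat universe and a constant dark energy density.
# Matter density scales as 1/a**3, dark energy stays fixed, so the two are equal
# when a = (Omega_m / Omega_L)**(1/3).

Omega_L = 0.683            # dark energy fraction today (the ~68.3% quoted above)
Omega_m = 1.0 - Omega_L    # matter fraction (dark plus ordinary), flatness assumed

a_equal = (Omega_m / Omega_L) ** (1.0 / 3.0)   # scale factor at matter/dark-energy equality
z_equal = 1.0 / a_equal - 1.0                  # corresponding redshift

print(f"Densities equal when the universe was ~{a_equal:.2f} of its present size")
print(f"i.e. at redshift z ~ {z_equal:.2f}")   # roughly z ~ 0.3, several billion years ago
```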
Beyond this, all objects in the universe will cool and (with the possible exception of protons) gradually decompose back to their constituent particles and then into subatomic particles, very low-energy photons and other fundamental particles, by a variety of possible processes. Ultimately, in the extreme future, several scenarios have been proposed for the ultimate fate of the universe. On such extreme timescales, extremely rare quantum phenomena may also occur that are extremely unlikely to be seen on a timescale smaller than trillions of years. These may also lead to unpredictable changes to the state of the universe which would not be likely to be significant on any smaller timescale. For example, on a timescale of millions of trillions of years, black holes might appear to evaporate almost instantly, uncommon quantum tunnelling phenomena would appear to be common, and quantum (or other) phenomena so unlikely that they might occur just once in a trillion years may occur many times.
Physical sciences
Physical cosmology
https://en.wikipedia.org/wiki/Prokaryote
Prokaryote
A prokaryote (less commonly spelled procaryote) is a single-celled organism whose cell lacks a nucleus and other membrane-bound organelles. The word prokaryote comes from the Ancient Greek πρό (pró), meaning 'before', and κάρυον (káruon), meaning 'nut' or 'kernel'. In the earlier two-empire system arising from the work of Édouard Chatton, prokaryotes were classified within the empire Prokaryota. However, in the three-domain system, based upon molecular phylogenetics, prokaryotes are divided into two domains: Bacteria and Archaea. A third domain, Eukaryota, consists of organisms with nuclei. Prokaryotes evolved before eukaryotes, and lack nuclei, mitochondria, and most of the other distinct organelles that characterize the eukaryotic cell. Some unicellular prokaryotes, such as cyanobacteria, form colonies held together by biofilms, and large colonies can create multilayered microbial mats. Prokaryotes are asexual, reproducing via binary fission, although horizontal gene transfer is common. Molecular phylogenetics has provided insight into the evolution and interrelationships of the three domains of life. The division between prokaryotes and eukaryotes reflects two very different levels of cellular organization; only eukaryotic cells have an enclosed nucleus that contains their DNA, and other membrane-bound organelles including mitochondria. More recently, the primary division has been seen as that between Archaea and Bacteria, since the eukaryotes are part of the archaean clade and have multiple homologies with other Archaea. Structure The cellular components of prokaryotes are not enclosed in membranes within the cytoplasm, like eukaryotic organelles. Bacteria have microcompartments, quasi-organelles enclosed in protein shells such as encapsulin protein cages, while both bacteria and some archaea have gas vesicles. Prokaryotes have simple cell skeletons. These are highly diverse, and contain homologues of the eukaryote proteins actin and tubulin. The cytoskeleton provides the capability for movement within the cell. Most prokaryotes are between 1 and 10 μm, but they vary in size from 0.2 μm in Thermodiscus spp. and Mycoplasma genitalium to 750 μm in Thiomargarita namibiensis. Bacterial cells have various shapes, including spherical or ovoid cocci, e.g., Streptococcus; cylindrical bacilli, e.g., Lactobacillus; spiral bacteria, e.g., Helicobacter; or comma-shaped, e.g., Vibrio. Archaea are mainly simple ovoids, but Haloquadratum is flat and square. Reproduction and DNA transfer Bacteria and archaea reproduce through asexual reproduction, usually by binary fission. Genetic exchange and recombination occur by horizontal gene transfer, not involving replication. DNA transfer between prokaryotic cells occurs in bacteria and archaea. Gene transfer in bacteria In bacteria, gene transfer occurs by three processes. These are virus-mediated transduction; conjugation; and natural transformation. Transduction of bacterial genes by bacteriophage viruses appears to reflect occasional errors during intracellular assembly of virus particles, rather than an adaptation of the host bacteria. There are at least three ways that it can occur, all involving the incorporation of some bacterial DNA into the virus, which then carries it to another bacterium. Conjugation involves plasmids, allowing plasmid DNA to be transferred from one bacterium to another. Infrequently, a plasmid may integrate into the host bacterial chromosome, and subsequently transfer part of the host bacterial DNA to another bacterium.
Natural bacterial transformation involves the transfer of DNA from one bacterium to another through the water around them. This is a bacterial adaptation for DNA transfer, because it depends on the interaction of numerous bacterial gene products. The bacterium must first enter the physiological state called competence; in Bacillus subtilis, the process involves 40 genes. The amount of DNA transferred during transformation can be as much as a third of the whole chromosome. Transformation is common, occurring in at least 67 species of bacteria. Gene transfer in archaea Among archaea, Haloferax volcanii forms cytoplasmic bridges between cells that transfer DNA between cells, while Sulfolobus solfataricus transfers DNA between cells by direct contact. Exposure of S. solfataricus to agents that damage DNA induces cellular aggregation, perhaps enhancing homologous recombination to increase the repair of damaged DNA. Colonies and biofilms Prokaryotes are strictly unicellular, but most can form stable aggregate communities in biofilms. Bacterial biofilms are formed by the secretion of extracellular polymeric substance (EPS). Myxobacteria have multicellular stages in their life cycles. Biofilms may be structurally complex and may attach to solid surfaces, or exist at liquid-air interfaces. Bacterial biofilms are often made up of microcolonies (dome-shaped masses of bacteria and matrix) separated by channels through which water may flow easily. Microcolonies may join together above the substratum to form a continuous layer. This structure functions as a simple circulatory system by moving water through the biofilm, helping to provide cells with oxygen which is often in short supply. The result approaches a multicellular organisation. Differential cell expression, collective behavior, signaling (quorum sensing), programmed cell death, and discrete biological dispersal events all seem to point in this direction. Bacterial biofilms may be 100 times more resistant to antibiotics than free-living unicells, making them difficult to remove from surfaces they have colonized. Environment Prokaryotes have diversified greatly throughout their long existence. Their metabolism is far more varied than that of eukaryotes, leading to many highly distinct types. For example, prokaryotes may obtain energy by chemosynthesis. Prokaryotes live nearly everywhere on Earth, including in environments as cold as soils in Antarctica, or as hot as undersea hydrothermal vents and land-based hot springs. Some archaea and bacteria are extremophiles, thriving in harsh conditions, such as high temperatures (thermophiles) or high salinity (halophiles). Some archaeans are methanogens, living in anoxic environments and releasing methane. Many archaea grow as plankton in the oceans. Symbiotic prokaryotes live in or on the bodies of other organisms, including humans. Prokaryotes have high populations in the soil, in the sea, and in undersea sediments. Soil prokaryotes are still heavily undercharacterized despite their easy proximity to humans and their tremendous economic importance to agriculture. Evolution The first organisms A widespread current model of the origin of life is that the first organisms were prokaryotes. These may have evolved out of protocells, while the eukaryotes evolved later in the history of life. An alternative model is that extant prokaryotes evolved from more complex eukaryotic ancestors through a process of simplification. 
Another view is that the three domains of life arose simultaneously, from a set of varied cells that formed a single gene pool. The oldest known fossilized prokaryotes were laid down approximately 3.5 billion years ago, only about 1 billion years after the formation of the Earth's crust. Eukaryotes only appear in the fossil record later, and may have formed from endosymbiosis of multiple prokaryote ancestors. The oldest known fossil eukaryotes are about 1.7 billion years old. However, some genetic evidence suggests eukaryotes appeared as early as 3 billion years ago. Phylogeny According to the 2016 phylogenetic analysis of Laura Hug and colleagues, using genomic data on over 1,000 organisms, the relationships among prokaryotes are as shown in the tree diagram. Classification Taxonomic history The distinction between prokaryotes and eukaryotes was established by the microbiologists Roger Stanier and C. B. van Niel in their 1962 paper The concept of a bacterium (though spelled procaryote and eucaryote there). That paper cites Édouard Chatton's 1937 book Titres et Travaux Scientifiques for using those terms and recognizing the distinction. One reason for this classification was so that the group then often called blue-green algae (now cyanobacteria) would not be classified as plants but grouped with bacteria. In 1977, Carl Woese proposed dividing prokaryotes into the Bacteria and Archaea (originally Eubacteria and Archaebacteria) because of the major differences in the structure and genetics between the two groups of organisms. Archaea were originally thought to be extremophiles, living only in inhospitable conditions such as extremes of temperature, pH, and radiation but have since been found in all types of habitats. The resulting arrangement of Eukaryota (also called "Eucarya"), Bacteria, and Archaea is called the three-domain system, replacing the traditional two-empire system. As distinct from eukaryotes The division between prokaryotes and eukaryotes has been considered the most important distinction or difference among organisms. The distinction is that eukaryotic cells have a "true" nucleus containing their DNA, whereas prokaryotic cells do not have a nucleus. Both eukaryotes and prokaryotes contain ribosomes which produce proteins as specified by the cell's DNA. Prokaryote ribosomes are smaller than those in eukaryote cytoplasm, but similar to those inside mitochondria and chloroplasts, one of several lines of evidence that those organelles derive from bacteria incorporated by symbiogenesis. The genome in a prokaryote is held within a DNA/protein complex in the cytosol called the nucleoid, which lacks a nuclear envelope. The complex contains a single circular chromosome, a cyclic, double-stranded molecule of stable chromosomal DNA, in contrast to the multiple linear, compact, highly organized chromosomes found in eukaryotic cells. In addition, many important genes of prokaryotes are stored in separate circular DNA structures called plasmids. Like eukaryotes, prokaryotes may partially duplicate genetic material, and can have a haploid chromosomal composition that is partially replicated. Prokaryotes lack mitochondria and chloroplasts. Instead, processes such as oxidative phosphorylation and photosynthesis take place across the prokaryotic cell membrane. However, prokaryotes do possess some internal structures, such as prokaryotic cytoskeletons. 
It has been suggested that the bacterial phylum Planctomycetota has a membrane around the nucleoid and contains other membrane-bound cellular structures. However, further investigation revealed that Planctomycetota cells are not compartmentalized or nucleated and, like other bacterial membrane systems, are interconnected. Prokaryotic cells are usually much smaller than eukaryotic cells. Therefore, prokaryotes have a larger surface-area-to-volume ratio, giving them a higher metabolic rate, a higher growth rate, and as a consequence, a shorter generation time than eukaryotes. Eukaryotes as Archaea There is increasing evidence that the roots of the eukaryotes are to be found in the archaean Asgard group, perhaps Heimdallarchaeota. For example, histones which usually package DNA in eukaryotic nuclei, are found in several archaean groups, giving evidence for homology. The non-bacterial group comprising Archaea and Eukaryota was called Neomura by Thomas Cavalier-Smith in 2002, on the view that these form a clade. Unlike the above assumption of a fundamental split between prokaryotes and eukaryotes, the most important difference between biota may be the division between Bacteria and the rest (Archaea and Eukaryota). DNA replication differs fundamentally between the Bacteria and Archaea (including that in eukaryotic nuclei), and it may not be homologous between these two groups. Further, ATP synthase, though homologous in all organisms, differs greatly between bacteria (including eukaryotic organelles such as mitochondria and chloroplasts) and the archaea/eukaryote nucleus group. The last common ancestor of all life (called LUCA) should have possessed an early version of this protein complex. As ATP synthase is obligate membrane bound, this supports the assumption that LUCA was a cellular organism. The RNA world hypothesis might clarify this scenario, as LUCA might have lacked DNA, but had an RNA genome built by ribosomes as suggested by Woese. A ribonucleoprotein world has been proposed based on the idea that oligopeptides may have been built together with primordial nucleic acids at the same time, which supports the concept of a ribocyte as LUCA. The feature of DNA as the material base of the genome might have then been adopted separately in bacteria and in archaea (and later eukaryote nuclei), presumably with the help of some viruses (possibly retroviruses as they could reverse transcribe RNA to DNA).
Biology and health sciences
Other organisms
https://en.wikipedia.org/wiki/Electric%20battery
Electric battery
An electric battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons. When a battery is connected to an external electric load, those negatively charged electrons flow through the circuit and reach the positive terminal, driving a redox reaction as they attract positively charged ions (cations). This converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically, the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell. Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones. Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines. History Invention Benjamin Franklin first used the term "battery" in 1749 when he was doing experiments with electricity using a set of linked Leyden jar capacitors. Franklin grouped a number of the jars into what he described as a "battery", using the military term for weapons functioning together. By multiplying the number of holding vessels, a stronger charge could be stored, and more power would be available on discharge. Italian physicist Alessandro Volta built and described the first electrochemical battery, the voltaic pile, in 1800. This was a stack of copper and zinc plates, separated by brine-soaked paper disks, that could produce a steady current for a considerable length of time. Volta did not understand that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834. Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. The Daniell cell, invented in 1836 by British chemist John Frederic Daniell, was the first practical source of electricity, becoming an industry standard and seeing widespread adoption as a power source for electrical telegraph networks.
It consisted of a copper pot filled with a copper sulfate solution, in which was immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile and potentially dangerous. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical. Batteries in vacuum tube devices historically used a wet cell for the "A" battery (to provide power to the filament) and a dry cell for the "B" battery (to provide the plate voltage). Ongoing developments Between 2010 and 2018, annual battery demand grew by 30%, reaching a total of 180 GWh in 2018. Conservatively, the growth rate is expected to be maintained at an estimated 25%, culminating in demand reaching 2600 GWh in 2030. In addition, cost reductions are expected to further increase the demand to as much as 3562 GWh. Important reasons for this high rate of growth of the electric battery industry include the electrification of transport, and large-scale deployment in electricity grids, supported by decarbonization initiatives. Distributed electric batteries, such as those used in battery electric vehicles (vehicle-to-grid) and in home energy storage, with smart metering and connections to smart grids for demand response, are active participants in smart power supply grids. New methods of reuse, such as echelon use of partly-used batteries, add to the overall utility of electric batteries, reduce energy storage costs, and also reduce pollution/emission impacts due to longer lives. In echelon use of batteries, vehicle electric batteries that have their battery capacity reduced to less than 80%, usually after service of 5–8 years, are repurposed for use as backup supply or for renewable energy storage systems. Grid scale energy storage envisages the large-scale use of batteries to collect and store energy from the grid or a power plant and then discharge that energy at a later time to provide electricity or other grid services when needed. Grid scale energy storage systems (either turnkey or distributed) are important components of smart power supply grids. Chemistry and principles Batteries convert chemical energy directly to electrical energy. In many cases, the electrical energy released is the difference in the cohesive or bond energies of the metals, oxides, or molecules undergoing the electrochemical reaction. For instance, energy can be stored in Zn or Li, which are high-energy metals because they are not stabilized by d-electron bonding, unlike transition metals. Batteries are designed so that the energetically favorable redox reaction can occur only when electrons move through the external part of the circuit. A battery consists of some number of voltaic cells. Each cell consists of two half-cells connected in series by a conductive electrolyte containing metal cations. One half-cell includes electrolyte and the negative electrode, the electrode to which anions (negatively charged ions) migrate; the other half-cell includes electrolyte and the positive electrode, to which cations (positively charged ions) migrate. Cations are reduced (electrons are added) at the cathode, while metal atoms are oxidized (electrons are removed) at the anode.
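As a quick plausibility check of the demand growth figures quoted earlier in this section (180 GWh in 2018 compounding at about 25% per year), the sketch below is simple compound-growth arithmetic rather than a projection taken from the source:

```python
# Compound-growth check of the battery demand figures quoted above: 180 GWh of annual
# demand in 2018 growing at roughly 25% per year until 2030. Illustrative arithmetic only.

demand_2018_gwh = 180.0
growth_rate = 0.25
years = 2030 - 2018

demand_2030_gwh = demand_2018_gwh * (1 + growth_rate) ** years
print(f"Projected 2030 demand: ~{demand_2030_gwh:.0f} GWh")  # ~2600 GWh, matching the estimate
```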
Some cells use different electrolytes for each half-cell; then a separator is used to prevent mixing of the electrolytes while allowing ions to flow between half-cells to complete the electrical circuit. Each half-cell has an electromotive force (emf, measured in volts) relative to a standard. The net emf of the cell is the difference between the emfs of its half-cells. Thus, if the electrodes have emfs E1 and E2, then the net emf is E2 − E1; in other words, the net emf is the difference between the reduction potentials of the half-reactions. The electrical driving force, or voltage difference, across the terminals of a cell is known as the terminal voltage and is measured in volts. The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance, the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage. An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage equal to its emf until exhausted, then drop to zero. If such a cell maintained 1.5 volts and produced a charge of one coulomb then on complete discharge it would have performed 1.5 joules of work. In actual cells, the internal resistance increases under discharge and the open-circuit voltage also decreases under discharge. If the voltage and resistance are plotted against time, the resulting graphs typically are a curve; the shape of the curve varies according to the chemistry and internal arrangement employed. The voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries, but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. The high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more. Almost any liquid or moist object that has enough ions to be electrically conductive can serve as the electrolyte for a cell. As a novelty or science demonstration, it is possible to insert two electrodes made of different metals into a lemon, potato, etc. and generate small amounts of electricity. A voltaic pile can be made from two coins (such as a nickel and a penny) and a piece of paper towel dipped in salt water. Such a pile generates a very low voltage but, when many are stacked in series, they can replace normal batteries for a short time. Types Primary and secondary batteries Batteries are classified into primary and secondary forms: Primary batteries are designed to be used until exhausted of energy then discarded. Their chemical reactions are generally not reversible, so they cannot be recharged. When the supply of reactants in the battery is exhausted, the battery stops producing current and is useless. Secondary batteries can be recharged; that is, they can have their chemical reactions reversed by applying electric current to the cell. This regenerates the original chemical reactants, so they can be used, recharged, and used again multiple times. Some types of primary batteries used, for example, for telegraph circuits, were restored to operation by replacing the electrodes.
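A small sketch of these relations, using the standard reduction potentials of a Daniell-type copper/zinc pair together with an assumed internal resistance and load (values chosen for illustration, not taken from the text), could look like this:

```python
# Illustrative sketch of the relations described above: net emf as the difference of
# half-cell reduction potentials, terminal voltage sagging below the emf under load,
# and work = voltage * charge. The resistance and current are assumed values.

emf_copper = 0.34     # V, standard reduction potential of the Cu2+/Cu half-cell
emf_zinc = -0.76      # V, standard reduction potential of the Zn2+/Zn half-cell
net_emf = emf_copper - emf_zinc            # difference of reduction potentials: 1.10 V

internal_resistance = 0.2                  # ohms, assumed for illustration
current = 1.0                              # A, assumed discharge current
terminal_voltage = net_emf - current * internal_resistance   # V = emf - I*r while discharging

work_ideal = 1.5 * 1.0                     # the text's example: 1.5 V times 1 C = 1.5 J

print(f"Net emf of the Cu/Zn pair: {net_emf:.2f} V")
print(f"Terminal voltage under a 1 A load: {terminal_voltage:.2f} V")
print(f"Work from an ideal 1.5 V cell per coulomb: {work_ideal:.1f} J")
```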
Secondary batteries are not indefinitely rechargeable due to dissipation of the active materials, loss of electrolyte and internal corrosion. Primary batteries, or primary cells, can produce current immediately on assembly. These are most commonly used in portable devices that have low current drain, are used only intermittently, or are used well away from an alternative power source, such as in alarm and communication circuits where other electric power is only intermittently available. Disposable primary cells cannot be reliably recharged, since the chemical reactions are not easily reversible and active materials may not return to their original forms. Battery manufacturers recommend against attempting to recharge primary cells. In general, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 ohms (75 Ω). Common types of disposable batteries include zinc–carbon batteries and alkaline batteries. Secondary batteries, also known as secondary cells, or rechargeable batteries, must be charged before first use; they are usually assembled with active materials in the discharged state. Rechargeable batteries are (re)charged by applying electric current, which reverses the chemical reactions that occur during discharge/use. Devices to supply the appropriate current are called chargers. The oldest form of rechargeable battery is the lead–acid battery, which is widely used in automotive and boating applications. This technology contains liquid electrolyte in an unsealed container, requiring that the battery be kept upright and the area be well ventilated to ensure safe dispersal of the hydrogen gas it produces during overcharging. The lead–acid battery is relatively heavy for the amount of electrical energy it can supply. Its low manufacturing cost and its high surge current levels make it common where its capacity (over approximately 10 Ah) is more important than weight and handling issues. A common application is the modern car battery, which can, in general, deliver a peak current of 450 amperes.
Molten salt batteries are primary or secondary batteries that use a molten salt as electrolyte. They operate at high temperatures and must be well insulated to retain heat. A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride. A reserve battery can be stored unassembled (unactivated and supplying no power) for a long period (perhaps years). When the battery is needed, then it is assembled (e.g., by adding electrolyte); once assembled, the battery is charged and ready to work. For example, a battery for an electronic artillery fuze might be activated by the impact of firing a gun. The acceleration breaks a capsule of electrolyte that activates the battery and powers the fuze's circuits. Reserve batteries are usually designed for a short service life (seconds or minutes) after long storage (years). A water-activated battery for oceanographic instruments or military applications becomes activated on immersion in water. On 28 February 2017, the University of Texas at Austin issued a press release about a new type of solid-state battery, developed by a team led by lithium-ion battery inventor John Goodenough, "that could lead to safer, faster-charging, longer-lasting rechargeable batteries for handheld mobile devices, electric cars and stationary energy storage". The solid-state battery is also said to have "three times the energy density", increasing its useful life in electric vehicles, for example. It should also be more ecologically sound since the technology uses less expensive, earth-friendly materials such as sodium extracted from seawater. They also have much longer life. Sony has developed a biological battery that generates electricity from sugar in a way that is similar to the processes observed in living organisms. The battery generates electricity through the use of enzymes that break down carbohydrates. The sealed valve regulated lead–acid battery (VRLA battery) is popular in the automotive industry as a replacement for the lead–acid wet cell. The VRLA battery uses an immobilized sulfuric acid electrolyte, reducing the chance of leakage and extending shelf life. VRLA batteries immobilize the electrolyte. The two types are: Gel batteries (or "gel cell") use a semi-solid electrolyte. Absorbed Glass Mat (AGM) batteries absorb the electrolyte in a special fiberglass matting. 
Other portable rechargeable batteries include several sealed "dry cell" types that are useful in applications such as mobile phones and laptop computers. Cells of this type (in order of increasing power density and cost) include nickel–cadmium (NiCd), nickel–zinc (NiZn), nickel–metal hydride (NiMH), and lithium-ion (Li-ion) cells. Li-ion has by far the highest share of the dry cell rechargeable market. NiMH has replaced NiCd in most applications due to its higher capacity, but NiCd remains in use in power tools, two-way radios, and medical equipment. In the 2000s, developments include batteries with embedded electronics such as USBCELL, which allows charging an AA battery through a USB connector, nanoball batteries that allow for a discharge rate about 100x greater than current batteries, and smart battery packs with state-of-charge monitors and battery protection circuits that prevent damage on over-discharge. Low self-discharge (LSD) allows secondary cells to be charged prior to shipping. Lithium–sulfur batteries were used on the longest and highest solar-powered flight. Consumer and industrial grades Batteries of all types are manufactured in consumer and industrial grades. Costlier industrial-grade batteries may use chemistries that provide higher power-to-size ratio, have lower self-discharge and hence longer life when not in use, more resistance to leakage and, for example, ability to handle the high temperature and humidity associated with medical autoclave sterilization. Combination and management Standard-format batteries are inserted into a battery holder in the device that uses them. When a device does not use standard-format batteries, they are typically combined into a custom battery pack which holds multiple batteries in addition to features such as a battery management system and battery isolator which ensure that the batteries within are charged and discharged evenly. Sizes Primary batteries readily available to consumers range from tiny button cells used for electric watches, to the No. 6 cell used for signal circuits or other long duration applications. Secondary cells are made in very large sizes; very large batteries can power a submarine or stabilize an electrical grid and help level out peak loads. For a time, the world's largest battery was one built in South Australia by Tesla; it can store 129 MWh. A battery in Hebei Province, China, which can store 36 MWh of electricity, was built in 2013 at a cost of $500 million. Another large battery, composed of Ni–Cd cells, was in Fairbanks, Alaska. It covered an area bigger than a football pitch and weighed 1,300 tonnes. It was manufactured by ABB to provide backup power in the event of a blackout. The battery can provide 40 MW of power for up to seven minutes. Sodium–sulfur batteries have been used to store wind power. A 4.4 MWh battery system that can deliver 11 MW for 25 minutes stabilizes the output of the Auwahi wind farm in Hawaii. Comparison Many important cell properties, such as voltage, energy density, flammability, available cell constructions, operating temperature range and shelf life, are dictated by battery chemistry. Performance, capacity and discharge A battery's characteristics may vary over load cycle, over charge cycle, and over lifetime due to many factors including internal chemistry, current drain, and temperature. At low temperatures, a battery cannot deliver as much power. As such, in cold climates, some car owners install battery warmers, which are small electric heating pads that keep the car battery warm.
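The power and duration figures quoted above for the large grid batteries can be related to stored energy with energy = power × time; the sketch below is back-of-the-envelope arithmetic, not data from the source:

```python
# Back-of-the-envelope check relating the power and duration figures quoted above to
# stored energy, using energy = power * time. Illustrative arithmetic only.

def energy_mwh(power_mw: float, minutes: float) -> float:
    """Energy in MWh delivered by a constant power draw over the given duration."""
    return power_mw * minutes / 60.0

print(f"Fairbanks: ~{energy_mwh(40, 7):.1f} MWh")   # 40 MW for 7 minutes, ~4.7 MWh
print(f"Auwahi:    ~{energy_mwh(11, 25):.1f} MWh")  # 11 MW for 25 minutes, ~4.6 MWh,
                                                    # close to the 4.4 MWh system rating
```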
A battery's capacity is the amount of electric charge it can deliver at a voltage that does not drop below the specified terminal voltage. The more electrode material contained in the cell, the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, although they develop the same open-circuit voltage. Capacity is usually stated in ampere-hours (A·h) (mAh for small batteries). The rated capacity of a battery is usually expressed as the product of 20 hours multiplied by the current that a new battery can consistently supply for 20 hours at room temperature, while remaining above a specified terminal voltage per cell. For example, a battery rated at 100 A·h can deliver 5 A over a 20-hour period at room temperature. The fraction of the stored charge that a battery can deliver depends on multiple factors, including battery chemistry, the rate at which the charge is delivered (current), the required terminal voltage, the storage period, ambient temperature and other factors. The higher the discharge rate, the lower the capacity. The relationship between current, discharge time and capacity for a lead acid battery is approximated (over a typical range of current values) by Peukert's law: t = Qp / I^k, where Qp is the capacity when discharged at a rate of 1 amp, I is the current drawn from the battery (A), t is the amount of time (in hours) that the battery can sustain, and k is a constant around 1.3. Charged batteries (rechargeable or disposable) lose charge by internal self-discharge over time even when not in use, due to the presence of generally irreversible side reactions that consume charge carriers without producing current. The rate of self-discharge depends upon battery chemistry and construction, typically from months to years for significant loss. When batteries are recharged, additional side reactions reduce capacity for subsequent discharges. After enough recharges, in essence all capacity is lost and the battery stops producing power. Internal energy losses and limitations on the rate that ions pass through the electrolyte cause battery efficiency to vary. Above a minimum threshold, discharging at a low rate delivers more of the battery's capacity than at a higher rate. Installing batteries with varying A·h ratings changes operating time, but not device operation unless load limits are exceeded. High-drain loads such as digital cameras can reduce total capacity of rechargeable or disposable batteries. For example, a battery rated at 2 A·h for a 10- or 20-hour discharge would not sustain a current of 1 A for a full two hours as its stated capacity suggests. The C-rate is a measure of the rate at which a battery is being charged or discharged. It is defined as the current through the battery divided by the theoretical current draw under which the battery would deliver its nominal rated capacity in one hour. It has units of h⁻¹. Because of internal resistance loss and the chemical processes inside the cells, a battery rarely delivers nameplate rated capacity in only one hour. Typically, maximum capacity is found at a low C-rate, and charging or discharging at a higher C-rate reduces the usable life and capacity of a battery. Manufacturers often publish datasheets with graphs showing capacity versus C-rate curves. C-rate is also used as a rating on batteries to indicate the maximum current that a battery can safely deliver in a circuit. Standards for rechargeable batteries generally rate the capacity and charge cycles over a 4-hour (0.25C), 8-hour (0.125C) or longer discharge time.
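A minimal sketch of Peukert's law in use, with assumed values (a 100 A·h lead-acid battery rated at the 20-hour rate, then discharged at a heavier 10 A), illustrates why a higher discharge rate delivers less than the rated capacity:

```python
# Minimal sketch of Peukert's law, t = Qp / I**k, with illustrative values: a lead-acid
# battery rated 100 Ah at the 20-hour rate (5 A for 20 h), re-evaluated at a 10 A draw.
# The specific battery and currents are assumptions, not figures from the source.

k = 1.3                       # Peukert constant, around 1.3 for lead-acid (per the text)
rated_current = 5.0           # A, the 20-hour-rate current of a 100 Ah battery
rated_time = 20.0             # hours

Qp = rated_current ** k * rated_time       # capacity referred to a 1 A discharge rate

heavy_current = 10.0                       # A, a higher discharge rate (assumed)
run_time = Qp / heavy_current ** k         # hours the battery can sustain 10 A
delivered_ah = heavy_current * run_time    # amp-hours actually delivered at that rate

print(f"Run time at 10 A: {run_time:.1f} h")         # ~8.1 h, not the naive 10 h
print(f"Delivered capacity: {delivered_ah:.0f} Ah")  # ~81 Ah, below the 100 Ah rating
```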
Types intended for special purposes, such as in a computer uninterruptible power supply, may be rated by manufacturers for discharge periods much less than one hour (1C) but may suffer from limited cycle life. In 2009, experimental lithium iron phosphate (LiFePO4) battery technology provided the fastest charging and energy delivery, discharging all its energy into a load in 10 to 20 seconds. In 2024, a prototype battery for electric cars that could charge from 10% to 80% in five minutes was demonstrated, and a Chinese company claimed that car batteries it had introduced charged 10% to 80% in 10.5 minutes—the fastest batteries available—compared to Tesla's 15 minutes to half-charge. Lifespan and endurance Battery life (or lifetime) has two meanings for rechargeable batteries but only one for non-chargeables. It can be used to describe the length of time a device can run on a fully charged battery—this is also unambiguously termed "endurance". For a rechargeable battery it may also be used for the number of charge/discharge cycles possible before the cells fail to operate satisfactorily—this is also termed "lifespan". The term shelf life is used to describe how long a battery will retain its performance between manufacture and use. Available capacity of all batteries drops with decreasing temperature. In contrast to most of today's batteries, the Zamboni pile, invented in 1812, offers a very long service life without refurbishment or recharge, although it can supply very little current (nanoamps). The Oxford Electric Bell has been ringing almost continuously since 1840 on its original pair of batteries, thought to be Zamboni piles. Disposable batteries typically lose 8–20% of their original charge per year when stored at room temperature (20–30 °C). This is known as the "self-discharge" rate, and is due to non-current-producing "side" chemical reactions that occur within the cell even when no load is applied. The rate of side reactions is reduced for batteries stored at lower temperatures, although some can be damaged by freezing, and storing batteries in a fridge will not meaningfully prolong shelf life and risks damage from condensation. Old rechargeable batteries self-discharge more rapidly than disposable alkaline batteries, especially nickel-based batteries; a freshly charged nickel cadmium (NiCd) battery loses 10% of its charge in the first 24 hours, and thereafter discharges at a rate of about 10% a month. However, newer low self-discharge nickel–metal hydride (NiMH) batteries and modern lithium designs display a lower self-discharge rate (but still higher than for primary batteries). The active material on the battery plates changes chemical composition on each charge and discharge cycle; active material may be lost due to physical changes of volume, further limiting the number of times the battery can be recharged. Most nickel-based batteries are partially discharged when purchased, and must be charged before first use. Newer NiMH batteries are ready to be used when purchased, and have only 15% discharge in a year. Some deterioration occurs on each charge–discharge cycle. Degradation usually occurs because electrolyte migrates away from the electrodes or because active material detaches from the electrodes. Low-capacity NiMH batteries (1,700–2,000 mA·h) can be charged some 1,000 times, whereas high-capacity NiMH batteries (above 2,500 mA·h) last about 500 cycles. NiCd batteries tend to be rated for 1,000 cycles before their internal resistance permanently increases beyond usable values.
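The NiCd self-discharge figures quoted above (about 10% in the first 24 hours, then roughly 10% per month) can be turned into a simple compounding estimate of the charge remaining after storage; the sketch below is illustrative arithmetic, not manufacturer data:

```python
# Compounding estimate of NiCd self-discharge from the figures quoted above: roughly
# 10% lost in the first 24 hours, then about 10% of the remaining charge per month.

def nicd_charge_remaining(months: float) -> float:
    after_first_day = 0.90            # 10% lost in the first 24 hours
    monthly_retention = 0.90          # about 10% lost per month thereafter
    return after_first_day * monthly_retention ** months

for m in (1, 3, 6):
    print(f"After ~{m} month(s): {nicd_charge_remaining(m):.0%} of the original charge")
# roughly 81%, 66% and 48% respectively
```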
Fast charging accelerates the physical and chemical changes in a cell's components, shortening battery lifespan. If a charger cannot detect when the battery is fully charged then overcharging is likely, damaging it. NiCd cells, if used in a particular repetitive manner, may show a decrease in capacity called "memory effect". The effect can be avoided with simple practices. NiMH cells, although similar in chemistry, suffer less from memory effect. Automotive lead–acid rechargeable batteries must endure stress due to vibration, shock, and temperature range. Because of these stresses and sulfation of their lead plates, few automotive batteries last beyond six years of regular use. Automotive starting (SLI: Starting, Lighting, Ignition) batteries have many thin plates to maximize current. In general, the thicker the plates, the longer the life. They are typically discharged only slightly before recharge. "Deep-cycle" lead–acid batteries such as those used in electric golf carts have much thicker plates to extend longevity. The main benefit of the lead–acid battery is its low cost; its main drawbacks are large size and weight for a given capacity and voltage. Lead–acid batteries should never be discharged to below 20% of their capacity, because internal resistance will cause heat and damage when they are recharged. Deep-cycle lead–acid systems often use a low-charge warning light or a low-charge power cut-off switch to prevent the type of damage that will shorten the battery's life. Battery life can be extended by storing the batteries at a low temperature, as in a refrigerator or freezer, which slows the side reactions. Such storage can extend the life of alkaline batteries by about 5%; rechargeable batteries can hold their charge much longer, depending upon type. To reach their maximum voltage, batteries must be returned to room temperature; discharging an alkaline battery at 250 mA at 0 °C is only half as efficient as at 20 °C. Alkaline battery manufacturers such as Duracell do not recommend refrigerating batteries.
It may also cause damage to the charger or device in which the overcharged battery is later used. Disposing of a battery via incineration may cause an explosion as steam builds up within the sealed case. Many battery chemicals are corrosive, poisonous or both. If leakage occurs, either spontaneously or through accident, the chemicals released may be dangerous. For example, disposable batteries often use a zinc "can" both as a reactant and as the container to hold the other reagents. If this kind of battery is over-discharged, the reagents can emerge through the cardboard and plastic that form the remainder of the container. The active chemical leakage can then damage or disable the equipment that the batteries power. For this reason, many electronic device manufacturers recommend removing the batteries from devices that will not be used for extended periods of time. Many types of batteries employ toxic materials such as lead, mercury, and cadmium as an electrode or electrolyte. When a battery reaches the end of its life, it must be disposed of properly to prevent environmental damage. Batteries are one form of electronic waste (e-waste). E-waste recycling services recover toxic substances, which can then be used for new batteries. Of the nearly three billion batteries purchased annually in the United States, about 179,000 tons end up in landfills across the country. Batteries may be harmful or fatal if swallowed. Small button cells can be swallowed, in particular by young children. While in the digestive tract, the battery's electrical discharge may lead to tissue damage; such damage is occasionally serious and can lead to death. Ingested disk batteries do not usually cause problems unless they become lodged in the gastrointestinal tract. The most common place for disk batteries to become lodged is the esophagus, resulting in clinical sequelae. Batteries that successfully traverse the esophagus are unlikely to lodge elsewhere. The likelihood that a disk battery will lodge in the esophagus is a function of the patient's age and battery size. Older children do not have problems with batteries smaller than 21–23 mm. Liquefaction necrosis may occur because sodium hydroxide is generated by the current produced by the battery (usually at the anode). Perforation has occurred as rapidly as 6 hours after ingestion. Some battery manufacturers have added a bitter-tasting coating to batteries to discourage swallowing. Legislation and regulation Legislation around electric batteries includes such topics as safe disposal and recycling. In the United States, the Mercury-Containing and Rechargeable Battery Management Act of 1996 banned the sale of mercury-containing batteries, enacted uniform labeling requirements for rechargeable batteries and required that rechargeable batteries be easily removable. California and New York City prohibit the disposal of rechargeable batteries in solid waste. The rechargeable battery industry operates nationwide recycling programs in the United States and Canada, with dropoff points at local retailers. The Battery Directive of the European Union has similar requirements, in addition to requiring increased recycling of batteries and promoting research on improved battery recycling methods. In accordance with this directive, all batteries to be sold within the EU must be marked with the "collection symbol" (a crossed-out wheeled bin). This must cover at least 3% of the surface of prismatic batteries and 1.5% of the surface of cylindrical batteries. All packaging must be marked likewise.
In response to reported accidents and failures, occasionally ignition or explosion, recalls of devices using lithium-ion batteries have become more common in recent years. On 9 December 2022, the European Parliament reached an agreement to force, from 2026, manufacturers to design all electrical appliances sold in the EU (and not used predominantly in wet conditions) so that consumers can easily remove and replace batteries themselves.
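As a rough numerical illustration of the depth-of-discharge guidance and temperature effects described earlier in this article, the following sketch estimates the usable energy of a lead–acid battery under the guideline that it should not be discharged below 20% of its capacity. The capacity figures and the simple derating factor are illustrative assumptions, not manufacturer data.

# Illustrative sketch only: estimates usable energy of a lead-acid battery,
# assuming the "do not discharge below 20% of capacity" guideline mentioned above.
# The capacity figures and derating factor are made-up example values.

def usable_energy_wh(capacity_ah: float, voltage_v: float,
                     min_state_of_charge: float = 0.20,
                     temperature_derating: float = 1.0) -> float:
    """Return an estimate of usable energy in watt-hours.

    capacity_ah          -- nominal capacity in amp-hours
    voltage_v            -- nominal battery voltage
    min_state_of_charge  -- fraction of capacity to leave in the battery (0.20 = 20%)
    temperature_derating -- fraction of nominal capacity available at the
                            operating temperature (1.0 = no derating)
    """
    usable_fraction = (1.0 - min_state_of_charge) * temperature_derating
    return capacity_ah * voltage_v * usable_fraction

# Example: a hypothetical 100 Ah, 12 V deep-cycle battery at room temperature.
print(usable_energy_wh(100, 12))                              # 960.0 Wh of 1200 Wh nominal
# The same battery with capacity halved by an assumed cold-weather derating.
print(usable_energy_wh(100, 12, temperature_derating=0.5))    # 480.0 Wh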
Technology
Energy
null
2434557
https://en.wikipedia.org/wiki/Non-inertial%20reference%20frame
Non-inertial reference frame
A non-inertial reference frame (also known as an accelerated reference frame) is a frame of reference that undergoes acceleration with respect to an inertial frame. An accelerometer at rest in a non-inertial frame will, in general, detect a non-zero acceleration. While the laws of motion are the same in all inertial frames, in non-inertial frames, they vary from frame to frame, depending on the acceleration. In classical mechanics it is often possible to explain the motion of bodies in non-inertial reference frames by introducing additional fictitious forces (also called inertial forces, pseudo-forces, and d'Alembert forces) to Newton's second law. Common examples of this include the Coriolis force and the centrifugal force. In general, the expression for any fictitious force can be derived from the acceleration of the non-inertial frame. As stated by Goodman and Warner, "One might say that F = ma holds in any coordinate system provided the term 'force' is redefined to include the so-called 'reversed effective forces' or 'inertia forces'." In the theory of general relativity, the curvature of spacetime causes frames to be locally inertial, but globally non-inertial. Due to the non-Euclidean geometry of curved space-time, there are no global inertial reference frames in general relativity. More specifically, the fictitious force which appears in general relativity is the force of gravity. Avoiding fictitious forces in calculations In flat spacetime, the use of non-inertial frames can be avoided if desired. Measurements with respect to non-inertial reference frames can always be transformed to an inertial frame, incorporating directly the acceleration of the non-inertial frame as seen from the inertial frame. This approach avoids the use of fictitious forces (it is based on an inertial frame, where fictitious forces are absent, by definition) but it may be less convenient from an intuitive, observational, and even a calculational viewpoint. As pointed out by Ryder for the case of rotating frames as used in meteorology: Detection of a non-inertial frame: need for fictitious forces That a given frame is non-inertial can be detected by its need for fictitious forces to explain observed motions. For example, the rotation of the Earth can be observed using a Foucault pendulum. The rotation of the Earth seemingly causes the pendulum to change its plane of oscillation because the surroundings of the pendulum move with the Earth. As seen from an Earth-bound (non-inertial) frame of reference, the explanation of this apparent change in orientation requires the introduction of the fictitious Coriolis force. Another famous example is that of the tension in the string between two spheres rotating about each other. In that case, the prediction of the measured tension in the string based on the motion of the spheres as observed from a rotating reference frame requires the rotating observers to introduce a fictitious centrifugal force. In this connection, it may be noted that a change in coordinate system, for example, from Cartesian to polar, if implemented without any change in relative motion, does not cause the appearance of fictitious forces, although the form of the laws of motion varies from one type of curvilinear coordinate system to another. 
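As a concrete illustration of how fictitious forces follow from the acceleration of the frame, consider the standard textbook case (not tied to any particular reference cited above) of a frame rotating with angular velocity ω. Newton's second law written in the rotating frame acquires three extra terms:

m a_rot = F − 2m ω × v_rot − m ω × (ω × r) − m (dω/dt) × r

where F is the sum of the real forces. The second term is the Coriolis force, the third the centrifugal force, and the fourth the Euler force; all of them vanish when ω = 0, i.e. in an inertial frame, which is precisely the diagnostic for a non-inertial frame described above.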
Relativistic point of view Frames and flat spacetime If a region of spacetime is declared to be Euclidean, and effectively free from obvious gravitational fields, then if an accelerated coordinate system is overlaid onto the same region, it can be said that a uniform fictitious field exists in the accelerated frame (we reserve the word gravitational for the case in which a mass is involved). An object accelerated to be stationary in the accelerated frame will "feel" the presence of the field, and it will also be able to see environmental matter with inertial states of motion (stars, galaxies, etc.) apparently falling "downwards" in the field along curved trajectories, as if the field were real. In frame-based descriptions, this supposed field can be made to appear or disappear by switching between "accelerated" and "inertial" coordinate systems. More advanced descriptions As the situation is modeled in finer detail, using the general principle of relativity, the concept of a frame-dependent gravitational field becomes less realistic. In these Machian models, the accelerated body can agree that the apparent gravitational field is associated with the motion of the background matter, but can also claim that this motion of the background matter, behaving as if a gravitational field were present, is what causes the gravitational field – the accelerating background matter "drags light". Similarly, a background observer can argue that the forced acceleration of the mass causes an apparent gravitational field in the region between it and the environmental material (the accelerated mass also "drags light"). This "mutual" effect, and the ability of an accelerated mass to warp lightbeam geometry and lightbeam-based coordinate systems, is referred to as frame-dragging. Frame-dragging removes the usual distinction between accelerated frames (which show gravitational effects) and inertial frames (where the geometry is supposedly free from gravitational fields). When a forcibly-accelerated body physically "drags" a coordinate system, the problem becomes an exercise in warped spacetime for all observers.
Physical sciences
Classical mechanics
Physics
5959843
https://en.wikipedia.org/wiki/Vector%20notation
Vector notation
In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space. For denoting a vector, the common typographic convention is lower case, upright boldface type, as in . The International Organization for Standardization (ISO) recommends either bold italic serif, as in , or non-bold italic serif accented by a right arrow, as in . In advanced mathematics, vectors are often represented in a simple italic type, like any variable. Vector representations include Cartesian, polar, cylindrical, and spherical coordinates. History In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments which resulted in the concept of a vector as an equivalence class of such segments. The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis. In 1891, Oliver Heaviside argued for Clarendon to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell. In 1912, J.B. Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society. Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication. Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians were not taken with quaternions as much as were English-speaking mathematicians. When Felix Klein was organizing the German mathematical encyclopedia, he assigned Arnold Sommerfeld to standardize vector notation. In 1950, when Academic Press published G. Kuerti’s translation of the second edition of volume 2 of Lectures on Theoretical Physics by Sommerfeld, vector notation was the subject of a footnote: "In the original German text, vectors and their components are printed in the same Gothic types. The more usual way of making a typographical distinction between the two has been adopted for this translation." Felix Klein commented on differences in notation of vectors and their operations in 1925 through a Mr. Seyfarth who prepared a supplement to Elementary Mathematics from an Advanced Standpoint — Geometry after "repeated conferences" with him. The terms line-segment, plane-segment, plane magnitude, inner and outer product come from Grassmann, while the words scalar, vector, scalar product, and vector product came from Hamilton. The disciples of Grassmann, in other ways so orthodox, replaced in part the appropriate expressions of the master by others. The existing terminologies were merged or modified, and the symbols which indicate the separate operations have been used with the greatest arbitrariness. On these accounts even for the expert, a great lack of clearness has crept into this field, which is mathematically so simple. 
Efforts to unify the various notational terms through committees of the International Congress of Mathematicians were described as follows: The Committee which was set up in Rome for the unification of vector notation did not have the slightest success, as was to have been expected. At the following Congress in Cambridge (1912), they had to explain that they had not finished their task, and to request that their time be extended to the meeting of the next Congress, which was to have taken place in Stockholm in 1916, but which was omitted because of the war. The committee on units and symbols met a similar fate. It published in 1921 a proposed notation for vector quantities, which aroused at once and from many sides the most violent opposition. Rectangular coordinates Given a Cartesian coordinate system, a vector may be specified by its Cartesian coordinates. Tuple notation A vector v in n-dimensional real coordinate space can be specified using a tuple (ordered list) of coordinates: Sometimes angle brackets are used instead of parentheses. Matrix notation A vector in can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. Again, an n-dimensional vector can be specified in either of the following forms using matrices: where v1, v2, …, vn − 1, vn are the components of v. In some advanced contexts, a row and a column vector have different meaning; see covariance and contravariance of vectors for more. Unit vector notation A vector in (or fewer dimensions, such as where vz below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in . The basis is represented with the unit vectors , , and . A three-dimensional vector can be specified in the following form, using unit vector notation: where vx, vy, and vz are the scalar components of v. Scalar components may be positive or negative; the absolute value of a scalar component is its magnitude. Polar coordinates The two polar coordinates of a point in a plane may be considered as a two dimensional vector. Such a vector consists of a magnitude (or length) and a direction (or angle). The magnitude, typically represented as r, is the distance from a starting point, the origin, to the point which is represented. The angle, typically represented as θ (the Greek letter theta), is the angle, usually measured , between a fixed direction, typically that of the positive x-axis, and the direction from the origin to the point. The angle is typically reduced to lie within the range radians or . Ordered set and matrix notations Vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components), or matrix notation, as with rectangular coordinates. In these forms, the first component of the vector is r (instead of v1), and the second component is θ (instead of v2). To differentiate polar coordinates from rectangular coordinates, the angle may be prefixed with the angle symbol, . Two-dimensional polar coordinates for v can be represented as any of the following, using either ordered pair or matrix notation: where r is the magnitude, θ is the angle, and the angle symbol () is optional. Direct notation Vectors can also be specified using simplified autonomous equations that define r and θ explicitly. 
This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units, and whose direction is π/9 radians (20°), can be specified using either of the following forms: Cylindrical vectors A cylindrical vector is an extension of the concept of polar coordinates into three dimensions. It is akin to an arrow in the cylindrical coordinate system. A cylindrical vector is specified by a distance in the xy-plane, an angle, and a distance from the xy-plane (a height). The first distance, usually represented as r or ρ (the Greek letter rho), is the magnitude of the projection of the vector onto the xy-plane. The angle, usually represented as θ or φ (the Greek letter phi), is measured as the offset from the line collinear with the x-axis in the positive direction; the angle is typically reduced to lie within the range . The second distance, usually represented as h or z, is the distance from the xy-plane to the endpoint of the vector. Ordered set and matrix notations Cylindrical vectors use polar coordinates, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol (); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation. A three-dimensional cylindrical vector v can be represented as any of the following, using either ordered triplet or matrix notation: Where r is the magnitude of the projection of v onto the xy-plane, θ is the angle between the positive x-axis and v, and h is the height from the xy-plane to the endpoint of v. Again, the angle symbol () is optional. Direct notation A cylindrical vector can also be specified directly, using simplified autonomous equations that define r (or ρ), θ (or φ), and h (or z). Consistency should be used when choosing the names to use for the variables; ρ should not be mixed with θ and so on. A three-dimensional vector, the magnitude of whose projection onto the xy-plane is 5 units, whose angle from the positive x-axis is π/9 radians (20°), and whose height from the xy-plane is 3 units can be specified in any of the following forms: Spherical vectors A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as ρ. The azimuth angle, usually represented as θ, is the () offset from the positive x-axis. The zenith angle, usually represented as φ, is the offset from the positive z-axis. Both angles are typically reduced to lie within the range from zero (inclusive) to 2π (exclusive). Ordered set and matrix notations Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may be both prefixed with the angle symbol (); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones. 
A three-dimensional spherical vector v can be represented as any of the following, using either ordered triplet or matrix notation: Where ρ is the magnitude, θ is the azimuth angle, and φ is the zenith angle. Direct notation Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for ρ, θ, and φ. A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is π/9 radians (20°), and whose zenith angle is π/4 radians (45°) can be specified as: Operations In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In , the inner product is known as the dot product. In and , an additional operation known as the cross product is also defined. Vector addition Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as: Scalar multiplication Scalar multiplication is represented in the same manners as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar k with a vector v can be represented in any of the following fashions: Vector subtraction and scalar division Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to “subtract” two vectors and “divide” a vector by a scalar. Vector subtraction is performed by adding the scalar multiple of −1 with the second vector operand to the first vector operand. This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented in either of the following fashions: Scalar division is performed by multiplying the vector operand with the multiplicative inverse of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators. The quotient of a vector v and a scalar c can be represented in any of the following forms: Norm The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as: The norm is also sometimes represented with single bars, like , but this can be confused with absolute value (which is a type of norm). Inner product The inner product of two vectors (also known as the scalar product, not to be confused with scalar multiplication) is represented as an ordered pair enclosed in angle brackets. The inner product of two vectors u and v would be represented as: Dot product In , the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as: In some older literature, the dot product is implied between two vectors written side-by-side. This notation can be confused with the dyadic product between two vectors. Cross product The cross product of two vectors (in ) is represented using the rotated cross as an operator. 
The cross product of two vectors u and v would be represented as: By some conventions (e.g. in France and in some areas of higher mathematics), this is also denoted by a wedge, which avoids confusion with the wedge product since the two are functionally equivalent in three dimensions: In some older literature, the following notation is used for the cross product between u and v: Nabla Vector notation is used with calculus through the Nabla operator: With a scalar function f, the gradient is written as with a vector field F, the divergence is written as and with a vector field F, the curl is written as
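In the standard notation, for a scalar function f and a vector field F = (Fx, Fy, Fz) in Cartesian coordinates, these read:

grad f = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
div F = ∇ · F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z
curl F = ∇ × F = (∂Fz/∂y − ∂Fy/∂z, ∂Fx/∂z − ∂Fz/∂x, ∂Fy/∂x − ∂Fx/∂y)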
Mathematics
Linear algebra
null
10099478
https://en.wikipedia.org/wiki/Hectorite
Hectorite
Hectorite is a rare soft, greasy, white clay mineral with a chemical formula of . Hectorite was first described in 1941 and named for an occurrence in the United States near Hector (in San Bernardino County, California, 30 miles east of Barstow.) Hectorite occurs with bentonite as an alteration product of clinoptilolite from volcanic ash and tuff with a high glass content. Hectorite is also found in the beige/brown clay ghassoul, mined in the Atlas Mountains in Morocco. A large deposit of hectorite is also found at the Thacker Pass lithium deposit, located within the McDermitt Caldera in Nevada. The Thacker Pass lithium deposit could be a significant source of lithium. Despite its rarity, it is economically viable as the Hector mine sits over a large deposit of the mineral. Hectorite is mostly used in making cosmetics, but has uses in chemical and other industrial applications, and is a mineral source for refined lithium metal.
Physical sciences
Silicate minerals
Earth science
10101126
https://en.wikipedia.org/wiki/Kinder%20goat
Kinder goat
The Kinder is an American breed of domestic goat. It originated on a farm in Snohomish, Washington, where in about 1985 an American Pygmy buck was cross-bred with Nubian does. The resulting stock was selectively bred to create a compact but well-muscled goat, suitable both for milk and for meat production. A herd-book was started in 1988; by 2006 about three thousand head had been registered. History The Kinder originated in about 1985 on a farm in Snohomish, Washington, in the north-western United States. There, an American Pygmy buck was cross-bred with Nubian does. The resulting stock was selectively bred to create a compact but well-muscled goat, suitable both for goat's milk and for goat's meat production. In 1988 a breed society, the Kinder Goat Breeders Association, was established, and a herd-book was started; by 2006 about three thousand head had been registered. The breed has spread within the United States, where it is present in about thirteen states, and also to Brazil and Canada. The conservation status of the Kinder was listed by the FAO as endangered in 2007; in 2020 DAD-IS listed its status as unknown. Characteristics The Kinder is of moderate size, with a sturdy body inherited from the American Pygmy, but with the longer legs of the Nubian. Height at the withers is for does and for bucks, with weights of about and respectively. It is horned in both sexes, but in the United States is commonly disbudded. The coat is short; the breed standard does not specify any particular coat color. Use A Kinder doe may give some of milk in a lactation of about 305 days. The milk is claimed to have an average butterfat content of about 5.5%, occasionally reaching 7%; it is high in milk solids, and is thus suitable for cheese-making. Like other goat breeds of tropical origin, the Kinder is an aseasonal breeder, and can be bred at any time of the year. It is a highly prolific breed – twin and triplet births are a normal occurrence. The kids put on weight rapidly; the dressed weight after slaughter averages approximately 60%.
Biology and health sciences
Goats
Animals
10101941
https://en.wikipedia.org/wiki/Bortle%20scale
Bortle scale
The Bortle dark-sky scale (usually referred to as simply the Bortle scale) is a nine-level numeric scale that measures the brightness of the night sky at a particular location. It quantifies the astronomical observability of celestial objects and the interference caused by light pollution. Amateur astronomer John E. Bortle created the scale and published it in the February 2001 edition of Sky & Telescope magazine to help skywatchers evaluate the darkness of an observing site, and secondarily, to compare the darkness of observing sites. The scale ranges from Class 1, the darkest skies available on Earth, through to Class 9, inner-city skies. It gives several criteria for each level beyond naked-eye limiting magnitude (NELM). The accuracy and utility of the scale were questioned in 2014 research. The table summarizes Bortle's descriptions of the classes. For some classes, there can be drastic differences from one class to the next, e.g. Bortle 4 to 5. Table of dark-sky classifications
Physical sciences
Basics
Astronomy
11539646
https://en.wikipedia.org/wiki/Knife%20switch
Knife switch
A knife switch is a type of switch used to control the flow of electricity in a circuit. It is composed of a hinge which allows a metal lever, or knife, to be lifted from or inserted into a slot or jaw. The hinge and jaw are both fixed to an insulated base, and the knife has an insulated handle. Current flows through the switch when the knife is pushed into the jaw. Knife switches can take several forms, including single-throw, in which the knife engages with only a single slot, and double-throw, in which the knife hinge is placed between two slots and can engage with either one. Multiple knives may be attached to a single handle and can be used to activate more than one circuit simultaneously; this is a multi-pole switch. Current uses Though used commonly in the past, knife switches are now rare, finding use largely in science demonstrations where the exposed mechanics of the switch make its function and state visually apparent. The knife switch is extremely simple in construction and use, but for any dangerous electrical supply its exposed metal parts present a great risk of electric shock, and the switch is subject to arcing when opened at higher voltages, which poses a further risk of shock or burns to the operator and can cause fires or explosions under certain conditions. Open knife switches were supplanted by safety switches with current-carrying contacts inside metal enclosures which can only be opened by switching off the power. In modern applications, automatic switches (such as contactors and relays) and manual switches such as circuit breakers are used. These devices use snap-action mechanisms which open the switch contacts rapidly and feature arc chutes where the arcs caused by opening the switches are quenched. These devices also prevent injury due to accidental contact, as all of the current–carrying metal parts of the switch are surrounded by insulating guards.
Technology
Components
null
708839
https://en.wikipedia.org/wiki/Loschmidt%27s%20paradox
Loschmidt's paradox
In physics, Loschmidt's paradox (named for J.J. Loschmidt), also known as the reversibility paradox or irreversibility paradox, is the objection that it should not be possible to deduce an irreversible process from time-symmetric dynamics. This puts the time reversal symmetry of (almost) all known low-level fundamental physical processes at odds with any attempt to infer from them the second law of thermodynamics, which describes the behaviour of macroscopic systems. Both of these are well-accepted principles in physics, with sound observational and theoretical support, yet they seem to be in conflict, hence the paradox. Origin Josef Loschmidt's criticism was provoked by the H-theorem of Boltzmann, which employed kinetic theory to explain the increase of entropy in an ideal gas from a non-equilibrium state, when the molecules of the gas are allowed to collide. In 1876, Loschmidt pointed out that if there is a motion of a system from time t0 to time t1 to time t2 that leads to a steady decrease of H (increase of entropy) with time, then there is another allowed state of motion of the system at t1, found by reversing all the velocities, in which H must increase. This revealed that one of Boltzmann's key assumptions, molecular chaos (the Stosszahlansatz) – the assumption that all particle velocities are completely uncorrelated – did not follow from Newtonian dynamics. One can assert that possible correlations are uninteresting, and therefore decide to ignore them; but if one does so, one has changed the conceptual system, injecting an element of time-asymmetry by that very action. Reversible laws of motion cannot explain why we experience our world to be in such a comparatively low state of entropy at the moment (compared to the equilibrium entropy of universal heat death); and to have been at even lower entropy in the past. Later authors have coined the term "Loschmidt's demon" (in analogy to Maxwell's demon, see below) for an entity that is able to reverse time evolution in a microscopic system, in their case a system of nuclear spins, which is indeed experimentally possible, if only for a short time. Before Loschmidt In 1874, two years before the Loschmidt paper, William Thomson defended the second law against the time reversal objection in his paper "The kinetic theory of the dissipation of energy". Arrow of time Any process that happens regularly in the forward direction of time but rarely or never in the opposite direction, such as entropy increasing in an isolated system, defines what physicists call an arrow of time in nature. This term only refers to an observation of an asymmetry in time; it is not meant to suggest an explanation for such asymmetries. Loschmidt's paradox is equivalent to the question of how it is possible that there could be a thermodynamic arrow of time given time-symmetric fundamental laws, since time-symmetry implies that for any process compatible with these fundamental laws, a reversed version that looked exactly like a film of the first process played backwards would be equally compatible with the same fundamental laws, and would even be equally probable if one were to pick the system's initial state randomly from the phase space of all possible states for that system. 
Although most of the arrows of time described by physicists are thought to be special cases of the thermodynamic arrow, there are a few that are believed to be unconnected, like the cosmological arrow of time based on the fact that the universe is expanding rather than contracting, and the fact that a few processes in particle physics actually violate time-symmetry, while they respect a related symmetry known as CPT symmetry. In the case of the cosmological arrow, most physicists believe that entropy would continue to increase even if the universe began to contract (although the physicist Thomas Gold once proposed a model in which the thermodynamic arrow would reverse in this phase). In the case of the violations of time-symmetry in particle physics, the situations in which they occur are rare and are only known to involve a few types of meson particles. Furthermore, due to CPT symmetry, reversal of the direction of time is equivalent to renaming particles as antiparticles and vice versa. Therefore, this cannot explain Loschmidt's paradox. Dynamical systems Current research in dynamical systems offers one possible mechanism for obtaining irreversibility from reversible systems. The central argument is based on the claim that the correct way to study the dynamics of macroscopic systems is to study the transfer operator corresponding to the microscopic equations of motion. It is then argued that the transfer operator is not unitary (i.e. is not reversible) but has eigenvalues whose magnitude is strictly less than one; these eigenvalues corresponding to decaying physical states. This approach is fraught with various difficulties; it works well for only a handful of exactly solvable models. Abstract mathematical tools used in the study of dissipative systems include definitions of mixing, wandering sets, and ergodic theory in general. Fluctuation theorem One approach to handling Loschmidt's paradox is the fluctuation theorem, derived heuristically by Denis Evans and Debra Searles, which gives a numerical estimate of the probability that a system away from equilibrium will have a certain value for the dissipation function (often an entropy like property) over a certain amount of time. The result is obtained with the exact time reversible dynamical equations of motion and the universal causation proposition. The fluctuation theorem is obtained using the fact that dynamics is time reversible. Quantitative predictions of this theorem have been confirmed in laboratory experiments at the Australian National University conducted by Edith M. Sevick et al. using optical tweezers apparatus. This theorem is applicable for transient systems, which may initially be in equilibrium and then driven away (as was the case for the first experiment by Sevick et al.) or some other arbitrary initial state, including relaxation towards equilibrium. There is also an asymptotic result for systems which are in a nonequilibrium steady state at all times. There is a crucial point in the fluctuation theorem, that differs from how Loschmidt framed the paradox. Loschmidt considered the probability of observing a single trajectory, which is analogous to enquiring about the probability of observing a single point in phase space. In both of these cases the probability is always zero. To be able to effectively address this you must consider the probability density for a set of points in a small region of phase space, or a set of trajectories. 
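In quantitative terms, one common statement of the Evans–Searles fluctuation theorem compares the probabilities of observing opposite values of the time-averaged dissipation function Ω̄t over an interval t (the notation here follows the usual convention rather than any single source):

P(Ω̄t = A) / P(Ω̄t = −A) = exp(A t)

Positive dissipation is therefore exponentially more probable than negative dissipation as t or the system size grows, which is how second-law behaviour emerges statistically from time-reversible dynamics.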
The fluctuation theorem considers the probability density for all of the trajectories that are initially in an infinitesimally small region of phase space. This leads directly to the probability of finding a trajectory, in either the forward or the reverse trajectory sets, depending upon the initial probability distribution as well as the dissipation which is done as the system evolves. It is this crucial difference in approach that allows the fluctuation theorem to correctly solve the paradox. Information theory A more recent proposal concentrates on the step of the paradox in which velocities are reversed. At that moment the gas becomes an open system, and in order to reverse the velocities, position and velocity measurements have to be made. Without this, no reversal is possible. These measurements are themselves either irreversible, or reversible. In the first case, they require an increase of entropy in the measuring device that will at least offset the decrease during the reversed evolution of the gas. In the second case, Landauer's principle can be evoked to reach the same conclusion. Hence, the gas+measuring device system obeys the Second Law of Thermodynamics. It is not a coincidence that this argument mirrors closely another one given by Bennett to explain away Maxwell’s demon. The difference is that the role of measurement is obvious in Maxwell’s demon, but not in Loschmidt’s paradox, which may explain the 40-year gap between both explanations. In the case of the single-trajectory paradox, this argument preempts the need for any other explanation, although some of them make valid points. The broader paradox, “an irreversible process cannot be deduced from reversible dynamics,” is not covered by the argument given in this section. Big Bang Another way of dealing with Loschmidt's paradox is to see the second law as an expression of a set of boundary conditions, in which our universe's time coordinate has a low-entropy starting point: the Big Bang. From this point of view, the arrow of time is determined entirely by the direction that leads away from the Big Bang, and a hypothetical universe with a maximum-entropy Big Bang would have no arrow of time. The theory of cosmic inflation tries to give reason why the early universe had such a low entropy.
Physical sciences
Thermodynamics
Physics
708974
https://en.wikipedia.org/wiki/Rapini
Rapini
Rapini (broccoli rabe or raab) is a green cruciferous vegetable, with the leaves, buds, and stems all being edible; the buds somewhat resemble broccoli. Rapini is known for its bitter taste, and is particularly associated with Mediterranean cuisine. It is a particularly rich dietary source of vitamin K. Classification Native to Europe, the plant is a member of the tribe Brassiceae of the Brassicaceae (mustard family). Rapini is classified scientifically as Brassica rapa var. ruvo, or Brassica rapa subsp. sylvestris var. esculenta. It is also known as broccoletti, broccoli raab, broccoli rabe, spring raab, and ruvo kale. Turnip and bok choy are different varieties (or subspecies) of this species. Description Rapini has many spiked leaves that surround clusters of green buds that resemble small heads of broccoli. Small, edible yellow flowers may be blooming among the buds. Culinary use The flavor of rapini has been described as nutty, bitter, and pungent, as well as almond-flavored. Rapini needs little more than a trim at the base. The entire stalk is edible when young, but the base becomes more fibrous as the season advances. Rapini is widely used in the cuisine of Rome as well as Southern Italy, particularly in the regions of Sicily, Calabria, Campania, and Apulia. In Italian, rapini is called cime di rapa or broccoletti di rapa; in Naples, the green is often called friarielli. Within Portuguese cuisine, grelos de nabo are similar in taste and texture to broccoli rabe. Rapini is also popular in the Galicia region of northwestern Spain; a rapini festival (Feira do grelo) is held in the Galician town of As Pontes every February. Rapini may be sautéed or braised with olive oil and garlic, and sometimes chili pepper and anchovy. It may be used as an ingredient in soup, served with orecchiette, other pasta, or pan-fried sausage. Rapini is sometimes (but not always) blanched before being cooked further. In the United States, rapini is popular in Italian American kitchens; the D'Arrigo Brothers popularized the ingredient in the United States and gave it the name broccoli rabe. Broccoli rabe is a component of some hoagies and submarine sandwiches; in Philadelphia, a popular sandwich is Italian-style roast pork with locally-made sharp provolone cheese, broccoli rabe, and peppers. Rapini can also be a component of pasta dishes, especially when accompanied by Italian sausage. Nutrition Raw rapini is 93% water, 3% each of protein and carbohydrates, and contains negligible fat (table). In a reference amount, raw rapini supplies 22 calories of food energy, and is a rich source (20% or more of the Daily Value, DV) of vitamin K (187% DV), vitamin C (22% DV), and folate (21% DV) (table). Vitamin A, vitamin E, and several B vitamins, along with the dietary minerals iron and manganese, are present in moderate amounts (10–19% DV) (table).
Biology and health sciences
Leafy vegetables
Plants
709427
https://en.wikipedia.org/wiki/Micro%20black%20hole
Micro black hole
Micro black holes, also called mini black holes or quantum mechanical black holes, are hypothetical tiny (<1 ) black holes, for which quantum mechanical effects play an important role. The concept that black holes may exist that are smaller than stellar mass was introduced in 1971 by Stephen Hawking. It is possible that such black holes were created in the high-density environment of the early Universe (or Big Bang), or possibly through subsequent phase transitions (referred to as primordial black holes). They might be observed by astrophysicists through the particles they are expected to emit by Hawking radiation. Some hypotheses involving additional space dimensions predict that micro black holes could be formed at energies as low as the TeV range, which are available in particle accelerators such as the Large Hadron Collider. Popular concerns have been raised over end-of-the-world scenarios (see Safety of particle collisions at the Large Hadron Collider). However, such quantum black holes would instantly evaporate, either totally or leaving only a very weakly interacting residue. Besides the theoretical arguments, cosmic rays hitting the Earth do not produce any damage, although they reach energies in the range of hundreds of TeV. Minimum mass of a black hole In an early speculation, Stephen Hawking conjectured that a black hole would not form with a mass below about (roughly the Planck mass). To make a black hole, one must concentrate mass or energy sufficiently that the escape velocity from the region in which it is concentrated exceeds the speed of light. Some extensions of present physics posit the existence of extra dimensions of space. In higher-dimensional spacetime, the strength of gravity increases more rapidly with decreasing distance than in three dimensions. With certain special configurations of the extra dimensions, this effect can lower the Planck scale to the TeV range. Examples of such extensions include large extra dimensions, special cases of the Randall–Sundrum model, and string theory configurations like the GKP solutions. In such scenarios, black hole production could possibly be an important and observable effect at the Large Hadron Collider (LHC). It would also be a common natural phenomenon induced by cosmic rays. All this assumes that the theory of general relativity remains valid at these small distances. If it does not, then other, currently unknown, effects might limit the minimum size of a black hole. Elementary particles are equipped with a quantum-mechanical, intrinsic angular momentum (spin). The correct conservation law for the total (orbital plus spin) angular momentum of matter in curved spacetime requires that spacetime is equipped with torsion. The simplest and most natural theory of gravity with torsion is the Einstein–Cartan theory. Torsion modifies the Dirac equation in the presence of the gravitational field and causes fermion particles to be spatially extended. In this case the spatial extension of fermions limits the minimum mass of a black hole to be on the order of , showing that micro black holes may not exist. The energy necessary to produce such a black hole is 39 orders of magnitude greater than the energies available at the Large Hadron Collider, indicating that the LHC cannot produce mini black holes. But if black holes were nevertheless produced, it would show that the theory of general relativity fails and is not valid at these small distances. 
The rules of general relativity would be broken, as is consistent with theories of how matter, space, and time break down around the event horizon of a black hole. This would prove the spatial extensions of the fermion limits to be incorrect as well. The fermion limits assume a minimum mass needed to sustain a black hole, as opposed to the opposite, the minimum mass needed to start a black hole, which in theory is achievable in the LHC under some conditions. Stability Hawking radiation In 1975, Stephen Hawking argued that, due to quantum effects, black holes "evaporate" by a process now referred to as Hawking radiation in which elementary particles (such as photons, electrons, quarks and gluons) are emitted. His calculations showed that the smaller the size of the black hole, the faster the evaporation rate, resulting in a sudden burst of particles as the micro black hole suddenly explodes. Any primordial black hole of sufficiently low mass will evaporate to near the Planck mass within the lifetime of the Universe. In this process, these small black holes radiate away matter. A rough picture of this is that pairs of virtual particles emerge from the vacuum near the event horizon, with one member of a pair being captured, and the other escaping the vicinity of the black hole. The net result is the black hole loses mass (due to conservation of energy). According to the formulae of black hole thermodynamics, the more the black hole loses mass, the hotter it becomes, and the faster it evaporates, until it approaches the Planck mass. At this stage, a black hole would have a Hawking temperature of (), which means an emitted Hawking particle would have an energy comparable to the mass of the black hole. Thus, a thermodynamic description breaks down. Such a micro black hole would also have an entropy of only 4 nats, approximately the minimum possible value. At this point then, the object can no longer be described as a classical black hole, and Hawking's calculations also break down. While Hawking radiation is sometimes questioned, Leonard Susskind summarizes an expert perspective in his book The Black Hole War: "Every so often, a physics paper will appear claiming that black holes don't evaporate. Such papers quickly disappear into the infinite junk heap of fringe ideas." Conjectures for the final state Conjectures for the final fate of the black hole include total evaporation and production of a Planck-mass-sized black hole remnant. Such Planck-mass black holes may in effect be stable objects if the quantized gaps between their allowed energy levels bar them from emitting Hawking particles or absorbing energy gravitationally like a classical black hole. In such case, they would be weakly interacting massive particles; this could explain dark matter. Primordial black holes Formation in the early Universe Production of a black hole requires concentration of mass or energy within the corresponding Schwarzschild radius. It was hypothesized by Zel'dovich and Novikov first and independently by Hawking that, shortly after the Big Bang, the Universe was dense enough for any given region of space to fit within its own Schwarzschild radius. Even so, at that time, the Universe was not able to collapse into a singularity due to its uniform mass distribution and rapid growth. This, however, does not fully exclude the possibility that black holes of various sizes may have emerged locally. 
A black hole formed in this way is called a primordial black hole and is the most widely accepted hypothesis for the possible creation of micro black holes. Computer simulations suggest that the probability of formation of a primordial black hole is inversely proportional to its mass. Thus, the most likely outcome would be micro black holes. Expected observable effects A primordial black hole with an initial mass of around would be completing its evaporation today; a less massive primordial black hole would have already evaporated. Under optimal conditions, the Fermi Gamma-ray Space Telescope satellite, launched in June 2008, might detect experimental evidence for evaporation of nearby black holes by observing gamma ray bursts. It is unlikely that a collision between a microscopic black hole and an object such as a star or a planet would be noticeable. The small radius and high density of the black hole would allow it to pass straight through any object consisting of normal atoms, interacting with only few of its atoms while doing so. It has, however, been suggested that a small black hole of sufficient mass passing through the Earth would produce a detectable acoustic or seismic signal. On the moon, it may leave a distinct type of crater, still visible after billions of years. Human-made micro black holes Feasibility of production In familiar three-dimensional gravity, the minimum energy of a microscopic black hole is (equivalent to 1.6 GJ or 444 kWh), which would have to be condensed into a region on the order of the Planck length. This is far beyond the limits of any current technology. It is estimated that to collide two particles to within a distance of a Planck length with currently achievable magnetic field strengths would require a ring accelerator about 1,000 light years in diameter to keep the particles on track. However, in some scenarios involving extra dimensions of space, the Planck mass can be as low as the TeV range. The Large Hadron Collider (LHC) has a design energy of for proton–proton collisions and 1,150 TeV for Pb–Pb collisions. It was argued in 2001 that, in these circumstances, black hole production could be an important and observable effect at the LHC or future higher-energy colliders. Such quantum black holes should decay emitting sprays of particles that could be seen by detectors at these facilities. A paper by Choptuik and Pretorius, published in 2010 in Physical Review Letters, presented a computer-generated proof that micro black holes must form from two colliding particles with sufficient energy, which might be allowable at the energies of the LHC if additional dimensions are present other than the customary four (three spatial, one temporal). Safety arguments Hawking's calculation and more general quantum mechanical arguments predict that micro black holes evaporate almost instantaneously. Additional safety arguments beyond those based on Hawking radiation were given in the paper, which showed that in hypothetical scenarios with stable micro black holes massive enough to destroy Earth, such black holes would have been produced by cosmic rays and would have likely already destroyed astronomical objects such as planets, stars, or stellar remnants such as neutron stars and white dwarfs. Black holes in quantum theories of gravity It is possible, in some theories of quantum gravity, to calculate the quantum corrections to ordinary, classical black holes. 
In contrast to conventional black holes, which are solutions of the gravitational field equations of the general theory of relativity, quantum gravity black holes incorporate quantum gravity effects in the vicinity of the origin, where classically a curvature singularity occurs. According to the theory employed to model quantum gravity effects, there are different kinds of quantum gravity black holes, namely loop quantum black holes, non-commutative black holes, and asymptotically safe black holes. In these approaches, black holes are singularity-free. Virtual micro black holes were proposed by Stephen Hawking in 1995 and by Fabio Scardigli in 1999 as part of a Grand Unified Theory as a quantum gravity candidate.
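For orientation, the two standard formulas behind the size and evaporation arguments in this article are the Schwarzschild radius and the Hawking temperature of a black hole of mass M, quoted here in their usual form:

r_s = 2GM / c²
T_H = ħc³ / (8πGMk_B)

Because T_H increases as M decreases, a lighter black hole is hotter and radiates faster, which is the runaway evaporation described in the Hawking radiation section above.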
Physical sciences
Basics_2
Astronomy
709548
https://en.wikipedia.org/wiki/Redcurrant
Redcurrant
The redcurrant or red currant (Ribes rubrum) is a member of the genus Ribes in the gooseberry family. It is native to western Europe. The species is widely cultivated and has escaped into the wild in many regions. Description Ribes rubrum is a deciduous shrub normally growing to tall, occasionally , with five-lobed leaves arranged spirally on the stems. The flowers are inconspicuous yellow-green, in pendulous racemes, maturing into bright red translucent edible berries about diameter, with 3–10 berries on each raceme. An established bush can produce of berries from mid- to late summer. Phytochemicals Redcurrant fruits are known for their tart flavor, a characteristic provided by a relatively high content of organic acids and mixed polyphenols. As many as 65 different phenolic compounds may contribute to the astringent properties of redcurrants, with these contents increasing during the last month of ripening. Twenty-five individual polyphenols and other nitrogen-containing phytochemicals in redcurrant juice have been isolated specifically with the astringent flavor profile sensed in the human tongue. Cultivation Several other similar species native in Europe, Asia and North America also have edible fruit. These include Ribes spicatum (northern Europe and northern Asia), Ribes alpinum (northern Europe, and at high altitudes south to the Alps, Pyrenees and Caucasus), R. schlechtendalii (northeast Europe), R. multiflorum (southeast Europe), R. petraeum (southwest Europe) and R. triste (North America; Newfoundland to Alaska and southward in mountains). While Ribes rubrum is native to Europe, large berried cultivars of the redcurrant were first produced in Belgium and northern France in the 17th century. In modern times, numerous cultivars have been selected; some of these have escaped gardens and can be found in the wild across Europe and extending into Asia. The white currant is also a cultivar of R. rubrum. Although it is a sweeter and less pigmented variant of the redcurrant, not a separate botanical species, it is sometimes marketed with names such as R. sativum or R. silvestre, or sold as a different fruit. Currant bushes prefer partial to full sunlight and can grow in most types of soil. They are relatively low-maintenance plants and can also be used as ornamentation. Cultivars Many redcurrant and whitecurrant cultivars are available for domestic cultivation from specialist growers. The following have gained the Royal Horticultural Society's Award of Garden Merit: "Jonkheer van Tets" "Red Lake" "Stanza" "White Grape" (whitecurrant) Uses Nutrition In a reference serving, redcurrants (or white) supply of food energy and are a rich source of vitamin C, providing 49% of the Daily Value (DV, table). Vitamin K is the only other essential nutrient in significant content at 10% of DV (table). Culinary With maturity, the tart flavour of redcurrant fruit is slightly greater than its blackcurrant relative, but with the same approximate sweetness. The white-fruited variant of redcurrant, often referred to as white currant, has the same tart flavour but with greater sweetness. Although frequently cultivated for jams and cooked preparations, much like the white currant, it is often served raw or as a simple accompaniment in salads, garnishes, or drinks when in season. In the United Kingdom, redcurrant jelly is a condiment often served with lamb, game meat including venison, turkey and goose in a festive or Sunday roast. 
It is essentially a jam and is made in the same way, by adding the redcurrants to sugar, boiling, and straining. In France, the highly rarefied and hand-made Bar-le-duc or "Lorraine jelly" is a spreadable preparation traditionally made from white currants or alternatively redcurrants. The pips are taken off by hand, originally by monks, with a goose feather, before cooking. In Scandinavia and Schleswig-Holstein, it is often used in fruit soups and summer puddings (rødgrød, rote grütze or rode grütt). In Germany it is also used in combination with custard or meringue as a filling for tarts. In Linz, Austria, it is the most commonly used filling for the Linzer torte. It can be enjoyed in its fresh state without the addition of sugar. In German-speaking areas, syrup or nectar derived from the redcurrant is added to soda water and enjoyed as a refreshing drink named Johannisbeerschorle. It is so named because the redcurrants (Johannisbeeren, "John's berry" in German) are said to ripen first on St. John's Day, also known as Midsummer Day, June 24. In Russia, redcurrants are ubiquitous and used in jams, preserves, compotes and desserts. It is also used to make kissel, a sweet dessert made from fresh berries or fruits (such as red currants, cherries, cranberries). The leaves have many uses in traditional medicine, such as making an infusion with black tea.
Biology and health sciences
Berries
Plants
710174
https://en.wikipedia.org/wiki/Bridge%20circuit
Bridge circuit
A bridge circuit is a topology of electrical circuitry in which two circuit branches (usually in parallel with each other) are "bridged" by a third branch connected between the first two branches at some intermediate point along them. The bridge was originally developed for laboratory measurement purposes and one of the intermediate bridging points is often adjustable when so used. Bridge circuits now find many applications, both linear and non-linear, including in instrumentation, filtering and power conversion. The best-known bridge circuit, the Wheatstone bridge, was invented by Samuel Hunter Christie and popularized by Charles Wheatstone, and is used for measuring resistance. It is constructed from four resistors, two of known values R1 and R3 (see diagram), one whose resistance is to be determined Rx, and one which is variable and calibrated R2. Two opposite vertices are connected to a source of electric current, such as a battery, and a galvanometer is connected across the other two vertices. The variable resistor is adjusted until the galvanometer reads zero. It is then known that the ratio between the variable resistor and its neighbour R1 is equal to the ratio between the unknown resistor and its neighbour R3, which enables the value of the unknown resistor to be calculated. The Wheatstone bridge has also been generalised to measure impedance in AC circuits, and to measure resistance, inductance, capacitance, and dissipation factor separately. Variants are known as the Wien bridge, Maxwell bridge, and Heaviside bridge (used to measure the effect of mutual inductance). All are based on the same principle, which is to compare the output of two potential dividers sharing a common source. In power supply design, a bridge circuit or bridge rectifier is an arrangement of diodes or similar devices used to rectify an electric current, i.e. to convert it from an unknown or alternating polarity to a direct current of known polarity. In some motor controllers, an H-bridge is used to control the direction the motor turns. Bridge current equation From the figure to the right, the bridge current is represented as I5 Per Thévenin's theorem, finding the Thévenin equivalent circuit which is connected to the bridge load R5 and using the arbitrary current flow I5, we have: Thevenin Source (Vth) is given by the formula: and the Thevenin resistance (Rth): Therefore, the current flow (I5) through the bridge is given by Ohm's law: and the voltage (V5) across the load (R5) is given by the voltage divider formula:
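The expressions referred to above can be written out explicitly; since the figure is not reproduced here, the following assumes the common labelling in which R1 and R2 form one voltage divider, R3 and R4 the other, V is the supply voltage, and R5 is the bridge load. At balance (zero galvanometer current), the condition described in the text is R2 / R1 = Rx / R3, so Rx = R3 · (R2 / R1). For the loaded bridge, the Thévenin equivalent seen by R5 is

Vth = V · ( R2 / (R1 + R2) − R4 / (R3 + R4) )
Rth = R1·R2 / (R1 + R2) + R3·R4 / (R3 + R4)

so that, by Ohm's law and the voltage divider formula,

I5 = Vth / (Rth + R5)
V5 = I5 · R5 = Vth · R5 / (Rth + R5)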
Technology
Functional circuits
null
710251
https://en.wikipedia.org/wiki/Wind%20wave
Wind wave
In fluid dynamics, a wind wave, or wind-generated water wave, is a surface wave that occurs on the free surface of bodies of water as a result of the wind blowing over the water's surface. The contact distance in the direction of the wind is known as the fetch. Waves in the oceans can travel thousands of kilometers before reaching land. Wind waves on Earth range in size from small ripples to waves over high, being limited by wind speed, duration, fetch, and water depth. When directly generated and affected by local wind, a wind wave system is called a wind sea. Wind waves will travel in a great circle route after being generated – curving slightly left in the southern hemisphere and slightly right in the northern hemisphere. After moving out of the area of fetch and no longer being affected by the local wind, wind waves are called swells and can travel thousands of kilometers. A noteworthy example of this is waves generated south of Tasmania during heavy winds that will travel across the Pacific to southern California, producing desirable surfing conditions. Wind waves in the ocean are also called ocean surface waves and are mainly gravity waves, where gravity is the main equilibrium force. Wind waves have a certain amount of randomness: subsequent waves differ in height, duration, and shape with limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth, propagation, and decay – as well as governing the interdependence between flow quantities such as the water surface movements, flow velocities, and water pressure. The key statistics of wind waves (both seas and swells) in evolving sea states can be predicted with wind wave models. Although waves are usually considered in the water seas of Earth, the hydrocarbon seas of Titan may also have wind-driven waves. Waves in bodies of water may also be generated by other causes, both at the surface and underwater (such as watercraft, animals, waterfalls, landslides, earthquakes, bubbles, and impact events). Formation The great majority of large breakers seen at a beach result from distant winds. Five factors influence the formation of the flow structures in wind waves: Wind speed or strength relative to wave speed – the wind must be moving faster than the wave crest for energy transfer to the wave. The uninterrupted distance of open water over which the wind blows without significant change in direction (called the fetch) Width of the area affected by fetch (at a right angle to the distance) Wind duration – the time for which the wind has blown over the water. Water depth All of these factors work together to determine the size of the water waves and the structure of the flow within them. The main dimensions associated with wave propagation are: Wave height (vertical distance from trough to crest) Wave length (distance from crest to crest in the direction of propagation) Wave period (time interval between arrival of consecutive crests at a stationary point) Wave direction or azimuth (predominantly driven by wind direction) A fully developed sea has the maximum wave size theoretically possible for a wind of specific strength, duration, and fetch. Further exposure to that specific wind could only cause a dissipation of energy due to the breaking of wave tops and formation of "whitecaps". Waves in a given area typically have a range of heights. 
For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as significant wave height. This figure represents an average height of the highest one-third of the waves in a given time period (usually chosen somewhere in the range from 20 minutes to twelve hours), or in a specific wave or storm system. The significant wave height is also the value a "trained observer" (e.g. from a ship's crew) would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm. Wave formation on an initially flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water, which generates waves. It is usually assumed for the purpose of theoretical analysis that: The water is originally at rest. The water is not viscous. The water is irrotational. There is a random distribution of normal pressure to the water surface from the turbulent wind. Correlations between air and water motions are neglected. The second mechanism involves wind shear forces on the water surface. John W. Miles suggested a surface wave generation mechanism that is initiated by turbulent wind shear flows based on the inviscid Orr–Sommerfeld equation in 1957. He found the energy transfer from the wind to the water surface is proportional to the curvature of the velocity profile of the wind at the point where the mean wind speed is equal to the wave speed. Since the wind speed profile is logarithmic to the water surface, the curvature has a negative sign at this point. This relation shows the wind flow transferring its kinetic energy to the water surface at their interface. Assumptions: two-dimensional parallel shear flow incompressible, inviscid water and wind irrotational water slope of the displacement of the water surface is small Generally, these wave formation mechanisms occur together on the water surface and eventually produce fully developed waves. For example, if we assume a flat sea surface (Beaufort state 0), and a sudden wind flow blows steadily across the sea surface, the physical wave generation process follows the sequence: Turbulent wind forms random pressure fluctuations at the sea surface. Ripples with wavelengths in the order of a few centimeters are generated by the pressure fluctuations. (The Phillips mechanism) The winds keep acting on the initially rippled sea surface causing the waves to become larger. As the waves grow, the pressure differences get larger causing the growth rate to increase. Finally, the shear instability expedites the wave growth exponentially. (The Miles mechanism) The interactions between the waves on the surface generate longer waves and the interaction will transfer wave energy from the shorter waves generated by the Miles mechanism to the waves which have slightly lower frequencies than the frequency at the peak wave magnitudes, then finally the waves will be faster than the crosswind speed (Pierson & Moskowitz). Types Three different types of wind waves develop over time: Capillary waves, or ripples, dominated by surface tension effects. Gravity waves, dominated by gravitational and inertial forces. Seas, raised locally by the wind. 
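The significant wave height statistic described at the start of this passage (the average of the highest one-third of waves in a record) can be computed directly from a series of individual wave heights. A minimal Python sketch, assuming the individual heights have already been extracted from a surface-elevation record (the sample values are hypothetical):

```python
def significant_wave_height(heights):
    """Return H_1/3: the mean of the highest one-third of individual wave heights."""
    ordered = sorted(heights, reverse=True)
    top_third = ordered[:max(1, len(ordered) // 3)]
    return sum(top_third) / len(top_third)

# Ten hypothetical wave heights in metres, e.g. from a 20-minute record
print(significant_wave_height([0.8, 1.1, 0.9, 1.6, 2.0, 1.2, 0.7, 1.4, 1.8, 1.0]))  # 1.8
```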
Swells, which have traveled away from where they were raised by the wind, and have to a greater or lesser extent dispersed. Ripples appear on smooth water when the wind blows, but will die quickly if the wind stops. The restoring force that allows them to propagate is surface tension. Sea waves are larger-scale, often irregular motions that form under sustained winds. These waves tend to last much longer, even after the wind has died, and the restoring force that allows them to propagate is gravity. As waves propagate away from their area of origin, they naturally separate into groups of common direction and wavelength. The sets of waves formed in this manner are known as swells. The Pacific Ocean is from Indonesia to the coast of Colombia and, based on an average wavelength of , would have ~258,824 swells over that width. It is sometimes alleged that out of a set of waves, the seventh wave in a set is always the largest; while this isn't the case, the waves in the middle of a given set tend to be larger than those before and after them. Individual "rogue waves" (also called "freak waves", "monster waves", "killer waves", and "king waves") much higher than the other waves in the sea state can occur. In the case of the Draupner wave, its height was 2.2 times the significant wave height. Such waves are distinct from tides, caused by the Moon and Sun's gravitational pull, tsunamis that are caused by underwater earthquakes or landslides, and waves generated by underwater explosions or the fall of meteorites—all having far longer wavelengths than wind waves. The largest ever recorded wind waves are not rogue waves, but standard waves in extreme sea states. For example, high waves were recorded on the RRS Discovery in a sea with significant wave height, so the highest wave was only 1.6 times the significant wave height. The biggest recorded by a buoy (as of 2011) was high during the 2007 typhoon Krosa near Taiwan. Spectrum Ocean waves can be classified based on: the disturbing force that creates them; the extent to which the disturbing force continues to influence them after formation; the extent to which the restoring force weakens or flattens them; and their wavelength or period. Seismic sea waves have a period of about 20 minutes, and speeds of . Wind waves (deep-water waves) have a period up to about 20 seconds. The speed of all ocean waves is controlled by gravity, wavelength, and water depth. Most characteristics of ocean waves depend on the relationship between their wavelength and water depth. Wavelength determines the size of the orbits of water molecules within a wave, but water depth determines the shape of the orbits. The paths of water molecules in a wind wave are circular only when the wave is traveling in deep water. A wave cannot "feel" the bottom when it moves through water deeper than half its wavelength because too little wave energy is contained in the water movement below that depth. Waves moving through water deeper than half their wavelength are known as deep-water waves. On the other hand, the orbits of water molecules in waves moving through shallow water are flattened by the proximity of the sea bottom surface. Waves in water shallower than 1/20 their original wavelength are known as shallow-water waves. Transitional waves travel through water deeper than 1/20 their original wavelength but shallower than half their original wavelength. In general, the longer the wavelength, the faster the wave energy will move through the water. 
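The depth classification just described (deep-water waves in water deeper than half the wavelength, shallow-water waves in water shallower than one-twentieth of the wavelength, and transitional waves in between) can be expressed as a simple rule; a minimal sketch using the thresholds given in the text:

```python
def wave_regime(depth, wavelength):
    """Classify a wave by the ratio of water depth to wavelength."""
    ratio = depth / wavelength
    if ratio > 0.5:
        return "deep-water"       # the wave cannot 'feel' the bottom
    if ratio < 1 / 20:
        return "shallow-water"    # orbits flattened by the proximity of the sea bed
    return "transitional"

print(wave_regime(depth=100.0, wavelength=50.0))  # deep-water
print(wave_regime(depth=2.0, wavelength=80.0))    # shallow-water
print(wave_regime(depth=10.0, wavelength=60.0))   # transitional
```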
The relationship between the wavelength, period and velocity of any wave is

C = L/T

where C is speed (celerity), L is the wavelength, and T is the period (in seconds). Thus the speed of the wave derives from the functional dependence of the wavelength on the period (the dispersion relation).
The speed of a deep-water wave may also be approximated by

C = √(gL/2π)

where g is the acceleration due to gravity, approximately 9.8 meters per second squared. Because g and π (3.14) are constants, the equation can be reduced to

C ≈ 1.25 √L

when C is measured in meters per second and L in meters. In both formulas the wave speed is proportional to the square root of the wavelength.
The speed of shallow-water waves is described by a different equation that may be written as

C = √(gd)

where C is speed (in meters per second), g is the acceleration due to gravity, and d is the depth of the water (in meters). The period of a wave remains unchanged regardless of the depth of water through which it is moving. As deep-water waves enter the shallows and feel the bottom, however, their speed is reduced, and their crests "bunch up", so their wavelength shortens.
Spectral models
Sea state can be described by the sea wave spectrum, or just wave spectrum, S(ω, Θ). It is composed of a wave height spectrum (WHS) S(ω) and a wave direction spectrum (WDS) f(Θ). Many interesting properties about the sea state can be found from the wave spectra. The WHS describes the spectral density of wave height variance ("power") versus wave frequency, with dimension m²·s. The relationship between the spectrum S(ωj) and the amplitude aj of a wave component j is

(1/2) aj² = S(ωj) Δω.

Some WHS models are listed below: the International Towing Tank Conference (ITTC) recommended spectrum model for a fully developed sea (the ISSC spectrum, or modified Pierson-Moskowitz spectrum), and the ITTC recommended spectrum model for limited fetch (the JONSWAP spectrum). (The latter model has since its creation been improved, based on the work of Phillips and Kitaigorodskii, to better model the wave height spectrum for high wavenumbers.) As for the WDS, an example model is the cosine-squared spreading function f(Θ) = (2/π) cos²Θ for −π/2 ≤ Θ ≤ π/2 about the mean wind direction, and zero otherwise. Thus the sea state is fully determined and can be recreated as a sum of wave components, where the component amplitudes follow from the spectrum, each phase is uniformly distributed between 0 and 2π, and each direction is randomly drawn from the directional distribution function f(Θ).
Shoaling and refraction
As waves travel from deep to shallow water, their shape changes (wave height increases, speed decreases, and length decreases as wave orbits become asymmetrical). This process is called shoaling. Wave refraction is the process that occurs when waves interact with the sea bed to slow the velocity of propagation as a function of wavelength and period. As the waves slow down in shoaling water, the crests tend to realign at a decreasing angle to the depth contours. Varying depths along a wave crest cause the crest to travel at different phase speeds, with those parts of the wave in deeper water moving faster than those in shallow water. This process continues while the depth decreases, and reverses if it increases again, but the wave leaving the shoal area may have changed direction considerably. Rays (lines normal to wave crests between which a fixed amount of energy flux is contained) converge on local shallows and shoals. Therefore, the wave energy between rays is concentrated as they converge, with a resulting increase in wave height.
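A short Python sketch of the three speed formulas given above (the general relation C = L/T, the deep-water approximation, and the shallow-water formula), taking g as 9.8 m/s²:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def celerity(wavelength, period):
    """General relation: C = L / T."""
    return wavelength / period

def deep_water_speed(wavelength):
    """Deep-water approximation: C = sqrt(g * L / (2 * pi)), roughly 1.25 * sqrt(L)."""
    return math.sqrt(G * wavelength / (2 * math.pi))

def shallow_water_speed(depth):
    """Shallow-water formula: C = sqrt(g * d), independent of wavelength."""
    return math.sqrt(G * depth)

print(deep_water_speed(100.0))   # about 12.5 m/s for a 100 m wavelength
print(shallow_water_speed(4.0))  # about 6.3 m/s in 4 m of water
```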
Because these effects are related to a spatial variation in the phase speed, and because the phase speed also changes with the ambient current—due to the Doppler shift—the same effects of refraction and altering wave height also occur due to current variations. In the case of meeting an adverse current the wave steepens, i.e. its wave height increases while the wavelength decreases, similar to the shoaling when the water depth decreases. Breaking Some waves undergo a phenomenon called "breaking". A breaking wave is one whose base can no longer support its top, causing it to collapse. A wave breaks when it runs into shallow water, or when two wave systems oppose and combine forces. When the slope, or steepness ratio, of a wave, is too great, breaking is inevitable. Individual waves in deep water break when the wave steepness—the ratio of the wave height H to the wavelength λ—exceeds about 0.17, so for H > 0.17 λ. In shallow water, with the water depth small compared to the wavelength, the individual waves break when their wave height H is larger than 0.8 times the water depth h, that is H > 0.8 h. Waves can also break if the wind grows strong enough to blow the crest off the base of the wave. In shallow water, the base of the wave is decelerated by drag on the seabed. As a result, the upper parts will propagate at a higher velocity than the base and the leading face of the crest will become steeper and the trailing face flatter. This may be exaggerated to the extent that the leading face forms a barrel profile, with the crest falling forward and down as it extends over the air ahead of the wave. Three main types of breaking waves are identified by surfers or surf lifesavers. Their varying characteristics make them more or less suitable for surfing and present different dangers. Spilling, or rolling: these are the safest waves on which to surf. They can be found in most areas with relatively flat shorelines. They are the most common type of shorebreak. The deceleration of the wave base is gradual, and the velocity of the upper parts does not differ much with height. Breaking occurs mainly when the steepness ratio exceeds the stability limit. Plunging, or dumping: these break suddenly and can "dump" swimmers—pushing them to the bottom with great force. These are the preferred waves for experienced surfers. Strong offshore winds and long wave periods can cause dumpers. They are often found where there is a sudden rise in the seafloor, such as a reef or sandbar. Deceleration of the wave base is sufficient to cause upward acceleration and a significant forward velocity excess of the upper part of the crest. The peak rises and overtakes the forward face, forming a "barrel" or "tube" as it collapses. Surging: these may never actually break as they approach the water's edge, as the water below them is very deep. They tend to form on steep shorelines. These waves can knock swimmers over and drag them back into deeper water. When the shoreline is near vertical, waves do not break but are reflected. Most of the energy is retained in the wave as it returns to seaward. Interference patterns are caused by superposition of the incident and reflected waves, and the superposition may cause localized instability when peaks cross, and these peaks may break due to instability. (see also clapotic waves) Physics of waves Wind waves are mechanical waves that propagate along the interface between water and air; the restoring force is provided by gravity, and so they are often referred to as surface gravity waves. 
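The two breaking criteria given earlier in this passage (steepness H/λ above about 0.17 in deep water, and height above about 0.8 times the depth in shallow water) can be written as a simple check; a minimal sketch:

```python
def breaks_in_deep_water(height, wavelength, steepness_limit=0.17):
    """Deep water: breaking when the steepness H / lambda exceeds about 0.17."""
    return height / wavelength > steepness_limit

def breaks_in_shallow_water(height, depth, height_to_depth_limit=0.8):
    """Shallow water: breaking when H exceeds about 0.8 times the water depth h."""
    return height > height_to_depth_limit * depth

print(breaks_in_deep_water(2.0, 10.0))    # True: steepness 0.2 exceeds 0.17
print(breaks_in_shallow_water(1.5, 2.0))  # False: 1.5 m is less than 0.8 * 2.0 m
```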
As the wind blows, pressure and friction perturb the equilibrium of the water surface and transfer energy from the air to the water, forming waves. The initial formation of waves by the wind is described in the theory of Phillips from 1957, and the subsequent growth of the small waves has been modeled by Miles, also in 1957. In linear plane waves of one wavelength in deep water, parcels near the surface move not plainly up and down but in circular orbits: forward above and backward below (compared to the wave propagation direction). As a result, the surface of the water forms not an exact sine wave, but more a trochoid with the sharper curves upwards, as modeled in trochoidal wave theory. Wind waves are thus a combination of transversal and longitudinal waves. When waves propagate in shallow water (where the depth is less than half the wavelength), the particle trajectories are compressed into ellipses. In reality, for finite values of the wave amplitude (height), the particle paths do not form closed orbits; rather, after the passage of each crest, particles are displaced slightly from their previous positions, a phenomenon known as Stokes drift. As the depth below the free surface increases, the radius of the circular motion decreases. At a depth equal to half the wavelength λ, the orbital movement has decayed to less than 5% of its value at the surface.
The phase speed (also called the celerity) of a surface gravity wave is, for pure periodic wave motion of small-amplitude waves, well approximated by

c = √((gλ/2π) tanh(2πd/λ))

where c = phase speed; λ = wavelength; d = water depth; g = acceleration due to gravity at the Earth's surface. In deep water, where d ≥ λ/2, the argument 2πd/λ is large and the hyperbolic tangent approaches 1, so the speed approximates

c ≈ √(gλ/2π).

In SI units, with c in m/s, c ≈ 1.25 √λ, when λ is measured in metres. This expression tells us that waves of different wavelengths travel at different speeds. The fastest waves in a storm are the ones with the longest wavelength. As a result, after a storm, the first waves to arrive on the coast are the long-wavelength swells. For intermediate and shallow water, the Boussinesq equations are applicable, combining frequency dispersion and nonlinear effects. And in very shallow water, the shallow water equations can be used. If the wavelength is very long compared to the water depth, the phase speed (by taking the limit of c when the wavelength approaches infinity) can be approximated by

c = √(gd).

On the other hand, for very short wavelengths, surface tension plays an important role and the phase speed of these gravity-capillary waves can (in deep water) be approximated by

c = √(gλ/2π + 2πS/(ρλ))

where S = surface tension of the air-water interface and ρ = density of the water. When several wave trains are present, as is always the case in nature, the waves form groups. In deep water, the groups travel at a group velocity which is half of the phase speed. Following a single wave in a group one can see the wave appearing at the back of the group, growing, and finally disappearing at the front of the group. As the water depth decreases towards the coast, this will have an effect: wave height changes due to wave shoaling and refraction. As the wave height increases, the wave may become unstable when the crest of the wave moves faster than the trough. This causes surf, a breaking of the waves. The movement of wind waves can be captured by wave energy devices.
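The small-amplitude phase-speed formula above, together with its deep- and shallow-water limits and the deep-water group velocity (half the phase speed), can be evaluated directly; a minimal Python sketch:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def phase_speed(wavelength, depth):
    """c = sqrt((g * lambda / (2 * pi)) * tanh(2 * pi * d / lambda))."""
    k = 2 * math.pi / wavelength          # wavenumber
    return math.sqrt((G / k) * math.tanh(k * depth))

def deep_water_group_velocity(wavelength):
    """In deep water the group velocity is half the phase speed."""
    return 0.5 * math.sqrt(G * wavelength / (2 * math.pi))

# Deep-water limit: depth much greater than half the wavelength
print(phase_speed(100.0, 1000.0))        # about 12.5 m/s, matching sqrt(g * lambda / (2 * pi))
# Shallow-water limit: a long wave in 4 m of water approaches sqrt(g * d) = 6.3 m/s
print(phase_speed(500.0, 4.0))           # about 6.3 m/s
print(deep_water_group_velocity(100.0))  # about 6.2 m/s
```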
The energy density (per unit area) of regular sinusoidal waves depends on the water density ρ, the gravity acceleration g and the wave height H (which, for regular waves, is equal to twice the amplitude, H = 2a):

E = (1/8) ρ g H².

The velocity of propagation of this energy is the group velocity.
Models
Surfers are very interested in wave forecasts. There are many websites that provide predictions of the surf quality for the upcoming days and weeks. Wind wave models are driven by more general weather models that predict the winds and pressures over the oceans, seas, and lakes. Wind wave models are also an important part of examining the impact of shore protection and beach nourishment proposals. For many beach areas there is only patchy information about the wave climate, so estimating the effect of wind waves is important for managing littoral environments. A wind-generated wave can be predicted based on two parameters: the wind speed at 10 m above sea level and the wind duration; the wind must blow over the water for a long enough period for the sea to be considered fully developed. The significant wave height and peak frequency can then be predicted for a certain fetch length.
Seismic signals
Ocean water waves generate seismic waves that are globally visible on seismographs. There are two principal constituents of the ocean wave-generated seismic microseism. The strongest of these is the secondary microseism, which is created by ocean floor pressures generated by interfering ocean waves and has a spectrum that generally lies between periods of approximately 6–12 s, or at approximately half of the period of the responsible interfering waves. The theory for microseism generation by standing waves was provided by Michael Longuet-Higgins in 1950, after Pierre Bernard suggested this relation with standing waves in 1941 on the basis of observations. The weaker primary microseism, also globally visible, is generated by dynamic seafloor pressures of propagating waves above shallower (less than several hundred meters depth) regions of the global ocean. Microseisms were first reported in about 1900, and seismic records provide long-term proxy measurements of seasonal and climate-related large-scale wave intensity in Earth's oceans, including those associated with anthropogenic global warming.
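The energy density relation given at the start of this passage can be evaluated directly; a minimal sketch, assuming a seawater density of about 1025 kg/m³:

```python
RHO = 1025.0  # approximate seawater density, kg/m^3
G = 9.8       # gravitational acceleration, m/s^2

def wave_energy_density(height):
    """E = (1/8) * rho * g * H^2, in joules per square metre of sea surface."""
    return RHO * G * height ** 2 / 8.0

print(wave_energy_density(2.0))  # about 5,000 J/m^2 for a regular 2 m wave
```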
Physical sciences
Oceanography
Earth science
710897
https://en.wikipedia.org/wiki/Gardenia
Gardenia
Gardenia is a genus of flowering plants in the coffee family, Rubiaceae, native to the tropical and subtropical regions of Africa, Asia, Madagascar, the Pacific Islands, and Australia. The genus was named by Carl Linnaeus and John Ellis after Alexander Garden (1730–1791), a Scottish naturalist. The type species is Gardenia jasminoides, as first published by Ellis in 1761.
Description
Gardenia species typically grow as shrubs or small trees, though some species, such as those native to New Guinea, may grow to 20–30 m tall. A small number of species found in tropical East Africa and southern Africa grow as small pyrophytic subshrubs. At least one species, Gardenia epiphytica, native to Gabon and Cameroon, grows as an epiphyte. Most species are unarmed and spineless, but some, such as some of those found in Africa, are spinescent. The leaf arrangement is typically opposite or verticillate (arranged in whorls). Leaves vary by species; many species are glossy with a distinctly coriaceous (or leathery) texture, such as that seen in Gardenia jasminoides, whilst in others the leaves may be thin and chartaceous (or paper-like). The flowers, particularly in the species most commonly grown in gardens, may be large and showy and white, cream or pale yellow in color, with a pleasant and strong, sometimes overpowering scent that may be more noticeable at night, something quite typical of moth-pollinated plants. Gardenia flowers are hermaphrodite (or bisexual), with each individual flower having both male and female structures (that is, both stamens and carpels) within the flower. The flowers are borne singly or in small terminal clusters or fascicles. The flowers vary across species, but most commonly have a funnel- or cylindrical-shaped corolla tube, normally elongated and narrow distally, surrounded by 5–12 or more lobes (petals) contorted or arranged in an overlapping pattern.
Phytochemistry
Crocetin is a chemical compound usually obtained from Crocus sativus, but it can also be obtained from the fruit of Gardenia jasminoides. Gordonin is a novel methoxylated flavonol secreted in golden-colored resinous droplets of Gardenia gordonii, which is one of several critically endangered species of the Fiji Islands. Many of the native gardenias of the Pacific Islands and elsewhere in the paleotropics contribute towards the production of a diverse array of natural products. Methoxylated and oxygenated flavonols, flavones, and triterpenes accumulate on the vegetative and floral buds as yellow to brown droplets of secreted resins. Many focused phytochemical studies of these bud exudates have been published, including a population-level study of two rare, sympatric species of Fiji, G. candida and G. grievei. The evolutionary significance of the gums and resins of gardenias in attracting or repelling invertebrate herbivores has yet to be explored by ecologists.
Species Plants of the World Online recognises 128 species in this genus, as follows: Gardenia actinocarpa Gardenia anapetes Gardenia angkorensis Gardenia annamensis Gardenia aqualla Gardenia archboldiana Gardenia aubryi Gardenia barnesii Gardenia beamanii Gardenia boninensis Gardenia brachythamnus Gardenia brevicalyx Gardenia brighamii Gardenia buffalina Gardenia cambodiana Gardenia candida Gardenia carinata Gardenia carstensensis Gardenia chanii Gardenia chevalieri Gardenia clemensiae Gardenia collinsiae Gardenia cornuta Gardenia coronaria Gardenia costulata Gardenia crameri Gardenia cuneata Gardenia dacryoides Gardenia elata Gardenia epiphytica Gardenia erubescens Gardenia esculenta Gardenia ewartii Gardenia faucicola Gardenia fiorii Gardenia flava Gardenia fosbergii Gardenia fucata Gardenia fusca Gardenia gardneri Gardenia gjellerupii Gardenia gordonii Gardenia grievei Gardenia griffithii Gardenia gummifera Gardenia hageniana Gardenia hainanensis Gardenia hansemannii Gardenia hillii Gardenia hutchinsoniana Gardenia imperialis Gardenia invaginata Gardenia ixorifolia Gardenia jabiluka Gardenia jasminoides Gardenia kabaenensis Gardenia kakaduensis Gardenia kamialiensis Gardenia lacciflua Gardenia lamingtonii Gardenia lanutoo Gardenia latifolia Gardenia leopoldiana Gardenia leschenaultii Gardenia longistipula Gardenia magnifica Gardenia mannii Gardenia manongarivensis Gardenia maugaloae Gardenia megasperma Gardenia moszkowskii Gardenia mutabilis Gardenia nitida Gardenia obtusifolia Gardenia ornata Gardenia oudiepe Gardenia ovularis Gardenia pallens Gardenia panduriformis Gardenia papuana Gardenia philastrei Gardenia posoquerioides Gardenia propinqua Gardenia psidioides Gardenia pterocalyx Gardenia pyriformis Gardenia racemulosa Gardenia reflexisepala Gardenia reinwardtiana Gardenia remyi Gardenia resinifera Gardenia resiniflua Gardenia resinosa Gardenia rupicola Gardenia rutenbergiana Gardenia sambiranensis Gardenia saxatilis Gardenia scabrella Gardenia schlechteri Gardenia schwarzii Gardenia sericea Gardenia similis Gardenia siphonocalyx Gardenia sokotensis Gardenia sootepensis Gardenia stenophylla Gardenia storckii Gardenia subacaulis Gardenia subcarinata Gardenia taitensis Gardenia tannaensis Gardenia ternifolia Gardenia tessellaris Gardenia thailandica Gardenia thunbergia Gardenia tinneae Gardenia transvenulosa Gardenia trochainii Gardenia tropidocarpa Gardenia truncata Gardenia tubifera Gardenia urvillei Gardenia vernicosa Gardenia vilhelmii Gardenia vitiensis Gardenia vogelii Gardenia volkensii Gardenia vulcanica Cultivation and uses Gardenia plants are prized for the strong sweet scent of their flowers, which can be very large in size in some species. Gardenia jasminoides (syn. G. grandiflora, G. florida) is cultivated as a house plant. This species can be difficult to grow because it originated in warm humid tropical areas. It demands high humidity to thrive, and bright (but not direct) light. It flourishes in acidic soils with good drainage and thrives on temperatures of during the day and in the evening. Potting soils developed especially for gardenias are available. G. jasminoides grows no larger than 18 inches in height and width when grown indoors. In climates where it can be grown outdoors, it can attain a height of 6 feet. If water touches the flowers, they will turn brown. In Eastern Asia, Gardenia jasminoides is called () in China, () in Korea, and () in Japan. 
Its fruit is used as a yellow dye on fabric and food (including the Korean mung bean jelly called hwangpomuk). Its fruits are also used in traditional Chinese medicine for their clearing, calming, and cooling properties. In France, gardenias are the flower traditionally worn by men as a boutonnière with evening dress. In The Age of Innocence, Edith Wharton suggests it was customary for upper-class men from New York City to wear a gardenia in their buttonhole during the Gilded Age. Sigmund Freud remarked to the poet H.D. that gardenias were his favorite flower. In tiki culture, Donn Beach, aka Don the Beachcomber, wore a fresh lei of gardenias almost every day at his tiki bars, allegedly spending $7,800 on flowers over the course of four years in 1938. He named one of his drinks the mystery gardenia cocktail. Trader Vic frequently used the gardenia as a flower garnish in his tiki drinks, such as in the scorpion and outrigger tiara cocktails. Several species occur in Hawaii, where gardenias are known as nau or nānū. Hattie McDaniel famously wore gardenias in her hair when she accepted an Academy Award, the first for an African American, for Gone with the Wind. Mo'Nique Hicks later wore gardenias in her hair when she won her Oscar, as a tribute to McDaniel.
Biology and health sciences
Others
null
711147
https://en.wikipedia.org/wiki/Takah%C4%93
Takahē
The South Island takahē (Porphyrio hochstetteri) is a flightless swamphen indigenous to New Zealand and the largest living member of the rail family. It is often known by the abbreviated name takahē, which it shares with the recently extinct North Island takahē. The two takahē species are also known as notornis. Takahē were hunted extensively by both early European settlers and Māori, and takahē's bones have been found in middens in the South Island. Fossil remains have also been found across the South Island. They were not named and described by Europeans until 1847, and then only from fossil bones. In 1850 a living bird was captured, and three more collected in the 19th century. After another bird was captured in 1898, and no more were to be found, the species was presumed extinct. Fifty years later, however, after a carefully planned search, South Island takahē were dramatically rediscovered in November 1948 by Geoffrey Orbell in an isolated valley in the South Island's Murchison Mountains. The species is now managed by the New Zealand Department of Conservation, whose Takahē Recovery Programme maintains populations on several offshore islands as well as Takahē Valley. Takahē has been reintroduced to numerous locations across the country. Although South Island takahē are still a threatened species, their NZTCS status was downgraded in 2016 from Nationally Critical to Nationally Vulnerable. As of 2023, the population is around 500 and is growing by 8 percent per year. Scientific description and naming Anatomist Richard Owen was sent fossil bird bones found in 1847 in South Taranaki on the North Island by collector Walter Mantell, and in 1848 he coined the genus Notornis ("southern bird") for them, naming the new species Notornis mantelli. The bird was presumed by Western science to be another extinct species like the moa. Two years later, a group of sealers in Tamatea / Dusky Sound, Fiordland, encountered a large bird which they chased with their dogs. "It ran with great speed, and upon being captured uttered loud screams, and fought and struggled violently; it was kept alive three or four days on board the schooner and then killed, and the body roasted and ate by the crew, each partaking of the dainty, which was declared to be delicious." Walter Mantell happened to meet the sealers, and secured the bird's skin from them. He sent it to his father, palaeontologist Gideon Mantell, who realised this was Notornis, a living bird known only from fossil bones, and presented it in 1850 to a meeting of the Zoological Society of London. A second specimen was sent to Gideon Mantell in 1851, caught by Māori on Secretary Island, Fiordland. (Takahē were well known to Māori, who travelled long distances to hunt them. The bird's name comes from the Māori verb takahi, to stamp or trample.) Only two more South Island takahē were collected by Europeans in the 19th century. One was caught by a rabbiter's dog on the eastern side of Lake Te Anau in 1879. It was bought by what is now the State Museum of Zoology, Dresden, for £105, and destroyed during the bombing of Dresden in World War II. Another takahē was caught by another dog, also on the shore of Lake Te Anau, on 7 August 1898; the dog, named 'Rough', was owned by musterer Jack Ross. Ross tried to revive the female takahē, but it died, and he delivered it to curator William Benham at Otago Museum. 
In excellent condition, it was purchased by the New Zealand government for £250 and was put on display; for many years it was the only mounted specimen in New Zealand, and the only takahē on display anywhere in the world. After 1898, hunters and settlers continued to report sightings of large blue-and-green birds, described as "giant pukakis" (pūkeko or Australasian swamphens); one group chased but could not catch a bird "the size of a goose, with blue-green feathers and the speed of a racehorse". None of the sightings were authenticated, and the only specimens collected were fossil bones. The takahē was considered extinct.
Taxonomy and systematics
The third takahē collected went to the Königlich Zoologisches und Anthropologisch-Ethnographisches Museum in Dresden, and its director, Adolf Bernhard Meyer, examined the skeleton while preparing his classification of the museum's birds, Abbildungen von Vogelskeletten (1879–1895). He decided the skeletal differences between the Fiordland bird and Owen's North Island specimen were sufficient to make it a separate species, which he called Notornis hochstetteri, after the Austrian geologist Ferdinand von Hochstetter. Over the second half of the 20th century, the two Notornis species were gradually relegated to subspecies: Notornis mantelli mantelli in the North Island, and Notornis mantelli hochstetteri in the South. They were then incorporated into the same genus as the closely related Australasian swamphen or pūkeko (Porphyrio porphyrio), becoming subspecies of Porphyrio mantelli. Pūkeko are members of a widespread species of swamphen, but based on fossil evidence have only been in New Zealand for a few hundred years, arriving from Australia after the islands were first settled by Polynesians. A morphological and genetic study of living and extinct Porphyrio revealed that North and South Island takahē were, as originally proposed by Meyer, separate species. The North Island species (P. mantelli, as described by Owen) was known to Māori as moho; it is extinct and only known from skeletal remains and one possible specimen. Moho were taller and more slender than takahē, and share a common ancestor with living pūkeko. Although it was historically proposed that the two takahē species were unrelated, a genetic analysis published in 2024 suggested that the two takahē species are each other's closest relatives and likely descended from a single ancestor that colonised New Zealand, with the split between the two species dated at around 4 to 1.5 million years ago.
Rediscovery
Living South Island takahē were rediscovered in an expedition led by Invercargill-based physician Geoffrey Orbell near Lake Te Anau in the Murchison Mountains, on 20 November 1948. The expedition started when footprints of an unknown bird were found near Lake Te Anau. Two takahē were caught but returned to the wild after photos were taken of the rediscovered bird.
Description
The South Island takahē is the largest living member of the family Rallidae. Its overall length averages and its average weight is about in males and in females, ranging from . The lifespan of a takahē can reach 18 years in the wild and 22 years in animal sanctuaries. Its standing height is around . It is a stocky, powerful bird, with short strong legs and a massive bill which can deliver a painful bite to the unwary. Although a flightless bird, the takahē sometimes uses its reduced wings to help it clamber up slopes. South Island takahē plumage, beaks, and legs show typical gallinule colours.
Adult takahē plumage is silky, iridescent, and mostly dark-blue or navy-blue on the head, neck, and underside, peacock blue on the wings. The back and inner wings are teal and green, becoming olive-green at the tail, which is white underneath. Takahe have a bright scarlet frontal shield and "carmine beaks marbled with shades of red". Their scarlet legs were described as "crayfish-red" by one of the early rediscoverers. Sexes are similar; the females are slightly smaller, and may display frayed tail feathers when nesting. Chicks are covered with jet-black fluffy down when hatched, and have very large brown legs, with a dark white-tipped bill. Immature takahē have a duller version of adult colouring, with a dark bill that turns red as they mature. South Island takahē are noisy. They have a non-directional warning call, which was described by the rediscoverers of takahē as someone "whistling to them over a .303 cartridge case", and a loud call. The contact call is easily confused with that of the weka (Gallirallus australis), but is generally more resonant and deeper. Behaviour and ecology The South Island takahē is a sedentary and flightless bird currently found in alpine grasslands habitats. It is territorial and remains in the grassland until the arrival of snow, when it descends to the forest or scrub. It eats grass, shoots, and insects, but predominantly leaves of Chionochloa tussocks and other alpine grass species. The South Island takahē can often be seen plucking a snow grass (Danthonia flavescens) stalk, taking it into one claw, and eating only the soft lower parts, which appears to be its favourite food, while the rest is discarded. A South Island takahē has been recorded feeding on a paradise duckling at Zealandia. Although this behaviour was previously unknown, the related Australasian swamphen or pūkeko occasionally feeds on eggs and nestlings of other birds as well. Breeding The South Island takahē is monogamous, with pairs remaining together from 12 years to, probably, their entire lives. It builds a bulky nest under bushes and scrub, and lays one to three buff eggs. The chick survival rate is between 25% and 80%, depending on location. Distribution and habitat Although it is indigenous to swamps, humans have turned its swampland habitats into farmland, and the South Island takahē was forced to move upland into the grasslands. The species is still present in the location where it was rediscovered in the Murchison Mountains. Small numbers have also been successfully translocated to five predator-free offshore islands, Tiritiri Matangi, Kapiti, Maud, Mana and Motutapu, where they can be viewed by the public. Additionally, captive takahē can be viewed at Te Anau and Pūkaha / Mount Bruce National wildlife centres. In June 2006 a pair of takahē were relocated to the Maungatautari Restoration Project. In September 2010 a pair of takahē (Hamilton and Guy) were released at Willowbank Wildlife Reserve – the first non-Department of Conservation institution to hold this species. In January 2011 two takahē were released in Zealandia, Wellington, and in mid-2015, two more takahē were released on Rotoroa Island in the Hauraki Gulf. There have also been relocations onto the Tawharanui Peninsula. In 2014 two pairs of Takahē were released into Wairakei golf and sanctuary, a private fenced sanctuary at Wairakei north of Taupō, the first chick was born there in November 2015. At October 2017 there were 347 takahē accounted for, an increase of 41 over 2016. 
The Orokonui Ecosanctuary is home to a single takahē breeding pair, Quammen and Paku. The pair successfully bred two chicks in 2018, both of which died from exposure after heavy rains in November 2018. The deaths caused some controversy with regards to the Ecosanctuary's policy of "non-interference". In 2018, eighteen South Island takahē were reintroduced to the Kahurangi National Park, 100 years after their local extinction. Following the 2018 release, a second re-introduction has taken place on Te Waipounamu in August 2023, eighteen takahē were released in the Upper Whakatipu Waimāori Valley in Ngāi Tahu owned Greenstone Station. Later that year in October, six more takahē were released onto the property. Status and conservation The near extinction of the formerly widespread South Island takahē is due to a number of factors: over-hunting, loss of habitat and introduced predators have all played a part. The introduction of red deer (Cervus elaphus) represent a severe competition for food, while stoats (Mustela erminea) take a role as predators. The spread of the forests in post-glacial Pleistocene-Holocene has contributed to the reduction of habitat. Since the species is K-selected, i.e. is long-lived, reproduces slowly, takes several years to reach maturity, and had a large range that has drastically contracted in comparatively few generations, inbreeding depression is a significant problem. The recovery efforts are hampered especially by low fertility of the remaining birds. Genetic analyses have been employed to select captive breeding stock in an effort to preserve the maximum genetic diversity. Decline of takahē The causes of the pre-European decline of takahē were postulated by Williams (1962) and later supported in a detailed report by Mills et al. (1984). They held that climate changes were the main cause of the low numbers of takahē before European settlement. The environmental conditions prior to the period of European settlement were not suitable for takahē, and eliminated most of the population. The rising temperatures were not tolerated by this group of birds. Takahē are adapted to alpine grasslands, and the post-glacial era modified those zones, causing a sharp decline in the takahē population. Secondly, they suggested that Polynesian settlers arriving about 800–1,000 years ago, bringing dogs and Polynesian rats (Rattus exulans) and hunting takahē for food, started another decline. European settlement in the nineteenth century almost wiped them out through hunting and introducing mammals such as deer which competed for food and predators (e.g. stoats) which preyed on them directly. Takahē population, conservation and protection After long threats of extinction, South Island takahē now find protection in Fiordland National Park (New Zealand's largest national park). However, the species has not made a stable recovery in this habitat since they were rediscovered in November 1948. In fact, the takahē population was at 400 before it was reduced to 118 in 1982 due to competition with Fiordland domestic deer. Conservationists noticed the threat that deer posed to takahē survival, and the national park has now implemented deer control with hunting by helicopter. The rediscovery of the South Island takahē caused great public interest. The New Zealand government took immediate action by closing off a remote part of Fiordland National Park to prevent the birds from being bothered. 
However, at the moment of rediscovery, there were different perspectives on how the bird should be conserved. At first, the Forest and Bird Society advocated for takahē to be left to work out their own "destiny", but many worried that the takahē would be incapable of making a comeback and thus become extinct like New Zealand's native huia. Interventionists then sought to relocate the takahē to "island sanctuaries" and breed them in captivity. Ultimately, no action was taken for nearly a decade due to a lack of resources and a desire to avoid conflict. The Burwood Takahē Breeding Centre was opened in 1985 at a site near Te Anau. The initial approach was to incubate eggs collected from nests and raise them by hand. Staff used hand-held puppets that replayed sounds of adult contact calls while feeding and interacting with the chicks, to help prevent the birds becoming "imprinted" on humans. Fibreglass replicas of adult birds were also placed in areas where the chicks slept. These methods were not used after 2011. Biologists from the Department of Conservation drew on their experience with designing remote island sanctuaries to establish a safe habitat for takahē and translocate birds onto Maud Island (Marlborough Sounds), Mana Island (near Wellington), Kapiti Island (Kapiti Coast), and Tiritiri Matangi Island (Hauraki Gulf). The success of these translocations has meant that the takahē's island metapopulation appears to have reached its carrying capacity, as revealed by the increasing ratio of non-breeding to breeding adults and declines in produced offspring. This may lead to reduced population growth rates and increased rates of inbreeding over time, thereby posing problems regarding the maintenance of genetic diversity and thus takahē survival in the long term. Recently, human intervention has been required to maintain the breeding success of the takahē, which is relatively low in the wild compared to other, less threatened species, so methods such as the removal of infertile eggs from nests and the captive rearing of chicks have been introduced to manage the takahē population. The Fiordland takahē population has a successful degree of reproductive output due to these management methods: the number of chicks per pairing with infertile egg removal and captive rearing is 0.66, compared to 0.43 for regions without any breeding management. It was reported that several takahē have accidentally been killed by hunters under contract to the Department of Conservation in the course of control measures aimed at reducing populations of the similar-looking pūkeko. One bird was killed in 2009 and four more—equivalent to 5% of the total population—in 2015. Future efforts for protection The original recovery strategies and goals set in the early 1980s, both long-term and short-term, are now well under way. The programme to move South Island takahē to predator-free island refuges, where the birds also receive supplementary feeding, began in 1984. Takahē can now be found on five small islands; Maud Island (Marlborough Sounds), Mana Island (off Wellington's west coast), Kapiti Island (off Wellington's west coast), Tiritiri Matangi Island (Hauraki Gulf) and Motutapu Island (Hauraki Gulf). The Department of Conservation also runs a captive breeding and rearing programme at the Burwood Breeding Centre near Te Anau which has up to 25 breeding pairs. Chicks are reared with minimal human contact. 
The offspring of the captive birds are used for new island releases and to add to the wild population in the Murchison Mountains. The Department of Conservation also manages wild takahē nests to boost the birds' recovery. An important management development has been the stringent control of deer in the Murchison Mountains and other takahē areas of Fiordland National Park. Following the introduction of deer hunting by helicopter, deer numbers have decreased dramatically and alpine vegetation is now recovering from years of heavy browsing. This improvement in its habitat has helped to increase takahē breeding success and survival. As of 2009, ongoing research aims to measure the impact of attacks by stoats and thus decide whether stoats are a significant problem requiring management. Population One of the original long-term goals was to establish a self-sustaining population of well over 500 South Island takahē. The population stood at 263 at the beginning of 2013. In 2016 the population rose to 306 takahē. In 2017 the population rose to 347—a 13 percent increase from the last year. In 2019, it increased to 418. As of 2023, the population is around 500.
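The population figures quoted in this article lend themselves to a simple compound-growth check; a minimal Python sketch using the counts given above (the five-year projection at the quoted 8 percent rate is purely illustrative):

```python
def annual_growth_rate(n_start, n_end, years):
    """Compound annual growth rate: (n_end / n_start) ** (1 / years) - 1."""
    return (n_end / n_start) ** (1 / years) - 1

def project(population, rate, years):
    """Project a population growing at a fixed annual rate."""
    return population * (1 + rate) ** years

# 2016 -> 2017: 306 to 347 birds, the roughly 13 percent rise mentioned above
print(round(100 * annual_growth_rate(306, 347, 1), 1))  # 13.4
# Illustrative projection: about 500 birds growing at 8 percent per year for five years
print(round(project(500, 0.08, 5)))                     # about 735
```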
Biology and health sciences
Gruiformes
Animals
711157
https://en.wikipedia.org/wiki/Extensive%20farming
Extensive farming
Extensive farming or extensive agriculture (as opposed to intensive farming) is an agricultural production system that uses small inputs of labour, fertilizers, and capital, relative to the land area being farmed.
Systems
Extensive farming most commonly means raising sheep and cattle in areas with low agricultural productivity, but it also includes the large-scale growing of wheat, barley, cooking oils and other grain crops in areas like the Murray-Darling Basin in Australia. Here, owing to the extreme age and poverty of the soils, yields per hectare are very low, but the flat terrain and very large farm sizes mean yields per unit of labour are high. Nomadic herding is an extreme example of extensive farming, where herders move their animals to use feed from occasional rainfalls.
Geography
Extensive farming is found in the mid-latitude sections of most continents, as well as in desert regions where water for cropping is not available. The nature of extensive farming means it requires less rainfall than intensive farming. The farm is usually large in comparison with the number of people working on it and the money spent on it. In 1957, most parts of Western Australia had pastures so poor that only one sheep to the square mile could be supported. Just as demand has led to the basic division between cropping and pastoral activities, these areas can also be subdivided depending on the region's rainfall, vegetation type, agricultural activity and many other parameters related to this data.
Advantages
Extensive farming has a number of advantages over intensive farming: Less labour per unit area is required to farm large areas, especially since expensive alterations to land (like terracing) are completely absent. Mechanisation can be used more effectively over large, flat areas. Greater efficiency of labour means generally lower product prices. Animal welfare is generally improved because animals are not kept in stifling conditions. There are lower requirements for inputs such as fertilizers. If animals are grazed on grassland native to the locality, problems with exotic species are less likely. The meat of the livestock will taste better and appeal to customers. The local environment and soil are not damaged by overuse of chemicals. The ecological impact is lower. Animals bred in larger areas develop more efficiently.
Disadvantages
Extensive farming can have the following problems: Yields tend to be much lower than with intensive farming in the short term. Large land requirements limit the habitat of wild species (in some cases, even very low stocking rates can be dangerous), as is the case with intensive farming. It is less profitable than intensive farming per unit of area. Extensive farming was once thought to produce more methane and nitrous oxide per kg of milk than intensive farming. One study estimated that the carbon "footprint" per billion kg (2.2 billion lb.) of milk produced in 2007 was 37 percent that of equivalent milk production in 1944. However, a more recent study by the Centre de coopération internationale en recherche agronomique pour le développement found that extensive livestock systems impact the environment less than intensive systems.
Technology
Agriculture_2
null
711783
https://en.wikipedia.org/wiki/Splint%20%28medicine%29
Splint (medicine)
A splint is defined as "a rigid or flexible device that maintains in position a displaced or movable part; also used to keep in place and protect an injured part" or as "a rigid or flexible material used to protect, immobilize, or restrict motion in a part". Splints can be used for injuries that are not severe enough to immobilize the entire injured structure of the body. For instance, a splint can be used for certain fractures, soft tissue sprains, tendon injuries, or injuries awaiting orthopedic treatment. A splint may be static, not allowing motion, or dynamic, allowing controlled motion. Splints can also be used to relieve pain in damaged joints. Splints are quick and easy to apply and do not require a plastering technique. Splints are often made of a flexible material combined with a firm, pole-like structure for stability, and they are commonly fastened with buckles or Velcro.
Uses
By the emergency medical services or by volunteer first responders, to temporarily immobilize a fractured limb before transportation; By allied health professionals such as occupational therapists, physiotherapists and orthotists, to immobilize an articulation (e.g. the knee) that can be freed while not standing (e.g. during sleep); By athletic trainers, to immobilize an injured bone or joint to facilitate safer transportation of the injured person; or By emergency department (ED) physicians, to stabilize fractures or sprains until a follow-up appointment with an orthopedist.
Types
Ankle stirrup – Used for the ankles. Finger splints – Used for the fingers. A "mallet" or baseball finger is a rupture of the extensor tendon, sometimes including a fracture. While surgery may be necessary, such an injury may heal if placed in a finger splint. Nasal splint. Posterior lower leg. Posterior full leg. Posterior elbow. Sugar tong – Used for the forearm or wrist. They are named "sugar-tong" due to their long, U-shaped characteristics, similar to a type of utensil used to pick up sugar cubes. Thumb spica – Used for the thumb. Ulnar gutter – Used for the forearm to the palm. Volar wrist splint – Used for the wrist. Wrist/arm splint – Used for the wrist or arm.
History
B.C. to A.D.
Splinting has been used since ancient times. Evidence suggests that splint usage dates back to 1500 B.C., with splints used to treat not only fractures but burns as well. These splints were made from materials like "leaves, reeds, bamboo, and bark padded with linen … [and] copper." Mummies from Egypt have been uncovered wearing splints for injuries that were obtained in their lifetimes. Hippocrates, who lived from 460 to 377 B.C., was well known for his discoveries and techniques for splinting. He created a "distraction splint" that was advanced for his time. The splint, made up of leather cuffs separated by slim wooden slats, worked to repair the fracture and realign the bones. Around 1000 A.D., Hippocrates' splinting technique continued to be practiced, using plants such as palm branches and cane halves. Flour, egg whites, and vegetable mixtures were combined to form plaster for creating splints. Most splints in ancient times were cast-like and made to immobilize an area of the body. This is illustrated by the Aztecs around 1400 A.D., who made splints with leaves, leather, and paste.
1500s
In the early 1500s, the spread of gunpowder weapons in Europe caused a serious decline in the market for armor making. Armor makers had to figure out how to make a living with the skills they had already acquired.
This led to the creation of braces, which commonly used metal. Armor makers were knowledgeable about external anatomy and joint alignment, making braces the obvious replacement for their armor making. In 1517, after this evolution of the armor trade, injuries were being treated with metal braces secured by screws. In 1592, the first written work on splints, by surgeon Hieronymus Fabricius, showed various drawings of armor-like splints for the entire body.
1700s–1800s
In the mid-1700s, doctors and mechanics worked with each other to create splints for certain injuries. Surgeons needed these mechanics to design and build the splints for them. Most splints were made of metal. Plaster of Paris, a white powdery substance used mostly for casts and molds in the form of a quick-setting paste with water, began to be used for immobilizing splints. This method was not a popular way of splinting, as it took too long to dry and suitable fabric was sparse. In the 1800s it began to be recognized that rehabilitation after an injury was important, and orthopedics started to become a separate field from general surgery. The famous British surgeon Hugh Owen Thomas created specialty splints that were cheap and well suited to injuries undergoing rehabilitation. By 1883, mechanics and surgeons had separated due to class issues, creating two distinct trades that shaped the way braces were created and distributed. Around 1888, F. Gustav Ernst, a dedicated mechanic, released a book illustrating upper body splints. In 1899, orthopedic surgeon Alessandro Codivilla followed suit and published a book explaining the importance of using surgical procedures to achieve better results with splints.
Technology
Devices
12474403
https://en.wikipedia.org/wiki/Climate%20change%20denial
Climate change denial
Climate change denial (also global warming denial) is a form of science denial characterized by rejecting, refusing to acknowledge, disputing, or fighting the scientific consensus on climate change. Those promoting denial commonly use rhetorical tactics to give the appearance of a scientific controversy where there is none. Climate change denial includes unreasonable doubts about the extent to which climate change is caused by humans, its effects on nature and human society, and the potential of adaptation to global warming by human actions. To a lesser extent, climate change denial can also be implicit when people accept the science but fail to reconcile it with their belief or action. Several studies have analyzed these positions as forms of denialism, pseudoscience, or propaganda. Many issues that are settled in the scientific community, such as human responsibility for climate change, remain the subject of politically or economically motivated attempts to downplay, dismiss or deny them—an ideological phenomenon academics and scientists call climate change denial. Climate scientists, especially in the United States, have reported government and oil-industry pressure to censor or suppress their work and hide scientific data, with directives not to discuss the subject publicly. The fossil fuels lobby has been identified as overtly or covertly supporting efforts to undermine or discredit the scientific consensus on climate change. Industrial, political and ideological interests organize activity to undermine public trust in climate science. Climate change denial has been associated with the fossil fuels lobby, the Koch brothers, industry advocates, ultraconservative think tanks, and ultraconservative alternative media, often in the U.S. More than 90% of papers that are skeptical of climate change originate from right-wing think tanks. Climate change denial is undermining efforts to act on or adapt to climate change, and exerts a powerful influence on the politics of climate change. In the 1970s, oil companies published research that broadly concurred with the scientific community's view on climate change. Since then, for several decades, oil companies have been organizing a widespread and systematic climate change denial campaign to seed public disinformation, a strategy that has been compared to the tobacco industry's organized denial of the hazards of tobacco smoking. Some of the campaigns are even carried out by the same people who previously spread the tobacco industry's denialist propaganda.
Terminology
Climate change denial refers to denial, dismissal, or doubt of the scientific consensus on the rate and extent of climate change, its significance, or its connection to human behavior, in whole or in part. Climate denial is a form of science denial. It can also take pseudoscientific forms. The terms climate skeptics or contrarians are nowadays used with the same meaning as climate change deniers, even though deniers themselves usually prefer not to be called deniers, a preference that sows confusion as to their intentions. The terminology is debated: most of those actively rejecting the scientific consensus use the terms skeptic and climate change skepticism, and only a few have expressed preference for being described as deniers. But the word "skepticism" is incorrectly used, as scientific skepticism is an intrinsic part of scientific methodology. In fact, all scientists adhere to scientific skepticism as part of the scientific process that demands continuing questioning.
Both options are problematic, but climate change denial has become more widely used than skepticism. The term contrarian is more specific but less frequently used. In academic literature and journalism, the terms climate change denial and climate change deniers have well-established usage as descriptive terms without any pejorative connotation. The terminology evolved and emerged in the 1990s. By 1995 the word "skeptic" was being used specifically for the minority who publicized views contrary to the scientific consensus. This small group of scientists presented their views in public statements and the media rather than to the scientific community. Journalist Ross Gelbspan said in 1995 that industry had engaged "a small band of skeptics" to confuse public opinion in a "persistent and well-funded campaign of denial". His 1997 book The Heat is On may have been the first to concentrate specifically on the topic. In it, Gelbspan discusses a "pervasive denial of global warming" in a "persistent campaign of denial and suppression" involving "undisclosed funding of these 'greenhouse skeptics'" with "the climate skeptics" confusing the public and influencing decision makers. In December 2014, an open letter from the Committee for Skeptical Inquiry called on the media to stop using the term skepticism when referring to climate change denial. It contrasted scientific skepticism—which is "foundational to the scientific method"—with denial—"the a priori rejection of ideas without objective consideration"—and the behavior of those involved in political attempts to undermine climate science. It said: "Not all individuals who call themselves climate change skeptics are deniers. But virtually all deniers have falsely branded themselves as skeptics. By perpetrating this misnomer, journalists have granted undeserved credibility to those who reject science and scientific inquiry." In 2015, The New York Times's public editor said that the Times was increasingly using denier when "someone is challenging established science", but assessing this on an individual basis with no fixed policy, and would not use the term when someone was "kind of wishy-washy on the subject or in the middle". The executive director of the Society of Environmental Journalists said that while there was reasonable skepticism about specific issues, she felt that "denier" was "the most accurate term when someone claims there is no such thing as global warming, or agrees that it exists but denies that it has any cause we could understand or any impact that could be measured". A petition by climatetruth.org asked signers to "Tell the Associated Press: Establish a rule in the AP Stylebook ruling out the use of 'skeptic' to describe those who deny scientific facts". In September 2015, the Associated Press announced "an addition to AP Stylebook entry on global warming" that advised "to describe those who don't accept climate science or dispute the world is warming from human-made forces, use 'climate change doubters' or 'those who reject mainstream climate science'. Avoid use of 'skeptics' or 'deniers'". In May 2019, The Guardian also rejected use of the term "climate skeptic" in favor of "climate science denier". In addition to explicit denial, people have also shown implicit denial by accepting the scientific consensus but failing to "translate their acceptance into action". This type of denial is also called soft climate change denial. 
Categories and tactics
In 2004, German climate scientist Stefan Rahmstorf described how the media give the misleading impression that climate change is still disputed within the scientific community, attributing this impression to climate change skeptics' PR efforts. He identified different positions that climate skeptics argue, which he used as a taxonomy of climate change skepticism. Later the model was also applied to denial:
Trend skeptics or deniers (who claim that no significant warming is taking place): "Given that the warming is now evident even to laypeople, the trend skeptics are a gradually vanishing breed. They [...] claim that the warming trend measured by weather stations is an artefact due to urbanisation around those stations (urban heat island effect)."
Attribution skeptics or deniers (who accept the climate change trends but claim there are natural causes for this, not human-made ones): "A few of them even deny that the rise in the atmospheric CO2 content is anthropogenic; they claim that the atmospheric CO2 is released from the ocean by natural processes."
Impact skeptics or deniers (who think climate change is harmless or even beneficial, for example the "potential extension of agriculture into higher latitudes").
Sometimes consensus denial is added, for people who question the existence of the scientific consensus on anthropogenic climate change. The National Center for Science Education describes climate change denial as disputing differing points in the scientific consensus, a sequential range of arguments from denying the occurrence of climate change, accepting that but denying any significant human contribution, accepting these but denying scientific findings on how this would affect nature and human society, to accepting all these but denying that humans can mitigate or reduce the problems. James L. Powell provides a more extended list, as does climatologist Michael E. Mann in "six stages of denial", a ladder model whereby deniers have over time conceded acceptance of points, while retreating to a position that still rejects the mainstream consensus.
Climate change denial is a form of denialism. Chris and Mark Hoofnagle have defined denialism in this context as the use of rhetorical devices "to give the appearance of legitimate debate where there is none, an approach that has the ultimate goal of rejecting a proposition on which a scientific consensus exists." This process characteristically uses one or more of the following tactics:
Allegations that scientific consensus involves conspiring to fake data or suppress the truth: a climate change conspiracy theory.
Fake experts, or individuals with views at odds with established knowledge, at the same time marginalizing or denigrating published topic experts. Like the manufactured doubt over smoking and health, a few contrarian scientists oppose the climate consensus, some of them the same people.
Selectivity, such as cherry-picking atypical or even obsolete papers, in the same way that the MMR vaccine controversy was based on one paper: examples include discredited ideas of the medieval warm period.
Unworkable demands of research, claiming that any uncertainty invalidates the field or exaggerating uncertainty while rejecting probabilities and mathematical models.
Logical fallacies.
Discussing specific aspects of climate change science
Some politicians and climate change denial groups say that because carbon dioxide (CO2) is only a trace gas in the atmosphere (0.04%), it cannot cause climate change.
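A rough back-of-the-envelope calculation illustrates why the "trace gas" framing is misleading. The sketch below uses the widely cited logarithmic approximation for CO2 radiative forcing (Myhre et al., 1998) together with a representative climate sensitivity of roughly 0.8 K per W m^-2; these are illustrative estimates introduced here, not figures taken from this article.

\[
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
\]
\[
\text{Doubling } \mathrm{CO_2}\ (C = 2C_0):\quad \Delta F_{2\times} \approx 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
\]
\[
\Delta T_{2\times} \approx 0.8\ \mathrm{K\,(W\,m^{-2})^{-1}} \times 3.7\ \mathrm{W\,m^{-2}} \approx 3\ \mathrm{K}
\]

So although CO2 makes up only a small fraction of the atmosphere by volume, doubling its concentration perturbs Earth's energy balance by several watts per square metre, enough in this simple estimate to produce roughly 3 °C of eventual warming.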
Scientists have, in fact, known for over a century that even this small proportion has a significant warming effect, and doubling the proportion leads to a large temperature increase. Some groups allege that water vapor is a more significant greenhouse gas, and is left out of many climate models. But while water vapor is a greenhouse gas, its very short atmospheric lifetime (about 10 days) compared to that of CO2 (hundreds of years) means that CO2 is the primary driver of increasing temperatures; water vapor acts as a feedback, not a forcing, mechanism. Climate denial groups may also argue that global warming has stopped, that a global warming hiatus is in effect, or that global temperatures are actually decreasing, leading to global cooling. These arguments are based on short-term fluctuations and ignore the long-term pattern. Some groups and prominent deniers such as William Happer argue that there is a greenhouse gas saturation effect that significantly decreases the warming potential of further gases released into the atmosphere. Such an effect does exist in some form, as Happer's research demonstrates, but is likely negligible with respect to net global warming. Climate change denial literature often features the suggestion that we should wait for better technologies before addressing climate change, when they will be more affordable and effective.
Playing up the potential non-human causes
Climate denial groups often point to natural variability, such as sunspots and cosmic rays, to explain the warming trend. According to these groups, there is natural variability that will abate over time, and human influence has little to do with it. But climate models already take these factors into account. The scientific consensus is that they cannot explain the observed warming trend.
Playing up flawed studies
In 2007, the Heartland Institute published an article titled "500 Scientists Whose Research Contradicts Man-Made Global Warming Scares" by Dennis T. Avery, a food policy analyst at the Hudson Institute. Avery's list was immediately called into question for misunderstanding and distorting the conclusions of many of the named studies and citing outdated, flawed studies that had long been abandoned. Many of the scientists on the list demanded their names be removed. At least 45 of them had no idea they were included as "co-authors" and disagreed with the article's conclusions. The Heartland Institute refused these requests, saying that the scientists "have no right—legally or ethically—to demand that their names be removed from a bibliography composed by researchers with whom they disagree".
Disputing IPCC reports and processes
Deniers have generally attacked either the IPCC's processes, its scientists, or the synthesis and executive summaries; the full reports attract less attention. In 1996, climate change denier Frederick Seitz criticized the 1995 IPCC Second Assessment Report, alleging corruption in the peer-review process. Scientists rejected his assertions; the presidents of the American Meteorological Society and University Corporation for Atmospheric Research described his claims as part of a "systematic effort by some individuals to undermine and discredit the scientific process". In 2005, the House of Lords Economics Committee wrote, "We have some concerns about the objectivity of the IPCC process, with some of its emissions scenarios and summary documentation apparently influenced by political considerations."
It doubted the high emission scenarios and said that the IPCC had "played-down" what the committee called "some positive aspects of global warming". The main statements of the House of Lords Economics Committee were rejected in the response made by the United Kingdom government. On 10 December 2008, the U.S. Senate Committee on Environment and Public Works minority members released a report under the leadership of the Senate's most vocal global warming denier, Jim Inhofe. It says it summarizes scientific dissent from the IPCC. Many of its statements about the numbers of people listed in the report, whether they are actually scientists, and whether they support the positions attributed to them, have been disputed. Inhofe also said that "some parts of the IPCC process resembled a Soviet-style trial, in which the facts are predetermined, and ideological purity trumps technical and scientific rigor." Creating doubts about scientific publishing processes Some climate change deniers promote conspiracy theories alleging that the scientific consensus is illusory, or that climatologists are acting out of their own financial interests by causing undue alarm about a changing climate. Some climate change deniers claim that there is no scientific consensus on climate change, that any evidence for a scientific consensus is faked, or that the peer-review process for climate science papers has become corrupted by scientists seeking to suppress dissent. No evidence of such conspiracies has been presented. In fact, much of the data used in climate science is publicly available, contradicting allegations that scientists are hiding data or stonewalling requests. Some climate change deniers assert that the scientific consensus on climate change is based on conspiracies to produce manipulated data or suppress dissent. It is one of a number of tactics used in climate change denial to attempt to manufacture political and public controversy disputing this consensus. These people typically allege that, through worldwide acts of professional and criminal misconduct, the science behind climate change has been invented or distorted for ideological or financial reasons. They promote harmful conspiracy theories alleging that scientists and institutions involved in global warming research are part of a global scientific conspiracy or engaged in a manipulative hoax. The Great Global Warming Swindle is a 2007 British polemical documentary film directed by Martin Durkin that denies the scientific consensus about the reality and causes of climate change, justifying this by suggesting that climatology is influenced by funding and political factors. The film strongly opposes the scientific consensus on climate change. It argues that the consensus on climate change is the product of "a multibillion-dollar worldwide industry: created by fanatically anti-industrial environmentalists; supported by scientists peddling scare stories to chase funding; and propped up by complicit politicians and the media". The programme's publicity materials claim that man-made global warming is "a lie" and "the biggest scam of modern times." The film received strong criticism from many scientists and others. Journalist George Monbiot called it "the same old conspiracy theory that we've been hearing from the denial industry for the past ten years". 
The climate deniers involved in the Climatic Research Unit email controversy ("Climategate") in 2009 claimed that researchers faked the data in their research publications and suppressed their critics in order to receive more funding (i.e. taxpayer money). Eight committees investigated these allegations and published reports, each finding no evidence of fraud or scientific misconduct. According to the Muir Russell report, the scientists' "rigor and honesty as scientists are not in doubt", the investigators "did not find any evidence of behavior that might undermine the conclusions of the IPCC assessments", but there had been "a consistent pattern of failing to display the proper degree of openness." The scientific consensus that climate change is occurring as a result of human activity remained unchanged at the end of the investigations. Being "lukewarm" or "skeptical" In 2012, Clive Hamilton published the essay "Climate change and the soothing message of luke-warmism". He defined luke-warmists as "those who appear to accept the body of climate science but interpret it in a way that is least threatening: emphasising uncertainties, playing down dangers, and advocating a slow and cautious response. They are politically conservative and anxious about the threat to the social structure posed by the implications of climate science. Their 'pragmatic' approach is therefore alluring to political leaders looking for a justification for policy minimalism." He cited Ted Nordhaus and Michael Shellenberger of the Breakthrough Institute, and also Roger A. Pielke Jr., Daniel Sarewitz, Steve Rayner, Mike Hulme and "the pre-eminent luke-warmist" Danish economist Bjørn Lomborg. Climate change skepticism, while in some cases professing to do research on climate change, has focused instead on influencing the opinion of the public, legislators and the media, in contrast to legitimate science. Pope Francis groups together four types of respondents rejecting climate change: those who "deny, conceal, gloss over or relativize the issue". Pushing for adaptation only The conservative National Center for Policy Analysis, whose "Environmental Task Force" contains a number of climate change deniers, including Sherwood Idso and S. Fred Singer, has said, "The growing consensus on climate change policies is that adaptation will protect present and future generations from climate-sensitive risks far more than efforts to restrict emissions." The adaptation-only plan is also endorsed by oil companies like ExxonMobil. According to a Ceres report, "ExxonMobil's plan appears to be to stay the course and try to adjust when changes occur. The company's plan is one that involves adaptation, as opposed to leadership." The George W. Bush administration also voiced support for an adaptation-only policy in 2002. "In a stark shift for the Bush administration, the United States has sent a climate report [U.S. Climate Action Report 2002] to the United Nations detailing specific and far-reaching effects it says global warming will inflict on the American environment. In the report, the administration also for the first time places most of the blame for recent global warming on human actions—mainly the burning of fossil fuels that send heat-trapping greenhouse gases into the atmosphere." The report "does not propose any major shift in the administration's policy on greenhouse gases. Instead it recommends adapting to inevitable changes instead of making rapid and drastic reductions in greenhouse gases to limit warming." 
This position apparently precipitated a similar shift in emphasis at the COP 8 climate talks in New Delhi several months later; "The shift satisfies the Bush administration, which has fought to avoid mandatory cuts in emissions for fear it would harm the economy. 'We're welcoming a focus on more of a balance on adaptation versus mitigation', said a senior American negotiator in New Delhi. 'You don't have enough money to do everything.'" Some find this shift and attitude disingenuous and indicative of a bias against prevention (i.e. reducing emissions/consumption) and toward prolonging the oil industry's profits at the environment's expense. In an article addressing the supposed economic hazards of addressing climate change, writer and environmental activist George Monbiot wrote: "Now that the dismissal of climate change is no longer fashionable, the professional deniers are trying another means of stopping us from taking action. It would be cheaper, they say, to wait for the impacts of climate change and then adapt to them".
Delaying climate change mitigation measures
Climate change deniers often debate whether action (such as the restrictions on the use of fossil fuels to reduce carbon-dioxide emissions) should be taken now or in the near future. They fear the economic ramifications of such restrictions. For example, in a 1998 speech, a staff member of the Cato Institute, a libertarian think tank, argued that emission controls' negative economic effects outweighed their environmental benefits. Climate change deniers tend to argue that even if global warming is caused solely by the burning of fossil fuels, restricting their use would damage the world economy more than the increases in global temperature. Conversely, the general consensus is that early action to reduce emissions would help avoid much greater economic costs later, and reduce the risk of catastrophic, irreversible change. Earlier, climate change deniers' online YouTube content focused on denying global warming, or saying such warming is not caused by humans burning fossil fuels. As such denials became untenable, content shifted to asserting that climate solutions are unworkable, that global warming is harmless or even beneficial, and that the environmental movement is unreliable. A 2016 article in Science made the case that opposition to climate policy was beginning to take a "rhetorical shift away from outright skepticism" and called this neoskepticism. Rather than denying the existence of global warming, neoskeptics instead "question the magnitude of the risks and assert that reducing them has more costs than benefits." According to the authors, the emergence of neoskepticism "heightens the need for science to inform decision making under uncertainty and to improve communication and education." There is a range of possible mitigation policies. Disagreement over the sufficiency, viability, or desirability of a given policy is not necessarily neoskepticism. But neoskepticism is marked by failure to appreciate the increased risks associated with delayed action. Gavin Schmidt has called neoskepticism a form of confirmation bias and the tendency to always take "as gospel the lowest estimate of a plausible range". Neoskeptics err on the side of the least disruptive projections and least active policies and, as such, neglect or misapprehend the full spectrum of risks associated with global warming.
In political terms, soft climate denial can stem from concerns about the economics and economic impacts of climate change, particularly the concern that strong measures to combat global warming or mitigate its impacts will seriously inhibit economic growth.
Promoting conspiracy theories
Climate change denial is commonly rooted in conspiratorial thinking, in which people misattribute events to a powerful group's secret plot or plan. People with certain cognitive tendencies are also more drawn than others to conspiracy theories about climate change. Conspiratorial beliefs are more predominantly found in narcissistic people and those who consistently look for meanings or patterns in their world, including believers in paranormal activity. Belief in climate change conspiracy theories is also linked to lower levels of education and analytic thinking. Scientists are investigating which factors associated with conspiracy belief can be influenced and changed. They have identified "uncertainty, feelings of powerlessness, political cynicism, magical thinking, and errors in logical and probabilistic reasoning". In 2012, researchers found that belief in other conspiracy theories was associated with being more likely to endorse climate change denial. Examples of science-related conspiracy theories that some people believe include that aliens exist, childhood vaccines are linked to autism, Bigfoot is real, the government "adds fluoride to drinking water for 'sinister' purposes", and the moon landing was faked. Examples of alleged climate change conspiracies include:
Aiming at New World Order: Senator James Inhofe, a Republican from Oklahoma, suggested in 2006 that supporters of the Kyoto Protocol such as Jacques Chirac are aiming at global governance. In his speech, Inhofe said: "So, I wonder: are the French going to be dictating U.S. policy?" William M. Gray also claimed in 2006 that scientists support the scientific consensus on climate change because they were promoted by government leaders and environmentalists seeking world government. He added that its purpose was to exercise political influence, to try to introduce world government, and to control people.
To promote other types of energy sources: Some have claimed that the "threat of global warming is an attempt to promote nuclear power". Another claim is that "because many people have invested in renewable energy companies, they stand to lose a lot of money if global warming is shown to be a myth. According to this theory, environmental groups therefore bribe climate scientists to doctor their data so that they are able to secure their financial investment in green energy."
Psychology
A study published in PLOS One in 2024 found that even a single repetition of a claim was sufficient to increase the perceived truth of both climate science-aligned claims and climate change skeptic/denial claims—"highlighting the insidious effect of repetition". This effect was found even among climate science endorsers.
Connections to other debates
Links with other environmental issues
Many climate change deniers have disagreed, in whole or part, with the scientific consensus regarding other issues, particularly those relating to environmental risks, such as ozone depletion, DDT, and passive smoking. In the 1990s, the Marshall Institute began campaigning against increased regulations on environmental issues such as acid rain, ozone depletion, second-hand smoke, and the dangers of DDT.
In each case their argument was that the science was too uncertain to justify any government intervention, a strategy borrowed from earlier efforts to downplay the health effects of tobacco in the 1980s. This campaign would continue for the next two decades.
Links with nationalism and right-wing groups
In 2023, an increase in climate change denial was noted, particularly among supporters of the far right. It has been suggested that climate change can conflict with a nationalistic view because it is "unsolvable" at the national level and requires collective action between nations or between local communities, and that therefore populist nationalism tends to reject the science of climate change. The UK Independence Party's policy on climate change has been influenced by climate change denier Christopher Monckton and by its energy spokesman Roger Helmer, who has said, "It is not clear that the rise in atmospheric CO2 is anthropogenic." Jerry Taylor of the Niskanen Center posits that climate change denial is an important component of Trumpian historical consciousness, and "plays a significant role in the architecture of Trumpism as a developing philosophical system". Though climate change denial was apparently waning circa 2021, some right-wing nationalist organizations have adopted a theory of "environmental populism" advocating that natural resources be preserved for a nation's existing residents, to the exclusion of immigrants. Other such right-wing organizations have contrived new "green wings" that falsely assert that refugees from poor nations cause environmental pollution and climate change and should therefore be excluded. A study published in PLOS Climate studied two forms of national identity—defensive or "national narcissism" and "secure national identification"—for their correlation to support for policies to mitigate climate change and transition to renewable energy. The authors defined national narcissism as "a belief that one's national group is exceptional and deserves external recognition underlain by unsatisfied psychological needs". They defined secure national identification as "reflect[ing] feelings of strong bonds and solidarity with one's ingroup members, and sense of satisfaction in group membership". The researchers concluded that secure national identification tends to support policies promoting renewable energy, while national narcissism is inversely correlated with support for such policies—except to the extent that such policies, as well as greenwashing, enhance the national image. Right-wing political orientation, which may indicate susceptibility to climate conspiracy beliefs, was also found to be negatively correlated with support for genuine climate mitigation policies.
Conservative views
One worldview that often leads to climate change denial is belief in free enterprise capitalism. The "freedom of the commons" (the tragedy of the commons), that is, the freedom to use natural resources as a public good as practiced in free enterprise capitalism, destroys important ecosystems and their functions, so having a stake in this worldview does not correlate with climate change mitigation behavior. Political worldview plays an important role in environmental policy and action. Liberals tend to focus on environmental risks, while conservatives focus on the benefits of economic development. Because of this difference, conflicting opinions on the acceptance of climate change arise.
A study of climate change denial indicators in public opinion data from ten Gallup surveys from 2001 to 2010 shows that conservative white men in the U.S. are significantly more likely to deny climate change than other Americans. Conservative white men who report understanding climate change very well are even more likely to deny climate change. Another reason for the discrepancy in climate change denial between liberals and conservatives is that "contemporary environmental discourse is based largely on moral concerns related to harm and care, which are more deeply held by liberals than by conservatives"; if the discourse is instead framed using moral concerns related to purity that are more deeply held by conservatives, the discrepancy is resolved. In the U.S., climate change denial largely correlates with political affiliation. This is partially because Democrats focus more on tighter government regulations and taxation, which are the basis for most environmental policy. Political affiliation also affects how different people interpret the same facts. More highly educated people are more likely to rely on their own interpretation and political ideology rather than on scientists' opinions. Therefore, political worldviews override expert opinion on the interpretation of climate facts and evidence of anthropogenic climate change. Affiliation with a political group, especially in the U.S., is an important personal and social identity for many. Because of this, many people hold the popular values of their political affiliation, regardless of their personal beliefs, so as not to be ostracized by the group.
History
U.S. fossil fuel companies have known about global warming since at least the 1960s. In 1966, a coal industry research organization, Bituminous Coal Research Inc., published its finding that if then prevailing trends of coal consumption continued, "the temperature of the earth's atmosphere will increase" and "vast changes in the climates of the earth will result. [...] Such changes in temperature will cause melting of the polar icecaps, which, in turn, would result in the inundation of many coastal cities, including New York and London." In a discussion following this paper in the same publication, a combustion engineer for Peabody Coal, now Peabody Energy, the world's largest coal supplier, added that the coal industry was merely "buying time" before additional government air pollution regulations would be promulgated to clean the air. Nevertheless, the coal industry publicly advocated for decades thereafter the position that increased carbon dioxide in the atmosphere is beneficial for the planet. In response to increasing public awareness of the greenhouse effect in the 1970s, conservative reaction built up, denying environmental concerns that could lead to government regulation. In 1977, the first Secretary of Energy, James Schlesinger, suggested President Jimmy Carter take no action regarding a climate change memo, citing uncertainty. During the presidency of Ronald Reagan, global warming became a political issue, with immediate plans to cut spending on environmental research, particularly climate-related, and stop funding for monitoring. Congressman Al Gore was aware of the developing science: he joined others in arranging congressional hearings from 1981 onward, with testimony from scientists including Roger Revelle, Stephen Schneider, and Wallace Smith Broecker.
An Environmental Protection Agency (EPA) report in 1983 said global warming was "not a theoretical problem but a threat whose effects will be felt within a few years", with potentially "catastrophic" consequences. The Reagan administration called the report "alarmist" and the dispute was widely covered. Public attention turned to other issues, then the 1985 finding of a polar ozone hole brought a swift international response. To the public, this was related to climate change and the possibility of effective action, but news interest faded. Public attention was renewed amid summer droughts and heat waves when James Hansen testified to a Congressional hearing on 23 June 1988, saying with high confidence that long-term warming was underway with severe warming likely within the next 50 years, and warning of likely storms and floods. There was increasing media attention: the scientific community had reached a broad consensus that the climate was warming, human activity was very likely the primary cause, and there would be significant consequences if the trend was not curbed. These facts encouraged discussion about new environmental regulations, which the fossil fuel industry opposed. From 1989 onward, industry-funded organizations, including the Global Climate Coalition and the George C. Marshall Institute, sought to spread doubt, in a strategy already developed by the tobacco industry. A small group of scientists opposed to the consensus on global warming became politically involved, and with support from conservative political interests, began publishing in books and the press rather than in scientific journals. Historian Spencer Weart identifies this period as the point where skepticism about basic aspects of climate science was no longer justified, and those spreading mistrust about these issues became deniers. As the scientific community and new data increasingly refuted their arguments, deniers turned to political arguments, making personal attacks on scientists' reputations, and promoting ideas of global warming conspiracies. With the 1989 fall of communism, the attention of U.S. conservative think tanks, which had been organized in the 1970s as an intellectual counter-movement to socialism, turned from the "red scare" to a "green scare", seeing environmentalism as a threat to their aims of private property, free trade market economies, and global capitalism. They used environmental skepticism to promote denial of environmental problems such as loss of biodiversity and climate change. The campaign to spread doubt continued into the 1990s, including an advertising campaign funded by coal industry advocates intended to "reposition global warming as theory rather than fact". There was also a 1998 proposal by the American Petroleum Institute to recruit scientists to convince politicians, the media, and the public that climate science was too uncertain to warrant environmental regulation. In 1998, journalist Ross Gelbspan noted that his fellow journalists accepted that global warming was occurring, but were in "'stage-two' denial of the climate crisis", unable to accept the feasibility of solutions to the problem. His book, Boiling Point, published in 2004, detailed the fossil-fuel industry's campaign to deny climate change and undermine public confidence in climate science.
In Newsweek's August 2007 cover story "The Truth About Denial", Sharon Begley reported that "the denial machine is running at full throttle", and that this "well-coordinated, well-funded campaign" by contrarian scientists, free-market think tanks, and industry had "created a paralyzing fog of doubt around climate change."
Similarities with tobacco industry tactics
In 2006, George Monbiot published an article about similarities between the methods of groups funded by Exxon and those of the tobacco giant Philip Morris, including direct attacks on peer-reviewed science and attempts to create public controversy and doubt. The approach to downplay climate change's significance was copied from tobacco lobbyists, who attempted to prevent or delay the introduction of regulation in the face of scientific evidence linking tobacco to lung cancer. They attempted to discredit the research by creating doubt, manipulating debate, discrediting the scientists involved, disputing their findings, and creating and maintaining an apparent controversy by promoting claims that contradicted scientific research. Doubt shielded the tobacco industry from litigation and regulation for decades. For example, in 1992 an EPA report linked secondhand smoke with lung cancer. In response, the tobacco industry engaged the APCO Worldwide public relations company, which set out a strategy of astroturfing campaigns to cast doubt on the science by linking smoking anxieties with other issues, including global warming, in order to turn public opinion against calls for government intervention. The campaign depicted public concerns as "unfounded fears" supposedly based only on "junk science" in contrast to their "sound science", and operated through front groups, primarily the Advancement of Sound Science Center (TASSC) and its Junk Science website, run by Steven Milloy. A tobacco company memo read, "Doubt is our product since it is the best means of competing with the 'body of fact' that exists in the mind of the general public. It is also the means of establishing a controversy." During the 1990s, the tobacco campaign died away, and TASSC began taking funding from oil companies, including Exxon. Its website became central in distributing "almost every kind of climate-change denial that has found its way into the popular press." Monbiot wrote that TASSC "has done more damage to the campaign to halt [climate change] than any other body" by trying to manufacture the appearance of a grassroots movement against "unfounded fear" and "over-regulation".
Republican Party in the United States
The Republican Party in the United States is unique among conservative political parties in the Western world in denying anthropogenic climate change. In 1994, according to a leaked memo, the Republican strategist Frank Luntz advised members of the Republican Party, with regard to climate change, that "you need to continue to make the lack of scientific certainty a primary issue" and "challenge the science" by "recruiting experts who are sympathetic to your view". (In 2006, Luntz said he still believes "back [in] '97, '98, the science was uncertain", but now agreed with the scientific consensus.) From 2008 to 2017, the Republican Party went from "debating how to combat human-caused climate change to arguing that it does not exist".
In 2011, "more than half of the Republicans in the House and three-quarters of Republican senators" said "that the threat of global warming, as a human-made and highly threatening phenomenon, is at best an exaggeration and at worst an utter 'hoax. In 2014, more than 55% of congressional Republicans were reported to be climate change deniers. According to PolitiFact in May 2014, Jerry Brown's statement that "virtually no Republican" in Washington accepts climate change science was "mostly true"; PolitiFact counted "eight out of 278, or about 3 percent" of Republican members of Congress who "accept the prevailing scientific conclusion that global warming is both real and man-made." In 2005, The New York Times reported that Philip Cooney, a former fossil fuel lobbyist and "climate team leader" at the American Petroleum Institute and President George W. Bush's chief of staff of the Council on Environmental Quality, had "repeatedly edited government climate reports in ways that play down links between such emissions and global warming, according to internal documents". Sharon Begley reported in Newsweek that Cooney "edited a 2002 report on climate science by sprinkling it with phrases such as 'lack of understanding' and 'considerable uncertainty'." Cooney reportedly removed an entire section on climate in one report, whereupon another lobbyist sent him a fax saying "You are doing a great job." In the 2016 U.S. election cycle, every Republican presidential candidate, and the Republican leader in the U.S. Senate, questioned or denied climate change, and opposed U.S. government steps to address it. In 2016, Aaron McCright argued that anti-environmentalism—and climate change denial specifically—had expanded in the U.S. to become "a central tenet of the current conservative and Republican identity". In a 2017 interview, United States Secretary of Energy Rick Perry acknowledged the existence of climate change and impact from humans, but said that he did not agree that carbon dioxide was its primary driver, pointing instead to "the ocean waters and this environment that we live in". The American Meteorological Society responded in a letter to Perry that it is "critically important that you understand that emissions of carbon dioxide and other greenhouse gases are the primary cause", pointing to conclusions of scientists worldwide. Climate denial has started to decrease among the Republican Party leadership toward acknowledgment that "the climate is changing"; a 2019 study by several major think tanks called the climate right "fragmented and underfunded". Florida Republican Tom Lee described people's emotional impact and reactions to climate change, saying: "I mean, you have to be the Grim Reaper of reality in a world that isn't real fond of the Grim Reaper. That's why I use the term 'emotionally shut down', because I think I think you lose people at hello a lot times in the Republican conversation over this." When a moderator at the August 23, 2023, Republican presidential debate asked the candidates to raise their hands if they believed human behavior is causing climate change, none did. Entrepreneur Vivek Ramaswamy said, "the climate change agenda is a hoax" and that "more people are dying of climate change policies than they actually are of climate change"; none of his competitors challenged him directly on climate. After investigating Ramaswamy's latter claim, a Washington Post fact check found no supporting evidence. 
Denial networks
Conservative and libertarian think tanks
A 2000 article explored the connection between conservative think tanks and climate change denial. Research found that specific groups were marshaling skepticism against climate change; a 2008 University of Central Florida study found that 92% of "environmentally skeptical" literature published in the U.S. was partly or wholly affiliated with self-proclaimed conservative think tanks. In 2013, the Center for Media and Democracy reported that the State Policy Network (SPN), an umbrella group of 64 U.S. think tanks, had been lobbying on behalf of major corporations and conservative donors to oppose climate change regulation. Conservative and libertarian think tanks in the U.S., such as The Heritage Foundation, Marshall Institute, Cato Institute, and the American Enterprise Institute, were significant participants in lobbying attempts seeking to halt or eliminate environmental regulations. Between 2002 and 2010, the combined annual income of 91 climate change counter-movement organizations—think tanks, advocacy groups and industry associations—was roughly $900 million. During the same period, billionaires secretively donated nearly $120 million (£77 million) via the Donors Trust and Donors Capital Fund to more than 100 organizations seeking to undermine the public perception of the science on climate change.
Publishers, websites and networks
In November 2021, a study by the Center for Countering Digital Hate identified "ten fringe publishers" that together were responsible for nearly 70 percent of Facebook user interactions with content that denied climate change. Facebook said the percentage was overstated and called the study misleading. The "toxic ten" publishers were: Breitbart News, The Western Journal, Newsmax, Townhall, Media Research Center, The Washington Times, The Federalist, The Daily Wire, RT (TV network), and The Patriot Post. The Rebel Media and its director, Ezra Levant, have promoted climate change denial and oil sands extraction in Alberta. Willard Anthony Watts is an American blogger who runs Watts Up With That?, a climate change denial blog. A piece of research from 2015 identified 4,556 people with overlapping network ties to 164 organizations that were responsible for most efforts to downplay the threat of climate change in the U.S.
Publications for school children
According to documents leaked in February 2012, The Heartland Institute was developing a curriculum for use in schools that frames climate change as a scientific controversy. In 2017, Glenn Branch, deputy director of the National Center for Science Education (NCSE), wrote, "the Heartland Institute is continuing to inflict its climate change denial literature on science teachers across the country". Each significant claim in that literature was rated for accuracy by scientists who were experts on that topic; they found that claims in the "Key Findings" section were "incorrect, misleading, based on flawed logic, or simply factually inaccurate". The NCSE has prepared Classroom Resources in response to Heartland and other anti-science threats. In 2023, Republican politician and Baptist minister Mike Huckabee published Kids Guide to the Truth About Climate Change, which acknowledges global warming but minimizes the influence of human emissions. Marketed as an alternative to mainstream education, the publication does not attribute authorship or cite scientific credentials.
The NCSE's deputy director called the publication "propaganda" and "very unreliable as a guide to climate change for kids", saying it represented "present-day" atmospheric concentrations of carbon dioxide as 280 parts per million (ppm), which was true in 391 BC but short of 2023's actual concentration of 420 ppm. In 2023, the state of Florida approved a public school curriculum including videos produced by conservative advocacy group PragerU that liken climate change skeptics to those who fought Communism and Nazism, imply renewable energy harms the environment, and say current global warming occurs naturally. Texas, which has a large influence on school textbooks published nationwide, proposed textbooks in 2023 that included more information about the climate crisis than editions a decade earlier. But some books clouded the human causes of climate change and downplayed the role of fossil fuels, with Texas U.S. Representative August Pfluger emphasizing the importance of "secure, reliable energy" (oil and natural gas) produced in the Permian Basin. In September 2023, Pfluger's Congressional website said, "we cannot allow the radical climate lobby to infiltrate Texas middle schools and brainwash our children", claiming that liquefied natural gas is "not only...good for our economy, but it's good for the environment". Notable people who deny climate change Politicians Acknowledgment of climate change by politicians, while expressing uncertainty as to how much of it is due to human activity, has been described as a new form of climate denial, and "a reliable tool to manipulate public perception of climate change and stall political action". Former U.S. Senator Tom Coburn in 2017 discussed the Paris agreement and denied the scientific consensus on human-caused global warming. Coburn claimed that sea level rise had been no more than 5 mm in 25 years, and asserted there was now global cooling. In 2013, he said, "I am a global warming denier. I don't deny that." In 2010, Donald Trump (who later became president of the United States from 2017 to 2021) said, "With the coldest winter ever recorded, with snow setting record levels up and down the coast, the Nobel committee should take the Nobel Prize back from Al Gore....Gore wants us to clean up our factories and plants in order to protect us from global warming, when China and other countries couldn't care less. It would make us totally noncompetitive in the manufacturing world, and China, Japan and India are laughing at America's stupidity." In 2012, Trump tweeted, "The concept of global warming was created by and for the Chinese in order to make U.S. manufacturing non-competitive." Republican Jim Bridenstine, the first elected politician to serve as NASA administrator, had previously said that global temperatures were not rising. But a month after the Senate confirmed his NASA position in April 2018, he acknowledged that human emissions of greenhouse gases are raising global temperatures. During a May 2018 meeting of the United States House Committee on Science, Space, and Technology, Representative Mo Brooks claimed that sea level rise is caused not by melting glaciers but rather by coastal erosion and silt that flows from rivers into the ocean. In 2019, Ernesto Araújo, the minister of foreign affairs appointed by Brazil's newly elected president Jair Bolsonaro, called global warming a plot by "cultural Marxists" and eliminated the ministry's climate change division. An April 15, 2023, tweet by Republican U.S. 
Representative Marjorie Taylor Greene said climate change was a "scam", that "fossil fuels are natural and amazing", and that "there are some very powerful people that are getting rich beyond their wildest dreams convincing many that carbon is the enemy". Her tweet included a chart that omitted carbon dioxide and methane—the two most dominant greenhouse gas emissions. A 2024 analysis found 100 U.S. representatives and 23 U.S. senators—23% of the 535 members of Congress—to be climate change deniers, all the deniers being Republicans.
Scientists
American and New Zealand climate scientist Kevin Trenberth has published widely on climate change science and fought back against climate change misinformation for decades. He describes in his memoirs his "close encounters with deniers and skeptics"—with fellow meteorologists or climate change scientists. These included Richard Lindzen ("he is quite beguiling but is criticized as 'intellectually dishonest' by his peers"; Lindzen was a professor of meteorology at MIT and has been called a contrarian in relation to climate change and other issues), Roy Spencer (who has "repeatedly made errors that always resulted in lower temperature trends than were really present"), John Christy ("his decisions on climate work and statements appear to be heavily colored by his religion"), Roger Pielke Jr., Christopher Landsea, and Pat Michaels ("long associated with the Cato Institute, he changed his bombastic tune gradually over time as climate change became more evident"). Sherwood B. Idso is a natural scientist and is the president of the Center for the Study of Carbon Dioxide and Global Change, which rejects the scientific consensus on climate change. In 1982 he published his book Carbon Dioxide: Friend or Foe?, which said increases in CO2 would not warm the planet, but would fertilize crops and were "something to be encouraged and not suppressed". William M. Gray was a climate scientist (emeritus professor of atmospheric science at Colorado State University) who supported climate change denial: he agreed that global warming was taking place, but argued that humans were responsible for only a tiny portion of it and it was largely part of the Earth's natural cycle. In 1998, Frederick Seitz, an American physicist and former National Academy of Sciences president, wrote the Oregon Petition, a controversial document in opposition to the Kyoto Protocol. The petition and accompanying "Research Review of Global Warming Evidence" claimed that "We are living in an increasingly lush environment of plants and animals as a result of the carbon dioxide increase. [...] This is a wonderful and unexpected gift from the Industrial Revolution". In their book Merchants of Doubt, Naomi Oreskes and Erik Conway write that Seitz and a group of other scientists fought the scientific evidence and spread confusion on many of the most important issues of our time, like the harmfulness of tobacco smoke, acid rain, CFCs, pesticides, and global warming.
Lobbying and related activities
Efforts to lobby against environmental regulation have included campaigns to manufacture doubt about the science behind climate change and to obscure the scientific consensus and data. These have undermined public confidence in climate science. As of 2015, the climate change denial industry is most powerful in the U.S. Efforts by climate change denial groups played a significant role in the United States' rejection of the Kyoto Protocol in 1997.
Fossil fuel companies and other private sector actors
Research conducted at an Exxon archival collection at the University of Texas and interviews with former Exxon employees indicate that the company's scientific opinion and its public posture toward climate change were contradictory. A systematic review of Exxon's climate modeling projections concluded that in private and academic circles since the late 1970s and early 1980s, ExxonMobil predicted global warming correctly and skillfully, correctly dismissed the possibility of a coming ice age in favor of a "carbon dioxide induced super-interglacial", and reasonably estimated how much CO2 would lead to dangerous warming. Between 1989 and 2002, the Global Climate Coalition, a group of mainly U.S. businesses, used aggressive lobbying and public relations tactics to oppose action to reduce greenhouse gas emissions and fight the Kyoto Protocol. Large corporations and trade groups from the oil, coal and auto industries financed the coalition. The New York Times reported, "even as the coalition worked to sway opinion [toward skepticism], its own scientific and technical experts were advising that the science backing the role of greenhouse gases in global warming could not be refuted". In 2000, the Ford Motor Company was the first company to leave the coalition as a result of pressure from environmentalists. Daimler-Chrysler, Texaco, the Southern Company and General Motors subsequently left the GCC. It closed in 2002. From January 2009 through June 2010, the oil, coal and utility industries spent $500 million in lobbying expenditures in opposition to legislation to address climate change. A study in 2022 traced the history of an influential group of economic consultants hired by the petroleum industry from the 1990s to the 2010s to estimate the costs of various proposed climate policies. The economists used models that inflated predicted costs while ignoring policy benefits, and their results were often portrayed to the public as independent rather than industry-sponsored. Their work played a key role in undermining numerous major climate policy initiatives in the US over a span of decades. This study illustrates how the fossil fuel industry has funded biased economic analyses to oppose climate policy.
ExxonMobil
Attacks and threats towards scientists
Climate change deniers attacked the work of climate scientist Michael E. Mann for years. On 8 February 2024, Mann won a $1 million judgment for punitive damages in a defamation lawsuit filed in 2012 against bloggers who attacked his hockey stick graph of the Northern Hemisphere temperature rise. One of the bloggers had called Mann's work "fraudulent", contrary to numerous investigations that had already cleared Mann of any misconduct and supported the validity of his research. After Elon Musk's 2022 takeover of Twitter (now X), key figures at the company who ensured trusted content was prioritized were removed, and climate scientists received a large increase in hostile, threatening, harassing, and personally abusive tweets from deniers. In 2023, increases in climate change denial were reported, particularly on the far right. Climate change deniers threatened meteorologists, accusing them of causing a drought, falsifying thermometer readings, and cherry-picking warmer weather stations to misrepresent global warming.
Also in 2023, CNN reported that meteorologists and climate communicators worldwide were receiving increased harassment and false accusations that they were lying about or controlling the weather, inflating temperature records to make climate change seem worse, and changing color palettes of weather maps to make them look more dramatic. The German television news service Tagesschau called this a global phenomenon. Funding for deniers Journalists reported in 2015 that oil companies had known since the 1970s that burning oil and gas could cause climate change but nonetheless funded deniers for years. Several large fossil fuel corporations provide significant funding for attempts to mislead the public about climate science's trustworthiness. ExxonMobil and the Koch family foundations have been identified as especially influential funders of climate change contrarianism. The bankruptcy of the coal company Cloud Peak Energy revealed it funded the Institute for Energy Research, a climate denial think tank, as well as several other policy influencers. After the IPCC released its Fourth Assessment Report in 2007, the American Enterprise Institute (AEI) offered British, American, and other scientists $10,000 plus travel expenses to publish articles critical of the assessment. The institute had received more than $1.6 million from Exxon, and its vice-chairman of trustees was former Exxon head Lee Raymond. Raymond sent letters that alleged the IPCC report was not "supported by the analytical work". More than 20 AEI employees worked as consultants to the George W. Bush administration. The authors of the 2010 book Merchants of Doubt provide documentation for the assertion that professional deniers have tried to sow seeds of doubt in public opinion in order to halt any meaningful social or political action to reduce the impact of human carbon emissions. That only half of the American population believes global warming is caused by human activity could be seen as a victory for these deniers. One of the authors' main arguments is that most prominent scientists who have opposed the near-universal consensus are funded by industries, such as automotive and oil, that stand to lose money by government actions to regulate greenhouse gases. The Global Climate Coalition was an industry coalition that funded several scientists who expressed skepticism about global warming. In 2000, several members left the coalition when they became the target of a national divestiture campaign run by John Passacantando and Phil Radford at Ozone Action. When Ford Motor Company left the coalition, it was regarded as "the latest sign of divisions within heavy industry over how to respond to global warming". After that, between December 1999 and early March 2000, the GCC was deserted by Daimler-Chrysler, Texaco, energy firm the Southern Company and General Motors. The Global Climate Coalition closed in 2002. In early 2015, several media reports emerged saying that Willie Soon, a popular scientist among climate change deniers, had failed to disclose conflicts of interest in at least 11 scientific papers published since 2008. They reported that he received a total of $1.25 million from ExxonMobil, Southern Company, the American Petroleum Institute, and a foundation run by the Koch brothers. Documents obtained by Greenpeace under the Freedom of Information Act show that the Charles G. Koch Foundation gave Soon two grants totaling $175,000 in 2005/6 and again in 2010. 
Grants to Soon between 2001 and 2007 from the American Petroleum Institute totaled $274,000, and between 2005 and 2010 from ExxonMobil totaled $335,000. The Mobil Foundation, the Texaco Foundation, and the Electric Power Research Institute also funded Soon. Acknowledging that he received this money, Soon said that he had "never been motivated by financial reward in any of my scientific research". In 2015, Greenpeace disclosed papers documenting that Soon failed to disclose to academic journals funding including more than $1.2 million from fossil fuel industry-related interests, including ExxonMobil, the American Petroleum Institute, the Charles G. Koch Charitable Foundation, and the Southern Company. Science editor-in-chief Donald Kennedy has said that deniers such as Michaels are lobbyists more than researchers, and "I don't think it's unethical any more than most lobbying is unethical". He said donations to deniers amount to "trying to get a political message across". Robert Brulle analyzed the funding of 91 organizations opposed to restrictions on carbon emissions, which he called the "climate change counter-movement". Between 2003 and 2013, the donor-advised funds Donors Trust and Donors Capital Fund, combined, were the largest funders, accounting for about a quarter of the funds, and the American Enterprise Institute was the largest recipient, with 16% of the total funds. The study also found that the amount of money donated to these organizations by means of foundations whose funding sources cannot be traced had risen. Effects on public opinion Public opinion on climate change is significantly affected by media coverage of climate change and the effects of climate change denial campaigns. Campaigns to undermine public confidence in climate science have decreased public belief in climate change, which in turn has affected legislative efforts to curb emissions. Climate change conspiracy theories and denial have resulted in poor action or no action at all to effectively mitigate the damage done by global warming. 40% of Americans believed (ca. 2017) that climate change is a hoax even though 100% of climate scientists (as of 2019) believe it is real. A study in 2015 stated: "Exposure to conspiracy theories reduced people's intentions to reduce their carbon footprint, relative to people who were given refuting information." Manufactured uncertainty over climate change, the fundamental strategy of climate change denial, has been very effective, particularly in the U.S. It has contributed to low levels of public concern and to government inaction worldwide. A 2010 Angus Reid poll found that global warming skepticism in the U.S., Canada, and the United Kingdom has been rising. There may be multiple causes of this trend, including a focus on economic rather than environmental issues, and a negative perception of the United Nations and its role in discussing climate change. According to Tim Wirth, "They patterned what they did after the tobacco industry. ... Both figured, sow enough doubt, call the science uncertain and in dispute. That's had a huge impact on both the public and Congress." American media has propagated this approach, presenting a false balance between climate science and climate skeptics. In 2006 Newsweek reported that most Europeans and Japanese accepted the consensus on scientific climate change, but only one third of Americans thought human activity plays a major role in climate change; 64% believed that scientists disagreed about it "a lot". 
Deliberate attempts by the Western Fuels Association "to confuse the public" have succeeded. This has been "exacerbated by media treatment of the climate issue". According to a 2012 Pew poll, 57% of Americans are unaware of, or outright reject, the scientific consensus on climate change. Some organizations promoting climate change denial have asserted that scientists are increasingly rejecting climate change, but this is contradicted by research showing that 97% of published papers endorse the scientific consensus, and that percentage is increasing with time. On the other hand, global oil companies have begun to acknowledge the existence of climate change and its risks. Still, top oil firms are spending millions lobbying to delay, weaken, or block policies to tackle climate change. Manufactured climate change denial is also influencing how scientific knowledge is communicated to the public. According to climate scientist Michael E. Mann, "universities and scientific societies and organizations, publishers, etc.—are too often risk averse when it comes to defending and communicating science that is perceived as threatening by powerful interests". United States A study found that public climate change policy support and behavior are significantly influenced by public beliefs, attitudes and risk perceptions. As of March 2018 the rate of acceptance among U.S. TV forecasters that the climate is changing had increased to 95 percent. The number of local TV stories about global warming has also increased, by a factor of 15. Climate Central has received some credit for this, because it provides classes for meteorologists and graphics for TV stations. Popular media in the U.S. gives greater attention to climate change skeptics than the scientific community as a whole, and the level of agreement within the scientific community has not been accurately communicated. In some cases, news outlets have let climate change skeptics instead of experts in climatology explain the science of climate change. US and UK media coverage differs from that in other countries, where reporting is more consistent with the scientific literature. Some journalists attribute the difference to climate change denial being propagated, mainly in the U.S., by business-centered organizations employing tactics worked out previously by the U.S. tobacco lobby. Denial of climate change is most prevalent among white, politically conservative men in the U.S. In France, the U.S., and the U.K., climate change skeptics' opinions appear much more frequently in conservative news outlets than others, and in many cases those opinions are left uncontested. In 2018, the National Science Teachers Association urged teachers to "emphasize to students that no scientific controversy exists regarding the basic facts of climate change". Europe Climate change denial has been promoted by several far-right European parties, including Spain's Vox, Finland's Finns Party, Austria's Freedom Party, and Germany's anti-immigration Alternative for Germany (AfD). In April 2023, French political scientist Jean-Yves Dormagen said that lower-income and conservative groups were the most skeptical about climate change. In a study by the Jean-Jaurès Foundation published the same month, climate skepticism was compared to a new populism whose representative and spokesman is Steven E. Koonin. 
Responses to denialism The role of emotions and persuasive argument Climate denial "is not simply overcome by reasoned argument", because it is not a rational response. Attempting to overcome denial using techniques of persuasive argument, such as supplying a missing piece of information, or providing general scientific education may be ineffective. A person who is in denial about climate is most likely taking a position based on their feelings, especially their feelings about things they fear. Academics have stated that "It is pretty clear that fear of the solutions drives much opposition to the science." It can be useful to respond to emotions, including with the statement "It can be painful to realise that our own lifestyles are responsible", in order to help move "from denial to acceptance to constructive action." Following people who have changed their position Some climate change skeptics have changed their positions regarding global warming. Ronald Bailey, author of Global Warming and Other Eco-Myths (published in 2002), stated in 2005, "Anyone still holding onto the idea that there is no global warming ought to hang it up." By 2007, he wrote "Details like sea level rise will continue to be debated by researchers, but if the debate over whether or not humanity is contributing to global warming wasn't over before, it is now.... as the new IPCC Summary makes clear, climate change Pollyannaism is no longer looking very tenable." Jerry Taylor promoted climate denialism for 20 years as former staff director for the energy and environment task force at the American Legislative Exchange Council (ALEC) and former vice president of the Cato Institute. Taylor began to change his mind after climate scientist James Hansen challenged him to reread some Senate testimony. He became President of the Niskanen Center in 2014, where he is involved in turning climate skeptics into climate activists, and making the business case for climate action. Michael Shermer, the publisher of Skeptic magazine, reached a tipping point in 2006 as a result of his increasing familiarity with scientific evidence, and decided there was "overwhelming evidence for anthropogenic global warming". Journalist Gregg Easterbrook, an early skeptic of climate change who authored the influential book A Moment on the Earth, also changed his mind in 2006, and wrote an essay titled "Case Closed: The Debate About Global Warming is Over". In 2006, he stated, "based on the data I'm now switching sides regarding global warming, from skeptic to convert." In 2009, Russian president Dmitri Medvedev expressed his opinion that climate change was "some kind of tricky campaign made up by some commercial structures to promote their business projects". After the devastating 2010 Russian wildfires damaged agriculture and left Moscow choking in smoke, Medvedev commented, "Unfortunately, what is happening now in our central regions is evidence of this global climate change." Bob Inglis, a former US representative for South Carolina, changed his mind in around 2010 after appeals from his son on his environmental positions, and after spending time with climate scientist Scott Heron studying coral bleaching in the Great Barrier Reef. Richard A. Muller, professor of physics at the University of California, Berkeley, and the co-founder of the Berkeley Earth Surface Temperature project, funded by Charles Koch Charitable Foundation, had been a prominent critic of prevailing climate science. 
In 2011, he stated that "following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I'm now going a step further: Humans are almost entirely the cause." "I used to be a climate-change skeptic", conservative columnist Max Boot admitted in 2018, one who believed that "the science was inconclusive" and that worry was "overblown". Now, he says, referencing the Fourth National Climate Assessment, "the scientific consensus is so clear and convincing." Effective approaches to dialogue Explaining the techniques of science denial and misinformation, by presenting "examples of people using cherrypicking or fake experts or false balance to mislead the public", has been shown to inoculate people somewhat against misinformation. Dialogue focused on the question of how belief differs from scientific theory may provide useful insights into how the scientific method works, and how beliefs may have strong or minimal supporting evidence. Wong-Parodi's survey of the literature shows four effective approaches to dialogue, including "[encouraging] people to openly share their values and stance on climate change before introducing actual scientific climate information into the discussion." Approaches with farmers One study of climate change denial among farmers in Australia found that farmers were less likely to take a position of climate denial if they had experienced improved production from climate-friendly practices, or identified a younger person as a successor for their farm. Therefore, seeing positive economic results from efforts at climate-friendly agricultural practices, or becoming involved in intergenerational stewardship of a farm may play a role in turning farmers away from denial. In the United States, rural climate dialogues sponsored by the Sierra Club have helped neighbors overcome their fears of political polarization and exclusion, and come together to address shared concerns about climate impacts in their communities. Some participants who start out with attitudes of anthropogenic climate change denial have shifted to identifying concerns which they would like to see addressed by local officials. Statements of well known people calling for climate action In May 2013 Charles, Prince of Wales took a strong stance criticising both climate change deniers and corporate lobbyists by likening the Earth to a dying patient. "A scientific hypothesis is tested to absolute destruction, but medicine can't wait. If a doctor sees a child with a fever, he can't wait for [endless] tests. He has to act on what is there."
Physical sciences
Climate change
Earth science
27058
https://en.wikipedia.org/wiki/Steel
Steel
Steel is an alloy of iron and carbon with improved strength and fracture resistance compared to other forms of iron. Because of its high tensile strength and low cost, steel is one of the most commonly manufactured materials in the world. Steel is used in buildings, as concrete reinforcing rods, in bridges, infrastructure, tools, ships, trains, cars, bicycles, machines, electrical appliances, furniture, and weapons. Iron is always the main element in steel, but many other elements may be present or added. Stainless steels, which are resistant to corrosion and oxidation, typically contain at least 11% chromium. Iron is the base metal of steel. Depending on the temperature, it can take two crystalline forms (allotropic forms): body-centred cubic and face-centred cubic. The interaction of the allotropes of iron with the alloying elements, primarily carbon, gives steel and cast iron their range of unique properties. In pure iron, the crystal structure has relatively little resistance to the iron atoms slipping past one another, and so pure iron is quite ductile, or soft and easily formed. In steel, small amounts of carbon, other elements, and inclusions within the iron act as hardening agents that prevent the movement of dislocations. The carbon in typical steel alloys may contribute up to 2.14% of its weight. Varying the amount of carbon and many other alloying elements, as well as controlling their chemical and physical makeup in the final steel (either as solute elements, or as precipitated phases), impedes the movement of the dislocations that make pure iron ductile, and thus controls and enhances its qualities. These qualities include the hardness, quenching behaviour, need for annealing, tempering behaviour, yield strength, and tensile strength of the resulting steel. The increase in steel's strength compared to pure iron is possible only by reducing iron's ductility. Steel was produced in bloomery furnaces for thousands of years, but its large-scale, industrial use began only after more efficient production methods were devised in the 17th century, with the introduction of the blast furnace and production of crucible steel. This was followed by the Bessemer process in England in the mid-19th century, and then by the open-hearth furnace. With the invention of the Bessemer process, a new era of mass-produced steel began. Mild steel replaced wrought iron. The German states were the major steel producers in Europe in the 19th century. American steel production was centred in Pittsburgh, Bethlehem, Pennsylvania, and Cleveland until the late 20th century. Currently, world steel production is centered in China, which produced 54% of the world's steel in 2023. Further refinements in the process, such as basic oxygen steelmaking (BOS), largely replaced earlier methods by further lowering the cost of production and increasing the quality of the final product. Today more than 1.6 billion tons of steel is produced annually. Modern steel is generally identified by various grades defined by assorted standards organizations. The modern steel industry is one of the largest manufacturing industries in the world, but it is also one of the most energy- and greenhouse-gas-intensive industries, contributing 8% of global emissions. However, steel is also very reusable: it is one of the world's most-recycled materials, with a recycling rate of over 60% globally. Definitions and related materials The noun steel originates from a Proto-Germanic adjective meaning 'made of steel', which is related to a word meaning 'standing firm'. 
The carbon content of steel is between 0.02% and 2.14% by weight for plain carbon steel (iron-carbon alloys). Too little carbon content leaves (pure) iron quite soft, ductile, and weak. Carbon contents higher than those of steel make a brittle alloy commonly called pig iron. Alloy steel is steel to which other alloying elements have been intentionally added to modify the characteristics of steel. Common alloying elements include: manganese, nickel, chromium, molybdenum, boron, titanium, vanadium, tungsten, cobalt, and niobium. Additional elements, most frequently considered undesirable, are also important in steel: phosphorus, sulphur, silicon, and traces of oxygen, nitrogen, and copper. Plain carbon-iron alloys with a higher than 2.1% carbon content are known as cast iron. With modern steelmaking techniques such as powder metal forming, it is possible to make very high-carbon (and other alloy material) steels, but such are not common. Cast iron is not malleable even when hot, but it can be formed by casting as it has a lower melting point than steel and good castability properties. Certain compositions of cast iron, while retaining the economies of melting and casting, can be heat treated after casting to make malleable iron or ductile iron objects. Steel is distinguishable from wrought iron (now largely obsolete), which may contain a small amount of carbon but large amounts of slag. Material properties Origins and production Iron is commonly found in the Earth's crust in the form of an ore, usually an iron oxide, such as magnetite or hematite. Iron is extracted from iron ore by removing the oxygen through its combination with a preferred chemical partner such as carbon which is then lost to the atmosphere as carbon dioxide. This process, known as smelting, was first applied to metals with lower melting points, such as tin, which melts at about , and copper, which melts at about , and the combination, bronze, which has a melting point lower than . In comparison, cast iron melts at about . Small quantities of iron were smelted in ancient times, in the solid-state, by heating the ore in a charcoal fire and then welding the clumps together with a hammer and in the process squeezing out the impurities. With care, the carbon content could be controlled by moving it around in the fire. Unlike copper and tin, liquid or solid iron dissolves carbon quite readily. All of these temperatures could be reached with ancient methods used since the Bronze Age. Since the oxidation rate of iron increases rapidly beyond , it is important that smelting take place in a low-oxygen environment. Smelting, using carbon to reduce iron oxides, results in an alloy (pig iron) that retains too much carbon to be called steel. The excess carbon and other impurities are removed in a subsequent step. Other materials are often added to the iron/carbon mixture to produce steel with the desired properties. Nickel and manganese in steel add to its tensile strength and make the austenite form of the iron-carbon solution more stable, chromium increases hardness and melting temperature, and vanadium also increases hardness while making it less prone to metal fatigue. To inhibit corrosion, at least 11% chromium can be added to steel so that a hard oxide forms on the metal surface; this is known as stainless steel. Tungsten slows the formation of cementite, keeping carbon in the iron matrix and allowing martensite to preferentially form at slower quench rates, resulting in high-speed steel. 
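The composition ranges given at the start of this passage lend themselves to a compact illustration. The following minimal Python sketch classifies a plain iron-carbon alloy purely by its carbon content, using the approximate thresholds quoted in this article (about 0.02% and about 2.1% carbon by weight); the function name is arbitrary, and real classification also depends on other alloying elements, slag content, and processing history:

```python
def classify_iron_carbon_alloy(carbon_wt_pct: float) -> str:
    """Rough classification of a plain iron-carbon alloy by carbon content.

    Thresholds follow the approximate figures quoted in this article;
    real classification also considers other alloying elements and slag.
    """
    if carbon_wt_pct < 0.02:
        return "essentially pure iron (soft, ductile, weak)"
    if carbon_wt_pct <= 2.1:
        return "plain carbon steel"
    return "cast iron (not malleable, but castable)"

# Example: a typical mild steel and a typical cast iron composition
print(classify_iron_carbon_alloy(0.25))  # plain carbon steel
print(classify_iron_carbon_alloy(3.5))   # cast iron (not malleable, but castable)
```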
The addition of lead and sulphur decreases grain size, thereby making the steel easier to turn, but also more brittle and prone to corrosion. Such alloys are nevertheless frequently used for components such as nuts, bolts, and washers in applications where toughness and corrosion resistance are not paramount. For the most part, however, p-block elements such as sulphur, nitrogen, phosphorus, and lead are considered contaminants that make steel more brittle and are therefore removed from steel during the melting processing. Properties The density of steel varies based on the alloying constituents but usually ranges between , or . Even in a narrow range of concentrations of mixtures of carbon and iron that make steel, several different metallurgical structures, with very different properties, can form. Understanding such properties is essential to making quality steel. At room temperature, the most stable form of pure iron is the body-centred cubic (BCC) structure called alpha iron or α-iron. It is a fairly soft metal that can dissolve only a small concentration of carbon, no more than 0.005% at and 0.021 wt% at . The inclusion of carbon in alpha iron is called ferrite. At 910 °C, pure iron transforms into a face-centred cubic (FCC) structure, called gamma iron or γ-iron. The inclusion of carbon in gamma iron is called austenite. The more open FCC structure of austenite can dissolve considerably more carbon, as much as 2.1% (38 times that of ferrite) at , which reflects the upper carbon content of steel, beyond which is cast iron. When carbon moves out of solution with iron, it forms a very hard, but brittle material called cementite (Fe3C). When steels with exactly 0.8% carbon (known as eutectoid steel) are cooled, the austenitic phase (FCC) of the mixture attempts to revert to the ferrite phase (BCC). The carbon no longer fits within the FCC austenite structure, resulting in an excess of carbon. One way for carbon to leave the austenite is for it to precipitate out of solution as cementite, leaving behind a surrounding phase of BCC iron called ferrite with a small percentage of carbon in solution. The two, cementite and ferrite, precipitate simultaneously producing a layered structure called pearlite, named for its resemblance to mother of pearl. In a hypereutectoid composition (greater than 0.8% carbon), the carbon will first precipitate out as large inclusions of cementite at the austenite grain boundaries until the percentage of carbon in the grains has decreased to the eutectoid composition (0.8% carbon), at which point the pearlite structure forms. For steels that have less than 0.8% carbon (hypoeutectoid), ferrite will first form within the grains until the remaining composition rises to 0.8% of carbon, at which point the pearlite structure will form. No large inclusions of cementite will form at the boundaries in hypoeutectoid steel. The above assumes that the cooling process is very slow, allowing enough time for the carbon to migrate. As the rate of cooling is increased, the carbon will have less time to migrate to form carbide at the grain boundaries but will have increasingly large amounts of pearlite of a finer and finer structure within the grains; hence the carbide is more widely dispersed and acts to prevent slip of defects within those grains, resulting in hardening of the steel. At the very high cooling rates produced by quenching, the carbon has no time to migrate but is locked within the face-centred austenite and forms martensite. 
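As a worked illustration of the slow-cooling (pearlite-forming) case described above, the mass fractions of ferrite and cementite in pearlite can be estimated with the lever rule, a standard phase-diagram relation not spelled out in this article. Taking the eutectoid composition of about 0.8 wt% carbon, the ferrite solubility limit of roughly 0.02 wt% carbon quoted above, and about 6.7 wt% carbon for cementite (which follows from the Fe3C stoichiometry), a rough estimate is

$$W_{\mathrm{Fe_3C}} = \frac{C_0 - C_\alpha}{C_{\mathrm{Fe_3C}} - C_\alpha} = \frac{0.80 - 0.02}{6.7 - 0.02} \approx 0.12, \qquad W_\alpha = 1 - W_{\mathrm{Fe_3C}} \approx 0.88,$$

so slowly cooled eutectoid steel consists of roughly 88% ferrite and 12% cementite by mass, arranged in the alternating layers of pearlite.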
Martensite is a highly strained and stressed, supersaturated form of carbon and iron and is exceedingly hard but brittle. Depending on the carbon content, the martensitic phase takes different forms. Below 0.2% carbon, it takes on a ferrite BCC crystal form, but at higher carbon content it takes a body-centred tetragonal (BCT) structure. There is no thermal activation energy for the transformation from austenite to martensite. There is no compositional change so the atoms generally retain their same neighbours. Martensite has a lower density (it expands during the cooling) than does austenite, so that the transformation between them results in a change of volume. In this case, expansion occurs. Internal stresses from this expansion generally take the form of compression on the crystals of martensite and tension on the remaining ferrite, with a fair amount of shear on both constituents. If quenching is done improperly, the internal stresses can cause a part to shatter as it cools. At the very least, they cause internal work hardening and other microscopic imperfections. It is common for quench cracks to form when steel is water quenched, although they may not always be visible. Heat treatment There are many types of heat treating processes available to steel. The most common are annealing, quenching, and tempering. Annealing is the process of heating the steel to a sufficiently high temperature to relieve local internal stresses. It does not create a general softening of the product but only locally relieves strains and stresses locked up within the material. Annealing goes through three phases: recovery, recrystallization, and grain growth. The temperature required to anneal a particular steel depends on the type of annealing to be achieved and the alloying constituents. Quenching involves heating the steel to create the austenite phase then quenching it in water or oil. This rapid cooling results in a hard but brittle martensitic structure. The steel is then tempered, which is just a specialized type of annealing, to reduce brittleness. In this application the annealing (tempering) process transforms some of the martensite into cementite, or spheroidite and hence it reduces the internal stresses and defects. The result is a more ductile and fracture-resistant steel. Production When iron is smelted from its ore, it contains more carbon than is desirable. To become steel, it must be reprocessed to reduce the carbon to the correct amount, at which point other elements can be added. In the past, steel facilities would cast the raw steel product into ingots which would be stored until use in further refinement processes that resulted in the finished product. In modern facilities, the initial product is close to the final composition and is continuously cast into long slabs, cut and shaped into bars and extrusions and heat treated to produce a final product. Today, approximately 96% of steel is continuously cast, while only 4% is produced as ingots. The ingots are then heated in a soaking pit and hot rolled into slabs, billets, or blooms. Slabs are hot or cold rolled into sheet metal or plates. Billets are hot or cold rolled into bars, rods, and wire. Blooms are hot or cold rolled into structural steel, such as I-beams and rails. In modern steel mills these processes often occur in one assembly line, with ore coming in and finished steel products coming out. Sometimes after a steel's final rolling, it is heat treated for strength; however, this is relatively rare. 
History Ancient Steel was known in antiquity and was produced in bloomeries and crucibles. The earliest known production of steel is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük) which are nearly 4,000 years old, dating from 1800 BC. Wootz steel was developed in Southern India and Sri Lanka in the 1st millennium BCE. Metal production sites in Sri Lanka employed wind furnaces driven by the monsoon winds, capable of producing high-carbon steel. Large-scale wootz steel production in India using crucibles occurred by the sixth century BC, the pioneering precursor to modern steel production and metallurgy. High-carbon steel was produced in Britain at Broxmouth Hillfort from 490–375 BC, and ultrahigh-carbon steel was produced in the Netherlands from the 2nd-4th centuries AD. The Roman author Horace identifies steel weapons such as the falcata in the Iberian Peninsula, while Noric steel was used by the Roman military. The Chinese of the Warring States period (403–221 BC) had quench-hardened steel, while Chinese of the Han dynasty (202 BC—AD 220) created steel by melting together wrought iron with cast iron, thus producing a carbon-intermediate steel by the 1st century AD. There is evidence that carbon steel was made in Western Tanzania by the ancestors of the Haya people as early as 2,000 years ago by a complex process of "pre-heating" allowing temperatures inside a furnace to reach 1300 to 1400 °C. Wootz and Damascus Evidence of the earliest production of high carbon steel in South Asia is found in Kodumanal in Tamil Nadu, the Golconda area in Andhra Pradesh and Karnataka, regions of India, as well as in Samanalawewa and Dehigaha Alakanda, regions of Sri Lanka. This came to be known as wootz steel, produced in South India by about the sixth century BC and exported globally. The steel technology existed prior to 326 BC in the region as they are mentioned in literature of Sangam Tamil, Arabic, and Latin as the finest steel in the world exported to the Roman, Egyptian, Chinese and Arab worlds at that time – what they called Seric Iron. A 200 BC Tamil trade guild in Tissamaharama, in the South East of Sri Lanka, brought with them some of the oldest iron and steel artifacts and production processes to the island from the classical period. The Chinese and locals in Anuradhapura, Sri Lanka had also adopted the production methods of creating wootz steel from the Chera Dynasty Tamils of South India by the 5th century AD. In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds, capable of producing high-carbon steel. Since the technology was acquired from the Tamilians from South India, the origin of steel technology in India can be conservatively estimated at 400–500 BC. The manufacture of wootz steel and Damascus steel, famous for its durability and ability to hold an edge, may have been taken by the Arabs from Persia, who took it from India. In 327 BC, Alexander the Great was rewarded by the defeated King Porus, not with gold or silver but with 30 pounds of steel. A recent study has speculated that carbon nanotubes were included in its structure, which might explain some of its legendary qualities, though, given the technology of that time, such qualities were produced by chance rather than by design. Natural wind was used where the soil containing iron was heated by the use of wood. The ancient Sinhalese managed to extract a ton of steel for every 2 tons of soil, a remarkable feat at the time. 
One such furnace was found in Samanalawewa and archaeologists were able to produce steel as the ancients did. Crucible steel, formed by slowly heating and cooling pure iron and carbon (typically in the form of charcoal) in a crucible, was produced in Merv by the 9th to 10th century AD. In the 11th century, there is evidence of the production of steel in Song China using two techniques: a "berganesque" method that produced inferior, inhomogeneous steel, and a precursor to the modern Bessemer process that used partial decarburization via repeated forging under a cold blast. Modern Since the 17th century, the first step in European steel production has been the smelting of iron ore into pig iron in a blast furnace. Originally employing charcoal, modern methods use coke, which has proven more economical. Processes starting from bar iron In these processes, pig iron made from raw iron ore was refined (fined) in a finery forge to produce bar iron, which was then used in steel-making. The production of steel by the cementation process was described in a treatise published in Prague in 1574 and was in use in Nuremberg from 1601. A similar process for case hardening armour and files was described in a book published in Naples in 1589. The process was introduced to England in about 1614 and used to produce such steel by Sir Basil Brooke at Coalbrookdale during the 1610s. The raw material for this process were bars of iron. During the 17th century, it was realized that the best steel came from oregrounds iron of a region north of Stockholm, Sweden. This was still the usual raw material source in the 19th century, almost as long as the process was used. Crucible steel is steel that has been melted in a crucible rather than having been forged, with the result that it is more homogeneous. Most previous furnaces could not reach high enough temperatures to melt the steel. The early modern crucible steel industry resulted from the invention of Benjamin Huntsman in the 1740s. Blister steel (made as above) was melted in a crucible or in a furnace, and cast (usually) into ingots. Processes starting from pig iron The modern era in steelmaking began with the introduction of Henry Bessemer's process in 1855, the raw material for which was pig iron. His method let him produce steel in large quantities cheaply, thus mild steel came to be used for most purposes for which wrought iron was formerly used. The Gilchrist-Thomas process (or basic Bessemer process) was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus. Another 19th-century steelmaking process was the Siemens-Martin process, which complemented the Bessemer process. It consisted of co-melting bar iron (or steel scrap) with pig iron. These methods of steel production were rendered obsolete by the Linz-Donawitz process of basic oxygen steelmaking (BOS), developed in 1952, and other oxygen steel making methods. Basic oxygen steelmaking is superior to previous steelmaking methods because the oxygen pumped into the furnace limited impurities, primarily nitrogen, that previously had entered from the air used, and because, with respect to the open hearth process, the same quantity of steel from a BOS process is manufactured in one-twelfth the time. Today, electric arc furnaces (EAF) are a common method of reprocessing scrap metal to create new steel. 
They can also be used for converting pig iron to steel, but they use a lot of electrical energy (about 440 kWh per metric ton), and are thus generally only economical when there is a plentiful supply of cheap electricity. Industry The steel industry is often considered an indicator of economic progress, because of the critical role played by steel in infrastructural and overall economic development. In 1980, there were more than 500,000 U.S. steelworkers. By 2000, the number of steelworkers had fallen to 224,000. The economic boom in China and India caused a massive increase in the demand for steel. Between 2000 and 2005, world steel demand increased by 6%. Since 2000, several Indian and Chinese steel firms have expanded to meet demand, such as Tata Steel (which bought Corus Group in 2007), Baosteel Group, and Shagang Group. ArcelorMittal, though, is the world's largest steel producer. In 2005, the British Geological Survey stated China was the top steel producer with about one-third of the world share; Japan, Russia, and the United States were second, third, and fourth, respectively. The industry's large production capacity also results in a significant amount of carbon dioxide emissions, which are inherent to the main production route. At the end of 2008, the steel industry faced a sharp downturn that led to many cut-backs. In 2021, it was estimated that around 7% of the global greenhouse gas emissions resulted from the steel industry. Reductions in these emissions are expected to come from a shift away from the main production route using coke, more recycling of steel, and the application of carbon capture and storage technology. Recycling Steel is one of the world's most-recycled materials, with a recycling rate of over 60% globally; in the United States alone, over were recycled in the year 2008, for an overall recycling rate of 83%. As more steel is produced than is scrapped, the amount of recycled raw materials is about 40% of the total of steel produced – in 2016, of crude steel was produced globally, with recycled. Contemporary Carbon Modern steels are made with varying combinations of alloy metals to fulfil many purposes. Carbon steel, composed simply of iron and carbon, accounts for 90% of steel production. Low alloy steel is alloyed with other elements, usually molybdenum, manganese, chromium, or nickel, in amounts of up to 10% by weight to improve the hardenability of thick sections. High strength low alloy steel has small additions (usually < 2% by weight) of other elements, typically 1.5% manganese, to provide additional strength for a modest price increase. Recent corporate average fuel economy (CAFE) regulations have given rise to a new variety of steel known as Advanced High Strength Steel (AHSS). This material is both strong and ductile so that vehicle structures can maintain their current safety levels while using less material. There are several commercially available grades of AHSS, such as dual-phase steel, which is heat treated to contain both a ferritic and martensitic microstructure to produce a formable, high strength steel. Transformation Induced Plasticity (TRIP) steel involves special alloying and heat treatments to stabilize amounts of austenite at room temperature in normally austenite-free low-alloy ferritic steels. By applying strain, the austenite undergoes a phase transition to martensite without the addition of heat. 
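Referring back to the electric arc furnace energy figure quoted at the start of this passage, a minimal back-of-the-envelope sketch in Python shows why cheap electricity matters for EAF economics; the electricity prices used here are purely illustrative assumptions, not figures from this article:

```python
# Illustrative only: electricity cost of EAF steelmaking per tonne,
# using the ~440 kWh per metric ton figure quoted in this article.
EAF_ENERGY_KWH_PER_TONNE = 440

def eaf_electricity_cost_per_tonne(price_per_kwh: float) -> float:
    """Electricity cost (in the same currency as price_per_kwh) per tonne of steel."""
    return EAF_ENERGY_KWH_PER_TONNE * price_per_kwh

# Assumed example prices in USD/kWh (hypothetical, for illustration):
for price in (0.05, 0.10, 0.15):
    print(f"at ${price:.2f}/kWh: ${eaf_electricity_cost_per_tonne(price):.2f} per tonne")
```

At these assumed prices the electricity bill alone ranges from roughly $22 to $66 per tonne of steel, which illustrates why, as noted above, EAF conversion of pig iron is generally economical only where electricity is plentiful and cheap.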
Twinning Induced Plasticity (TWIP) steel uses a specific type of strain to increase the effectiveness of work hardening on the alloy. Carbon Steels are often galvanized, through hot-dip or electroplating in zinc for protection against rust. Alloy Stainless steel contains a minimum of 11% chromium, often combined with nickel, to resist corrosion. Some stainless steels, such as the ferritic stainless steels are magnetic, while others, such as the austenitic, are nonmagnetic. Corrosion-resistant steels are abbreviated as CRES. Alloy steels are plain-carbon steels in which small amounts of alloying elements like chromium and vanadium have been added. Some more modern steels include tool steels, which are alloyed with large amounts of tungsten and cobalt or other elements to maximize solution hardening. This also allows the use of precipitation hardening and improves the alloy's temperature resistance. Tool steel is generally used in axes, drills, and other devices that need a sharp, long-lasting cutting edge. Other special-purpose alloys include weathering steels such as Cor-ten, which weather by acquiring a stable, rusted surface, and so can be used un-painted. Maraging steel is alloyed with nickel and other elements, but unlike most steel contains little carbon (0.01%). This creates a very strong but still malleable steel. Eglin steel uses a combination of over a dozen different elements in varying amounts to create a relatively low-cost steel for use in bunker buster weapons. Hadfield steel, named after Robert Hadfield, or manganese steel, contains 12–14% manganese which, when abraded, strain-hardens to form a very hard skin which resists wearing. Uses of this particular alloy include tank tracks, bulldozer blade edges, and cutting blades on the jaws of life. Standards Most of the more commonly used steel alloys are categorized into various grades by standards organizations. For example, the Society of Automotive Engineers has a series of grades defining many types of steel. The American Society for Testing and Materials has a separate set of standards, which define alloys such as A36 steel, the most commonly used structural steel in the United States. The JIS also defines a series of steel grades that are being used extensively in Japan as well as in developing countries. Uses Iron and steel are used widely in the construction of roads, railways, other infrastructure, appliances, and buildings. Most large modern structures, such as stadiums and skyscrapers, bridges, and airports, are supported by a steel skeleton. Even those with a concrete structure employ steel for reinforcing. It sees widespread use in major appliances and cars. Despite the growth in usage of aluminium, steel is still the main material for car bodies. Steel is used in a variety of other construction materials, such as bolts, nails and screws, and other household products and cooking utensils. Other common applications include shipbuilding, pipelines, mining, offshore construction, aerospace, white goods (e.g. washing machines), heavy equipment such as bulldozers, office furniture, steel wool, tool, and armour in the form of personal vests or vehicle armour (better known as rolled homogeneous armour in this role). Historical Before the introduction of the Bessemer process and other modern production techniques, steel was expensive and was only used where no cheaper alternative existed, particularly for the cutting edge of knives, razors, swords, and other items where a hard, sharp edge was needed. 
It was also used for springs, including those used in clocks and watches. With the advent of faster and cheaper production methods, steel has become easier to obtain and much cheaper. It has replaced wrought iron for a multitude of purposes. However, the availability of plastics in the latter part of the 20th century allowed these materials to replace steel in some applications due to their lower fabrication cost and weight. Carbon fibre is replacing steel in some cost-insensitive applications such as sports equipment and high-end automobiles. Long steel products are used as reinforcing bars and mesh in reinforced concrete, as railroad tracks, as structural steel in modern buildings and bridges, as wires, and as input to reforging applications. Flat carbon steel products are used in major appliances, in magnetic cores, and in the inside and outside bodies of automobiles, trains, and ships. Weathering steel (COR-TEN) is used in intermodal containers, outdoor sculptures, architecture, and Highliner train cars. Stainless steel is used in cutlery, rulers, surgical instruments, watches, guns, rail passenger vehicles, tablets, trash cans, body piercing jewellery, inexpensive rings, and components of spacecraft and space stations. Low-background steel: steel manufactured after World War II became contaminated with radionuclides by nuclear weapons testing. Low-background steel, steel manufactured prior to 1945, is used for certain radiation-sensitive applications such as Geiger counters and radiation shielding.
Physical sciences
Chemistry
null
27059
https://en.wikipedia.org/wiki/Stainless%20steel
Stainless steel
Stainless steel, also known as inox, corrosion-resistant steel (CRES), and rustless steel, is an iron-based alloy containing a minimum level of chromium that is resistant to rusting and corrosion. Stainless steel's resistance to corrosion results from the 10.5%, or more, chromium content which forms a passive film that can protect the material and self-heal in the presence of oxygen. It can also be alloyed with other elements such as molybdenum, carbon, nickel and nitrogen to develop a range of different properties depending on its specific use. The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products. Some grades are also suitable for forging and casting. The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants. Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table. Properties Corrosion resistance Although stainless steel does rust, this only affects the outer few layers of atoms, its chromium content shielding deeper layers from oxidation. The addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by increasing the chromium content to more than 11%, adding nickel to at least 8%, and adding molybdenum (which also improves resistance to pitting corrosion). Strength The most common type of stainless steel, 304, has a tensile yield strength around in the annealed condition. It can be strengthened by cold working to a strength of in the full-hard condition. The strongest commonly available stainless steels are precipitation hardening alloys such as 17-4 PH and Custom 465. These can be heat treated to have tensile yield strengths up to . Melting point The melting point of stainless steel is near that of ordinary steel, and much higher than the melting points of aluminium or copper. As with most alloys, the melting point of stainless steel is expressed in the form of a range of temperatures, and not a single temperature. This temperature range goes from depending on the specific consistency of the alloy in question. Conductivity Like steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivities than copper. In particular, the electrical contact resistance (ECR) of stainless steel arises as a result of the dense protective oxide layer and limits its functionality in applications as electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. 
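The chromium, nickel, and molybdenum rules of thumb mentioned in the corrosion-resistance discussion above can be gathered into a small illustrative sketch (Python). The function and composition dictionary are hypothetical constructs for this example only; real grade selection depends on far more than these three elements:

```python
def corrosion_resistance_notes(composition: dict) -> list:
    """Check a composition (wt% keyed by element symbol) against the rules of
    thumb quoted in this article. Purely illustrative; not a selection tool."""
    cr = composition.get("Cr", 0.0)
    ni = composition.get("Ni", 0.0)
    mo = composition.get("Mo", 0.0)
    notes = []
    notes.append("stainless (>= 10.5% Cr, passive film can form)" if cr >= 10.5
                 else "not stainless (below ~10.5% Cr)")
    if cr > 11:
        notes.append("extra corrosion resistance from chromium above 11%")
    if ni >= 8:
        notes.append("nickel at 8% or more further improves corrosion resistance")
    if mo > 0:
        notes.append("molybdenum addition improves resistance to pitting corrosion")
    return notes

# Example: a nominal 18/8 austenitic composition with a small molybdenum addition
print(corrosion_resistance_notes({"Cr": 18.0, "Ni": 8.0, "Mo": 2.0}))
```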
Nevertheless, stainless steel connectors are employed in situations where ECR poses a lower design criteria and corrosion resistance is required, for example in high temperatures and oxidizing environments. Magnetism Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications require non-magnetic materials, such as magnetic resonance imaging. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself. Wear Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminum and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall. Density The density of stainless steel ranges from depending on the alloy. History The invention of stainless steel followed a series of scientific developments, starting in 1798 when chromium was first shown to the French Academy by Louis Vauquelin. In the early 1800s, British scientists James Stoddart, Michael Faraday, and Robert Mallet observed the resistance of chromium-iron alloys ("chromium steels") to oxidizing agents. Robert Bunsen discovered chromium's resistance to strong acids. The corrosion resistance of iron-chromium alloys may have been first recognized in 1821 by Pierre Berthier, who noted their resistance against attack by some acids and suggested their use in cutlery. In the 1840s, both Britain's Sheffield steelmakers and then Krupp of Germany were producing chromium steel with the latter employing it for cannons in the 1850s. In 1861, Robert Forester Mushet took out a patent on chromium steel in Britain. These events led to the first American production of chromium-containing steel by J. Baur of the Chrome Steel Works of Brooklyn for the construction of bridges. A US patent for the product was issued in 1869. 
This was followed with recognition of the corrosion resistance of chromium alloys by Englishmen John T. Woods and John Clark, who noted ranges of chromium from 5–30%, with added tungsten and "medium carbon". They pursued the commercial value of the innovation via a British patent for "Weather-Resistant Alloys". Scientists researching steel corrosion in the second half of the 19th century didn't pay attention to the amount of carbon in the alloyed steels they were testing until in 1898 Adolphe Carnot and E. Goutal noted that chromium steels better resist to oxidation with acids the less carbon they contain. Also in the late 1890s, German chemist Hans Goldschmidt developed an aluminothermic (thermite) process for producing carbon-free chromium. Between 1904 and 1911, several researchers, particularly Leon Guillet of France, prepared alloys that would be considered stainless steel today. In 1908, the Essen firm Friedrich Krupp Germaniawerft built the 366-ton sailing yacht Germania featuring a chrome-nickel steel hull, in Germany. In 1911, Philip Monnartz reported on the relationship between chromium content and corrosion resistance. On 17 October 1912, Krupp engineers Benno Strauss and Eduard Maurer patented as Nirosta the austenitic stainless steel known today as 18/8 or AISI type 304. Similar developments were taking place in the United States, where Christian Dantsizen of General Electric and Frederick Becket (1875–1942) at Union Carbide were industrializing ferritic stainless steel. In 1912, Elwood Haynes applied for a US patent on a martensitic stainless steel alloy, which was not granted until 1919. Harry Brearley While seeking a corrosion-resistant alloy for gun barrels in 1913, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, discovered and subsequently industrialized a martensitic stainless steel alloy, today known as AISI type 420. The discovery was announced two years later in a January 1915 newspaper article in The New York Times. The metal was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy for the Savoy Hotel in London in 1929. Brearley applied for a US patent during 1915 only to find that Haynes had already registered one. Brearley and Haynes pooled their funding and, with a group of investors, formed the American Stainless Steel Corporation, with headquarters in Pittsburgh, Pennsylvania. Rustless steel Brearley initially called his new alloy "rustless steel". The alloy was sold in the US under different brand names like "Allegheny metal" and "Nirosta steel". Even within the metallurgy industry, the name remained unsettled; in 1921, one trade journal called it "unstainable steel". Brearley worked with a local cutlery manufacturer, who gave it the name "stainless steel". As late as 1932, Ford Motor Company continued calling the alloy "rustless steel" in automobile promotional materials. In 1929, before the Great Depression, over 25,000 tons of stainless steel were manufactured and sold in the US annually. 
Major technological advances in the 1950s and 1960s allowed the production of large tonnages at an affordable cost: the AOD process (argon oxygen decarburization) for the removal of carbon and sulfur; continuous casting and hot strip rolling; the Z-Mill, or Sendzimir cold rolling mill; and the Creusot-Loire Uddeholm (CLU) and related processes, which use steam instead of some or all of the argon. Families Stainless steel is classified into five different "families" of alloys, each having a distinct set of attributes. Four of the families are defined by their predominant crystalline structure: the austenitic, ferritic, martensitic, and duplex alloys. The fifth family, precipitation hardening, is defined by the type of heat treatment used to develop its properties. Austenitic Austenitic stainless steel is the largest family of stainless steels, making up about two-thirds of all stainless steel production. They have a face-centered cubic crystal structure. This microstructure is achieved by alloying steel with sufficient nickel, manganese, or nitrogen to maintain an austenitic microstructure at all temperatures, ranging from the cryogenic region to the melting point. Thus, austenitic stainless steels are not hardenable by heat treatment since they possess the same microstructure at all temperatures. Austenitic stainless steels consist of two subfamilies: 200 series are chromium-manganese-nickel alloys that maximize the use of manganese and nitrogen to minimize the use of nickel. Due to their nitrogen addition, they possess approximately 50% higher yield strength than 300-series stainless steels. Representative alloys include Type 201 and Type 202. 300 series are chromium-nickel alloys that achieve their austenitic microstructure almost exclusively by nickel alloying; some very highly alloyed grades include some nitrogen to reduce nickel requirements. 300 series is the largest group and the most widely used. Representative alloys include Type 304 and Type 316. Ferritic Ferritic stainless steels have a body-centered cubic crystal structure, are magnetic, and are hardenable by cold working, but not by heat treating. They contain between 10.5% and 27% chromium with very little or no nickel. Due to the near-absence of nickel, they are less expensive than austenitic stainless steels. Representative alloys include Type 409, Type 429, Type 430, and Type 446. Ferritic stainless steels are present in many products, which include: automobile exhaust pipes; architectural and structural applications; building components, such as slate hooks, roofing, and chimney ducts; and power plates in solid oxide fuel cells operating at temperatures around . Martensitic Martensitic stainless steels have a body-centered tetragonal crystal structure, are magnetic, and are hardenable by heat treating and by cold working. They offer a wide range of properties and are used as stainless engineering steels, stainless tool steels, and creep-resistant steels. They are not as corrosion-resistant as ferritic and austenitic stainless steels due to their low chromium content. They fall into four categories (with some overlap): Fe-Cr-C grades. These were the first grades used and are still widely used in engineering and wear-resistant applications. Representative grades include Type 410, Type 420, and Type 440C. Fe-Cr-Ni-C grades. Some carbon is replaced by nickel. They offer higher toughness and higher corrosion resistance. Representative grades include Type 431. Martensitic precipitation hardening grades. 
17-4 PH (UNS S17400), the best-known grade, combines martensitic hardening and precipitation hardening to increase strength and toughness. Creep-resisting grades. Small additions of niobium, vanadium, boron, and cobalt increase the strength and creep resistance up to about . Martensitic stainless steels can be heat treated to provide better mechanical properties. The heat treatment typically involves three steps: Austenitizing, in which the steel is heated to a temperature in the range , depending on grade. The resulting austenite has a face-centered cubic crystal structure. Quenching. The austenite is transformed into martensite, a hard body-centered tetragonal crystal structure. The quenched martensite is very hard and too brittle for most applications. Some residual austenite may remain. Tempering. Martensite is heated to around , held at temperature, then air-cooled. Higher tempering temperatures decrease yield strength and ultimate tensile strength but increase the elongation and impact resistance. Duplex Duplex stainless steels have a mixed microstructure of austenite and ferrite, the ideal ratio being a 50:50 mix, though commercial alloys may have ratios of 40:60. They are characterized by higher chromium (19–32%) and molybdenum (up to 5%) and lower nickel contents than austenitic stainless steels. Duplex stainless steels have roughly twice the yield strength of austenitic stainless steel. Their mixed microstructure provides improved resistance to chloride stress corrosion cracking in comparison to austenitic stainless steel types 304 and 316. Duplex grades are usually divided into three sub-groups based on their corrosion resistance: lean duplex, standard duplex, and super duplex. The properties of duplex stainless steels are achieved with an overall lower alloy content than similar-performing super-austenitic grades, making their use cost-effective for many applications. The pulp and paper industry was one of the first to extensively use duplex stainless steel. Today, the oil and gas industry is the largest user and has pushed for more corrosion-resistant grades, leading to the development of super duplex and hyper duplex grades. More recently, the less expensive (and slightly less corrosion-resistant) lean duplex has been developed, chiefly for structural applications in building and construction (concrete reinforcing bars, plates for bridges, coastal works) and in the water industry. Precipitation hardening Precipitation hardening stainless steels are characterized by the ability to be precipitation hardened to higher strength. There are three types of precipitation hardening stainless steels, which are classified according to their crystalline structure: Martensitic precipitation hardenable stainless steels are martensitic at room temperature in both the solution annealed and precipitation hardened conditions. Representative alloys include 17-4 PH (UNS S17400), 15-5 PH (UNS S15500), Custom 450 (UNS S45000) and Custom 465 (UNS S46500). Semi-austenitic precipitation hardenable stainless steels are initially austenitic in the solution annealed condition for ease of fabrication, but are subsequently transformed to martensite to provide higher strength and to be precipitation hardened. Representative alloys include 17-7 PH (UNS S17700), 15-7 PH (UNS S15700), AM-350 (UNS S35000), and AM-355 (UNS S35500). Austenitic precipitation hardenable stainless steels are austenitic at room temperature in both the solution annealed and precipitation hardened conditions. 
Representative alloys include A-286 (UNS S66286) and Discalloy (UNS S66220). Classification systems Several different classification systems have been developed for designating stainless steels. The main system used in the United States has been the SAE steel grades numbering system. The SAE numbering system designates stainless steels by "Type" followed by a three-digit number and sometimes a letter suffix. A newer system that was jointly developed by ASTM and SAE in 1974 is the Unified Numbering System for Metals and Alloys (UNS). The Unified Numbering System classifies stainless steels using an alphanumeric identifier consisting of "S" followed by five digits, although some austenitic stainless steels with high nickel content may fall into the nickel-base designation, which uses "N" as the alpha identifier. The UNS designations incorporate previously used designations, whether from the SAE numbering system or proprietary alloy designations. Europe has adopted EN 10088 for classification of stainless steels. Corrosion resistance Unlike carbon steel, stainless steels do not suffer uniform corrosion when exposed to wet environments. Unprotected carbon steel rusts readily when exposed to a combination of air and moisture. The resulting iron oxide surface layer is porous and fragile. In addition, as iron oxide occupies a larger volume than the original steel, this layer expands and tends to flake and fall away, exposing the underlying steel to further attack. In comparison, stainless steels contain sufficient chromium to undergo passivation, spontaneously forming a microscopically thin inert surface film of chromium oxide by reaction with the oxygen in the air and even the small amount of dissolved oxygen in the water. This passive film prevents further corrosion by blocking oxygen diffusion to the steel surface and thus prevents corrosion from spreading into the bulk of the metal. This film is self-repairing, even when scratched or temporarily disturbed by conditions that exceed the inherent corrosion resistance of that grade. The resistance of this film to corrosion depends upon the chemical composition of the stainless steel, chiefly the chromium content. It is customary to distinguish between four forms of corrosion: uniform, localized (pitting), galvanic, and SCC (stress corrosion cracking). Any of these forms of corrosion can occur when the grade of stainless steel is not suited for the working environment. Uniform Uniform corrosion takes place in very aggressive environments, typically where chemicals are produced or heavily used, such as in the pulp and paper industries. The entire surface of the steel is attacked, and the corrosion is expressed as a corrosion rate in mm/year (usually less than 0.1 mm/year is acceptable for such cases). Corrosion tables provide guidelines. This is typically the case when stainless steels are exposed to acidic or basic solutions. Whether stainless steel corrodes depends on the kind and concentration of acid or base and the solution temperature. Uniform corrosion is typically easy to avoid because of extensive published corrosion data or easily performed laboratory corrosion testing. Acidic solutions can be put into two general categories: reducing acids, such as hydrochloric acid and dilute sulfuric acid, and oxidizing acids, such as nitric acid and concentrated sulfuric acid. 
Increasing chromium and molybdenum content provides increased resistance to reducing acids, while increasing chromium and silicon content provides increased resistance to oxidizing acids. Sulfuric acid is one of the most-produced industrial chemicals. At room temperature, type 304 stainless steel is only resistant to 3% acid, while type 316 is resistant to 3% acid up to and 20% acid at room temperature. Thus type 304 SS is rarely used in contact with sulfuric acid. Type 904L and Alloy 20 are resistant to sulfuric acid at even higher concentrations above room temperature. Concentrated sulfuric acid possesses oxidizing characteristics like nitric acid, and thus silicon-bearing stainless steels are also useful. Hydrochloric acid damages any kind of stainless steel and should be avoided. All types of stainless steel resist attack from phosphoric acid and nitric acid at room temperature. At high concentrations and elevated temperatures, attack will occur, and higher-alloy stainless steels are required. In general, organic acids are less corrosive than mineral acids such as hydrochloric and sulfuric acid. Type 304 and type 316 stainless steels are unaffected by weak bases such as ammonium hydroxide, even in high concentrations and at high temperatures. The same grades exposed to stronger bases such as sodium hydroxide at high concentrations and high temperatures will likely experience some etching and cracking. Increasing chromium and nickel contents provide increased resistance. All grades resist damage from aldehydes and amines, though in the latter case type 316 is preferable to type 304; cellulose acetate damages type 304 unless the temperature is kept low. Fats and fatty acids only affect type 304 at temperatures above and type 316 SS above , while type 317 SS is unaffected at all temperatures. Type 316L is required for the processing of urea. Localized Localized corrosion can occur in several ways, e.g. pitting corrosion and crevice corrosion. These localized attacks are most common in the presence of chloride ions. Higher chloride levels require more highly alloyed stainless steels. Localized corrosion can be difficult to predict because it is dependent on many factors, including: Chloride ion concentration. Even when chloride solution concentration is known, it is still possible for localized corrosion to occur unexpectedly. Chloride ions can become unevenly concentrated in certain areas, such as in crevices (e.g. under gaskets) or on surfaces in vapor spaces due to evaporation and condensation. Temperature: increasing temperature increases susceptibility. Acidity: increasing acidity increases susceptibility. Stagnation: stagnant conditions increase susceptibility. Oxidizing species: the presence of oxidizing species, such as ferric and cupric ions, increases susceptibility. Pitting corrosion is considered the most common form of localized corrosion. The corrosion resistance of stainless steels to pitting corrosion is often expressed by the pitting resistance equivalent number (PREN), obtained through the formula PREN = %Cr + 3.3 × %Mo + 16 × %N, where the terms correspond to the proportion of the contents by mass of chromium, molybdenum, and nitrogen in the steel. For example, if the steel consisted of 15% chromium, %Cr would be equal to 15. The higher the PREN, the higher the pitting corrosion resistance. Thus, increasing chromium, molybdenum, and nitrogen contents provide better resistance to pitting corrosion. 
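To make the arithmetic concrete, the PREN formula above can be evaluated with a few lines of Python. This is a minimal sketch: the alloy compositions used below are nominal, illustrative figures, not specification values.

```python
def pren(cr: float, mo: float, n: float) -> float:
    """Pitting resistance equivalent number, PREN = %Cr + 3.3*%Mo + 16*%N,
    with the contents given as percentages by mass."""
    return cr + 3.3 * mo + 16.0 * n

# Nominal, illustrative compositions (weight %), not specification values.
alloys = {
    "Type 304 (about 18% Cr)": (18.0, 0.0, 0.05),
    "Type 316 (about 17% Cr, 2.1% Mo)": (17.0, 2.1, 0.05),
    "Duplex 2205 (about 22% Cr, 3% Mo, 0.17% N)": (22.0, 3.0, 0.17),
}

for name, (cr, mo, n) in alloys.items():
    print(f"{name}: PREN = {pren(cr, mo, n):.1f}")
# The duplex grade scores highest, reflecting its better pitting resistance.
```

With these nominal figures the duplex grade comes out around 35, versus roughly 19 and 25 for types 304 and 316, which matches the general ordering of pitting resistance described above.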
Though the PREN of a given steel may be theoretically sufficient to resist pitting corrosion, crevice corrosion can still occur when poor design has created confined areas (overlapping plates, washer-plate interfaces, etc.) or when deposits form on the material. In these select areas, the PREN may not be high enough for the service conditions. Good design and fabrication techniques, appropriate alloy selection, and proper operating conditions (based on the concentration of active compounds present in the corrosive solution, its pH, etc.) can prevent such corrosion. Stress Stress corrosion cracking (SCC) is caused by a combination of tensile stress and a corrosive environment and can lead to unexpected and sudden failure of a stainless steel component. It may occur when three conditions are met: The part contains either applied or residual tensile stresses. The part is in a corrosive environment. The stainless steel is susceptible to SCC. SCC can be prevented by eliminating one of these three conditions. The SCC mechanism results from the following sequence of events: Pitting occurs. Cracks start from a pit initiation site. Cracks then propagate through the metal in a transgranular or intergranular mode. Failure occurs. Galvanic Galvanic corrosion (also called "dissimilar-metal corrosion") refers to corrosion damage induced when two dissimilar materials are coupled in a corrosive electrolyte. The most common electrolyte is water, ranging from freshwater to seawater. When a galvanic couple forms, one of the metals in the couple becomes the anode and corrodes faster than it would alone, while the other becomes the cathode and corrodes slower than it would alone. Stainless steel, due to having a more positive electrode potential than, for example, carbon steel and aluminium, becomes the cathode, accelerating the corrosion of the anodic metal. An example is the corrosion of aluminium rivets fastening stainless steel sheets in contact with water. The relative surface areas of the anode and the cathode are important in determining the rate of corrosion. In the above example, the surface area of the rivets is small compared to that of the stainless steel sheet, resulting in rapid corrosion. However, if stainless steel fasteners are used to assemble aluminium sheets, galvanic corrosion will be much slower because the galvanic current density on the aluminium surface will be many orders of magnitude smaller. A frequent mistake is to assemble stainless steel plates with carbon steel fasteners; whereas using stainless steel to fasten carbon-steel plates is usually acceptable, the reverse is not. Providing electrical insulation between the dissimilar metals, where possible, is effective at preventing this type of corrosion. High-temperature At elevated temperatures, all metals react with hot gases. The most common high-temperature gaseous mixture is air, of which oxygen is the most reactive component. To avoid corrosion in air, carbon steel is limited to approximately . Oxidation resistance in stainless steels increases with additions of chromium, silicon, and aluminium. Small additions of cerium and yttrium increase the adhesion of the oxide layer on the surface. The addition of chromium remains the most common method to increase high-temperature corrosion resistance in stainless steels; chromium reacts with oxygen to form a chromium oxide scale, which reduces oxygen diffusion into the material. 
The minimum 10.5% chromium in stainless steels provides resistance to approximately , while 16% chromium provides resistance up to approximately . Type 304, the most common grade of stainless steel with 18% chromium, is resistant to approximately . Other gases, such as sulfur dioxide, hydrogen sulfide, carbon monoxide, chlorine, also attack stainless steel. Resistance to other gases is dependent on the type of gas, the temperature, and the alloying content of the stainless steel. With the addition of up to 5% aluminium, ferritic grades Fe-Cr-Al are designed for electrical resistance and oxidation resistance at elevated temperatures. Such alloys include Kanthal, produced in the form of wire or ribbons. Standard finishes Standard mill finishes can be applied to flat rolled stainless steel directly by the rollers and by mechanical abrasives. Steel is first rolled to size and thickness and then annealed to change the properties of the final material. Any oxidation that forms on the surface (mill scale) is removed by pickling, and a passivation layer is created on the surface. A final finish can then be applied to achieve the desired aesthetic appearance. The following designations are used in the U.S. to describe stainless steel finishes by ASTM A480/A480M-18 (DIN): No. 0: Hot-rolled, annealed, thicker plates No. 1 (1D): Hot-rolled, annealed and passivated No. 2D (2D): Cold rolled, annealed, pickled and passivated No. 2B (2B): Same as above with additional pass through highly polished rollers No. 2BA (2R): Bright annealed (BA or 2R) same as above then bright annealed under oxygen-free atmospheric condition No. 3 (G-2G:) Coarse abrasive finish applied mechanically No. 4 (1J-2J): Brushed finish No. 5: Satin finish No. 6 (1K-2K): Matte finish (brushed but smoother than #4) No. 7 (1P-2P): Reflective finish No. 8: Mirror finish No. 9: Bead blast finish No. 10: Heat colored finish – offering a wide range of electropolished and heat colored surfaces Joining A wide range of joining processes are available for stainless steels, though welding is by far the most common. The ease of welding largely depends on the type of stainless steel used. Austenitic stainless steels are the easiest to weld by electric arc, with weld properties similar to those of the base metal (not cold-worked). Martensitic stainless steels can also be welded by electric-arc but, as the heat-affected zone (HAZ) and the fusion zone (FZ) form martensite upon cooling, precautions must be taken to avoid cracking of the weld. Improper welding practices can additionally cause sugaring (oxide scaling) and heat tint on the backside of the weld. This can be prevented with the use of back-purging gases, backing plates, and fluxes. Post-weld heat treatment is almost always required while preheating before welding is also necessary in some cases. Electric arc welding of type 430 ferritic stainless steel results in grain growth in the HAZ, which leads to brittleness. This has largely been overcome with stabilized ferritic grades, where niobium, titanium, and zirconium form precipitates that prevent grain growth. Duplex stainless steel welding by electric arc is a common practice but requires careful control of the process parameters. Otherwise, the precipitation of unwanted intermetallic phases occurs, which reduces the toughness of the welds. 
Electric arc welding processes include: Gas metal arc welding, also known as MIG/MAG welding Gas tungsten arc welding, also known as tungsten inert gas (TIG) welding Plasma arc welding Flux-cored arc welding Shielded metal arc welding (covered electrode) Submerged arc welding MIG, MAG and TIG welding are the most common methods. Other welding processes include: Stud welding Resistance spot welding Resistance seam welding Flash welding Laser beam welding Oxy-acetylene welding Stainless steel may be bonded with adhesives such as silicone, silyl modified polymers, and epoxies. Acrylic and polyurethane adhesives are also used in some situations. Production Most of the world's stainless steel production is produced by the following processes: Electric arc furnace (EAF): stainless steel scrap, other ferrous scrap, and ferrous alloys (Fe Cr, Fe Ni, Fe Mo, Fe Si) are melted together. The molten metal is then poured into a ladle and transferred into the AOD process (see below). Argon oxygen decarburization (AOD): carbon in the molten steel is removed (by turning it into carbon monoxide gas) and other compositional adjustments are made to achieve the desired chemical composition. Continuous casting (CC): the molten metal is solidified into slabs for flat products (a typical section is thick and wide) or blooms (sections vary widely but is the average size). Hot rolling (HR): slabs and blooms are reheated in a furnace and hot-rolled. Hot rolling reduces the thickness of the slabs to produce about -thick coils. Blooms, on the other hand, are hot-rolled into bars, which are cut into lengths at the exit of the rolling mill, or wire rod, which is coiled. Cold finishing (CF) depends on the type of product being finished: Hot-rolled coils are pickled in acid solutions to remove the oxide scale on the surface, then subsequently cold rolled in Sendzimir rolling mills and annealed in a protective atmosphere until the desired thickness and surface finish is obtained. Further operations such as slitting and tube forming can be performed in downstream facilities. Hot-rolled bars are straightened, then machined to the required tolerance and finish. Wire rod coils are subsequently processed to produce cold-finished bars on drawing benches, fasteners on boltmaking machines, and wire on single or multipass drawing machines. World stainless steel production figures are published yearly by the International Stainless Steel Forum. Of the EU production figures, Italy, Belgium and Spain were notable, while Canada and Mexico produced none. China, Japan, South Korea, Taiwan, India the US and Indonesia were large producers while Russia reported little production. Breakdown of production by stainless steels families in 2017: Austenitic stainless steels Cr-Ni (also called 300-series, see "Grades" section above): 54% Austenitic stainless steels Cr-Mn (also called 200-series): 21% Ferritic and martensitic stainless steels (also called 400-series): 23% Applications Stainless steel is used in a multitude of fields including architecture, art, chemical engineering, food and beverage manufacture, vehicles, medicine, energy and firearms. Life cycle cost Life cycle cost (LCC) calculations are used to select the design and the materials that will lead to the lowest cost over the whole life of a project, such as a building or a bridge. 
The formula, in a simple form, is the following: LCC = AC + IC + Σ (OC + LP + RC) / (1 + i)^n, with the sum taken over the years n = 1 to N, where LCC is the overall life cycle cost, AC is the acquisition cost, IC the installation cost, OC the operating and maintenance costs, LP the cost of lost production due to downtime, and RC the replacement materials cost. In addition, N is the planned life of the project, i the interest rate, and n the year in which a particular OC or LP or RC is taking place. The interest rate (i) is used to convert expenses from different years to their present value (a method widely used by banks and insurance companies) so they can be added and compared fairly. The usage of the sum formula (Σ) captures the fact that expenses over the lifetime of a project must be accumulated after they are corrected for the interest rate. Application of LCC in materials selection Stainless steel used in projects often results in lower LCC values compared to other materials. The higher acquisition cost (AC) of stainless steel components is often offset by improvements in operating and maintenance costs, reduced loss of production (LP) costs, and the higher resale value of stainless steel components. LCC calculations are usually limited to the project itself. However, there may be other costs that a project stakeholder may wish to consider: Utilities, such as power plants, water supply & wastewater treatment, and hospitals, cannot be shut down. Any maintenance will require extra costs associated with continuing service. Indirect societal costs (with possible political fallout) may be incurred in some situations such as closing or reducing traffic on bridges, creating queues, delays, loss of working hours to the people, and increased pollution by idling vehicles. Sustainability – recycling and reuse The average carbon footprint of stainless steel (all grades, all countries) is estimated to be 2.90 kg of CO2 per kg of stainless steel produced, of which 1.92 kg are emissions from raw materials (Cr, Ni, Mo), 0.54 kg from electricity and steam, and 0.44 kg are direct emissions (i.e., by the stainless steel plant). Note that stainless steel produced in countries that use cleaner sources of electricity (such as France, which uses nuclear energy) will have a lower carbon footprint. Ferritics without Ni will have a lower CO2 footprint than austenitics with 8% Ni or more. Carbon footprint must not be the only sustainability-related factor for deciding the choice of materials: Over any product life, maintenance, repairs or early end of life (planned obsolescence) can increase its overall footprint far beyond initial material differences. In addition, loss of service (typically for bridges) may induce large hidden costs, such as queues, wasted fuel, and loss of man-hours. How much material is used to provide a given service varies with the performance, particularly the strength level, which allows lighter structures and components. Stainless steel is 100% recyclable. An average stainless steel object is composed of about 60% recycled material, of which approximately 40% originates from end-of-life products, while the remaining 60% comes from manufacturing processes. What prevents a higher recycled content is the limited availability of stainless steel scrap, in spite of a very high recycling rate. According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of stainless steel in use in society is in more developed countries and in less-developed countries. There is a secondary market that recycles usable scrap for many stainless steel markets. 
The product is mostly coil, sheet, and blanks. This material is purchased at a less-than-prime price and sold to commercial quality stampers and sheet metal houses. The material may have scratches, pits, and dents but is made to the current specifications. The stainless steel cycle starts with carbon steel scrap, primary metals, and slag. The next step is the production of hot-rolled and cold-finished steel products in steel mills. Some scrap is produced, which is directly reused in the melting shop. The manufacturing of components is the third step. Some scrap is produced and enters the recycling loop. Assembly of final goods and their use does not generate any material loss. The fourth step is the collection of stainless steel for recycling at the end of life of the goods (such as kitchenware, pulp and paper plants, or automotive parts). This is where it is most difficult to get stainless steel to enter the recycling loop, as shown in the table below: Nanoscale stainless steel Stainless steel nanoparticles have been produced in the laboratory. These may have applications as additives for high-performance applications. For example, sulfurization, phosphorization, and nitridation treatments to produce nanoscale stainless steel based catalysts could enhance the electrocatalytic performance of stainless steel for water splitting. Health effects There is extensive research indicating some probable increased risk of cancer (particularly lung cancer) from inhaling fumes while welding stainless steel. Stainless steel welding is suspected of producing carcinogenic fumes from cadmium oxides, nickel, and chromium. According to Cancer Council Australia, "In 2017, all types of welding fumes were classified as a Group 1 carcinogen." Stainless steel is generally considered to be biologically inert. However, during cooking, small amounts of nickel and chromium leach out of new stainless steel cookware into highly acidic food. Nickel can contribute to cancer risks—particularly lung cancer and nasal cancer. However, no connection between stainless steel cookware and cancer has been established.
Physical sciences
Specific alloys
null
27065
https://en.wikipedia.org/wiki/Standardization
Standardization
Standardization (American English) or standardisation (British English) is the process of implementing and developing technical standards based on the consensus of different parties that include firms, users, interest groups, standards organizations and governments. Standardization can help maximize compatibility, interoperability, safety, repeatability, efficiency, and quality. It can also facilitate a normalization of formerly custom processes. In social sciences, including economics, the idea of standardization is close to the solution for a coordination problem, a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. Divergent national standards impose costs on consumers and can be a form of non-tariff trade barrier. History Early examples Standard weights and measures were developed by the Indus Valley civilization. The centralized weight and measure system served the commercial interest of Indus merchants as smaller weight measures were used to measure luxury goods while larger weights were employed for buying bulkier items, such as food grains etc. Weights existed in multiples of a standard weight and in categories. Technical standardisation enabled gauging devices to be effectively used in angular measurement and measurement for construction. Uniform units of length were used in the planning of towns such as Lothal, Surkotada, Kalibangan, Dolavira, Harappa, and Mohenjo-daro. The weights and measures of the Indus civilization also reached Persia and Central Asia, where they were further modified. Shigeo Iwata describes the excavated weights unearthed from the Indus civilization: 18th century attempts The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800. This allowed for the standardization of screw thread sizes for the first time and paved the way for the practical application of interchangeability (an idea that was already taking hold) to nuts and bolts. Before this, screw threads were usually made by chipping and filing (that is, with skilled freehand use of chisels and files). Nuts were rare; metal screws, when made at all, were usually for use in wood. Metal bolts passing through wood framing to a metal fastening on the other side were usually fastened in non-threaded ways (such as clinching or upsetting against a washer). Maudslay standardized the screw threads used in his workshop and produced sets of taps and dies that would make nuts and bolts consistently to those standards, so that any bolt of the appropriate size would fit any nut of the same size. This was a major advance in workshop technology. National standard Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries. Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth, and was widely adopted in other countries. This new standard specified a 55° thread angle and a thread depth of 0.640327p and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. 
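The Whitworth proportions quoted above (a 55° thread angle with depth 0.640327p and radius 0.137329p for pitch p) lend themselves to a small helper function. The sketch below is only illustrative, and the example pitch is chosen purely for demonstration.

```python
def whitworth_dimensions(pitch: float) -> dict:
    """Basic Whitworth thread proportions for a given pitch p, using the
    constants from the standard: depth = 0.640327*p, radius = 0.137329*p."""
    return {
        "flank_angle_deg": 55.0,
        "thread_depth": 0.640327 * pitch,
        "crest_radius": 0.137329 * pitch,
    }

# Illustrative example: a 12 threads-per-inch thread has a pitch of 1/12 inch.
pitch_in = 1.0 / 12.0
dims = whitworth_dimensions(pitch_in)
print(f"pitch = {pitch_in:.4f} in, depth = {dims['thread_depth']:.4f} in, "
      f"radius = {dims['crest_radius']:.4f} in")
```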
An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats. These were the first instance of "mass-production" techniques being applied to marine engineering. With the adoption of BSW by British railway lines, many of which had previously used their own standard both for threads and for bolt head and nut profiles, and improving manufacturing techniques, it came to dominate British manufacturing. American Unified Coarse was originally based on almost the same imperial fractions. The Unified thread angle is 60° and has flattened crests (Whitworth crests are rounded). Thread pitch is the same in both systems except that the thread pitch for the  in. (inch) bolt is 12 threads per inch (tpi) in BSW versus 13 tpi in the UNC. National standards body By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers generally specify such unnecessarily diverse types of sectional material for given work that anything like economical and continuous manufacture becomes impossible. In this country no two professional men are agreed upon the size and weight of a girder to employ for given work." The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country, and enabled the markets to act more rationally and efficiently, with an increased level of cooperation. After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung (DIN) was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918. Regional standards organization At a regional level (e.g. Europe, the Americas, Africa, etc.) or at subregional level (e.g. Mercosur, Andean Community, South East Asia, South East Africa, etc.), several Regional Standardization Organizations exist (see also Standards Organization). The three regional standards organizations in Europe – European Standardization Organizations (ESOs), recognised by the EU Regulation on Standardization (Regulation (EU) 1025/2012) – are CEN, CENELEC and ETSI. CEN develops standards for numerous kinds of products, materials, services and processes. Some sectors covered by CEN include transport equipment and services, chemicals, construction, consumer products, defence and security, energy, food and feed, health and safety, healthcare, the digital sector, machinery and services. The European Committee for Electrotechnical Standardization (CENELEC) is the European Standardization organization developing standards in the electrotechnical area and corresponding to the International Electrotechnical Commission (IEC) in Europe. International standards The first modern international organization (intergovernmental organization), the International Telegraph Union (now the International Telecommunication Union), was created in 1865 to set international standards in order to connect national telegraph networks, as a merger of two predecessor organizations (Bern and Paris treaties) that had similar objectives, but in more limited territories. 
With the advent of radiocommunication soon after the creation, the work of the ITU quickly expanded from the standardization of Telegraph communications, to developing standards for telecommunications in general. International Standards Associations By the mid to late 19th century, efforts were being made to standardize electrical measurement. Lord Kelvin was an important figure in this process, introducing accurate methods and apparatus for measuring electricity. In 1857, he introduced a series of effective instruments, including the quadrant electrometer, which cover the entire field of electrostatic measurement. He invented the current balance, also known as the Kelvin balance or Ampere balance (SiC), for the precise specification of the ampere, the standard unit of electric current. R. E. B. Crompton became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency, current and even the symbols used on circuit diagrams. Adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electric engineering. In 1904, Crompton represented Britain at the International Electrical Congress, held in connection with Louisiana Purchase Exposition in Saint Louis as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906 his work was complete and he drew up a permanent constitution for the International Electrotechnical Commission. The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President. The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization (ISO); the new organization officially began operations in February 1947. In general, each country or economy has a single recognized National Standards Body (NSB). Examples include ABNT, AENOR (now called UNE, Spanish Association for Standardization), AFNOR, ANSI, BSI, DGN, DIN, IRAM, JISC, KATS, SABS, SAC, SCC, SIS. An NSB is likely the sole member from that economy in ISO. NSBs may be either public or private sector organizations, or combinations of the two. For example, the three NSBs of Canada, Mexico and the United States are respectively the Standards Council of Canada (SCC), the General Bureau of Standards (, DGN), and the American National Standards Institute (ANSI). 
SCC is a Canadian Crown Corporation, DGN is a governmental agency within the Mexican Ministry of Economy, and ANSI is a 501(c)(3) non-profit organization with members from both the private and public sectors. The determinants of whether an NSB for a particular economy is a public or private sector body may include the historical and traditional roles that the private sector fills in public affairs in that economy or the development stage of that economy. Usage Standards can be: de facto standards, which are followed by informal convention or dominant usage. de jure standards, which are part of legally binding contracts, laws or regulations. voluntary standards, which are published and available for people to consider for use. The existence of a published standard does not necessarily imply that it is useful or correct. Just because an item is stamped with a standard number does not, by itself, indicate that the item is fit for any particular use. The people who use the item or service (engineers, trade unions, etc.) or specify it (building codes, government, industry, etc.) have the responsibility to consider the available standards, specify the correct one, enforce compliance, and use the item correctly: validation and verification. To avoid the proliferation of industry standards, also referred to as private standards, regulators in the United States are instructed by their government offices to adopt "voluntary consensus standards" before relying upon "industry standards" or developing "government standards". Regulatory authorities can reference voluntary consensus standards to translate internationally accepted criteria into public policy. Information exchange In the context of information exchange, standardization refers to the process of developing standards for specific business processes using specific formal languages. These standards are usually developed in voluntary consensus standards bodies such as the United Nations Center for Trade Facilitation and Electronic Business (UN/CEFACT), the World Wide Web Consortium (W3C), the Telecommunications Industry Association (TIA), and the Organization for the Advancement of Structured Information Standards (OASIS). There are many specifications that govern the operation and interaction of devices and software on the Internet, which do not use the term "standard" in their names. The W3C, for example, publishes "Recommendations", and the IETF publishes "Requests for Comments" (RFCs). Nevertheless, these publications are often referred to as "standards", because they are the products of regular standardization processes. Environmental protection Standardized product certifications, such as those for organic food, buildings or possibly sustainable seafood, as well as standardized product safety evaluation and approval or disapproval procedures (e.g. regulation of chemicals, cosmetics and food safety), can protect the environment. This effect may depend on associated modified consumer choices, strategic product support/obstruction, requirements and bans, as well as their accordance with a scientific basis, the robustness and applicability of that scientific basis, whether adoption of the certifications is voluntary, and the socioeconomic context (systems of governance and the economy), with most certifications so far possibly being largely ineffective. 
Moreover, standardized scientific frameworks can enable evaluation of levels of environmental protection, such as of marine protected areas, and serve as potentially evolving guides for improving, planning and monitoring the quality, scope and extent of protection. Moreover, technical standards could decrease electronic waste and reduce resource needs, for example by requiring (or enabling) products to be interoperable, compatible (with other products, infrastructures, environments, etc.), durable, energy-efficient, modular, upgradeable/repairable and recyclable, and to conform to versatile, optimal standards and protocols. Such standardization is not limited to the domain of electronic devices like smartphones and phone chargers but could also be applied to, for example, the energy infrastructure. Policy-makers could develop policies "fostering standard design and interfaces, and promoting the re-use of modules and components across plants to develop more sustainable energy infrastructure". Computers and the Internet are some of the tools that could be used to increase practicability and reduce suboptimal results, detrimental standards and bureaucracy, which is often associated with traditional processes and results of standardization. Taxes and subsidies, and funding of research and development, could be used complementarily. Standardized measurement is used in monitoring, reporting and verification frameworks of environmental impacts, usually of companies, for example to prevent underreporting of greenhouse gas emissions by firms. Product testing and analysis In routine product testing and product analysis, results can be reported using official or informal standards. This can be done to increase consumer protection and to ensure the safety, healthiness, efficiency, performance or sustainability of products. It can be carried out by the manufacturer, an independent laboratory, a government agency, a magazine or others on a voluntary or commissioned/mandated basis. Estimating the environmental impacts of food products in a standardized way – as has been done with a dataset of >57,000 food products in supermarkets – could, for example, be used to inform consumers or to shape policy. For example, such estimates may be useful for approaches using personal carbon allowances (or similar quotas) or for targeted alteration of overall costs. Safety Public information symbols Public information symbols (e.g. hazard symbols), especially when related to safety, are often standardized, sometimes on the international level. Biosafety Standardization is also used to ensure safe design and operation of laboratories and similar potentially dangerous workplaces, e.g. to ensure biosafety levels. There is research into microbiology safety standards used in clinical and research laboratories. Defense In the context of defense, standardization has been defined by NATO as "The development and implementation of concepts, doctrines, procedures and designs to achieve and maintain the required levels of compatibility, interchangeability or commonality in the operational, procedural, material, technical and administrative fields to attain interoperability." Ergonomics, workplace and health In some cases, standards are being used in the design and operation of workplaces and products that can impact consumers' health. Some such standards seek to ensure occupational safety and health and ergonomics. For example, chairs (see e.g. active sitting and steps of research) could potentially be designed and chosen using standards that may or may not be based on adequate scientific data. Standards could reduce the variety of products and lead to convergence on fewer broad designs – which can often be efficiently mass-produced via common shared automated procedures and instruments – or formulations deemed to be the most healthy, most efficient or best compromise between healthiness and other factors. Standardization is sometimes also used, or could be used, to ensure or increase consumer health protection beyond the workplace and ergonomics, such as through standards for food, food production, hygiene products, tap water, cosmetics, drugs/medicine, drink and dietary supplements, especially where robust scientific data suggest detrimental impacts on health (e.g. of ingredients) that are substitutable and not necessarily of consumer interest. Clothing Clinical assessment In the context of assessment, standardization may define how a measuring instrument or procedure is administered in the same way to every subject or patient. For example, an educational psychologist may adopt a structured interview to systematically interview the people concerned. By delivering the same procedures, all subjects are evaluated using the same criteria, minimising confounding variables that would reduce validity. Other examples include the mental status examination and personality tests. Social science In the context of social criticism and social science, standardization often means the process of establishing standards of various kinds and improving efficiency to handle people, their interactions, cases, and so forth. Examples include formalization of judicial procedure in court, and establishing uniform criteria for diagnosing mental disease. Standardization in this sense is often discussed along with (or synonymously to) such large-scale social changes as modernization, bureaucratization, homogenization, and centralization of society. Customer service In the context of customer service, standardization refers to the process of developing an international standard that enables organizations to focus on customer service, while at the same time providing recognition of success through a third party organization, such as the British Standards Institution. An international standard has been developed by The International Customer Service Institute. Supply and materials management In the context of supply chain management and materials management, standardization covers the process of specification and use of any item the company must buy in or make, allowable substitutions, and build or buy decisions. Process The process of standardization can itself be standardized. There are at least four levels of standardization: compatibility, interchangeability, commonality and reference. These standardization processes create compatibility, similarity, measurement, and symbol standards. There are typically four different techniques for standardization: Simplification or variety control Codification Value engineering Statistical process control. Types of standardization process: Emergence as de facto standard: tradition, market domination, etc. 
Written by a Standards organization: in a closed consensus process: Restricted membership and often having formal procedures for due-process among voting members in a full consensus process: usually open to all interested and qualified parties and with formal procedures for due-process considerations Written by a government or regulatory body Written by a corporation, union, trade association, etc. Agile standardization. A group of entities, themselves or through an association, creates and publishes a drafted version shared for public review based on actual examples of use. Effects Standardization has a variety of benefits and drawbacks for firms and consumers participating in the market, and on technology and innovation. Effect on firms The primary effect of standardization on firms is that the basis of competition is shifted from integrated systems to individual components within the system. Prior to standardization a company's product must span the entire system because individual components from different competitors are incompatible, but after standardization each company can focus on providing an individual component of the system. When the shift toward competition based on individual components takes place, firms selling tightly integrated systems must quickly shift to a modular approach, supplying other companies with subsystems or components. Effect on consumers Standardization has a variety of benefits for consumers, but one of the greatest benefits is enhanced network effects. Standards increase compatibility and interoperability between products, allowing information to be shared within a larger network and attracting more consumers to use the new technology, further enhancing network effects. Other benefits of standardization to consumers are reduced uncertainty, because consumers can be more certain that they are not choosing the wrong product, and reduced lock-in, because the standard makes it more likely that there will be competing products in the space. Consumers may also get the benefit of being able to mix and match components of a system to align with their specific preferences. Once these initial benefits of standardization are realized, further benefits that accrue to consumers as a result of using the standard are driven mostly by the quality of the technologies underlying that standard. Probably the greatest downside of standardization for consumers is lack of variety. There is no guarantee that the chosen standard will meet all consumers' needs or even that the standard is the best available option. Another downside is that if a standard is agreed upon before products are available in the market, then consumers are deprived of the penetration pricing that often results when rivals are competing to rapidly increase market share in an attempt to increase the likelihood that their product will become the standard. It is also possible that a consumer will choose a product based upon a standard that fails to become dominant. In this case, the consumer will have spent resources on a product that is ultimately less useful to him or her as the result of the standardization process. Effect on technology Much like the effect on consumers, the effect of standardization on technology and innovation is mixed. Meanwhile, the various links between research and standardization have been identified, also as a platform of knowledge transfer and translated into policy measures (e.g. WIPANO). 
Increased adoption of a new technology as a result of standardization is important because rival and incompatible approaches competing in the marketplace can slow or even kill the growth of the technology (a state known as market fragmentation). The shift to a modularized architecture as a result of standardization brings increased flexibility, rapid introduction of new products, and the ability to more closely meet individual customer's needs. The negative effects of standardization on technology have to do with its tendency to restrict new technology and innovation. Standards shift competition from features to price because the features are defined by the standard. The degree to which this is true depends on the specificity of the standard. Standardization in an area also rules out alternative technologies as options while encouraging others.
Technology
Basics_6
null
27114
https://en.wikipedia.org/wiki/Silicon
Silicon
Silicon is a chemical element; it has symbol Si and atomic number 14. It is a hard, brittle crystalline solid with a blue-grey metallic lustre, and is a tetravalent metalloid and semiconductor. It is a member of group 14 in the periodic table: carbon is above it; and germanium, tin, lead, and flerovium are below it. It is relatively unreactive. Silicon is a significant element that is essential for several physiological and metabolic processes in plants. Silicon is widely regarded as the predominant semiconductor material due to its versatile applications in various electrical devices such as transistors, solar cells, integrated circuits, and others. These may be due to its significant band gap, expansive optical transmission range, extensive absorption spectrum, surface roughening, and effective anti-reflection coating. Because of its high chemical affinity for oxygen, it was not until 1823 that Jöns Jakob Berzelius was first able to prepare it and characterize it in pure form. Its oxides form a family of anions known as silicates. Its melting and boiling points of 1414 °C and 3265 °C, respectively, are the second highest among all the metalloids and nonmetals, being surpassed only by boron. Silicon is the eighth most common element in the universe by mass, but very rarely occurs in its pure form in the Earth's crust. It is widely distributed throughout space in cosmic dusts, planetoids, and planets as various forms of silicon dioxide (silica) or silicates. More than 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass), after oxygen. Most silicon is used commercially without being separated, often with very little processing of the natural minerals. Such use includes industrial construction with clays, silica sand, and stone. Silicates are used in Portland cement for mortar and stucco, and mixed with silica sand and gravel to make concrete for walkways, foundations, and roads. They are also used in whiteware ceramics such as porcelain, and in traditional silicate-based soda–lime glass and many other specialty glasses. Silicon compounds such as silicon carbide are used as abrasives and components of high-strength ceramics. Silicon is the basis of the widely used synthetic polymers called silicones. The late 20th century to early 21st century has been described as the Silicon Age (also known as the Digital Age or Information Age) because of the large impact that elemental silicon has on the modern world economy. The small portion of very highly purified elemental silicon used in semiconductor electronics (<15%) is essential to the transistors and integrated circuit chips used in most modern technology such as smartphones and other computers. In 2019, 32.4% of the semiconductor market segment was for networks and communications devices, and the semiconductors industry is projected to reach $726.73 billion by 2027. Silicon is an essential element in biology. Only traces are required by most animals, but some sea sponges and microorganisms, such as diatoms and radiolaria, secrete skeletal structures made of silica. Silica is deposited in many plant tissues. History Owing to the abundance of silicon in the Earth's crust, natural silicon-based materials have been used for thousands of years. Silicon rock crystals were familiar to various ancient civilizations, such as the predynastic Egyptians who used it for beads and small vases, as well as the ancient Chinese. 
Glass containing silica had been manufactured by the Egyptians since at least 1500 BC, as well as by the ancient Phoenicians. Natural silicate compounds were also used in various types of mortar for construction of early human dwellings. Discovery In 1787, Antoine Lavoisier suspected that silica might be an oxide of a fundamental chemical element, but the chemical affinity of silicon for oxygen is high enough that he had no means to reduce the oxide and isolate the element. After an attempt to isolate silicon in 1808, Sir Humphry Davy proposed the name "silicium" for silicon, from the Latin silex, silicis for flint, and adding the "-ium" ending because he believed it to be a metal. Most other languages use transliterated forms of Davy's name, sometimes adapted to local phonology (e.g. German , Turkish , Catalan , Armenian or Silitzioum). A few others use instead a calque of the Latin root (e.g. Russian , from "flint"; Greek from "fire"; Finnish from "flint", Czech from "quartz", "flint"). Gay-Lussac and Thénard are thought to have prepared impure amorphous silicon in 1811, through the heating of recently isolated potassium metal with silicon tetrafluoride, but they did not purify and characterize the product, nor identify it as a new element. Silicon was given its present name in 1817 by Scottish chemist Thomas Thomson. He retained part of Davy's name but added "-on" because he believed that silicon was a nonmetal similar to boron and carbon. In 1824, Jöns Jacob Berzelius prepared amorphous silicon using approximately the same method as Gay-Lussac (reducing potassium fluorosilicate with molten potassium metal), but purifying the product to a brown powder by repeatedly washing it. As a result, he is usually given credit for the element's discovery. The same year, Berzelius became the first to prepare silicon tetrachloride (SiCl4); silicon tetrafluoride had already been prepared long before in 1771 by Carl Wilhelm Scheele by dissolving silica in hydrofluoric acid. In 1846, Ebelmen synthesized tetraethyl orthosilicate (Si(OC2H5)4). Silicon in its more common crystalline form was not prepared until 31 years later, by Deville. By electrolyzing a mixture of sodium chloride and aluminium chloride containing approximately 10% silicon, he was able to obtain a slightly impure allotrope of silicon in 1854. Later, more cost-effective methods were developed to isolate several allotrope forms, the most recent being silicene in 2010. Meanwhile, research on the chemistry of silicon continued; Friedrich Wöhler discovered the first volatile hydrides of silicon, synthesising trichlorosilane in 1857 and silane itself in 1858, but a detailed investigation of the silanes was only carried out in the early 20th century by Alfred Stock, despite early speculation on the matter dating as far back as the beginnings of synthetic organic chemistry in the 1830s. Similarly, the first organosilicon compound, tetraethylsilane, was synthesised by Charles Friedel and James Crafts in 1863, but detailed characterisation of organosilicon chemistry was only done in the early 20th century by Frederic Kipping. Starting in the 1920s, the work of William Lawrence Bragg on X-ray crystallography elucidated the compositions of the silicates, which had previously been known from analytical chemistry but had not yet been understood, together with Linus Pauling's development of crystal chemistry and Victor Goldschmidt's development of geochemistry. 
The middle of the 20th century saw the development of the chemistry and industrial use of siloxanes and the growing use of silicone polymers, elastomers, and resins. In the late 20th century, the complexity of the crystal chemistry of silicides was mapped, along with the solid-state physics of doped semiconductors. Silicon semiconductors The first semiconductor devices did not use silicon, but used galena, including German physicist Ferdinand Braun's crystal detector in 1874 and Indian physicist Jagadish Chandra Bose's radio crystal detector in 1901. The first silicon semiconductor device was a silicon radio crystal detector, developed by American engineer Greenleaf Whittier Pickard in 1906. In 1940, Russell Ohl discovered the p–n junction and photovoltaic effects in silicon. In 1941, techniques for producing high-purity germanium and silicon crystals were developed for radar microwave detector crystals during World War II. In 1947, physicist William Shockley theorized a field-effect amplifier made from germanium and silicon, but he failed to build a working device, and the group eventually worked with germanium instead. The first working transistor was a point-contact transistor built by John Bardeen and Walter Brattain later that year while working under Shockley. In 1954, physical chemist Morris Tanenbaum fabricated the first silicon junction transistor at Bell Labs. In 1955, Carl Frosch and Lincoln Derick at Bell Labs accidentally discovered that silicon dioxide (SiO2) could be grown on silicon. By 1957, Frosch and Derick had published their work on the first manufactured semiconductor-oxide transistors: the first planar transistors, in which the drain and source were adjacent at the same surface. Silicon Age The "Silicon Age" refers to the late 20th century to early 21st century. This is due to silicon being the dominant material used in electronics and information technology (also known as the Digital Age or Information Age), similar to how the Stone Age, Bronze Age and Iron Age were defined by the dominant materials during their respective ages of civilization. Because silicon is an important element in high-technology semiconductor devices, many places in the world bear its name. For example, the Santa Clara Valley in California acquired the nickname Silicon Valley, as the element is the base material in the semiconductor industry there. Since then, many other places have been similarly dubbed, including Silicon Wadi in Israel; Silicon Forest in Oregon; Silicon Hills in Austin, Texas; Silicon Slopes in Salt Lake City, Utah; Silicon Saxony in Germany; Silicon Valley in India; Silicon Border in Mexicali, Mexico; Silicon Fen in Cambridge, England; Silicon Roundabout in London; Silicon Glen in Scotland; Silicon Gorge in Bristol, England; Silicon Alley in New York City; and Silicon Beach in Los Angeles. Characteristics Physical and atomic A silicon atom has fourteen electrons. In the ground state, they are arranged in the electron configuration [Ne]3s²3p². Of these, four are valence electrons, occupying the 3s orbital and two of the 3p orbitals. Like the other members of its group, the lighter carbon and the heavier germanium, tin, and lead, it has the same number of valence electrons as valence orbitals: hence, it can complete its octet and obtain the stable noble gas configuration of argon by forming sp³ hybrid orbitals and tetrahedral derivatives in which the central silicon atom shares an electron pair with each of the four atoms it is bonded to.
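As an aside, the [Ne]3s²3p² configuration quoted above follows directly from filling subshells in order of increasing energy. The sketch below is illustrative only; it uses the standard Madelung (aufbau) filling order, which is a textbook approximation rather than a statement from this article.

```python
# Sketch: derive silicon's ground-state electron configuration (Z = 14)
# by filling subshells in the standard Madelung (aufbau) order.
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}
AUFBAU_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]  # ample for Z = 14

def electron_configuration(z: int) -> str:
    remaining = z
    parts = []
    for subshell in AUFBAU_ORDER:
        if remaining == 0:
            break
        fill = min(remaining, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}{fill}")
        remaining -= fill
    return " ".join(parts)

print(electron_configuration(14))  # 1s2 2s2 2p6 3s2 3p2, i.e. [Ne] 3s2 3p2
```

Running it for Z = 14 reproduces the configuration given above; the four electrons in the 3s and 3p subshells are the valence electrons discussed in the text.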
The first four ionisation energies of silicon are 786.3, 1576.5, 3228.3, and 4354.4 kJ/mol respectively; these figures are high enough to preclude the possibility of simple cationic chemistry for the element. Following periodic trends, its single-bond covalent radius of 117.6 pm is intermediate between those of carbon (77.2 pm) and germanium (122.3 pm). The hexacoordinate ionic radius of silicon may be considered to be 40 pm, although this must be taken as a purely notional figure given the lack of a simple cation in reality. Electrical At standard temperature and pressure, silicon is a shiny semiconductor with a bluish-grey metallic lustre; as typical for semiconductors, its resistivity drops as temperature rises. This arises because silicon has a small energy gap (band gap) between its highest occupied energy levels (the valence band) and the lowest unoccupied ones (the conduction band). The Fermi level is about halfway between the valence and conduction bands and is the energy at which a state is as likely to be occupied by an electron as not. Hence pure silicon is effectively an insulator at room temperature. However, doping silicon with a pnictogen such as phosphorus, arsenic, or antimony introduces one extra electron per dopant and these may then be excited into the conduction band either thermally or photolytically, creating an n-type semiconductor. Similarly, doping silicon with a group 13 element such as boron, aluminium, or gallium results in the introduction of acceptor levels that trap electrons that may be excited from the filled valence band, creating a p-type semiconductor. Joining n-type silicon to p-type silicon creates a p–n junction with a common Fermi level; electrons flow from n to p, while holes flow from p to n, creating a voltage drop. This p–n junction thus acts as a diode that can rectify alternating current that allows current to pass more easily one way than the other. A transistor is an n–p–n junction, with a thin layer of weakly p-type silicon between two n-type regions. Biasing the emitter through a small forward voltage and the collector through a large reverse voltage allows the transistor to act as a triode amplifier. Crystal structure Silicon crystallises in a giant covalent structure at standard conditions, specifically in a diamond cubic crystal lattice (space group 227). It thus has a high melting point of 1414 °C, as a lot of energy is required to break the strong covalent bonds and melt the solid. Upon melting silicon contracts as the long-range tetrahedral network of bonds breaks up and the voids in that network are filled in, similar to water ice when hydrogen bonds are broken upon melting. It does not have any thermodynamically stable allotropes at standard pressure, but several other crystal structures are known at higher pressures. The general trend is one of increasing coordination number with pressure, culminating in a hexagonal close-packed allotrope at about 40 gigapascals known as Si–VII (the standard modification being Si–I). An allotrope called BC8 (or bc8), having a body-centred cubic lattice with eight atoms per primitive unit cell (space group 206), can be created at high pressure and remains metastable at low pressure. Its properties have been studied in detail. Silicon boils at 3265 °C: this, while high, is still lower than the temperature at which its lighter congener carbon sublimes (3642 °C) and silicon similarly has a lower heat of vaporisation than carbon, consistent with the fact that the Si–Si bond is weaker than the C–C bond. 
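The statement in the electrical section above, that silicon's resistivity drops as temperature rises because carriers must be thermally excited across the band gap, can be made quantitative with a short calculation. This is a sketch only: the band gap of 1.12 eV and the textbook proportionality n_i ∝ T^1.5 · exp(−Eg/2kT) are assumed standard values, not figures quoted in this article.

```python
# Sketch: relative intrinsic carrier concentration of silicon versus temperature,
# using n_i(T) ∝ T**1.5 * exp(-Eg / (2*k*T)) with an assumed band gap Eg = 1.12 eV.
import math

K_B_EV = 8.617e-5   # Boltzmann constant, eV/K
E_GAP = 1.12        # assumed room-temperature band gap of silicon, eV

def relative_ni(temp_k: float, ref_k: float = 300.0) -> float:
    """Intrinsic carrier concentration relative to its value at ref_k."""
    def raw(t: float) -> float:
        return t ** 1.5 * math.exp(-E_GAP / (2 * K_B_EV * t))
    return raw(temp_k) / raw(ref_k)

for t in (250, 300, 350, 400):
    print(f"{t} K: n_i is about {relative_ni(t):8.2f} times the 300 K value")
```

The roughly exponential growth of carrier concentration with temperature is why the resistivity of pure silicon falls steeply on heating, the opposite of metallic behaviour.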
It is also possible to construct silicene layers analogous to graphene. Isotopes Naturally occurring silicon is composed of three stable isotopes, 28Si (92.23%), 29Si (4.67%), and 30Si (3.10%). Out of these, only 29Si is of use in NMR and EPR spectroscopy, as it is the only one with a nuclear spin (I = 1/2). All three are produced in Type Ia supernovae through the oxygen-burning process, with 28Si being made as part of the alpha process and hence the most abundant. The fusion of 28Si with alpha particles by photodisintegration rearrangement in stars is known as the silicon-burning process; it is the last stage of stellar nucleosynthesis before the rapid collapse and violent explosion of the star in question in a type II supernova. Twenty-two radioisotopes have been characterized, the two most stable being 32Si with a half-life of about 150 years, and 31Si with a half-life of 2.62 hours. All the remaining radioactive isotopes have half-lives that are less than seven seconds, and the majority of these have half-lives that are less than one-tenth of a second. Silicon has one known nuclear isomer, 34mSi, with a half-life less than 210 nanoseconds. 32Si undergoes low-energy beta decay to 32P and then stable 32S. 31Si may be produced by the neutron activation of natural silicon and is thus useful for quantitative analysis; it can be easily detected by its characteristic beta decay to stable 31P, in which the emitted electron carries up to 1.48 MeV of energy. The known isotopes of silicon range in mass number from 22 to 46. The most common decay mode of the isotopes with mass numbers lower than the three stable isotopes is inverse beta decay, primarily forming aluminium isotopes (13 protons) as decay products. The most common decay mode for the heavier unstable isotopes is beta decay, primarily forming phosphorus isotopes (15 protons) as decay products. Silicon can enter the oceans through groundwater and riverine transport. Large fluxes of groundwater input have an isotopic composition which is distinct from riverine silicon inputs. Isotopic variations in groundwater and riverine transports contribute to variations in oceanic 30Si values. Currently, there are substantial differences in the isotopic values of deep water in the world's ocean basins. Between the Atlantic and Pacific oceans, there is a deep water 30Si gradient of greater than 0.3 parts per thousand. 30Si is most commonly associated with productivity in the oceans. Chemistry and compounds Crystalline bulk silicon is rather inert, but becomes more reactive at high temperatures. Like its neighbour aluminium, silicon forms a thin, continuous surface layer of silicon dioxide (SiO2) that protects the material beneath from oxidation. Because of this, silicon does not measurably react with the air below 900 °C. Between 950 °C and 1160 °C, the formation rate of the vitreous dioxide rapidly increases, and when 1400 °C is reached, atmospheric nitrogen also reacts to give the nitrides SiN and Si3N4. Silicon reacts with gaseous sulfur at 600 °C and gaseous phosphorus at 1000 °C. This oxide layer nevertheless does not prevent reaction with the halogens; fluorine attacks silicon vigorously at room temperature, chlorine does so at about 300 °C, and bromine and iodine at about 500 °C. Silicon does not react with most aqueous acids, but is oxidised and complexed by hydrofluoric acid mixtures containing either chlorine or nitric acid to form hexafluorosilicates. It readily dissolves in hot aqueous alkali to form silicates.
At high temperatures, silicon also reacts with alkyl halides; this reaction may be catalysed by copper to directly synthesise organosilicon chlorides as precursors to silicone polymers. Upon melting, silicon becomes extremely reactive, alloying with most metals to form silicides, and reducing most metal oxides because the heat of formation of silicon dioxide is so large. In fact, molten silicon reacts with virtually every known kind of crucible material (except its own oxide, SiO2). This happens due to silicon's high binding forces for the light elements and to its high dissolving power for most elements. As a result, containers for liquid silicon must be made of refractory, unreactive materials such as zirconium dioxide or group 4, 5, and 6 borides. Tetrahedral coordination is a major structural motif in silicon chemistry just as it is for carbon chemistry. However, the 3p subshell is rather more diffuse than the 2p subshell and does not hybridise so well with the 3s subshell. As a result, the chemistry of silicon and its heavier congeners shows significant differences from that of carbon, and thus octahedral coordination is also significant. For example, the electronegativity of silicon (1.90) is much less than that of carbon (2.55), because the valence electrons of silicon are further from the nucleus than those of carbon and hence experience smaller electrostatic forces of attraction from the nucleus. The poor overlap of 3p orbitals also results in a much lower tendency toward catenation (formation of Si–Si bonds) for silicon than for carbon, due to the concomitant weakening of the Si–Si bond compared to the C–C bond: the average Si–Si bond energy is approximately 226 kJ/mol, compared to a value of 356 kJ/mol for the C–C bond. This results in multiply bonded silicon compounds generally being much less stable than their carbon counterparts, an example of the double bond rule. On the other hand, the presence of radial nodes in the 3p orbitals of silicon suggests the possibility of hypervalence, as seen in five- and six-coordinate derivatives of silicon such as SiF5− and SiF62−. Lastly, because of the increasing energy gap between the valence s and p orbitals as the group is descended, the divalent state grows in importance from carbon to lead, so that a few unstable divalent compounds are known for silicon; this lowering of the main oxidation state, in tandem with increasing atomic radii, results in an increase of metallic character down the group. Silicon already shows some incipient metallic behavior, particularly in the behavior of its oxide compounds and its reaction with acids as well as bases (though this takes some effort), and is hence often referred to as a metalloid rather than a nonmetal. Germanium shows more, and tin is generally considered a metal. Silicon shows clear differences from carbon. For example, organic chemistry has very few analogies with silicon chemistry, while silicate minerals have a structural complexity unseen in oxocarbons. Silicon tends to resemble germanium far more than it does carbon, and this resemblance is enhanced by the d-block contraction, resulting in the size of the germanium atom being much closer to that of the silicon atom than periodic trends would predict. Nevertheless, there are still some differences because of the growing importance of the divalent state in germanium compared to silicon.
Additionally, the lower Ge–O bond strength compared to the Si–O bond strength results in the absence of "germanone" polymers that would be analogous to silicone polymers. Occurrence Silicon is the eighth most abundant element in the universe, coming after hydrogen, helium, carbon, nitrogen, oxygen, iron, and neon. These abundances are not replicated well on Earth due to substantial separation of the elements taking place during the formation of the Solar System. Silicon makes up 27.2% of the Earth's crust by weight, second only to oxygen at 45.5%, with which it is always associated in nature. Further fractionation took place in the formation of the Earth by planetary differentiation: Earth's core, which makes up 31.5% of the mass of the Earth, is composed chiefly of iron–nickel metal; the mantle makes up 68.1% of the Earth's mass and is composed mostly of denser oxides and silicates, an example being olivine, (Mg,Fe)2SiO4; while the lighter siliceous minerals such as aluminosilicates rise to the surface and form the crust, making up 0.4% of the Earth's mass. The crystallisation of igneous rocks from magma depends on a number of factors; among them are the chemical composition of the magma, the cooling rate, and some properties of the individual minerals to be formed, such as lattice energy, melting point, and complexity of their crystal structure. As magma is cooled, olivine appears first, followed by pyroxene, amphibole, biotite mica, orthoclase feldspar, muscovite mica, quartz, zeolites, and finally, hydrothermal minerals. This sequence shows a trend toward increasingly complex silicate units with cooling, and the introduction of hydroxide and fluoride anions in addition to oxides. Many metals may substitute for silicon. After these igneous rocks undergo weathering, transport, and deposition, sedimentary rocks like clay, shale, and sandstone are formed. Metamorphism also may occur at high temperatures and pressures, creating an even vaster variety of minerals. There are four sources for silicon fluxes into the ocean: chemical weathering of continental rocks, river transport, dissolution of continental terrigenous silicates, and the reaction between submarine basalts and hydrothermal fluid, which releases dissolved silicon. All four of these fluxes are interconnected in the ocean's biogeochemical cycle, as they all were initially formed from the weathering of Earth's crust. Approximately 300–900 megatonnes of aeolian dust is deposited into the world's oceans each year. Of that value, 80–240 megatonnes are in the form of particulate silicon. The total amount of particulate silicon deposition into the ocean is still less than the amount of silicon influx into the ocean via riverine transportation. Aeolian inputs of particulate lithogenic silicon into the North Atlantic and Western North Pacific oceans are the result of dust settling on the oceans from the Sahara and Gobi Desert, respectively. Riverine transports are the major source of silicon influx into the ocean in coastal regions, while silicon deposition in the open ocean is greatly influenced by the settling of aeolian dust. Production Silicon of 96–99% purity is made by carbothermically reducing quartzite or sand with highly pure coke.
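The overall carbothermal reaction, written out in the next paragraph, is SiO2 + 2 C → Si + 2 CO. As a rough, idealised mass balance (a sketch from molar masses only; a real furnace charge deviates because of SiC formation, SiO losses, and the excess silica mentioned below):

```python
# Sketch: ideal reagent demand for SiO2 + 2 C -> Si + 2 CO, per tonne of silicon produced.
M_SI, M_O, M_C = 28.085, 15.999, 12.011    # molar masses, g/mol
M_SIO2 = M_SI + 2 * M_O

tonnes_si = 1.0
sio2_needed = tonnes_si * M_SIO2 / M_SI     # 1 mol SiO2 per mol Si
carbon_needed = tonnes_si * 2 * M_C / M_SI  # 2 mol C per mol Si

print(f"Per tonne of Si: ~{sio2_needed:.2f} t SiO2 and ~{carbon_needed:.2f} t C (ideal stoichiometry)")
```

This gives roughly 2.1 tonnes of silica and 0.9 tonnes of carbon per tonne of silicon under ideal conditions.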
The reduction is carried out in an electric arc furnace, with an excess of SiO2 used to stop silicon carbide (SiC) from accumulating: SiO2 + 2 C → Si + 2 CO; 2 SiC + SiO2 → 3 Si + 2 CO. This reaction, known as carbothermal reduction of silicon dioxide, is usually conducted in the presence of scrap iron with low amounts of phosphorus and sulfur, producing ferrosilicon. Ferrosilicon, an iron-silicon alloy that contains varying ratios of elemental silicon and iron, accounts for about 80% of the world's production of elemental silicon, with China, the leading supplier of elemental silicon, providing 4.6 million tonnes (or two-thirds of world output) of silicon, most of it in the form of ferrosilicon. It is followed by Russia (610,000 t), Norway (330,000 t), Brazil (240,000 t), and the United States (170,000 t). Ferrosilicon is primarily used by the iron and steel industry (see below), with primary use as an alloying addition in iron or steel and for de-oxidation of steel in integrated steel plants. Another reaction, sometimes used, is aluminothermal reduction of silicon dioxide, as follows: 3 SiO2 + 4 Al → 3 Si + 2 Al2O3. Leaching powdered 96–97% pure silicon with water results in ~98.5% pure silicon, which is used in the chemical industry. However, even greater purity is needed for semiconductor applications, and this is produced from the reduction of tetrachlorosilane (silicon tetrachloride) or trichlorosilane. The former is made by chlorinating scrap silicon and the latter is a byproduct of silicone production. These compounds are volatile and hence can be purified by repeated fractional distillation, followed by reduction to elemental silicon with very pure zinc metal as the reducing agent. The spongy pieces of silicon thus produced are melted and then grown to form cylindrical single crystals, before being purified by zone refining. Other routes use the thermal decomposition of silane or tetraiodosilane (SiI4). Another process used is the reduction of sodium hexafluorosilicate, a common waste product of the phosphate fertilizer industry, by metallic sodium: this is highly exothermic and hence requires no outside energy source. Hyperpure silicon is made at a higher purity than almost any other material: transistor production requires impurity levels in silicon crystals of less than one part per 10¹⁰, and in special cases impurity levels below one part per 10¹² are needed and attained. Silicon nanostructures can be produced directly from silica sand using conventional metallothermic processes or the combustion synthesis approach. Such nanostructured silicon materials can be used in various functional applications, including anodes for lithium-ion batteries (LIBs) and other ion batteries, future computing devices such as memristors, and photocatalytic applications. Applications Compounds Most silicon is used industrially without being purified, often with comparatively little processing from its natural form. More than 90% of the Earth's crust is composed of silicate minerals, which are compounds of silicon and oxygen, often with metallic ions when negatively charged silicate anions require cations to balance the charge. Many of these have direct commercial uses, such as clays, silica sand, and most kinds of building stone. Thus, the vast majority of uses for silicon are as structural compounds, either as the silicate minerals or silica (crude silicon dioxide).
Silicates are used in making Portland cement (made mostly of calcium silicates) which is used in building mortar and modern stucco, but more importantly, combined with silica sand, and gravel (usually containing silicate minerals such as granite), to make the concrete that is the basis of most of the very largest industrial building projects of the modern world. Silica is used to make fire brick, a type of ceramic. Silicate minerals are also in whiteware ceramics, an important class of products usually containing various types of fired clay minerals (natural aluminium phyllosilicates). An example is porcelain, which is based on the silicate mineral kaolinite. Traditional glass (silica-based soda–lime glass) also functions in many of the same ways, and also is used for windows and containers. In addition, specialty silica based glass fibers are used for optical fiber, as well as to produce fiberglass for structural support and glass wool for thermal insulation. Silicones often are used in waterproofing treatments, molding compounds, mold-release agents, mechanical seals, high temperature greases and waxes, and caulking compounds. Silicone is also sometimes used in breast implants, contact lenses, explosives and pyrotechnics. Silly Putty was originally made by adding boric acid to silicone oil. Other silicon compounds function as high-technology abrasives and new high-strength ceramics based upon silicon carbide. Silicon is a component of some superalloys. Alloys Elemental silicon is added to molten cast iron as ferrosilicon or silicocalcium alloys to improve performance in casting thin sections and to prevent the formation of cementite where exposed to outside air. The presence of elemental silicon in molten iron acts as a sink for oxygen, so that the steel carbon content, which must be kept within narrow limits for each type of steel, can be more closely controlled. Ferrosilicon production and use is a monitor of the steel industry, and although this form of elemental silicon is grossly impure, it accounts for 80% of the world's use of free silicon. Silicon is an important constituent of transformer steel, modifying its resistivity and ferromagnetic properties. The properties of silicon may be used to modify alloys with metals other than iron. "Metallurgical grade" silicon is silicon of 95–99% purity. About 55% of the world consumption of metallurgical purity silicon goes for production of aluminium-silicon alloys (silumin alloys) for aluminium part casts, mainly for use in the automotive industry. Silicon's importance in aluminium casting is that a significantly high amount (12%) of silicon in aluminium forms a eutectic mixture which solidifies with very little thermal contraction. This greatly reduces tearing and cracks formed from stress as casting alloys cool to solidity. Silicon also significantly improves the hardness and thus wear-resistance of aluminium. Electronics Most elemental silicon produced remains as a ferrosilicon alloy, and only approximately 20% is refined to metallurgical grade purity (a total of 1.3–1.5 million metric tons/year). An estimated 15% of the world production of metallurgical grade silicon is further refined to semiconductor purity. This typically is the "nine-9" or 99.9999999% purity, nearly defect-free single crystalline material. 
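To put the "nine-9" purity figure above into more tangible units, the sketch below converts an atomic impurity fraction into impurity atoms per cubic centimetre. The density (2.33 g/cm³) and molar mass (28.09 g/mol) of silicon used here are assumed handbook values, not numbers quoted in this article.

```python
# Sketch: impurity atom density in silicon for a given atomic impurity fraction.
AVOGADRO = 6.022e23
DENSITY = 2.33        # g/cm^3, assumed handbook value for crystalline silicon
MOLAR_MASS = 28.09    # g/mol, assumed handbook value

atoms_per_cm3 = DENSITY / MOLAR_MASS * AVOGADRO   # ~5e22 silicon atoms per cm^3

for impurity_fraction in (1e-9, 1e-10, 1e-12):
    impurities = atoms_per_cm3 * impurity_fraction
    print(f"impurity fraction {impurity_fraction:.0e} -> ~{impurities:.1e} impurity atoms per cm^3")
```

Even at "nine-9" purity (an impurity fraction of about 10⁻⁹), every cubic centimetre still contains tens of trillions of foreign atoms, which is why the even tighter limits mentioned in the production section are needed for some devices.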
Monocrystalline silicon of such purity is usually produced by the Czochralski process, and is used to produce silicon wafers used in the semiconductor industry, in electronics, and in some high-cost and high-efficiency photovoltaic applications. Pure silicon is an intrinsic semiconductor, which means that unlike metals, it conducts electron holes and electrons released from atoms by heat; silicon's electrical conductivity increases with higher temperatures. Pure silicon has too low a conductivity (i.e., too high a resistivity) to be used as a circuit element in electronics. In practice, pure silicon is doped with small concentrations of certain other elements, which greatly increase its conductivity and adjust its electrical response by controlling the number and charge (positive or negative) of activated carriers. Such control is necessary for transistors, solar cells, semiconductor detectors, and other semiconductor devices used in the computer industry and other technical applications. In silicon photonics, silicon may be used as a continuous wave Raman laser medium to produce coherent light. In common integrated circuits, a wafer of monocrystalline silicon serves as a mechanical support for the circuits, which are created by doping and insulated from each other by thin layers of silicon oxide, an insulator that is easily produced on Si surfaces by processes of thermal oxidation or local oxidation (LOCOS), which involve exposing the element to oxygen under the proper conditions that can be predicted by the Deal–Grove model. Silicon has become the most popular material for both high power semiconductors and integrated circuits because it can withstand the highest temperatures and greatest electrical activity without suffering avalanche breakdown (an electron avalanche is created when heat produces free electrons and holes, which in turn pass more current, which produces more heat). In addition, the insulating oxide of silicon is not soluble in water, which gives it an advantage over germanium (an element with similar properties which can also be used in semiconductor devices) in certain fabrication techniques. Monocrystalline silicon is expensive to produce, and is usually justified only in production of integrated circuits, where tiny crystal imperfections can interfere with tiny circuit paths. For other uses, other types of pure silicon may be employed. These include hydrogenated amorphous silicon and upgraded metallurgical-grade silicon (UMG-Si) used in the production of low-cost, large-area electronics in applications such as liquid crystal displays and of large-area, low-cost, thin-film solar cells. Such semiconductor grades of silicon are either slightly less pure or polycrystalline rather than monocrystalline, and are produced in comparable quantities as the monocrystalline silicon: 75,000 to 150,000 metric tons per year. The market for the lesser grade is growing more quickly than for monocrystalline silicon. By 2013, polycrystalline silicon production, used mostly in solar cells, was projected to reach 200,000 metric tons per year, while monocrystalline semiconductor grade silicon was expected to remain less than 50,000 tons per year. Quantum dots Silicon quantum dots are created through the thermal processing of hydrogen silsesquioxane into nanocrystals ranging from a few nanometers to a few microns, displaying size dependent luminescent properties. 
The nanocrystals display large Stokes shifts converting photons in the ultraviolet range to photons in the visible or infrared, depending on the particle size, allowing for applications in quantum dot displays and luminescent solar concentrators due to their limited self absorption. A benefit of using silicon based quantum dots over cadmium or indium is the non-toxic, metal-free nature of silicon. Another application of silicon quantum dots is for sensing of hazardous materials. The sensors take advantage of the luminescent properties of the quantum dots through quenching of the photoluminescence in the presence of the hazardous substance. There are many methods used for hazardous chemical sensing with a few being electron transfer, fluorescence resonance energy transfer, and photocurrent generation. Electron transfer quenching occurs when the lowest unoccupied molecular orbital (LUMO) is slightly lower in energy than the conduction band of the quantum dot, allowing for the transfer of electrons between the two, preventing recombination of the holes and electrons within the nanocrystals. The effect can also be achieved in reverse with a donor molecule having its highest occupied molecular orbital (HOMO) slightly higher than a valence band edge of the quantum dot, allowing electrons to transfer between them, filling the holes and preventing recombination. Fluorescence resonance energy transfer occurs when a complex forms between the quantum dot and a quencher molecule. The complex will continue to absorb light but when the energy is converted to the ground state it does not release a photon, quenching the material. The third method uses different approach by measuring the photocurrent emitted by the quantum dots instead of monitoring the photoluminescent display. If the concentration of the desired chemical increases then the photocurrent given off by the nanocrystals will change in response. Thermal energy storage Biological role Although silicon is readily available in the form of silicates, very few organisms use it directly. Diatoms, radiolaria, and siliceous sponges use biogenic silica as a structural material for their skeletons. Some plants accumulate silica in their tissues and require silicon for their growth, for example rice. Silicon may be taken up by plants as orthosilicic acid (also known as monosilicic acid) and transported through the xylem, where it forms amorphous complexes with components of the cell wall. This has been shown to improve cell wall strength and structural integrity in some plants, thereby reducing insect herbivory and pathogenic infections. In certain plants, silicon may also upregulate the production of volatile organic compounds and phytohormones which play a significant role in plant defense mechanisms. In more advanced plants, the silica phytoliths (opal phytoliths) are rigid microscopic bodies occurring in the cell. Several horticultural crops are known to protect themselves against fungal plant pathogens with silica, to such a degree that fungicide application may fail unless accompanied by sufficient silicon nutrition. Silicaceous plant defense molecules activate some phytoalexins, meaning some of them are signalling substances producing acquired immunity. When deprived, some plants will substitute with increased production of other defensive substances. Life on Earth is largely composed of carbon, but astrobiology considers that extraterrestrial life may have other hypothetical types of biochemistry. 
Silicon is considered an alternative to carbon, as it can create complex and stable molecules with four covalent bonds, required for a DNA-analog, and it is available in large quantities. Marine microbial influences Diatoms use silicon in the biogenic silica (bSi) form, which is taken up by the silicon transport protein (SIT) to be predominantly used in the cell wall structure as frustules. Silicon enters the ocean in a dissolved form such as silicic acid or silicate. Since diatoms are one of the main users of these forms of silicon, they contribute greatly to the concentration of silicon throughout the ocean. Silicon forms a nutrient-like profile in the ocean due to the diatom productivity in shallow depths. Therefore, the concentration of silicon is lower in the shallow ocean and higher in the deep ocean. Diatom productivity in the upper ocean contributes to the amount of silicon exported to the lower ocean. When diatom cells are lysed in the upper ocean, their nutrients such as iron, zinc, and silicon are brought to the lower ocean through a process called marine snow. Marine snow involves the downward transfer of particulate organic matter by vertical mixing of dissolved organic matter. Silicon is considered crucial to diatom productivity, and as long as there is silicic acid available for diatoms to use, the diatoms can also contribute to other important nutrient concentrations in the deep ocean. In coastal zones, diatoms serve as the major phytoplanktonic organisms and greatly contribute to biogenic silica production. In the open ocean, however, diatoms have a reduced role in global annual silica production. Diatoms in North Atlantic and North Pacific subtropical gyres only contribute about 5–7% of global annual marine silica production. The Southern Ocean produces about one-third of global marine biogenic silica. The Southern Ocean is referred to as having a "biogeochemical divide" since only minuscule amounts of silicon are transported out of this region. Human nutrition There is some evidence that silicon is important to human health for nail, hair, bone, and skin tissues, for example, in studies that demonstrate that premenopausal women with higher dietary silicon intake have higher bone density, and that silicon supplementation can increase bone volume and density in patients with osteoporosis. Silicon is needed for synthesis of elastin and collagen, of which the aorta contains the greatest quantity in the human body, and has been considered an essential element; nevertheless, it is difficult to prove its essentiality, because silicon is very common, and hence, deficiency symptoms are difficult to reproduce. Silicon is currently under consideration for elevation to the status of a "plant beneficial substance" by the Association of American Plant Food Control Officials (AAPFCO). Safety People may be exposed to elemental silicon in the workplace by breathing it in, swallowing it, or having contact with the skin or eye. In the latter two cases, silicon poses a slight hazard as an irritant. It is hazardous if inhaled. The Occupational Safety and Health Administration (OSHA) has set the legal limit for silicon exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday.
Inhalation of crystalline silica dust may lead to silicosis, an occupational lung disease marked by inflammation and scarring in the form of nodular lesions in the upper lobes of the lungs.
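The OSHA and NIOSH limits quoted above are defined over an eight-hour workday, i.e. as time-weighted averages. The sketch below shows how such an average is typically computed from exposures of different durations; the sample concentrations are invented purely for illustration, and the framing as a time-weighted average is an assumption about how the limits are applied.

```python
# Sketch: 8-hour time-weighted average (TWA) exposure compared against a limit value.
# The (concentration in mg/m^3, duration in hours) pairs below are hypothetical sample data.
samples = [(12.0, 2.0), (6.0, 4.0), (3.0, 2.0)]
LIMIT_TOTAL = 15.0   # mg/m^3, the OSHA 8-hour limit for total silicon dust quoted above

twa = sum(conc * hours for conc, hours in samples) / 8.0
verdict = "within" if twa <= LIMIT_TOTAL else "over"
print(f"8-hour TWA = {twa:.1f} mg/m^3 -> {verdict} the {LIMIT_TOTAL} mg/m^3 limit")
```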
Physical sciences
Chemistry
null
27116
https://en.wikipedia.org/wiki/Scandium
Scandium
Scandium is a chemical element with the symbol Sc and atomic number 21. It is a silvery-white metallic d-block element. Historically, it has been classified as a rare-earth element, together with yttrium and the lanthanides. It was discovered in 1879 by spectral analysis of the minerals euxenite and gadolinite from Scandinavia. Scandium is present in most of the deposits of rare-earth and uranium compounds, but it is extracted from these ores in only a few mines worldwide. Because of the low availability and difficulties in the preparation of metallic scandium, which was first done in 1937, applications for scandium were not developed until the 1970s, when the positive effects of scandium on aluminium alloys were discovered. Its use in such alloys remains its only major application. The global trade of scandium oxide is 15–20 tonnes per year. The properties of scandium compounds are intermediate between those of aluminium and yttrium. A diagonal relationship exists between the behavior of magnesium and scandium, just as there is between beryllium and aluminium. In the chemical compounds of the elements in group 3, the predominant oxidation state is +3. Properties Chemical characteristics Scandium is a soft metal with a silvery appearance. It develops a slightly yellowish or pinkish cast when oxidized by air. It is susceptible to weathering and dissolves slowly in most dilute acids. It does not react with a 1:1 mixture of nitric acid (HNO3) and 48.0% hydrofluoric acid (HF), possibly due to the formation of an impermeable passive layer. Scandium turnings ignite in the air with a brilliant yellow flame to form scandium oxide. Isotopes In nature, scandium is found exclusively as the isotope 45Sc, which has a nuclear spin of 7/2; this is its only stable isotope. The known isotopes of scandium range from 37Sc to 62Sc. The most stable radioisotope is 46Sc, which has a half-life of 83.8 days. Others are 47Sc, 3.35 days; the positron emitter 44Sc, 4 hours; and 48Sc, 43.7 hours. All of the remaining radioactive isotopes have half-lives less than 4 hours, and the majority of them have half-lives less than 2 minutes. The low mass isotopes are very difficult to create. The initial detection of 37Sc and 38Sc only resulted in the characterization of their mass excess. Scandium also has five nuclear isomers: the most stable of these is 44m2Sc (t1/2 = 58.6 h). The primary decay mode of ground-state scandium isotopes at masses lower than the only stable isotope, 45Sc, is electron capture (or positron emission), but the lightest isotopes (37Sc to 39Sc) undergo proton emission instead, all three of these producing calcium isotopes. The primary decay mode at masses above 45Sc is beta emission, producing titanium isotopes. Occurrence In Earth's crust, scandium is not rare. Estimates vary from 18 to 25 ppm, which is comparable to the abundance of cobalt (20–30 ppm). Scandium is only the 50th most common element on Earth (35th most abundant element in the crust), but it is the 23rd most common element in the Sun and the 26th most abundant element in the stars. However, scandium is distributed sparsely and occurs in trace amounts in many minerals. Rare minerals from Scandinavia and Madagascar such as thortveitite, euxenite, and gadolinite are the only known concentrated sources of this element. Thortveitite can contain up to 45% of scandium in the form of scandium oxide. The stable form of scandium is created in supernovas via the r-process.
Also, scandium is created by cosmic ray spallation of the more abundant iron nuclei, for example: 28Si + 17 n → 45Sc (r-process, via beta decays of the neutron-rich intermediate); 56Fe + p → 45Sc + 11C + n (cosmic ray spallation). Production The world production of scandium is in the order of 15–20 tonnes per year, in the form of scandium oxide. The demand is slightly higher, and both the production and demand keep increasing. In 2003, only three mines produced scandium: the uranium and iron mines in Zhovti Vody in Ukraine, the rare-earth mines in Bayan Obo, China, and the apatite mines in the Kola Peninsula, Russia. Since then, many other countries have built scandium-producing facilities, including 5 tonnes/year (7.5 tonnes/year as scandium oxide) by Nickel Asia Corporation and Sumitomo Metal Mining in the Philippines. In the United States, NioCorp Development hopes to raise $1 billion toward opening a niobium mine at its Elk Creek site in southeast Nebraska, which may be able to produce as much as 95 tonnes of scandium oxide annually. In each case, scandium is a byproduct of the extraction of other elements and is sold as scandium oxide. To produce metallic scandium, the oxide is converted to scandium fluoride and then reduced with metallic calcium. Madagascar and the Iveland-Evje region in Norway have the only deposits of minerals with high scandium content, thortveitite ((Sc,Y)2Si2O7), but these are not being exploited. The mineral kolbeckite has a very high scandium content but is not available in any larger deposits. The absence of reliable, secure, stable, long-term production has limited the commercial applications of scandium. Despite this low level of use, scandium offers significant benefits. Particularly promising is the strengthening of aluminium alloys with as little as 0.5% scandium. Scandium-stabilized zirconia enjoys a growing market demand for use as a high-efficiency electrolyte in solid oxide fuel cells. The USGS reports that, from 2015 to 2019 in the US, the price of small quantities of scandium ingot has been $107 to $134 per gram, and that of scandium oxide $4 to $5 per gram. Compounds Scandium chemistry is almost completely dominated by the trivalent ion, Sc3+. The radii of M3+ ions listed below indicate that the chemical properties of scandium ions have more in common with yttrium ions than with aluminium ions. In part because of this similarity, scandium is often classified as a lanthanide-like element. Ionic radius (pm): Al3+ 53.5, Sc3+ 74.5, Y3+ 90.0, La3+ 103.2, Lu3+ 86.1. Oxides and hydroxides The oxide Sc2O3 and the hydroxide Sc(OH)3 are amphoteric: Sc(OH)3 + 3 OH− → [Sc(OH)6]3− (scandate ion); Sc(OH)3 + 3 H+ → Sc3+ + 3 H2O. α- and γ-ScOOH are isostructural with their aluminium hydroxide oxide counterparts. Solutions of scandium(III) salts in water are acidic due to hydrolysis. Halides and pseudohalides The halides ScX3, where X = Cl, Br, or I, are very soluble in water, but ScF3 is insoluble. In all four halides, the scandium is 6-coordinated. The halides are Lewis acids; for example, ScF3 dissolves in a solution containing excess fluoride ion to form [ScF6]3−. The coordination number 6 is typical for Sc(III). In the larger Y3+ and La3+ ions, coordination numbers of 8 and 9 are common. Scandium triflate is sometimes used as a Lewis acid catalyst in organic chemistry. Organic derivatives Scandium forms a series of organometallic compounds with cyclopentadienyl ligands (Cp), similar to the behavior of the lanthanides. One example is the chlorine-bridged dimer, and related derivatives of pentamethylcyclopentadienyl ligands.
Uncommon oxidation states Compounds that feature scandium in oxidation states other than +3 are rare but well characterized. The blue-black compound CsScCl3 is one of the simplest. This material adopts a sheet-like structure that exhibits extensive bonding between the scandium(II) centers. Scandium hydride is not well understood, although it appears not to be a saline hydride of Sc(II). As is observed for most elements, a diatomic scandium hydride has been observed spectroscopically at high temperatures in the gas phase. Scandium borides and carbides are non-stoichiometric, as is typical for neighboring elements. Lower oxidation states (+2, +1, 0) have also been observed in organoscandium compounds. History Dmitri Mendeleev, who is referred to as the father of the periodic table, predicted in 1869 the existence of an element, ekaboron, with an atomic mass between 40 and 48. Lars Fredrik Nilson and his team detected this element in the minerals euxenite and gadolinite in 1879. Nilson prepared 2 grams of scandium oxide of high purity. He named the element scandium, from the Latin Scandia meaning "Scandinavia". Nilson was apparently unaware of Mendeleev's prediction, but Per Teodor Cleve recognized the correspondence and notified Mendeleev. Metallic scandium was produced for the first time in 1937 by electrolysis of a eutectic mixture of potassium, lithium, and scandium chlorides, at 700–800 °C. The first pound of 99% pure scandium metal was produced in 1960. Production of aluminium-scandium alloys began in 1971, following a US patent. Aluminium-scandium alloys were also developed in the USSR. Laser crystals of gadolinium-scandium-gallium garnet (GSGG) were used in strategic defense applications developed for the Strategic Defense Initiative (SDI) in the 1980s and 1990s. Applications Aluminium alloys The main application of scandium by weight is in aluminium-scandium alloys for minor aerospace industry components. These alloys contain between 0.1% and 0.5% of scandium. They were used in Russian military aircraft, specifically the Mikoyan-Gurevich MiG-21 and MiG-29. The addition of scandium to aluminium limits the grain growth in the heat zone of welded aluminium components. This has two beneficial effects: the precipitated Al3Sc forms smaller crystals than in other aluminium alloys, and the volume of precipitate-free zones at the grain boundaries of age-hardening aluminium alloys is reduced. Al3Sc is a coherent precipitate that strengthens the aluminium matrix by applying elastic strain fields that inhibit dislocation movement (i.e., plastic deformation). Al3Sc has an equilibrium L12 superlattice structure exclusive to this system. A fine dispersion of nanoscale Al3Sc precipitate can be achieved via heat treatment, which can also strengthen the alloys through order hardening. Recent developments include additions of transition metals such as zirconium (Zr) and rare-earth metals such as erbium (Er), which produce shells surrounding the spherical Al3Sc precipitate that reduce coarsening. These shells are dictated by the diffusivity of the alloying element; they lower the cost of the alloy, because part of the scandium is substituted by zirconium while stability is maintained, and because less scandium is needed to form the precipitate. These developments have made aluminium-scandium alloys somewhat competitive with titanium alloys in a wide array of applications. However, titanium alloys, which are similar in lightness and strength, are cheaper and much more widely used. The alloy is as strong as titanium, light as aluminium, and hard as some ceramics.
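For a rough sense of the economics of these alloys: combining the USGS oxide price quoted in the production section ($4–5 per gram of scandium oxide) with the 0.1–0.5% scandium content given above yields the raw-material cost of the scandium in a kilogram of alloy. This is only an order-of-magnitude sketch; the oxide-to-metal conversion uses standard atomic masses, and processing and reduction costs are ignored.

```python
# Sketch: scandium raw-material cost per kilogram of aluminium-scandium alloy.
M_SC, M_O = 44.956, 15.999                               # atomic masses, g/mol
SC_PER_GRAM_OXIDE = 2 * M_SC / (2 * M_SC + 3 * M_O)      # ~0.65 g Sc per g Sc2O3

oxide_price_per_g = 4.5                                  # USD/g, mid-range of the quoted $4-5
for sc_mass_fraction in (0.001, 0.005):                  # 0.1% and 0.5% Sc in the alloy
    sc_grams = 1000 * sc_mass_fraction                   # g of Sc per kg of alloy
    oxide_grams = sc_grams / SC_PER_GRAM_OXIDE
    cost = oxide_grams * oxide_price_per_g
    print(f"{sc_mass_fraction:.1%} Sc: about {cost:.0f} USD of Sc2O3 per kg of alloy")
```

The result, a few dollars to a few tens of dollars of scandium oxide per kilogram of alloy, illustrates why scandium additions are reserved for applications where the weight and weldability benefits justify the cost.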
Some items of sports equipment, which rely on lightweight high-performance materials, have been made with scandium-aluminium alloys, including baseball bats, tent poles, and bicycle frames and components. Lacrosse sticks are also made with scandium. The American firearm manufacturing company Smith & Wesson produces semi-automatic pistols and revolvers with frames of scandium alloy and cylinders of titanium or carbon steel. Since 2013, Apworks GmbH, a spin-off of Airbus, has marketed a high-strength scandium-containing aluminium alloy, processed by metal 3D printing (laser powder bed fusion), under the trademark Scalmalloy, which is claimed to combine very high strength and ductility. Light sources The first scandium-based metal-halide lamps were patented by General Electric and made in North America, although they are now produced in all major industrialized countries. Approximately 20 kg of scandium (as Sc₂O₃) is used annually in the United States for high-intensity discharge lamps. One type of metal-halide lamp, similar to the mercury-vapor lamp, is made from scandium triiodide and sodium iodide. This lamp is a white-light source with a high color rendering index that sufficiently resembles sunlight to allow good color reproduction with TV cameras. About 80 kg of scandium is used in metal-halide lamps/light bulbs globally per year. Dentists use erbium-chromium-doped yttrium-scandium-gallium garnet (Er,Cr:YSGG) lasers for cavity preparation and in endodontics. Other The radioactive isotope 46Sc is used in oil refineries as a tracing agent. Scandium triflate is a catalytic Lewis acid used in organic chemistry. The 12.4 keV nuclear transition of 45Sc has been studied as a reference for timekeeping applications, with a theoretical precision as much as three orders of magnitude better than the current caesium reference clocks. Scandium has been proposed for use in solid oxide fuel cells (SOFCs) as a dopant in the electrolyte material, typically zirconia (ZrO₂). Scandium oxide (Sc₂O₃) is one of several possible additives to enhance the ionic conductivity of the zirconia, improving the overall thermal stability, performance, and efficiency of the fuel cell. This application would be particularly valuable in clean energy technologies, as SOFCs can utilize a variety of fuels and have high energy conversion efficiencies. Health and safety Elemental scandium is considered non-toxic, though extensive animal testing of scandium compounds has not been done. The median lethal dose (LD50) levels for scandium chloride for rats have been determined as 755 mg/kg for intraperitoneal and 4 g/kg for oral administration. In the light of these results, compounds of scandium should be handled as compounds of moderate toxicity. Scandium appears to be handled by the body in a manner similar to gallium, with similar hazards involving its poorly soluble hydroxide.
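Returning to the 46Sc tracer application mentioned under Other above: the practical working window of such a tracer follows directly from the 83.8-day half-life given in the isotopes section. A minimal decay sketch:

```python
# Sketch: fraction of a 46Sc tracer's activity remaining after a given time (half-life 83.8 days).
import math

HALF_LIFE_DAYS = 83.8

def fraction_remaining(days: float) -> float:
    return math.exp(-math.log(2) * days / HALF_LIFE_DAYS)

for days in (7, 30, 84, 365):
    print(f"after {days:4d} days: {fraction_remaining(days):.3f} of the activity remains")
```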
Physical sciences
Chemical elements_2
null
27117
https://en.wikipedia.org/wiki/Selenium
Selenium
Selenium is a chemical element; it has the symbol Se and atomic number 34. It has various physical appearances, including a brick-red powder, a vitreous black solid, and a grey metallic-looking form. It seldom occurs in its elemental state or as pure ore compounds in Earth's crust. Selenium was discovered in 1817 by Jöns Jacob Berzelius, who noted the similarity of the new element to the previously discovered tellurium (named for the Earth). Selenium is found in metal sulfide ores, where it substitutes for sulfur. Commercially, selenium is produced as a byproduct in the refining of these ores. Minerals that are pure selenide or selenate compounds are rare. The chief commercial uses for selenium today are glassmaking and pigments. Selenium is a semiconductor and is used in photocells. Applications in electronics, once important, have been mostly replaced with silicon semiconductor devices. Selenium is still used in a few types of DC power surge protectors and one type of fluorescent quantum dot. Although trace amounts of selenium are necessary for cellular function in many animals, including humans, both elemental selenium and (especially) selenium salts are toxic in even small doses, causing selenosis. Symptoms include (in decreasing order of frequency): diarrhea, fatigue, hair loss, joint pain, nail brittleness or discoloration, nausea, headache, tingling, vomiting, and fever. Selenium is listed as an ingredient in many multivitamins and other dietary supplements, as well as in infant formula, and is a component of the antioxidant enzymes glutathione peroxidase and thioredoxin reductase (which indirectly reduce certain oxidized molecules in animals and some plants) as well as in three deiodinase enzymes. Selenium requirements in plants differ by species, with some plants requiring relatively large amounts and others apparently not requiring any. Characteristics Physical properties Selenium forms several allotropes that interconvert with temperature changes, depending somewhat on the rate of temperature change. When prepared in chemical reactions, selenium is usually an amorphous, brick-red powder. When rapidly melted, it forms the black, vitreous form, usually sold commercially as beads. The structure of black selenium is irregular and complex and consists of polymeric rings with up to 1000 atoms per ring. Black selenium is a brittle, lustrous solid that is slightly soluble in CS2. Upon heating, it softens at 50 °C and converts to gray selenium at 180 °C; the transformation temperature is reduced by the presence of halogens and amines. The red α, β, and γ forms are produced from solutions of black selenium by varying the evaporation rate of the solvent (usually CS2). They all have a relatively low, monoclinic crystal symmetry (space group 14) and contain nearly identical puckered cyclooctaselenium (Se8) rings as in sulfur. The eight atoms of a ring are not equivalent (i.e. they are not mapped one onto another by any symmetry operation), and in fact in the γ-monoclinic form, half the rings are in one configuration (and its mirror image) and half in another. The packing is most dense in the α form. In the Se8 rings, the Se–Se distance varies depending on where the pair of atoms is in the ring, but the average is 233.5 pm, and the Se–Se–Se angle is on average 105.7°. Other selenium allotropes may contain Se6 or Se7 rings.
The most stable and dense form of selenium is gray and has a chiral hexagonal crystal lattice (space group 152 or 154 depending on the chirality) consisting of helical polymeric chains, where the Se–Se distance is 237.3 pm and Se–Se–Se angle is 103.1°. The minimum distance between chains is 343.6 pm. Gray selenium is formed by mild heating of other allotropes, by slow cooling of molten selenium, or by condensing selenium vapor just below the melting point. Whereas other selenium forms are insulators, gray selenium is a semiconductor showing appreciable photoconductivity. Unlike the other allotropes, it is insoluble in CS2. It resists oxidation by air and is not attacked by nonoxidizing acids. With strong reducing agents, it forms polyselenides. Selenium does not exhibit the changes in viscosity that sulfur undergoes when gradually heated. Isotopes Selenium has seven naturally occurring isotopes. Five of these, 74Se, 76Se, 77Se, 78Se, 80Se, are stable, with 80Se being the most abundant (49.6% natural abundance). Also naturally occurring is the long-lived primordial radionuclide 82Se, with a half-life of 8.76×10¹⁹ years. The non-primordial radioisotope 79Se also occurs in minute quantities in uranium ores as a product of nuclear fission. Selenium also has numerous unstable synthetic isotopes ranging from 64Se to 95Se; the most stable are 75Se with a half-life of 119.78 days and 72Se with a half-life of 8.4 days. Isotopes lighter than the stable isotopes primarily undergo beta plus decay to isotopes of arsenic, and isotopes heavier than the stable isotopes undergo beta minus decay to isotopes of bromine, with some minor neutron emission branches in the heaviest known isotopes. Chemical compounds Selenium compounds commonly exist in the oxidation states −2, +2, +4, and +6. It is a nonmetal (more rarely considered a metalloid) with properties that are intermediate between the elements above and below in the periodic table, sulfur and tellurium, and also has similarities to arsenic. Chalcogen compounds Selenium forms two oxides: selenium dioxide (SeO2) and selenium trioxide (SeO3). Selenium dioxide is formed by combustion of elemental selenium: Se + O2 → SeO2. It is a polymeric solid that forms monomeric SeO2 molecules in the gas phase. It dissolves in water to form selenous acid, H2SeO3. Selenous acid can also be made directly by oxidizing elemental selenium with nitric acid: 3 Se + 4 HNO3 + H2O → 3 H2SeO3 + 4 NO. Unlike sulfur, which forms a stable trioxide, selenium trioxide is thermodynamically unstable and decomposes to the dioxide above 185 °C: 2 SeO3 → 2 SeO2 + O2. Selenium trioxide is produced in the laboratory by the reaction of anhydrous potassium selenate (K2SeO4) and sulfur trioxide (SO3). Salts of selenous acid are called selenites. These include silver selenite (Ag2SeO3) and sodium selenite (Na2SeO3). Hydrogen sulfide reacts with aqueous selenous acid to produce selenium disulfide: H2SeO3 + 2 H2S → SeS2 + 3 H2O. Selenium disulfide consists of 8-membered rings. It has an approximate composition of SeS2, with individual rings varying in composition, such as Se4S4 and Se2S6. Selenium disulfide has been used in shampoo as an antidandruff agent, an inhibitor in polymer chemistry, a glass dye, and a reducing agent in fireworks. Selenium trioxide may be synthesized by dehydrating selenic acid, H2SeO4, which is itself produced by the oxidation of selenium dioxide with hydrogen peroxide: SeO2 + H2O2 → H2SeO4. Hot, concentrated selenic acid reacts with gold to form gold(III) selenate.
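The simple reactions above can be checked for atom balance mechanically. The sketch below counts atoms on each side of a plain-text equation such as the selenous acid reduction shown earlier; it handles only simple formulas (element symbols followed by optional counts, no parentheses), which is enough for these examples.

```python
# Sketch: verify that a simple chemical equation is atom-balanced.
import re
from collections import Counter

def count_atoms(side: str) -> Counter:
    """Count atoms of each element on one side of an equation."""
    total = Counter()
    for term in side.split("+"):
        m = re.match(r"(\d*)\s*(\S+)", term.strip())
        coeff = int(m.group(1)) if m.group(1) else 1
        for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", m.group(2)):
            total[symbol] += coeff * (int(num) if num else 1)
    return total

def is_balanced(equation: str) -> bool:
    left, right = equation.replace("→", "->").split("->")
    return count_atoms(left) == count_atoms(right)

print(is_balanced("H2SeO3 + 2 H2S -> SeS2 + 3 H2O"))   # True
print(is_balanced("SeO2 + H2O2 -> H2SeO4"))            # True
```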
Halogen compounds Selenium reacts with fluorine to form selenium hexafluoride: Se + 3 F2 → SeF6. In comparison with its sulfur counterpart (sulfur hexafluoride), selenium hexafluoride (SeF6) is more reactive and is a toxic pulmonary irritant. Selenium tetrafluoride is a laboratory-scale fluorinating agent. The only stable chlorides are selenium tetrachloride (SeCl4) and selenium monochloride (Se2Cl2), which might be better known as selenium(I) chloride and is structurally analogous to disulfur dichloride. Metastable solutions of selenium dichloride can be prepared from sulfuryl chloride and selenium (reaction of the elements generates the tetrachloride instead), and constitute an important reagent in the preparation of selenium compounds (e.g. Se7). The corresponding bromides are all known, and recapitulate the same stability and structure as the chlorides. The iodides of selenium are not well known, and for a long time were believed not to exist. There is limited spectroscopic evidence that the lower iodides may form in bi-elemental solutions with nonpolar solvents, such as carbon disulfide and carbon tetrachloride; but even these appear to decompose under illumination. Some selenium oxyhalides, seleninyl fluoride (SeOF2) and selenium oxychloride (SeOCl2), have been used as specialty solvents. Metal selenides Analogous to the behavior of other chalcogens, selenium forms hydrogen selenide, H2Se. It is a strongly odiferous, toxic, and colorless gas. It is more acidic than H2S. In solution it ionizes to HSe−. The selenide dianion Se2− forms a variety of compounds, including the minerals from which selenium is obtained commercially. Illustrative selenides include mercury selenide (HgSe), lead selenide (PbSe), zinc selenide (ZnSe), and copper indium gallium diselenide (Cu(Ga,In)Se2). These materials are semiconductors. With highly electropositive metals, such as aluminium, these selenides are prone to hydrolysis, which may be described by this idealized equation: Al2Se3 + 6 H2O → 2 Al(OH)3 + 3 H2Se. Alkali metal selenides react with selenium to form polyselenides, Sen2−, which exist as chains and rings. Other compounds Tetraselenium tetranitride, Se4N4, is an explosive orange compound analogous to tetrasulfur tetranitride (S4N4). It can be synthesized from selenium tetrachloride (SeCl4). Selenium reacts with cyanides to yield selenocyanates: 8 KCN + Se8 → 8 KSeCN. Organoselenium compounds Selenium, especially in the II oxidation state, forms a variety of organic derivatives. They are structurally analogous to the corresponding organosulfur compounds. Especially common are selenides (R2Se, analogues of thioethers), diselenides (R2Se2, analogues of disulfides), and selenols (RSeH, analogues of thiols). Representatives of selenides, diselenides, and selenols include respectively selenomethionine, diphenyldiselenide, and benzeneselenol. The sulfoxide in sulfur chemistry is represented in selenium chemistry by the selenoxides (formula RSe(O)R), which are intermediates in organic synthesis, as illustrated by the selenoxide elimination reaction. Consistent with trends indicated by the double bond rule, selenoketones, R(C=Se)R, and selenaldehydes, R(C=Se)H, are rarely observed. History Selenium (Greek σελήνη selene meaning "Moon") was discovered in 1817 by Jöns Jacob Berzelius and Johan Gottlieb Gahn. Both chemists owned a chemistry plant near Gripsholm, Sweden, producing sulfuric acid by the lead chamber process.
Pyrite samples from the Falun Mine produced a red solid precipitate in the lead chambers, which was presumed to be an arsenic compound, so the use of pyrite to make acid was discontinued. Berzelius and Gahn, who wanted to use the pyrite, observed that the red precipitate gave off an odor like horseradish when burned. This smell was not typical of arsenic, but a similar odor was known from tellurium compounds. Hence, Berzelius's first letter to Alexander Marcet stated that this was a tellurium compound. However, the lack of tellurium compounds in the Falun Mine minerals eventually led Berzelius to reanalyze the red precipitate, and in 1818 he wrote a second letter to Marcet describing a newly found element similar to sulfur and tellurium. Because of its similarity to tellurium, which is named for the Earth, Berzelius named the new element after the Moon. In 1873, Willoughby Smith found that the electrical conductivity of grey selenium was affected by light. This led to its use in photoelectric cells for sensing light. The first commercial products using selenium were developed by Werner Siemens in the mid-1870s. The selenium cell was used in the photophone developed by Alexander Graham Bell in 1879. Selenium transmits an electric current proportional to the amount of light falling on its surface. This phenomenon was used in the design of light meters and similar devices. Selenium's semiconductor properties found numerous other applications in electronics. The development of selenium rectifiers began during the early 1930s; they replaced copper oxide rectifiers because they were more efficient. Selenium rectifiers remained in commercial use until the 1970s, when they were replaced by less expensive and even more efficient silicon rectifiers. Selenium came to medical notice later because of its toxicity to industrial workers. Selenium was also recognized as an important veterinary toxin, which is seen in animals that have eaten high-selenium plants. In 1954, the first hints of specific biological functions of selenium were discovered in microorganisms by the biochemist Jane Pinsent. It was discovered to be essential for mammalian life in 1957. In the 1970s, it was shown to be present in two independent sets of enzymes. This was followed by the discovery of selenocysteine in proteins. During the 1980s, selenocysteine was shown to be encoded by the codon UGA. The recoding mechanism was worked out first in bacteria and then in mammals (see SECIS element). Occurrence Native (i.e., elemental) selenium is a rare mineral, which does not usually form good crystals, but, when it does, they are steep rhombohedra or tiny acicular (hair-like) crystals. Isolation of selenium is often complicated by the presence of other compounds and elements. Selenium occurs naturally in a number of inorganic forms, including selenide, selenate, and selenite, but these minerals are rare. The common mineral selenite is not a selenium mineral and contains no selenite ion; it is rather a type of gypsum (calcium sulfate hydrate) named, like selenium, for the moon well before the discovery of selenium. Selenium is most commonly found as an impurity, replacing a small part of the sulfur in sulfide ores of many metals. In living systems, selenium is found in the amino acids selenomethionine, selenocysteine, and methylselenocysteine. In these compounds, selenium plays a role analogous to that of sulfur. Another naturally occurring organoselenium compound is dimethyl selenide. 
Certain soils are selenium-rich, and selenium can be bioconcentrated by some plants. In soils, selenium most often occurs in soluble forms such as selenate (analogous to sulfate), which are leached into rivers very easily by runoff. Ocean water contains significant amounts of selenium. Typical background concentrations of selenium do not exceed 1 ng/m3 in the atmosphere, 1 mg/kg in soil and vegetation, and 0.5 μg/L in freshwater and seawater. Anthropogenic sources of selenium include coal burning and the mining and smelting of sulfide ores. Production Selenium is most commonly produced from selenide in many sulfide ores, such as those of copper, nickel, or lead. Electrolytic metal refining is particularly productive of selenium as a byproduct, obtained from the anode mud of copper refineries. Another source was the mud from the lead chambers of sulfuric acid plants, a process that is no longer used. Selenium can be refined from these muds by a number of methods. However, most elemental selenium comes as a byproduct of refining copper or producing sulfuric acid. Since its introduction, solvent extraction and electrowinning (SX/EW) production of copper has supplied an increasing share of the worldwide copper supply. This changes the availability of selenium because only a comparatively small part of the selenium in the ore is leached with the copper. Industrial production of selenium usually involves the extraction of selenium dioxide from residues obtained during the purification of copper. Production from the residue typically begins with oxidation using sodium carbonate to produce selenium dioxide, which is mixed with water and acidified to form selenous acid (oxidation step). The selenous acid is then sparged with sulfur dioxide (reduction step) to give elemental selenium. About 2,000 tonnes of selenium were produced in 2011 worldwide, mostly in Germany (650 t), Japan (630 t), Belgium (200 t), and Russia (140 t), and the total reserves were estimated at 93,000 tonnes. These data exclude two major producers: the United States and China. A sharp price increase was observed in 2004, from $4–5 to $27 per pound. The price was relatively stable during 2004–2010 at about US$30 per pound (in 100 pound lots) but increased to $65/lb in 2011. The consumption in 2010 was divided as follows: metallurgy – 30%, glass manufacturing – 30%, agriculture – 10%, chemicals and pigments – 10%, and electronics – 10%. China is the dominant consumer of selenium at 1,500–2,000 tonnes/year. Applications Manganese electrolysis During the electrowinning of manganese, the addition of selenium dioxide decreases the power necessary to operate the electrolysis cells. China is the largest consumer of selenium dioxide for this purpose. For every tonne of manganese, an average of 2 kg of selenium dioxide is used. Glass production The largest commercial use of selenium, accounting for about 50% of consumption, is for the production of glass. Selenium compounds confer a red color to glass. This color cancels out the green or yellow tints that arise from iron impurities typical for most glass. For this purpose, various selenite and selenate salts are added. For other applications, a red color may be desired, produced by mixtures of CdSe and CdS. Alloys Selenium is used with bismuth in brasses to replace more toxic lead. The regulation of lead in drinking water applications, such as in the US with the Safe Drinking Water Act of 1974, made a reduction of lead in brass necessary. The new brass is marketed under the name EnviroBrass. 
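The oxidation and reduction steps of the industrial production process described above can be sketched as conventional equations; these are assumed standard forms, not quotations from the source:
SeO2 + H2O -> H2SeO3 (selenium dioxide dissolves in water to give selenous acid)
H2SeO3 + 2 SO2 + H2O -> Se + 2 H2SO4 (reduction with sulfur dioxide to elemental selenium)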
Like lead and sulfur, selenium improves the machinability of steel at concentrations around 0.15%. Selenium produces the same machinability improvement in copper alloys. Lithium–selenium batteries The lithium–selenium (Li–Se) battery was considered for energy storage in the family of lithium batteries in the 2010s. Solar cells Selenium was used as the photoabsorbing layer in the first solid-state solar cell, which was demonstrated by the English physicist William Grylls Adams and his student Richard Evans Day in 1876. Only a few years later, Charles Fritts fabricated the first thin-film solar cell, also using selenium as the photoabsorber. However, with the emergence of silicon solar cells in the 1950s, research on selenium thin-film solar cells declined. As a result, the record efficiency of 5.0% demonstrated by Tokio Nakada and Akio Kunioka in 1985 remained unchanged for more than 30 years. In 2017, researchers from IBM achieved a new record efficiency of 6.5% by redesigning the device structure. Following this achievement, selenium has gained renewed interest as a wide bandgap photoabsorber with the potential of being integrated in tandem with lower bandgap photoabsorbers. In 2024, the first selenium-based tandem solar cell was demonstrated, showcasing a selenium top cell monolithically integrated with a silicon bottom cell. However, a significant deficit in the open-circuit voltage is currently the main limiting factor to further improve the efficiency, necessitating defect-engineering strategies for selenium thin-films to enhance the carrier lifetime. As of now, the only defect-engineering strategy that has been investigated for selenium thin-film solar cells involves crystallizing selenium using a laser. Photoconductors Amorphous selenium (α-Se) thin films have found application as photoconductors in flat-panel X-ray detectors. These detectors use amorphous selenium to capture and convert incident X-ray photons directly into electric charge. Selenium has been chosen for this application among other semiconductors owing to a combination of its favorable technological and physical properties: Amorphous selenium has a low melting point, high vapor pressure, and uniform structure. These three properties allow quick and easy deposition of large-area uniform films with a thickness up to 1 mm at a rate of 1–5 μm/min. Their uniformity and lack of grain boundaries, which are intrinsic to polycrystalline materials, improve the X-ray image quality. Meanwhile the large area is essential for scanning the human body or luggage items. Selenium is less toxic than many compound semiconductors that contain arsenic or heavy metals such as mercury or lead. The mobility in applied electric field is sufficiently high both for electrons and holes, so that in a typical 0.2 mm thick device, c. 98% of electrons and holes produced by X-rays are collected at the electrodes without being trapped by various defects. Consequently, device sensitivity is high, and its behavior is easy to describe by simple transport equations. Rectifiers Selenium rectifiers were first used in 1933. They have mostly been replaced by silicon-based devices. One notable exception is in power DC surge protection, where the superior energy capabilities of selenium suppressors make them more desirable than metal-oxide varistors. Other uses The demand for selenium by the electronics industry is declining. Its photovoltaic and photoconductive properties are still useful in photocopying, photocells, light meters and solar cells. 
Its use as a photoconductor in plain-paper copiers once was a leading application, but in the 1980s, the photoconductor application declined (although it was still a large end-use) as more and more copiers switched to organic photoconductors. Zinc selenide was the first material for blue LEDs, but gallium nitride dominates that market. Cadmium selenide was an important component in quantum dots. Sheets of amorphous selenium convert X-ray images to patterns of charge in xeroradiography and in solid-state, flat-panel X-ray cameras. Highly ionized selenium (Se24+, in which 24 electrons have been stripped from the outer d, s, and p orbitals by high input energies) is one of the active media used in X-ray lasers. 75Se is used as a gamma source in industrial radiography. Selenium catalyzes some chemical reactions, but it is not widely used because of issues with toxicity. In X-ray crystallography, incorporation of one or more selenium atoms in place of sulfur helps with multiple-wavelength anomalous dispersion and single-wavelength anomalous dispersion phasing. Selenium is used in the toning of photographic prints, and it is sold as a toner by numerous photographic manufacturers. Selenium intensifies and extends the tonal range of black-and-white photographic images and improves the permanence of prints. Small amounts of organoselenium compounds have been used to modify the catalysts used in the vulcanization of rubber. Selenium is used in some anti-dandruff shampoos in the form of selenium disulfide, such as the Selsun and Vichy Dereos brands. Pollution Selenium pollution might impact some aquatic systems and may be caused by anthropogenic factors such as farming runoff and industrial processes. Observations that people who eat more fish are generally healthier than those who eat less suggest that selenium pollution poses no major human health concern, although selenium can potentially affect humans. Selenium poisoning of water systems may result whenever new agricultural run-off courses through dry lands. This process leaches natural soluble selenium compounds (such as selenates) into the water, which may then be concentrated in wetlands as the water evaporates. Selenium pollution of waterways also occurs when selenium is leached from coal flue ash, mining and metal smelting, crude oil processing, and landfill. High selenium levels in waterways were found to cause congenital disorders in oviparous species, including wetland birds and fish. Elevated dietary methylmercury levels can amplify the harm of selenium toxicity in oviparous species. Selenium is bioaccumulated in aquatic habitats, which results in higher concentrations in organisms than in the surrounding water. Organoselenium compounds can be concentrated over 200,000 times by zooplankton when water concentrations are in the 0.5 to 0.8 μg Se/L range. Inorganic selenium bioaccumulates more readily in phytoplankton than in zooplankton. Phytoplankton can concentrate inorganic selenium by a factor of 3000. Further concentration through bioaccumulation occurs along the food chain, as predators consume selenium-rich prey. It is recommended that a water concentration of 2 μg Se/L be considered highly hazardous to sensitive fish and aquatic birds. Selenium poisoning can be passed from parents to offspring through the egg, and it may persist for many generations. Reproduction of mallard ducks is impaired at dietary concentrations of 7 μg Se/L. Many benthic invertebrates can tolerate selenium concentrations of up to 300 μg Se/L in their diet. 
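The bioconcentration factors quoted above can be turned into rough tissue concentrations by simple multiplication. The sketch below uses only the figures given in the text; the chosen water concentration and variable names are illustrative, and the results are order-of-magnitude estimates.

```python
# Rough bioconcentration estimates using the factors quoted in the text above.
water_ug_per_L = 0.6                  # within the quoted 0.5-0.8 ug Se/L range
bcf_zooplankton_organic = 200_000     # organoselenium concentration factor in zooplankton
bcf_phytoplankton_inorganic = 3_000   # inorganic selenium concentration factor in phytoplankton

zoo = water_ug_per_L * bcf_zooplankton_organic
phyto = water_ug_per_L * bcf_phytoplankton_inorganic
print(f"zooplankton (organic Se): ~{zoo:,.0f} ug Se/kg")
print(f"phytoplankton (inorganic Se): ~{phyto:,.0f} ug Se/kg")
```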
Bioaccumulation of selenium in aquatic environments can cause fish kills, with the severity depending on the species present in the affected area. A few species, however, have been observed to survive these events and tolerate the increased selenium. It has also been suggested that the season could have an impact on the harmful effects of selenium on fish. Substantial physiological changes may occur in fish with high tissue concentrations of selenium. Fish affected by selenium may experience swelling of the gill lamellae, which impedes oxygen diffusion across the gills and blood flow within the gills. Respiratory capacity is further reduced due to selenium binding to hemoglobin. Other problems include degeneration of liver tissue, swelling around the heart, damaged egg follicles in ovaries, cataracts, and accumulation of fluid in the body cavity and head. Selenium often causes malformed fish embryos, which may have problems feeding or respiring; distortion of the fins or spine is also common. Adult fish may appear healthy despite their inability to produce viable offspring. Examples In Belews Lake, North Carolina, 19 species of fish were eliminated from the lake due to 150–200 μg Se/L wastewater discharged from 1974 to 1986 from a Duke Energy coal-fired power plant. At the Kesterson National Wildlife Refuge in California, thousands of fish and waterbirds were poisoned by selenium in agricultural irrigation drainage. Biological role Although it is toxic in large doses, selenium is an essential micronutrient for animals. In plants, it occurs as a bystander mineral, sometimes in toxic proportions in forage (some plants may accumulate selenium as a defense against being eaten by animals, but other plants, such as locoweed, require selenium, and their growth indicates the presence of selenium in soil). The selenium content in the human body is believed to be in the range of 13–20 mg. Selenium is a component of the unusual amino acids selenocysteine and selenomethionine. In humans, selenium is a trace element nutrient that functions as a cofactor for antioxidant enzymes, such as glutathione peroxidases and certain forms of thioredoxin reductase found in animals and some plants (this enzyme occurs in all living organisms, but not all forms of it in plants require selenium). The glutathione peroxidase family of enzymes (GSH-Px) catalyzes reactions that remove reactive oxygen species such as hydrogen peroxide and organic hydroperoxides. The thyroid gland and every cell that uses thyroid hormone also use selenium, which is a cofactor for three of the four known types of thyroid hormone deiodinases, which activate and then deactivate various thyroid hormones and their metabolites; the iodothyronine deiodinases are the subfamily of deiodinase enzymes that use selenium as the otherwise rare amino acid selenocysteine. Increased dietary selenium reduces the effects of mercury toxicity, although it is effective only at low to modest doses of mercury. Evidence suggests that the molecular mechanisms of mercury toxicity include the irreversible inhibition of selenoenzymes that are required to prevent and reverse oxidative damage in brain and endocrine tissues. The selenium-containing compound selenoneine is present in the blood of bluefin tuna. Certain plants are considered indicators of high selenium content in the soil because they require high levels of selenium to thrive. 
The main selenium indicator plants are Astragalus species (including some locoweeds), prince's plume (Stanleya sp.), woody asters (Xylorhiza sp.), and false goldenweed (Oonopsis sp.). Evolution in biology From about three billion years ago, prokaryotic selenoprotein families drove the evolution of the amino acid selenocysteine. Several selenoproteins are known in bacteria, archaea, and eukaryotes, invariably owing to the presence of selenocysteine. Just as in mammals, selenoproteins protect unicellular organisms against oxidative damage. Selenoprotein families of GSH-Px and the deiodinases of eukaryotic cells seem to have a bacterial phylogenetic origin. The selenocysteine-containing form occurs in species as diverse as green algae, diatoms, sea urchins, fish, and chickens. Trace elements involved in the activities of GSH-Px and superoxide dismutase enzymes, i.e., selenium, vanadium, magnesium, copper, and zinc, may have been lacking in some terrestrial mineral-deficient areas. Marine organisms retained and sometimes expanded their selenoproteomes, whereas the selenoproteomes of some terrestrial organisms were reduced or completely lost. These findings suggest that, with the exception of vertebrates, aquatic life supports selenium use, whereas terrestrial habitats lead to reduced use of this trace element. Marine fishes and vertebrate thyroid glands have the highest concentrations of selenium and iodine. From about 500 million years ago, freshwater and terrestrial plants slowly optimized the production of "new" endogenous antioxidants such as ascorbic acid (vitamin C), polyphenols (including flavonoids), and tocopherols. A few of these appeared in the last 50–200 million years in the fruits and flowers of angiosperm plants. In fact, the angiosperms (the dominant type of plant today) and most of their antioxidant pigments evolved during the late Jurassic period. About 200 million years ago, new selenoproteins developed as mammalian GSH-Px enzymes. Toxicity Although selenium is an essential trace element, it is toxic if taken in excess. Exceeding the Tolerable Upper Intake Level of 400 micrograms per day can lead to selenosis. This 400 μg Tolerable Upper Intake Level is based primarily on a 1986 study of five Chinese patients who exhibited overt signs of selenosis and a follow-up study on the same five people in 1992. The 1992 study found the maximum safe dietary selenium intake to be approximately 800 micrograms per day (15 micrograms per kilogram of body weight), but suggested 400 micrograms per day to avoid creating an imbalance of nutrients in the diet and to accord with data from other countries. In China, people who ingested corn grown in extremely selenium-rich stony coal (carbonaceous shale) have suffered from selenium toxicity. This coal was shown to have a selenium content as high as 9.1%, the highest concentration in coal ever recorded. Signs and symptoms of selenosis include a garlic odor on the breath, gastrointestinal disorders, hair loss, sloughing of nails, fatigue, irritability, and neurological damage. Extreme cases of selenosis can exhibit cirrhosis of the liver, pulmonary edema, or death. Elemental selenium and most metallic selenides have relatively low toxicities because of low bioavailability. By contrast, selenates and selenites have an oxidant mode of action similar to that of arsenic trioxide and are very toxic. The chronic toxic dose of selenite for humans is about 2400 to 3000 micrograms of selenium per day. Hydrogen selenide is an extremely toxic, corrosive gas. 
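The glutathione peroxidase reaction described in the biological role discussion above can be written in its standard biochemical form; this is textbook biochemistry rather than a quotation from the source (GSH is reduced glutathione, GSSG is glutathione disulfide, and ROOH is an organic hydroperoxide):
2 GSH + H2O2 -> GSSG + 2 H2O
2 GSH + ROOH -> GSSG + ROH + H2O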
Selenium also occurs in organic compounds, such as dimethyl selenide, selenomethionine, selenocysteine and methylselenocysteine, all of which have high bioavailability and are toxic in large doses. On 19 April 2009, 21 polo ponies died shortly before a match in the United States Polo Open. Three days later, a statement was released by the compounding pharmacy that had prepared the vitamin/mineral supplement, explaining that the horses had received an incorrect dose of one of its ingredients. Analysis of the inorganic components of the supplement in tissue samples indicated selenium concentrations 10 to 15 times higher than normal in the blood samples and 15 to 20 times higher than normal in the liver samples. Selenium was later confirmed to be the toxic factor. In fish and other wildlife, selenium is necessary for life but toxic in high doses. For salmon, the optimal selenium concentration is about 1 microgram of selenium per gram of whole body weight. Much below that level, young salmon die from deficiency; much above, they die from toxic excess. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for selenium in the workplace at 0.2 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.2 mg/m3 over an 8-hour workday. At levels of 1 mg/m3, selenium is immediately dangerous to life and health. Detection in biological fluids Selenium may be measured in blood, plasma, serum, or urine to monitor excessive environmental or occupational exposure, to confirm a diagnosis of poisoning in hospitalized victims, or to investigate a suspected case of fatal overdose. Some analytical techniques can distinguish organic from inorganic forms of the element. Both organic and inorganic forms of selenium are largely converted to monosaccharide conjugates (selenosugars) in the body before elimination in the urine. Cancer patients receiving daily oral doses of selenothionine may achieve very high plasma and urine selenium concentrations. Deficiency Selenium deficiency can occur in patients with severely compromised intestinal function, in those undergoing total parenteral nutrition, and in those of advanced age (over 90). Also, people dependent on food grown from selenium-deficient soil are at risk. Although New Zealand soils have low levels of selenium, no adverse health effects have been detected in residents. Selenium deficiency, defined by low (<60% of normal) selenoenzyme activity levels in brain and endocrine tissues, occurs only when a low selenium level is linked with additional stress, such as high exposures to mercury or increased oxidant stress from vitamin E deficiency. Selenium interacts with other nutrients, such as iodine and vitamin E. The effect of selenium deficiency on health remains uncertain, particularly concerning Kashin–Beck disease. Also, selenium interacts with other minerals, such as zinc and copper. High doses of selenium supplements in pregnant animals might disturb the zinc:copper ratio and lead to zinc reduction; in such treatment cases, zinc levels should be monitored. Further studies are needed to confirm these interactions. In regions (for example, parts of North America) where low soil selenium levels lead to low concentrations in plants, some animal species may be deficient unless selenium is supplemented through diet or injection. Ruminants are particularly susceptible. 
In general, absorption of dietary selenium is lower in ruminants than in other animals and is lower from forages than from grain. Ruminants grazing certain forages, e.g., some white clover varieties containing cyanogenic glycosides, may have higher selenium requirements, presumably because cyanide is released from the aglycone by glucosidase activity in the rumen and glutathione peroxidases are deactivated by the cyanide acting on the glutathione moiety. Neonate ruminants at risk of white muscle disease (WMD) may be administered both selenium and vitamin E by injection; some of the WMD myopathies respond only to selenium, some only to vitamin E, and some to either. Nutritional sources of selenium Dietary selenium comes from meat, nuts, cereals, and mushrooms. Brazil nuts are the richest dietary source (though this is soil-dependent, since the Brazil nut does not require high levels of the element for its own needs). The US Recommended Dietary Allowance (RDA) of selenium for teenagers and adults is 55 μg/day. Selenium as a dietary supplement is available in many forms, including multivitamin/mineral supplements, which typically contain 55 or 70 μg/serving. Selenium-specific supplements typically contain either 100 or 200 μg/serving. In June 2015, the US Food and Drug Administration (FDA) published its final rule establishing a requirement for minimum and maximum levels of selenium in infant formula. General health effects The effects of selenium intake on cancer have been studied in several clinical trials and epidemiologic studies in humans. Selenium may have a chemopreventive role against cancer as an antioxidant, and it might stimulate the immune response. At low levels, it is used in the body to create antioxidant selenoproteins; at higher-than-normal doses it causes cell death. Selenium (in close interrelation with iodine) plays a role in thyroid health. Selenium is a cofactor for the three thyroid hormone deiodinases, helping to activate and then deactivate various thyroid hormones and their metabolites. Isolated selenium deficiency is now being investigated for its role in the induction of autoimmune reactions in the thyroid gland in Hashimoto's disease. In cases of combined iodine and selenium deficiency, selenium was shown to play a thyroid-protecting role.
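For orientation, the per-serving supplement contents quoted above can be compared with the RDA and the Tolerable Upper Intake Level given in the toxicity discussion earlier; the sketch below is simple arithmetic on the figures in the text, with illustrative variable names.

```python
# Compare the quoted supplement doses with the US RDA (55 ug/day) and the
# Tolerable Upper Intake Level (400 ug/day) given earlier in this section.
RDA_UG = 55
UPPER_LIMIT_UG = 400

for dose_ug in (55, 70, 100, 200):   # per-serving contents quoted in the text
    print(f"{dose_ug} ug/serving = {dose_ug / RDA_UG:.0%} of the RDA, "
          f"{dose_ug / UPPER_LIMIT_UG:.0%} of the upper intake level")
```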
Strontium
Strontium is a chemical element; it has symbol Sr and atomic number 38. An alkaline earth metal, strontium is a soft silver-white yellowish metallic element that is highly chemically reactive. The metal forms a dark oxide layer when it is exposed to air. Strontium has physical and chemical properties similar to those of its two vertical neighbors in the periodic table, calcium and barium. It occurs naturally mainly in the minerals celestine and strontianite, and is mostly mined from these. Both strontium and strontianite are named after Strontian, a village in Scotland near which the mineral was discovered in 1790 by Adair Crawford and William Cruickshank; it was identified as a new element the next year from its crimson-red flame test color. Strontium was first isolated as a metal in 1808 by Humphry Davy using the then newly discovered process of electrolysis. During the 19th century, strontium was mostly used in the production of sugar from sugar beets (see strontian process). At the peak of production of television cathode-ray tubes, as much as 75% of strontium consumption in the United States was used for the faceplate glass. With the replacement of cathode-ray tubes with other display methods, consumption of strontium has dramatically declined. While natural strontium (which is mostly the isotope strontium-88) is stable, the synthetic strontium-90 is radioactive and is one of the most dangerous components of nuclear fallout, as strontium is absorbed by the body in a similar manner to calcium. Natural stable strontium, on the other hand, is not hazardous to health. Characteristics Strontium is a divalent silvery metal with a pale yellow tint whose properties are mostly intermediate between and similar to those of its group neighbors calcium and barium. It is softer than calcium and harder than barium. Its melting (777 °C) and boiling (1377 °C) points are lower than those of calcium (842 °C and 1484 °C respectively); barium continues this downward trend in the melting point (727 °C), but not in the boiling point (1900 °C). The density of strontium (2.64 g/cm3) is similarly intermediate between those of calcium (1.54 g/cm3) and barium (3.594 g/cm3). Three allotropes of metallic strontium exist, with transition points at 235 and 540 °C. The standard electrode potential for the Sr2+/Sr couple is −2.89 V, approximately midway between those of the Ca2+/Ca (−2.84 V) and Ba2+/Ba (−2.92 V) couples, and close to those of the neighboring alkali metals. Strontium is intermediate between calcium and barium in its reactivity toward water, with which it reacts on contact to produce strontium hydroxide and hydrogen gas. Strontium metal burns in air to produce both strontium oxide and strontium nitride, but since it does not react with nitrogen below 380 °C, at room temperature it forms only the oxide spontaneously. Besides the simple oxide SrO, the peroxide SrO2 can be made by direct oxidation of strontium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Sr(O2)2. Strontium hydroxide, Sr(OH)2, is a strong base, though it is not as strong as the hydroxides of barium or the alkali metals. All four dihalides of strontium are known. Due to the large size of the heavy s-block elements, including strontium, a vast range of coordination numbers is known, from 2, 3, or 4 all the way to 22 or 24 in SrCd11 and SrZn13. The Sr2+ ion is quite large, so that high coordination numbers are the rule. 
The large size of strontium and barium plays a significant part in stabilising strontium complexes with polydentate macrocyclic ligands such as crown ethers: for example, while 18-crown-6 forms relatively weak complexes with calcium and the alkali metals, its strontium and barium complexes are much stronger. Organostrontium compounds contain one or more strontium–carbon bonds. They have been reported as intermediates in Barbier-type reactions. Although strontium is in the same group as magnesium, and organomagnesium compounds are very commonly used throughout chemistry, organostrontium compounds are not similarly widespread because they are more difficult to make and more reactive. Organostrontium compounds tend to be more similar to organoeuropium or organosamarium compounds due to the similar ionic radii of these elements (Sr2+ 118 pm; Eu2+ 117 pm; Sm2+ 122 pm). Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. For example, strontium dicyclopentadienyl, Sr(C5H5)2, must be made by directly reacting strontium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand, on the other hand, increases the compound's solubility, volatility, and kinetic stability. Because of its extreme reactivity with oxygen and water, strontium occurs naturally only in compounds with other elements, such as in the minerals strontianite and celestine. It is kept under a liquid hydrocarbon such as mineral oil or kerosene to prevent oxidation; freshly exposed strontium metal rapidly turns a yellowish color with the formation of the oxide. Finely powdered strontium metal is pyrophoric, meaning that it will ignite spontaneously in air at room temperature. Volatile strontium salts impart a bright red color to flames, and these salts are used in pyrotechnics and in the production of flares. Like calcium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, strontium metal dissolves directly in liquid ammonia to give a dark blue solution of solvated electrons. Isotopes Natural strontium is a mixture of four stable isotopes: 84Sr, 86Sr, 87Sr, and 88Sr. Of these isotopes, 88Sr is the most abundant, making up about 82.6% of all natural strontium, though the abundance varies due to the production of radiogenic 87Sr as the daughter of long-lived beta-decaying 87Rb. This is the basis of rubidium–strontium dating. Of the unstable isotopes, the primary decay mode of the isotopes lighter than 85Sr is electron capture or positron emission to isotopes of rubidium, and that of the isotopes heavier than 88Sr is electron emission to isotopes of yttrium. Of special note are 89Sr and 90Sr. The former has a half-life of 50.6 days and is used to treat bone cancer due to strontium's chemical similarity to calcium and hence its ability to replace it. While 90Sr (half-life 28.90 years) has been used similarly, it is also an isotope of concern in fallout from nuclear weapons and nuclear accidents due to its production as a fission product. Its presence in bones can cause bone cancer, cancer of nearby tissues, and leukemia. The 1986 Chernobyl nuclear accident contaminated about 30,000 km2 with more than 10 kBq/m2 of 90Sr, which accounts for about 5% of the 90Sr that was in the reactor core. History Strontium is named after the Scottish village of Strontian, where it was discovered in the ores of the lead mines. 
In 1790, Adair Crawford, a physician engaged in the preparation of barium, and his colleague William Cruickshank recognised that the Strontian ores exhibited properties that differed from those of other "heavy spars" sources. This allowed Crawford to conclude on page 355 "... it is probable indeed, that the scotch mineral is a new species of earth which has not hitherto been sufficiently examined." The physician and mineral collector Friedrich Gabriel Sulzer, together with Johann Friedrich Blumenbach, analysed the mineral from Strontian and named it strontianite. He also came to the conclusion that it was distinct from witherite and contained a new earth (neue Grunderde). In 1793, Thomas Charles Hope, a professor of chemistry at the University of Glasgow, studied the mineral and proposed the name strontites. He confirmed the earlier work of Crawford and recounted: "... Considering it a peculiar earth I thought it necessary to give it a name. I have called it Strontites, from the place it was found; a mode of derivation in my opinion, fully as proper as any quality it may possess, which is the present fashion." The element was eventually isolated by Sir Humphry Davy in 1808 by the electrolysis of a mixture containing strontium chloride and mercuric oxide, and announced by him in a lecture to the Royal Society on 30 June 1808. In keeping with the naming of the other alkaline earths, he changed the name to strontium. The first large-scale application of strontium was in the production of sugar from sugar beet. Although a crystallisation process using strontium hydroxide was patented by Augustin-Pierre Dubrunfaut in 1849, its large-scale introduction came with the improvement of the process in the early 1870s. The German sugar industry used the process well into the 20th century. Before World War I, the beet sugar industry used 100,000 to 150,000 tons of strontium hydroxide for this process per year. The strontium hydroxide was recycled in the process, but the need to replace losses during production was high enough to create a significant demand, initiating mining of strontianite in the Münsterland. The mining of strontianite in Germany ended when mining of the celestine deposits in Gloucestershire started. These mines supplied most of the world's strontium from 1884 to 1941. Although the celestine deposits in the Granada basin had been known for some time, large-scale mining did not start until the 1950s. During atmospheric nuclear weapons testing, it was observed that strontium-90 is one of the nuclear fission products with a relatively high yield. The similarity to calcium and the chance that strontium-90 might become enriched in bones made research on the metabolism of strontium an important topic. Occurrence Strontium commonly occurs in nature, being the 15th most abundant element on Earth (its heavier congener barium being the 14th), estimated to average approximately 360 parts per million in the Earth's crust, and is found chiefly as the sulfate mineral celestine (SrSO4) and the carbonate strontianite (SrCO3). Of the two, celestine occurs much more frequently in deposits of sufficient size for mining. Because strontium is used most often in the carbonate form, strontianite would be the more useful of the two common minerals, but few deposits have been discovered that are suitable for development. Because of the way it reacts with air and water, strontium exists in nature only in combination with other elements, in minerals. 
Naturally occurring strontium is stable, but its synthetic isotope Sr-90 is present in the environment only as a result of nuclear fallout. In groundwater, strontium behaves chemically much like calcium. At intermediate to acidic pH, Sr2+ is the dominant strontium species. In the presence of calcium ions, strontium commonly forms coprecipitates with calcium minerals such as calcite and anhydrite at an increased pH. At intermediate to acidic pH, dissolved strontium is bound to soil particles by cation exchange. The mean strontium content of ocean water is 8 mg/L. At a concentration between 82 and 90 μmol/L, the strontium concentration in seawater is considerably lower than the calcium concentration, which is normally between 9.6 and 11.6 mmol/L. It is nevertheless much higher than that of barium, 13 μg/L. Production The major producers of strontium as celestine as of January 2024 are Spain (200,000 t), Iran (200,000 t), China (80,000 t), Mexico (35,000 t), and Argentina (700 t). Although strontium deposits occur widely in the United States, they have not been mined since 1959. A large proportion of mined celestine (SrSO4) is converted to the carbonate by two processes. Either the celestine is directly leached with sodium carbonate solution, or the celestine is roasted with coal to form the sulfide. The second process produces a dark-coloured material containing mostly strontium sulfide. This so-called "black ash" is dissolved in water and filtered. Strontium carbonate is precipitated from the strontium sulfide solution by introduction of carbon dioxide. The sulfate is reduced to the sulfide by carbothermic reduction: SrSO4 + 2 C → SrS + 2 CO2 About 300,000 tons are processed in this way annually. The metal is produced commercially by reducing strontium oxide with aluminium; the strontium is then distilled from the mixture. Strontium metal can also be prepared on a small scale by electrolysis of a solution of strontium chloride in molten potassium chloride: Sr2+ + 2 e− → Sr (at the cathode) and 2 Cl− → Cl2 + 2 e− (at the anode). Applications Consuming 75% of production, the primary use for strontium was in glass for colour television cathode-ray tubes, where it prevented X-ray emission. This application for strontium has been declining because CRTs are being replaced by other display methods. This decline has a significant influence on the mining and refining of strontium. All parts of the CRT must absorb X-rays. In the neck and the funnel of the tube, lead glass is used for this purpose, but this type of glass shows a browning effect due to the interaction of the X-rays with the glass. Therefore, the front panel is made from a different glass mixture with strontium and barium to absorb the X-rays. The average values for the glass mixture determined in a 2005 recycling study are 8.5% strontium oxide and 10% barium oxide. Because strontium is so similar to calcium, it is incorporated into bone. All four stable isotopes are incorporated, in roughly the same proportions as they are found in nature. However, the actual distribution of the isotopes tends to vary greatly from one geographical location to another. Thus, analyzing the bone of an individual can help determine the region it came from. This approach helps to identify ancient migration patterns and the origin of commingled human remains in battlefield burial sites. 87Sr/86Sr ratios are commonly used to determine the likely provenance areas of sediment in natural systems, especially in marine and fluvial environments. 
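The 87Sr/86Sr provenance work and the rubidium–strontium dating mentioned earlier both rest on the in-growth of radiogenic 87Sr from the decay of 87Rb. A minimal sketch of the standard isochron relation follows; the decay constant (about 1.42e-11 per year) is an approximate literature value, and the function and variable names are illustrative rather than taken from the text.

```python
import math

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb in 1/year (approximate literature value)

def sr87_sr86(initial_ratio: float, rb87_sr86: float, t_years: float) -> float:
    """Standard isochron equation: present 87Sr/86Sr from the initial ratio,
    the measured 87Rb/86Sr, and the age t."""
    return initial_ratio + rb87_sr86 * (math.exp(LAMBDA_RB87 * t_years) - 1.0)

def age_from_slope(slope: float) -> float:
    """Age in years from the slope of an 87Sr/86Sr versus 87Rb/86Sr isochron."""
    return math.log(slope + 1.0) / LAMBDA_RB87

# An isochron slope of about 0.0143 corresponds to an age of roughly one billion years.
print(f"{age_from_slope(0.0143) / 1e9:.2f} Ga")
```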
Dasch (1969) showed that surface sediments of the Atlantic displayed 87Sr/86Sr ratios that could be regarded as bulk averages of the 87Sr/86Sr ratios of geological terrains from adjacent landmasses. A good example of a fluvial-marine system in which Sr isotope provenance studies have been successfully employed is the River Nile-Mediterranean system. Due to the differing ages of the rocks that constitute the majority of the Blue and White Nile catchment areas, the changing provenance of sediment reaching the River Nile Delta and East Mediterranean Sea can be discerned through strontium isotopic studies. Such changes are climatically controlled in the Late Quaternary. More recently, 87Sr/86Sr ratios have also been used to determine the source of ancient archaeological materials such as timbers and corn in Chaco Canyon, New Mexico. 87Sr/86Sr ratios in teeth may also be used to track animal migrations. Strontium aluminate is frequently used in glow-in-the-dark toys, as it is chemically and biologically inert. Strontium carbonate and other strontium salts are added to fireworks to give a deep red colour. This same effect identifies strontium cations in the flame test. Fireworks consume about 5% of the world's production. Strontium carbonate is used in the manufacturing of hard ferrite magnets. Strontium chloride is sometimes used in toothpastes for sensitive teeth. One popular brand includes 10% total strontium chloride hexahydrate by weight. Strontium is also used in the refining of zinc to remove small amounts of lead impurities. The metal itself has a limited use as a getter, to remove unwanted gases in vacuums by reacting with them, although barium may also be used for this purpose. The ultra-narrow optical transition between the [Kr]5s2 1S0 electronic ground state and the metastable [Kr]5s5p 3P0 excited state of 87Sr is one of the leading candidates for the future re-definition of the second in terms of an optical transition, as opposed to the current definition derived from a microwave transition between different hyperfine ground states of 133Cs. Current optical atomic clocks operating on this transition already surpass the precision and accuracy of the current definition of the second. Radioactive strontium 89Sr is the active ingredient in Metastron, a radiopharmaceutical used for bone pain secondary to metastatic bone cancer. The strontium is processed like calcium by the body and is preferentially incorporated into bone at sites of increased osteogenesis. This localization focuses the radiation exposure on the cancerous lesion. 90Sr has been used as a power source for radioisotope thermoelectric generators (RTGs). 90Sr produces approximately 0.93 watts of heat per gram (the figure is lower for the form of 90Sr used in RTGs, which is strontium fluoride). However, 90Sr has one-third the lifetime of 238Pu, another RTG fuel, and a lower density. The main advantage of 90Sr is that it is significantly cheaper than 238Pu and is found in nuclear waste, whereas 238Pu must be prepared by irradiating 237Np with neutrons and then separating the modest amounts of 238Pu formed. The principal disadvantage of 90Sr is that its high-energy beta particles produce bremsstrahlung as they encounter the nuclei of other nearby heavy atoms, such as adjacent strontium. This radiation is mostly in the X-ray range. Thus, strong beta emitters in most cases also emit significant secondary X-rays, which requires shielding measures that complicate the design of RTGs using 90Sr. 
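Using the specific power of about 0.93 W/g and the 28.90-year half-life quoted above, the thermal output of a 90Sr heat source over time can be sketched as follows. This is illustrative arithmetic only; as noted in the text, the strontium fluoride fuel actually used in RTGs has a lower specific power.

```python
# Decay of the thermal output of a 90Sr heat source, using the figures quoted above.
HALF_LIFE_Y = 28.90              # half-life of 90Sr in years
SPECIFIC_POWER_W_PER_G = 0.93    # heat output of pure 90Sr at time zero, W/g

def power_watts(mass_g: float, years: float) -> float:
    """Thermal power of a 90Sr source of the given mass after the given time."""
    return mass_g * SPECIFIC_POWER_W_PER_G * 0.5 ** (years / HALF_LIFE_Y)

for t in (0, 10, 30, 60):
    print(f"after {t:2d} years: {power_watts(1000, t):5.0f} W from 1 kg of 90Sr")
```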
The Soviet Union deployed nearly 1000 of these RTGs on its northern coast as a power source for lighthouses and meteorology stations. Biological role Acantharea, a relatively large group of marine radiolarian protozoa, produce intricate mineral skeletons composed of strontium sulfate. In biological systems, calcium is substituted to a small extent by strontium. In the human body, most of the absorbed strontium is deposited in the bones. The ratio of strontium to calcium in human bones is between 1:1000 and 1:2000, roughly in the same range as in the blood serum. Effect on the human body The human body absorbs strontium as if it were its lighter congener calcium. Because the elements are chemically very similar, stable strontium isotopes do not pose a significant health threat. The average human has an intake of about two milligrams of strontium a day. In adults, strontium consumed tends to attach only to the surface of bones, but in children, strontium can replace calcium in the mineral of the growing bones and thus lead to bone growth problems. The biological half-life of strontium in humans has variously been reported as from 14 to 600 days, 1,000 days, 18 years, 30 years and, at an upper limit, 49 years. The wide-ranging published biological half-life figures are explained by strontium's complex metabolism within the body. However, by averaging all excretion paths, the overall biological half-life is estimated to be about 18 years. The elimination rate of strontium is strongly affected by age and sex, due to differences in bone metabolism. The drug strontium ranelate aids bone growth, increases bone density, and lessens the incidence of vertebral, peripheral, and hip fractures. However, strontium ranelate also increases the risk of venous thromboembolism, pulmonary embolism, and serious cardiovascular disorders, including myocardial infarction. Its use is therefore now restricted. Its beneficial effects are also questionable, since the increased bone density is partially caused by the greater density of strontium relative to the calcium it replaces. Strontium also bioaccumulates in the body. Despite restrictions on strontium ranelate, strontium is still contained in some supplements. There is little scientific evidence on the risks of strontium chloride when taken by mouth. Those with a personal or family history of blood clotting disorders are advised to avoid strontium. Strontium has been shown to inhibit sensory irritation when applied topically to the skin. Topically applied, strontium has also been shown to accelerate the recovery rate of the epidermal permeability barrier (skin barrier). Nuclear waste Strontium-90 is a radioactive fission product produced by nuclear reactors used in nuclear power. It is a major contributor to the high-level radioactivity of nuclear waste and spent nuclear fuel. Its 29-year half-life is short enough that its decay heat has been used to power Arctic lighthouses, but long enough that it can take hundreds of years to decay to safe levels. Exposure from contaminated water and food may increase the risk of leukemia, bone cancer and primary hyperparathyroidism. Remediation Algae have shown selectivity for strontium in studies, whereas most plants used in bioremediation have not shown selectivity between calcium and strontium, often becoming saturated with calcium, which is present in greater quantity and is also found in nuclear waste. Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. 
The study claims a highly selective biosorption capacity for strontium of S. spinosus, suggesting that it may be appropriate for use in treating nuclear wastewater. A study of the pond alga Closterium moniliferum using non-radioactive strontium found that varying the ratio of barium to strontium in water improved strontium selectivity.
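The roughly 18-year overall biological half-life quoted in the section above can be combined with the 28.9-year physical half-life of 90Sr using the standard effective half-life relation from health physics, 1/T_eff = 1/T_bio + 1/T_phys. This relation is a textbook formula rather than something stated in the text, and the sketch below is for orientation only.

```python
def effective_half_life(t_bio_years: float, t_phys_years: float) -> float:
    """Effective half-life when biological elimination and radioactive decay
    act together: the rate constants (1/T) add."""
    return 1.0 / (1.0 / t_bio_years + 1.0 / t_phys_years)

# ~18-year biological half-life (quoted above) combined with the 28.9-year
# physical half-life of 90Sr gives an effective half-life of about 11 years.
print(f"{effective_half_life(18.0, 28.9):.1f} years")
```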
Silver
Silver is a chemical element; it has symbol Ag (, derived from Proto-Indo-European *h₂erǵ ) and atomic number 47. A soft, white, lustrous transition metal, it exhibits the highest electrical conductivity, thermal conductivity, and reflectivity of any metal. Silver is found in the Earth's crust in the pure, free elemental form ("native silver"), as an alloy with gold and other metals, and in minerals such as argentite and chlorargyrite. Most silver is produced as a byproduct of copper, gold, lead, and zinc refining. Silver is a naturally occurring element. It is found in the environment combined with other elements such as sulfide, chloride, and nitrate. Pure silver is “silver” colored, but silver nitrate and silver chloride are powdery white and silver sulfide and silver oxide are dark-gray to black. Silver is often found as a by-product during the retrieval of copper, lead, zinc, and gold ores. Silver has long been valued as a precious metal. Silver metal is used in many bullion coins, sometimes alongside gold: while it is more abundant than gold, it is much less abundant as a native metal. Its purity is typically measured on a per-mille basis; a 94%-pure alloy is described as "0.940 fine". As one of the seven metals of antiquity, silver has had an enduring role in most human cultures. Other than in currency and as an investment medium (coins and bullion), silver is used in solar panels, water filtration, jewellery, ornaments, high-value tableware and utensils (hence the term "silverware"), in electrical contacts and conductors, in specialised mirrors, window coatings, in catalysis of chemical reactions, as a colorant in stained glass, and in specialised confectionery. Its compounds are used in photographic and X-ray film. Dilute solutions of silver nitrate and other silver compounds are used as disinfectants and microbiocides (oligodynamic effect), added to bandages, wound-dressings, catheters, and other medical instruments. Characteristics Silver is similar in its physical and chemical properties to its two vertical neighbours in group 11 of the periodic table: copper, and gold. Its 47 electrons are arranged in the configuration [Kr]4d105s1, similarly to copper ([Ar]3d104s1) and gold ([Xe]4f145d106s1); group 11 is one of the few groups in the d-block which has a completely consistent set of electron configurations. This distinctive electron configuration, with a single electron in the highest occupied s subshell over a filled d subshell, accounts for many of the singular properties of metallic silver. Silver is a relatively soft and extremely ductile and malleable transition metal, though it is slightly less malleable than gold. Silver crystallises in a face-centred cubic lattice with bulk coordination number 12, where only the single 5s electron is delocalised, similarly to copper and gold. Unlike metals with incomplete d-shells, metallic bonds in silver are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of silver. Silver has a brilliant, white, metallic luster that can take a high polish, and which is so characteristic that the name of the metal itself has become a color name. Protected silver has greater optical reflectivity than aluminium at all wavelengths longer than ~450 nm. At wavelengths shorter than 450 nm, silver's reflectivity is inferior to that of aluminium and drops to zero near 310 nm. 
Very high electrical and thermal conductivity are common to the elements in group 11, because their single s electron is free and does not interact with the filled d subshell, as such interactions (which occur in the preceding transition metals) lower electron mobility. The thermal conductivity of silver is among the highest of all materials, although the thermal conductivity of carbon (in the diamond allotrope) and superfluid helium-4 are higher. The electrical conductivity of silver is the highest of all metals, greater even than copper. Silver also has the lowest contact resistance of any metal. Silver is rarely used for its electrical conductivity, due to its high cost, although an exception is in radio-frequency engineering, particularly at VHF and higher frequencies where silver plating improves electrical conductivity because those currents tend to flow on the surface of conductors rather than through the interior. During World War II in the US, tons of silver were used for the electromagnets in calutrons for enriching uranium, mainly because of the wartime shortage of copper. Silver readily forms alloys with copper, gold, and zinc. Zinc-silver alloys with low zinc concentration may be considered as face-centred cubic solid solutions of zinc in silver, as the structure of the silver is largely unchanged while the electron concentration rises as more zinc is added. Increasing the electron concentration further leads to body-centred cubic (electron concentration 1.5), complex cubic (1.615), and hexagonal close-packed phases (1.75). Isotopes Naturally occurring silver is composed of two stable isotopes, 107Ag and 109Ag, with 107Ag being slightly more abundant (51.839% natural abundance). This almost equal abundance is rare in the periodic table. The atomic weight is 107.8682(2) u; this value is very important because of the importance of silver compounds, particularly halides, in gravimetric analysis. Both isotopes of silver are produced in stars via the s-process (slow neutron capture), as well as in supernovas via the r-process (rapid neutron capture). Twenty-eight radioisotopes have been characterised, the most stable being 105Ag with a half-life of 41.29 days, 111Ag with a half-life of 7.45 days, and 112Ag with a half-life of 3.13 hours. Silver has numerous nuclear isomers, the most stable being 108mAg (t1/2 = 418 years), 110mAg (t1/2 = 249.79 days) and 106mAg (t1/2 = 8.28 days). All of the remaining radioactive isotopes have half-lives of less than an hour, and the majority of these have half-lives of less than three minutes. Isotopes of silver range in relative atomic mass from 92.950 u (93Ag) to 129.950 u (130Ag); the primary decay mode before the most abundant stable isotope, 107Ag, is electron capture and the primary mode after is beta decay. The primary decay products before 107Ag are palladium (element 46) isotopes, and the primary products after are cadmium (element 48) isotopes. The palladium isotope 107Pd decays by beta emission to 107Ag with a half-life of 6.5 million years. Iron meteorites are the only objects with a high-enough palladium-to-silver ratio to yield measurable variations in 107Ag abundance. Radiogenic 107Ag was first discovered in the Santa Clara meteorite in 1978. 107Pd–107Ag correlations observed in bodies that have clearly been melted since the accretion of the Solar System must reflect the presence of unstable nuclides in the early Solar System. Chemistry Silver is a rather unreactive metal. 
This is because its filled 4d shell is not very effective in shielding the electrostatic forces of attraction from the nucleus to the outermost 5s electron, and hence silver is near the bottom of the electrochemical series (E0(Ag+/Ag) = +0.799 V). In group 11, silver has the lowest first ionisation energy (showing the instability of the 5s orbital), but has higher second and third ionisation energies than copper and gold (showing the stability of the 4d orbitals), so that the chemistry of silver is predominantly that of the +1 oxidation state, reflecting the increasingly limited range of oxidation states along the transition series as the d-orbitals fill and stabilise. Unlike copper, for which the larger hydration energy of Cu2+ as compared to Cu+ is the reason why the former is the more stable in aqueous solution and solids despite lacking the stable filled d-subshell of the latter, with silver this effect is swamped by its larger second ionisation energy. Hence, Ag+ is the stable species in aqueous solution and solids, with Ag2+ being much less stable as it oxidises water. Most silver compounds have significant covalent character due to the small size and high first ionisation energy (730.8 kJ/mol) of silver. Furthermore, silver's Pauling electronegativity of 1.93 is higher than that of lead (1.87), and its electron affinity of 125.6 kJ/mol is much higher than that of hydrogen (72.8 kJ/mol) and not much less than that of oxygen (141.0 kJ/mol). Due to its full d-subshell, silver in its main +1 oxidation state exhibits relatively few properties of the transition metals proper from groups 4 to 10, forming rather unstable organometallic compounds, forming linear complexes showing very low coordination numbers like 2, and forming an amphoteric oxide as well as Zintl phases like the post-transition metals. Unlike the preceding transition metals, the +1 oxidation state of silver is stable even in the absence of π-acceptor ligands. Silver does not react with air, even at red heat, and thus was considered by alchemists to be a noble metal, along with gold. Its reactivity is intermediate between that of copper (which forms copper(I) oxide when heated in air to red heat) and gold. Like copper, silver reacts with sulfur and its compounds; in their presence, silver tarnishes in air to form the black silver sulfide (copper forms the green sulfate instead, while gold does not react). While silver is not attacked by non-oxidising acids, the metal dissolves readily in hot concentrated sulfuric acid, as well as dilute or concentrated nitric acid. In the presence of air, and especially in the presence of hydrogen peroxide, silver dissolves readily in aqueous solutions of cyanide. The three main forms of deterioration in historical silver artifacts are tarnishing, formation of silver chloride due to long-term immersion in salt water, and reaction with nitrate ions or oxygen. Fresh silver chloride is pale yellow, becoming purplish on exposure to light; it projects slightly from the surface of the artifact or coin. The precipitation of copper in ancient silver can be used to date artifacts, as copper is nearly always a constituent of silver alloys. Silver metal is attacked by strong oxidants such as potassium permanganate (KMnO4) and potassium dichromate (K2Cr2O7), in the presence of potassium bromide (KBr). These compounds are used in photography to bleach silver images, converting them to silver bromide that can either be fixed with thiosulfate or redeveloped to intensify the original image. 
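The atomic weight quoted in the isotopes discussion above can be cross-checked from the abundances of the two stable isotopes. The isotopic masses used below (about 106.9051 u and 108.9048 u) are standard literature values that are not stated in the text; the abundance is the 51.839% figure quoted above.

```python
# Cross-check of the atomic weight of silver from its two stable isotopes.
m_ag107, m_ag109 = 106.9051, 108.9048   # isotopic masses in u (approximate)
abundance_107 = 0.51839                 # natural abundance of 107Ag quoted above
abundance_109 = 1.0 - abundance_107

atomic_weight = abundance_107 * m_ag107 + abundance_109 * m_ag109
print(round(atomic_weight, 4))          # about 107.868, consistent with 107.8682(2) u
```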
Silver forms cyanide complexes (silver cyanide) that are soluble in water in the presence of an excess of cyanide ions. Silver cyanide solutions are used in electroplating of silver. The common oxidation states of silver are (in order of commonness): +1 (the most stable state; for example, silver nitrate, AgNO3); +2 (highly oxidising; for example, silver(II) fluoride, AgF2); and even very rarely +3 (extreme oxidising; for example, potassium tetrafluoroargentate(III), KAgF4). The +3 state requires very strong oxidising agents to attain, such as fluorine or peroxodisulfate, and some silver(III) compounds react with atmospheric moisture and attack glass. Indeed, silver(III) fluoride is usually obtained by reacting silver or silver monofluoride with the strongest known oxidising agent, krypton difluoride. Compounds Oxides and chalcogenides Silver and gold have rather low chemical affinities for oxygen, lower than copper, and it is therefore expected that silver oxides are thermally quite unstable. Soluble silver(I) salts precipitate dark-brown silver(I) oxide, Ag2O, upon the addition of alkali. (The hydroxide AgOH exists only in solution; otherwise it spontaneously decomposes to the oxide.) Silver(I) oxide is very easily reduced to metallic silver, and decomposes to silver and oxygen above 160 °C. This and other silver(I) compounds may be oxidised by the strong oxidising agent peroxodisulfate to black AgO, a mixed silver(I,III) oxide of formula AgIAgIIIO2. Some other mixed oxides with silver in non-integral oxidation states, namely Ag2O3 and Ag3O4, are also known, as is Ag3O which behaves as a metallic conductor. Silver(I) sulfide, Ag2S, is very readily formed from its constituent elements and is the cause of the black tarnish on some old silver objects. It may also be formed from the reaction of hydrogen sulfide with silver metal or aqueous Ag+ ions. Many non-stoichiometric selenides and tellurides are known; in particular, AgTe~3 is a low-temperature superconductor. Halides The only known dihalide of silver is the difluoride, AgF2, which can be obtained from the elements under heat. A strong yet thermally stable and therefore safe fluorinating agent, silver(II) fluoride is often used to synthesise hydrofluorocarbons. In stark contrast to this, all four silver(I) halides are known. The fluoride, chloride, and bromide have the sodium chloride structure, but the iodide has three known stable forms at different temperatures; that at room temperature is the cubic zinc blende structure. They can all be obtained by the direct reaction of their respective elements. As the halogen group is descended, the silver halide gains more and more covalent character, solubility decreases, and the colour changes from the white chloride to the yellow iodide as the energy required for ligand-metal charge transfer (X−Ag+ → XAg) decreases. The fluoride is anomalous, as the fluoride ion is so small that it has a considerable solvation energy and hence is highly water-soluble and forms di- and tetrahydrates. The other three silver halides are highly insoluble in aqueous solutions and are very commonly used in gravimetric analytical methods. All four are photosensitive (though the monofluoride is so only to ultraviolet light), especially the bromide and iodide which photodecompose to silver metal, and thus were used in traditional photography. 
The reaction involved is: X− + hν → X + e− (excitation of the halide ion, which gives up its extra electron into the conduction band) Ag+ + e− → Ag (reduction of a silver ion, which gains the electron to become a silver atom) The process is not reversible because the silver atom liberated is typically found at a crystal defect or an impurity site, so that the electron's energy is lowered enough that it is "trapped".
Other inorganic compounds
White silver nitrate, AgNO3, is a versatile precursor to many other silver compounds, especially the halides, and is much less sensitive to light. It was once called lunar caustic because silver was called luna by the ancient alchemists, who believed that silver was associated with the Moon. It is often used for gravimetric analysis, exploiting the insolubility of the heavier silver halides, to which it is a common precursor. Silver nitrate is used in many ways in organic synthesis, e.g. for deprotection and oxidations. Ag+ binds alkenes reversibly, and silver nitrate has been used to separate mixtures of alkenes by selective absorption. The resulting adduct can be decomposed with ammonia to release the free alkene. Yellow silver carbonate, Ag2CO3, can be easily prepared by reacting aqueous solutions of sodium carbonate with a deficiency of silver nitrate. Its principal use is for the production of silver powder for use in microelectronics. It is reduced with formaldehyde, producing silver free of alkali metals: Ag2CO3 + CH2O → 2 Ag + 2 CO2 + H2 Silver carbonate is also used as a reagent in organic synthesis such as the Koenigs–Knorr reaction. In the Fétizon oxidation, silver carbonate on celite acts as an oxidising agent to form lactones from diols. It is also employed to convert alkyl bromides into alcohols. Silver fulminate, AgCNO, a powerful, touch-sensitive explosive used in percussion caps, is made by reaction of silver metal with nitric acid in the presence of ethanol. Other dangerously explosive silver compounds are silver azide, AgN3, formed by reaction of silver nitrate with sodium azide, and silver acetylide, Ag2C2, formed when silver reacts with acetylene gas in ammonia solution. In its most characteristic reaction, silver azide decomposes explosively, releasing nitrogen gas: 2 AgN3 (s) → 3 N2 (g) + 2 Ag (s) Given the photosensitivity of silver salts, this behaviour may be induced by shining a light on its crystals.
Coordination compounds
Silver complexes tend to be similar to those of its lighter homologue copper. Silver(III) complexes tend to be rare and very easily reduced to the more stable lower oxidation states, though they are slightly more stable than those of copper(III). For instance, the square planar periodate [Ag(IO5OH)2]5− and tellurate [Ag{TeO4(OH)2}2]5− complexes may be prepared by oxidising silver(I) with alkaline peroxodisulfate. The yellow diamagnetic [AgF4]− is much less stable, fuming in moist air and reacting with glass. Silver(II) complexes are more common. Like the valence-isoelectronic copper(II) complexes, they are usually square planar and paramagnetic, a tendency that is enhanced by the greater field splitting for 4d electrons than for 3d electrons. Aqueous Ag2+, produced by oxidation of Ag+ by ozone, is a very strong oxidising agent, even in acidic solutions; it is stabilised in phosphoric acid due to complex formation.
Peroxodisulfate oxidation is generally necessary to give the more stable complexes with heterocyclic amines, such as [Ag(py)4]2+ and [Ag(bipy)2]2+: these are stable provided the counterion cannot reduce the silver back to the +1 oxidation state. [AgF4]2− is also known in its violet barium salt, as are some silver(II) complexes with N- or O-donor ligands such as pyridine carboxylates. By far the most important oxidation state for silver in complexes is +1. The Ag+ cation is diamagnetic, like its homologues Cu+ and Au+, as all three have closed-shell electron configurations with no unpaired electrons: its complexes are colourless provided the ligands are not too easily polarised such as I−. Ag+ forms salts with most anions, but it is reluctant to coordinate to oxygen and thus most of these salts are insoluble in water: the exceptions are the nitrate, perchlorate, and fluoride. The tetracoordinate tetrahedral aqueous ion [Ag(H2O)4]+ is known, but the characteristic geometry for the Ag+ cation is 2-coordinate linear. For example, silver chloride dissolves readily in excess aqueous ammonia to form [Ag(NH3)2]+; silver salts are dissolved in photography due to the formation of the thiosulfate complex [Ag(S2O3)2]3−; and cyanide extraction for silver (and gold) works by the formation of the complex [Ag(CN)2]−. Silver cyanide forms the linear polymer {Ag–C≡N→Ag–C≡N→}; silver thiocyanate has a similar structure, but forms a zigzag instead because of the sp3-hybridized sulfur atom. Chelating ligands are unable to form linear complexes and thus silver(I) complexes with them tend to form polymers; a few exceptions exist, such as the near-tetrahedral diphosphine and diarsine complexes [Ag(L–L)2]+. Organometallic Under standard conditions, silver does not form simple carbonyls, due to the weakness of the Ag–C bond. A few are known at very low temperatures around 6–15 K, such as the green, planar paramagnetic Ag(CO)3, which dimerises at 25–30 K, probably by forming Ag–Ag bonds. Additionally, the silver carbonyl [Ag(CO)] [B(OTeF5)4] is known. Polymeric AgLX complexes with alkenes and alkynes are known, but their bonds are thermodynamically weaker than even those of the platinum complexes (though they are formed more readily than those of the analogous gold complexes): they are also quite unsymmetrical, showing the weak π bonding in group 11. Ag–C σ bonds may also be formed by silver(I), like copper(I) and gold(I), but the simple alkyls and aryls of silver(I) are even less stable than those of copper(I) (which tend to explode under ambient conditions). For example, poor thermal stability is reflected in the relative decomposition temperatures of AgMe (−50 °C) and CuMe (−15 °C) as well as those of PhAg (74 °C) and PhCu (100 °C). The C–Ag bond is stabilised by perfluoroalkyl ligands, for example in AgCF(CF3)2. Alkenylsilver compounds are also more stable than their alkylsilver counterparts. Silver-NHC complexes are easily prepared, and are commonly used to prepare other NHC complexes by displacing labile ligands. For example, the reaction of the bis(NHC)silver(I) complex with bis(acetonitrile)palladium dichloride or chlorido(dimethyl sulfide)gold(I): Intermetallic Silver forms alloys with most other elements on the periodic table. 
The elements from groups 1–3, except for hydrogen, lithium, and beryllium, are very miscible with silver in the condensed phase and form intermetallic compounds; those from groups 4–9 are only poorly miscible; the elements in groups 10–14 (except boron and carbon) have very complex Ag–M phase diagrams and form the most commercially important alloys; and the remaining elements on the periodic table have no consistency in their Ag–M phase diagrams. By far the most important such alloys are those with copper: most silver used for coinage and jewellery is in reality a silver–copper alloy, and the eutectic mixture is used in vacuum brazing. The two metals are completely miscible as liquids but not as solids; their importance in industry comes from the fact that their properties tend to be suitable over a wide range of variation in silver and copper concentration, although most useful alloys tend to be richer in silver than the eutectic mixture (71.9% silver and 28.1% copper by weight, and 60.1% silver and 39.9% copper by atom). Most other binary alloys are of little use: for example, silver–gold alloys are too soft and silver–cadmium alloys too toxic. Ternary alloys have much greater importance: dental amalgams are usually silver–tin–mercury alloys, silver–copper–gold alloys are very important in jewellery (usually on the gold-rich side) and have a vast range of hardnesses and colours, silver–copper–zinc alloys are useful as low-melting brazing alloys, and silver–cadmium–indium (involving three adjacent elements on the periodic table) is useful in nuclear reactors because of its high thermal neutron capture cross-section, good conduction of heat, mechanical stability, and resistance to corrosion in hot water.
Etymology
The word silver appears in Old English in various spellings, such as seolfor and siolfor. It is cognate with Old High German silabar, Gothic silubr, and Old Norse silfr, all ultimately deriving from Proto-Germanic *silubra. The Balto-Slavic words for silver are rather similar to the Germanic ones (e.g. Russian серебро [serebró], Polish srebro, Lithuanian sidabras), as is the Celtiberian form silabur. They may have a common Indo-European origin, although their morphology rather suggests a non-Indo-European Wanderwort. Some scholars have thus proposed a Paleo-Hispanic origin, pointing to the Basque form zilar as evidence. The chemical symbol Ag is from the Latin word for silver, argentum (compare Ancient Greek ἄργυρος, árgyros), from the Proto-Indo-European root *h₂erǵ- (formerly reconstructed as *arǵ-), meaning "white" or "shining". This was the usual Proto-Indo-European word for the metal, whose reflexes are missing in Germanic and Balto-Slavic.
History
Silver was known in prehistoric times: the three metals of group 11, copper, silver, and gold, occur in the elemental form in nature and were probably used as the first primitive forms of money as opposed to simple bartering. Unlike copper, silver did not lead to the growth of metallurgy, on account of its low structural strength; it was more often used ornamentally or as money. Since silver is more reactive than gold, supplies of native silver were much more limited than those of gold. For example, silver was more expensive than gold in Egypt until around the fifteenth century BC: the Egyptians are thought to have separated gold from silver by heating the metals with salt, and then reducing the silver chloride produced to the metal. The situation changed with the discovery of cupellation, a technique that allowed silver metal to be extracted from its ores.
While slag heaps found in Asia Minor and on the islands of the Aegean Sea indicate that silver was being separated from lead as early as the 4th millennium BC, and one of the earliest silver extraction centres in Europe was Sardinia in the early Chalcolithic period, these techniques did not spread widely until later, when it spread throughout the region and beyond. The origins of silver production in India, China, and Japan were almost certainly equally ancient, but are not well-documented due to their great age. When the Phoenicians first came to what is now Spain, they obtained so much silver that they could not fit it all on their ships, and as a result used silver to weight their anchors instead of lead. By the time of the Greek and Roman civilisations, silver coins were a staple of the economy: the Greeks were already extracting silver from galena by the 7th century BC, and the rise of Athens was partly made possible by the nearby silver mines at Laurium, from which they extracted about 30 tonnes a year from 600 to 300 BC. The stability of the Roman currency relied to a high degree on the supply of silver bullion, mostly from Spain, which Roman miners produced on a scale unparalleled before the discovery of the New World. Reaching a peak production of 200 tonnes per year, an estimated silver stock of 10,000 tonnes circulated in the Roman economy in the middle of the second century AD, five to ten times larger than the combined amount of silver available to medieval Europe and the Abbasid Caliphate around AD 800. The Romans also recorded the extraction of silver in central and northern Europe in the same time period. This production came to a nearly complete halt with the fall of the Roman Empire, not to resume until the time of Charlemagne: by then, tens of thousands of tonnes of silver had already been extracted. Central Europe became the centre of silver production during the Middle Ages, as the Mediterranean deposits exploited by the ancient civilisations had been exhausted. Silver mines were opened in Bohemia, Saxony, Alsace, the Lahn region, Siegerland, Silesia, Hungary, Norway, Steiermark, Schwaz, and the southern Black Forest. Most of these ores were quite rich in silver and could simply be separated by hand from the remaining rock and then smelted; some deposits of native silver were also encountered. Many of these mines were soon exhausted, but a few of them remained active until the Industrial Revolution, before which the world production of silver was around a meagre 50 tonnes per year. In the Americas, high temperature silver-lead cupellation technology was developed by pre-Inca civilisations as early as AD 60–120; silver deposits in India, China, Japan, and pre-Columbian America continued to be mined during this time. With the discovery of America and the plundering of silver by the Spanish conquistadors, Central and South America became the dominant producers of silver until around the beginning of the 18th century, particularly Peru, Bolivia, Chile, and Argentina: the last of these countries later took its name from that of the metal that composed so much of its mineral wealth. The silver trade gave way to a global network of exchange. As one historian put it, silver "went round the world and made the world go round." Much of this silver ended up in the hands of the Chinese. A Portuguese merchant in 1621 noted that silver "wanders throughout all the world... before flocking to China, where it remains as if at its natural centre". 
Still, much of it went to Spain, allowing Spanish rulers to pursue military and political ambitions in both Europe and the Americas. "New World mines", concluded several historians, "supported the Spanish empire." In the 19th century, primary production of silver moved to North America, particularly Canada, Mexico, and Nevada in the United States: some secondary production from lead and zinc ores also took place in Europe, and deposits in Siberia and the Russian Far East as well as in Australia were mined. Poland emerged as an important producer during the 1970s after the discovery of copper deposits that were rich in silver, before the centre of production returned to the Americas the following decade. Today, Peru and Mexico are still among the primary silver producers, but the distribution of silver production around the world is quite balanced and about one-fifth of the silver supply comes from recycling instead of new production. Symbolic role Silver plays a certain role in mythology and has found various usage as a metaphor and in folklore. The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. Ovid's Metamorphoses contains another retelling of the story, containing an illustration of silver's metaphorical use of signifying the second-best in a series, better than bronze but worse than gold: In folklore, silver was commonly thought to have mystic powers: for example, a bullet cast from silver is often supposed in such folklore the only weapon that is effective against a werewolf, witch, or other monsters. From this the idiom of a silver bullet developed into figuratively referring to any simple solution with very high effectiveness or almost miraculous results, as in the widely discussed software engineering paper "No Silver Bullet." Other powers attributed to silver include detection of poison and facilitation of passage into the mythical realm of fairies. Silver production has also inspired figurative language. Clear references to cupellation occur throughout the Old Testament of the Bible, such as in Jeremiah's rebuke to Judah: "The bellows are burned, the lead is consumed of the fire; the founder melteth in vain: for the wicked are not plucked away. Reprobate silver shall men call them, because the Lord hath rejected them." (Jeremiah 6:19–20) Jeremiah was also aware of sheet silver, exemplifying the malleability and ductility of the metal: "Silver spread into plates is brought from Tarshish, and gold from Uphaz, the work of the workman, and of the hands of the founder: blue and purple is their clothing: they are all the work of cunning men." (Jeremiah 10:9) Silver also has more negative cultural meanings: the idiom thirty pieces of silver, referring to a reward for betrayal, references the bribe Judas Iscariot is said in the New Testament to have taken from Jewish leaders in Jerusalem to turn Jesus of Nazareth over to soldiers of the high priest Caiaphas. Ethically, silver also symbolizes greed and degradation of consciousness; this is the negative aspect, the perverting of its value. Occurrence and production The abundance of silver in the Earth's crust is 0.08 parts per million, almost exactly the same as that of mercury. It mostly occurs in sulfide ores, especially acanthite and argentite, Ag2S. 
Argentite deposits sometimes also contain native silver when they occur in reducing environments, and when in contact with salt water they are converted to chlorargyrite (including horn silver), AgCl, which is prevalent in Chile and New South Wales. Most other silver minerals are silver pnictides or chalcogenides; they are generally lustrous semiconductors. Most true silver deposits, as opposed to argentiferous deposits of other metals, came from Tertiary period vulcanism. The principal sources of silver are the ores of copper, copper-nickel, lead, and lead-zinc obtained from Peru, Bolivia, Mexico, China, Australia, Chile, Poland and Serbia. Peru, Bolivia and Mexico have been mining silver since 1546, and are still major world producers. Top silver-producing mines are Cannington (Australia), Fresnillo (Mexico), San Cristóbal (Bolivia), Antamina (Peru), Rudna (Poland), and Penasquito (Mexico). Top near-term mine development projects through 2015 were Pascua Lama (Chile), Navidad (Argentina), Juanicipio (Mexico), Malku Khota (Bolivia), and Hackett River (Canada). In Central Asia, Tajikistan is known to have some of the largest silver deposits in the world. Silver is usually found in nature combined with other metals, or in minerals that contain silver compounds, generally in the form of sulfides such as galena (lead sulfide) or cerussite (lead carbonate). Consequently, the primary production of silver requires the smelting and then cupellation of argentiferous lead ores, a historically important process. Lead melts at 327 °C, lead oxide at 888 °C and silver melts at 960 °C. To separate the silver, the alloy is melted again at the high temperature of 960 °C to 1000 °C in an oxidising environment. The lead oxidises to lead monoxide, known as litharge, which captures the oxygen from the other metals present. The liquid lead oxide is removed or absorbed by capillary action into the hearth linings: Ag(s) + 2 Pb(s) + O2(g) → 2 PbO(absorbed) + Ag(l) Today, silver metal is primarily produced instead as a secondary byproduct of electrolytic refining of copper, lead, and zinc, and by application of the Parkes process on lead bullion from ore that also contains silver. In such processes, silver follows the non-ferrous metal in question through its concentration and smelting, and is later purified out. For example, in copper production, purified copper is electrolytically deposited on the cathode, while the less reactive precious metals such as silver and gold collect under the anode as the so-called "anode slime". This is then separated and purified of base metals by treatment with hot aerated dilute sulfuric acid and heating with lime or silica flux, before the silver is purified to over 99.9% purity via electrolysis in nitrate solution. Commercial-grade fine silver is at least 99.9% pure, and purities greater than 99.999% are available. In 2022, Mexico was the top producer of silver (6,300 tonnes or 24.2% of the world's total of 26,000 t), followed by China (3,600 t) and Peru (3,100 t).
In marine environments
Silver concentration is low in seawater (pmol/L). Levels vary by depth and between water bodies. Dissolved silver concentrations range from 0.3 pmol/L in coastal surface waters to 22.8 pmol/L in pelagic deep waters. Analysing the presence and dynamics of silver in marine environments is difficult due to these particularly low concentrations and complex interactions in the environment.
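For orientation, the picomolar concentrations just quoted can be expressed in mass terms using the molar mass of silver; the short Python sketch below performs the conversion (the molar mass used is a standard reference value, not a figure from this section).

# Convert the dissolved silver concentrations quoted above from pmol/L to ng/L.
MOLAR_MASS_AG = 107.87          # g/mol, standard reference value

for pmol_per_litre in (0.3, 22.8):
    ng_per_litre = pmol_per_litre * 1e-12 * MOLAR_MASS_AG * 1e9
    print(f"{pmol_per_litre:5.1f} pmol/L is about {ng_per_litre:.2f} ng/L of silver")
    # prints ~0.03 ng/L for coastal surface waters and ~2.46 ng/L for deep waters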
Although a rare trace metal, concentrations are greatly impacted by fluvial, aeolian, atmospheric, and upwelling inputs, as well as anthropogenic inputs via discharge, waste disposal, and emissions from industrial companies. Other internal processes such as decomposition of organic matter may be a source of dissolved silver in deeper waters, which feeds into some surface waters through upwelling and vertical mixing. In the Atlantic and Pacific, silver concentrations are minimal at the surface but rise in deeper waters. Silver is taken up by plankton in the photic zone, remobilized with depth, and enriched in deep waters. Silver is transported from the Atlantic to the other oceanic water masses. In North Pacific waters, silver is remobilised at a slower rate and increasingly enriched compared to deep Atlantic waters. Silver has increasing concentrations that follow the major oceanic conveyor belt that cycles water and nutrients from the North Atlantic to the South Atlantic to the North Pacific. There is not an extensive amount of data focused on how marine life is affected by silver despite the likely deleterious effects it could have on organisms through bioaccumulation, association with particulate matters, and sorption. Not until about 1984 did scientists begin to understand the chemical characteristics of silver and the potential toxicity. In fact, mercury is the only other trace metal that surpasses the toxic effects of silver; the full silver toxicity extent is not expected in oceanic conditions because of its tendency to transfer into nonreactive biological compounds. In one study, the presence of excess ionic silver and silver nanoparticles caused bioaccumulation effects on zebrafish organs and altered the chemical pathways within their gills. In addition, very early experimental studies demonstrated how the toxic effects of silver fluctuate with salinity and other parameters, as well as between life stages and different species such as finfish, molluscs, and crustaceans. Another study found raised concentrations of silver in the muscles and liver of dolphins and whales, indicating pollution of this metal within recent decades. Silver is not an easy metal for an organism to eliminate and elevated concentrations can cause death. Monetary use The earliest known coins were minted in the kingdom of Lydia in Asia Minor around 600 BC. The coins of Lydia were made of electrum, which is a naturally occurring alloy of gold and silver, that was available within the territory of Lydia. Since that time, silver standards, in which the standard economic unit of account is a fixed weight of silver, have been widespread throughout the world until the 20th century. Notable silver coins through the centuries include the Greek drachma, the Roman denarius, the Islamic dirham, the karshapana from ancient India and rupee from the time of the Mughal Empire (grouped with copper and gold coins to create a trimetallic standard), and the Spanish dollar. The ratio between the amount of silver used for coinage and that used for other purposes has fluctuated greatly over time; for example, in wartime, more silver tends to have been used for coinage to finance the war. Today, silver bullion has the ISO 4217 currency code XAG, one of only four precious metals to have one (the others being palladium, platinum, and gold). Silver coins are produced from cast rods or ingots, rolled to the correct thickness, heat-treated, and then used to cut blanks from. 
These blanks are then milled and minted in a coining press; modern coining presses can produce 8000 silver coins per hour. Price Silver prices are normally quoted in troy ounces. One troy ounce is equal to . The London silver fix is published every working day at noon London time. This price is determined by several major international banks and is used by London bullion market members for trading that day. Prices are most commonly shown as the United States dollar (USD), the Pound sterling (GBP), and the Euro (EUR). Applications Jewellery and silverware The major use of silver besides coinage throughout most of history was in the manufacture of jewellery and other general-use items, and this continues to be a major use today. Examples include table silver for cutlery, for which silver is highly suited due to its antibacterial properties. Western concert flutes are usually plated with or made out of sterling silver; in fact, most silverware is only silver-plated rather than made out of pure silver; the silver is normally put in place by electroplating. Silver-plated glass (as opposed to metal) is used for mirrors, vacuum flasks, and Christmas tree decorations. Because pure silver is very soft, most silver used for these purposes is alloyed with copper, with finenesses of 925/1000, 835/1000, and 800/1000 being common. One drawback is the easy tarnishing of silver in the presence of hydrogen sulfide and its derivatives. Including precious metals such as palladium, platinum, and gold gives resistance to tarnishing but is quite costly; base metals like zinc, cadmium, silicon, and germanium do not totally prevent corrosion and tend to affect the lustre and colour of the alloy. Electrolytically refined pure silver plating is effective at increasing resistance to tarnishing. The usual solutions for restoring the lustre of tarnished silver are dipping baths that reduce the silver sulfide surface to metallic silver, and cleaning off the layer of tarnish with a paste; the latter approach also has the welcome side effect of polishing the silver concurrently. Medicine In medicine, silver is incorporated into wound dressings and used as an antibiotic coating in medical devices. Wound dressings containing silver sulfadiazine or silver nanomaterials are used to treat external infections. Silver is also used in some medical applications, such as urinary catheters (where tentative evidence indicates it reduces catheter-related urinary tract infections) and in endotracheal breathing tubes (where evidence suggests it reduces ventilator-associated pneumonia). The silver ion is bioactive and in sufficient concentration readily kills bacteria in vitro. Silver ions interfere with enzymes in the bacteria that transport nutrients, form structures, and synthesise cell walls; these ions also bond with the bacteria's genetic material. Silver and silver nanoparticles are used as an antimicrobial in a variety of industrial, healthcare, and domestic application: for example, infusing clothing with nanosilver particles thus allows them to stay odourless for longer. Bacteria can develop resistance to the antimicrobial action of silver. Silver compounds are taken up by the body like mercury compounds, but lack the toxicity of the latter. Silver and its alloys are used in cranial surgery to replace bone, and silver–tin–mercury amalgams are used in dentistry. 
Silver diammine fluoride, the fluoride salt of a coordination complex with the formula [Ag(NH3)2]F, is a topical medicament (drug) used to treat and prevent dental caries (cavities) and relieve dentinal hypersensitivity. Electronics Silver is very important in electronics for conductors and electrodes on account of its high electrical conductivity even when tarnished. Bulk silver and silver foils were used to make vacuum tubes, and continue to be used today in the manufacture of semiconductor devices, circuits, and their components. For example, silver is used in high quality connectors for RF, VHF, and higher frequencies, particularly in tuned circuits such as cavity filters where conductors cannot be scaled by more than 6%. Printed circuits and RFID antennas are made with silver paints, Powdered silver and its alloys are used in paste preparations for conductor layers and electrodes, ceramic capacitors, and other ceramic components. Brazing alloys Silver-containing brazing alloys are used for brazing metallic materials, mostly cobalt, nickel, and copper-based alloys, tool steels, and precious metals. The basic components are silver and copper, with other elements selected according to the specific application desired: examples include zinc, tin, cadmium, palladium, manganese, and phosphorus. Silver provides increased workability and corrosion resistance during usage. Chemical equipment Silver is useful in the manufacture of chemical equipment on account of its low chemical reactivity, high thermal conductivity, and being easily workable. Silver crucibles (alloyed with 0.15% nickel to avoid recrystallisation of the metal at red heat) are used for carrying out alkaline fusion. Copper and silver are also used when doing chemistry with fluorine. Equipment made to work at high temperatures is often silver-plated. Silver and its alloys with gold are used as wire or ring seals for oxygen compressors and vacuum equipment. Catalysis Silver metal is a good catalyst for oxidation reactions; in fact it is somewhat too good for most purposes, as finely divided silver tends to result in complete oxidation of organic substances to carbon dioxide and water, and hence coarser-grained silver tends to be used instead. For instance, 15% silver supported on α-Al2O3 or silicates is a catalyst for the oxidation of ethylene to ethylene oxide at 230–270 °C. Dehydrogenation of methanol to formaldehyde is conducted at 600–720 °C over silver gauze or crystals as the catalyst, as is dehydrogenation of isopropanol to acetone. In the gas phase, glycol yields glyoxal and ethanol yields acetaldehyde, while organic amines are dehydrated to nitriles. Photography Before the advent of digital photography, which is now dominant, the photosensitivity of silver halides was exploited for use in traditional film photography. The photosensitive emulsion used in black-and-white photography is a suspension of silver halide crystals in gelatin, possibly mixed in with some noble metal compounds for improved photosensitivity, developing, and . Colour photography requires the addition of special dye components and sensitisers, so that the initial black-and-white silver image couples with a different dye component. The original silver images are bleached off and the silver is then recovered and recycled. Silver nitrate is the starting material in all cases. The market for silver nitrate and silver halides for photography has rapidly declined with the rise of digital cameras. 
From the peak global demand for photographic silver in 1999 (267,000,000 troy ounces or 8,304.6 tonnes), the market contracted by almost 70% by 2013.
Nanoparticles
Nanosilver particles, between 10 and 100 nanometres in size, are used in many applications. They are used in conductive inks for printed electronics, and have a much lower melting point than larger silver particles of micrometre size. They are also used medicinally in antibacterials and antifungals in much the same way as larger silver particles. In addition, according to the European Union Observatory for Nanomaterials (EUON), silver nanoparticles are used in both pigments and cosmetics.
Miscellanea
Pure silver metal is used as a food colouring. It has the E174 designation and is approved in the European Union. Traditional Indian and Pakistani dishes sometimes include decorative silver foil known as vark, and in various other cultures, silver dragées are used to decorate cakes, cookies, and other dessert items. Photochromic lenses include silver halides, so that ultraviolet light in natural daylight liberates metallic silver, darkening the lenses. The silver halides are reformed in lower light intensities. Colourless silver chloride films are used in radiation detectors. Zeolite sieves incorporating Ag+ ions are used to desalinate seawater during rescues, using silver ions to precipitate chloride as silver chloride. Silver is also used for its antibacterial properties for water sanitisation, but this use is limited by restrictions on silver consumption. Colloidal silver is similarly used to disinfect closed swimming pools; while it has the advantage of not giving off a smell like hypochlorite treatments do, colloidal silver is not effective enough for more contaminated open swimming pools. Small silver iodide crystals are used in cloud seeding to cause rain. The Texas Legislature designated silver the official precious metal of Texas in 2007.
Precautions
Silver compounds have low toxicity compared to those of most other heavy metals, as they are poorly absorbed by the human body when ingested, and that which does get absorbed is rapidly converted to insoluble silver compounds or complexed by metallothionein. Silver fluoride and silver nitrate are caustic and can cause tissue damage, resulting in gastroenteritis, diarrhoea, falling blood pressure, cramps, paralysis, or respiratory arrest. Animals repeatedly dosed with silver salts have been observed to experience anaemia, slowed growth, necrosis of the liver, and fatty degeneration of the liver and kidneys; rats implanted with silver foil or injected with colloidal silver have been observed to develop localised tumours. Parenterally administered colloidal silver causes acute silver poisoning. Some waterborne species are particularly sensitive to silver salts and those of the other precious metals; however, in most situations, silver is not a serious environmental hazard. In large doses, silver and compounds containing it can be absorbed into the circulatory system and become deposited in various body tissues, leading to argyria, which results in a blue-grayish pigmentation of the skin, eyes, and mucous membranes. Argyria is rare, and so far as is known, does not otherwise harm a person's health, though it is disfiguring and usually permanent. Mild forms of argyria are sometimes mistaken for cyanosis, a blue tint on the skin caused by lack of oxygen.
Metallic silver, like copper, is an antibacterial agent, which was known to the ancients and first scientifically investigated and named the oligodynamic effect by Carl Nägeli. Silver ions damage the metabolism of bacteria even at such low concentrations as 0.01–0.1 milligrams per litre; metallic silver has a similar effect due to the formation of silver oxide. This effect is lost in the presence of sulfur due to the extreme insolubility of silver sulfide. Some silver compounds are very explosive, such as the nitrogen compounds silver azide, silver amide, and silver fulminate, as well as silver acetylide, silver oxalate, and silver(II) oxide. They can explode on heating, force, drying, illumination, or sometimes spontaneously. To avoid the formation of such compounds, ammonia and acetylene should be kept away from silver equipment. Salts of silver with strongly oxidising acids such as silver chlorate and silver nitrate can explode on contact with materials that can be readily oxidised, such as organic compounds, sulfur and soot.
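To give a sense of scale, the bacteriostatic concentrations quoted above can be converted to molar units by dividing by the atomic weight of silver; a minimal Python sketch (the atomic weight is a standard reference value, not a figure from this section):

# Convert the oligodynamic silver concentrations (0.01-0.1 mg/L) to molar units.
MOLAR_MASS_AG = 107.8682        # g/mol, standard atomic weight of silver

for mg_per_litre in (0.01, 0.1):
    mol_per_litre = mg_per_litre / 1000.0 / MOLAR_MASS_AG
    print(f"{mg_per_litre} mg/L of Ag+ is about {mol_per_litre * 1e9:.0f} nmol/L")
    # roughly 93 nmol/L and 930 nmol/L, i.e. well below micromolar levels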
Samarium
Samarium is a chemical element; it has symbol Sm and atomic number 62. It is a moderately hard silvery metal that slowly oxidizes in air. Being a typical member of the lanthanide series, samarium usually has the oxidation state +3. Compounds of samarium(II) are also known, most notably the monoxide SmO, the monochalcogenides SmS, SmSe and SmTe, as well as samarium(II) iodide. Discovered in 1879 by French chemist Paul-Émile Lecoq de Boisbaudran, samarium was named after the mineral samarskite from which it was isolated. The mineral itself was named after a Russian mine official, Colonel Vassili Samarsky-Bykhovets, who thus became the first person to have a chemical element named after him, albeit indirectly. Samarium occurs in concentrations of up to 2.8% in several minerals including cerite, gadolinite, samarskite, monazite and bastnäsite, the last two being the most common commercial sources of the element. These minerals are mostly found in China, the United States, Brazil, India, Sri Lanka and Australia; China is by far the world leader in samarium mining and production. The main commercial use of samarium is in samarium–cobalt magnets, which have permanent magnetization second only to neodymium magnets; however, samarium compounds can withstand significantly higher temperatures, above 700 °C, without losing their permanent magnetic properties. The radioisotope samarium-153 is the active component of the drug samarium (153Sm) lexidronam (Quadramet), which kills cancer cells in the treatment of lung cancer, prostate cancer, breast cancer and osteosarcoma. Another isotope, samarium-149, is a strong neutron absorber and so is added to the control rods of nuclear reactors. It also forms as a decay product during reactor operation and is one of the important factors considered in reactor design and operation. Other uses of samarium include catalysis of chemical reactions, radioactive dating and X-ray lasers. Samarium(II) iodide, in particular, is a common reducing agent in chemical synthesis. Samarium has no biological role; some samarium salts are slightly toxic.
Physical properties
Samarium is a rare earth element with a hardness and density similar to those of zinc. With a boiling point of 1794 °C, samarium is the third most volatile lanthanide after ytterbium and europium and comparable in this respect to lead and barium; this helps the separation of samarium from its ores. When freshly prepared, samarium has a silvery lustre, and it takes on a duller appearance when oxidized in air. Samarium is calculated to have one of the largest atomic radii of the elements; with a radius of 238 pm, only potassium, praseodymium, barium, rubidium and caesium are larger. In ambient conditions, samarium has a rhombohedral structure (α form). Upon heating, its crystal symmetry changes to hexagonal close-packed (hcp); the actual transition temperature depends on the metal's purity. Further heating transforms the metal into a body-centered cubic (bcc) phase, and heating under a compression of 40 kbar results in a double-hexagonally close-packed (dhcp) structure. Higher pressures of the order of hundreds or thousands of kilobars induce a series of phase transformations, in particular with a tetragonal phase appearing at about 900 kbar. In one study, the dhcp phase could be produced without compression, using a nonequilibrium annealing regime with a rapid temperature change, confirming the transient character of this samarium phase.
Thin films of samarium obtained by vapor deposition may contain the hcp or dhcp phases in ambient conditions. Samarium and its sesquioxide are paramagnetic at room temperature. Their corresponding effective magnetic moments, below 2 bohr magnetons, are the third-lowest among the lanthanides (and their oxides) after lanthanum and lutetium. The metal transforms to an antiferromagnetic state upon cooling to 14.8 K. Individual samarium atoms can be isolated by encapsulating them into fullerene molecules. They can also be intercalated into the interstices of bulk C60 to form a solid solution of nominal composition Sm3C60, which is superconductive at a temperature of 8 K. Samarium doping of iron-based superconductors – a class of high-temperature superconductor – raises their superconducting transition temperature to up to 56 K, the highest value achieved so far in this series.
Chemical properties
In air, samarium slowly oxidizes at room temperature and spontaneously ignites at 150 °C. Even when stored under mineral oil, samarium gradually oxidizes and develops a grayish-yellow powder of an oxide-hydroxide mixture at the surface. The metallic appearance of a sample can be preserved by sealing it under an inert gas such as argon. Samarium is quite electropositive and reacts slowly with cold water and rapidly with hot water to form samarium hydroxide: 2 Sm (s) + 6 H2O (l) → 2 Sm(OH)3 (aq) + 3 H2 (g) Samarium dissolves readily in dilute sulfuric acid to form solutions containing the yellow to pale green Sm(III) ions, which exist as [Sm(H2O)9]3+ complexes: 2 Sm (s) + 3 H2SO4 (aq) → 2 Sm3+ (aq) + 3 SO42− (aq) + 3 H2 (g) Samarium is one of the few lanthanides with a relatively accessible +2 oxidation state, alongside Eu and Yb. Sm2+ ions are blood-red in aqueous solution.
Compounds
Oxides
The most stable oxide of samarium is the sesquioxide Sm2O3. Like many samarium compounds, it exists in several crystalline phases. The trigonal form is obtained by slow cooling from the melt. The melting point of Sm2O3 is high (2345 °C), so it is usually melted not by direct heating, but with induction heating, through a radio-frequency coil. Sm2O3 crystals of monoclinic symmetry can be grown by the flame fusion method (Verneuil process) from Sm2O3 powder, which yields cylindrical boules up to several centimeters long and about one centimeter in diameter. The boules are transparent when pure and defect-free and are orange otherwise. Heating the metastable trigonal Sm2O3 converts it to the more stable monoclinic phase. Cubic Sm2O3 has also been described. Samarium is one of the few lanthanides that form a monoxide, SmO. This lustrous golden-yellow compound was obtained by reducing Sm2O3 with samarium metal at high temperature (1000 °C) and a pressure above 50 kbar; lowering the pressure resulted in incomplete reaction. SmO has a cubic rock-salt lattice structure.
Chalcogenides
Samarium forms a trivalent sulfide, selenide and telluride. Divalent chalcogenides SmS, SmSe and SmTe with a cubic rock-salt crystal structure are also known. These chalcogenides convert from a semiconducting to a metallic state at room temperature upon application of pressure. Whereas the transition is continuous and occurs at about 20–30 kbar in SmSe and SmTe, it is abrupt in SmS and requires only 6.5 kbar. This effect results in a spectacular color change in SmS from black to golden yellow when its crystals or films are scratched or polished. The transition does not change the lattice symmetry, but there is a sharp decrease (~15%) in the crystal volume.
It exhibits hysteresis, i.e., when the pressure is released, SmS returns to the semiconducting state at a much lower pressure of about 0.4 kbar. Halides Samarium metal reacts with all the halogens, forming trihalides: 2 Sm (s) + 3 X2 (g) → 2 SmX3 (s) (X = F, Cl, Br or I) Their further reduction with samarium, lithium or sodium metals at elevated temperatures (about 700–900 °C) yields the dihalides. The diiodide can also be prepared by heating SmI3, or by reacting the metal with 1,2-diiodoethane in anhydrous tetrahydrofuran at room temperature: Sm (s) + ICH2-CH2I → SmI2 + CH2=CH2. In addition to dihalides, the reduction also produces many non-stoichiometric samarium halides with a well-defined crystal structure, such as Sm3F7, Sm14F33, Sm27F64, Sm11Br24, Sm5Br11 and Sm6Br13. Samarium halides change their crystal structures when one type of halide anion is substituted for another, which is an uncommon behavior for most elements (e.g. actinides). Many halides have two major crystal phases for one composition, one being significantly more stable and another being metastable. The latter is formed upon compression or heating, followed by quenching to ambient conditions. For example, compressing the usual monoclinic samarium diiodide and releasing the pressure results in a PbCl2-type orthorhombic structure (density 5.90 g/cm3), and similar treatment results in a new phase of samarium triiodide (density 5.97 g/cm3). Borides Sintering powders of samarium oxide and boron, in a vacuum, yields a powder containing several samarium boride phases; the ratio between these phases can be controlled through the mixing proportion. The powder can be converted into larger crystals of samarium borides using arc melting or zone melting techniques, relying on the different melting/crystallization temperature of SmB6 (2580 °C), SmB4 (about 2300 °C) and SmB66 (2150 °C). All these materials are hard, brittle, dark-gray solids with the hardness increasing with the boron content. Samarium diboride is too volatile to be produced with these methods and requires high pressure (about 65 kbar) and low temperatures between 1140 and 1240 °C to stabilize its growth. Increasing the temperature results in the preferential formation of SmB6. Samarium hexaboride Samarium hexaboride is a typical intermediate-valence compound where samarium is present both as Sm2+ and Sm3+ ions in a 3:7 ratio. It belongs to a class of Kondo insulators; at temperatures above 50 K, its properties are typical of a Kondo metal, with metallic electrical conductivity characterized by strong electron scattering, whereas at lower temperatures, it behaves as a non-magnetic insulator with a narrow band gap of about 4–14 meV. The cooling-induced metal-insulator transition in SmB6 is accompanied by a sharp increase in the thermal conductivity, peaking at about 15 K. The reason for this increase is that electrons themselves do not contribute to the thermal conductivity at low temperatures, which is dominated by phonons, but the decrease in electron concentration reduces the rate of electron-phonon scattering. Other inorganic compounds Samarium carbides are prepared by melting a graphite-metal mixture in an inert atmosphere. After the synthesis, they are unstable in air and need to be studied under an inert atmosphere. Samarium monophosphide SmP is a semiconductor with a bandgap of 1.10 eV, the same as in silicon, and electrical conductivity of n-type. 
It can be prepared by annealing mixed powders of phosphorus and samarium in an evacuated quartz ampoule. Phosphorus is highly volatile at high temperatures and may explode, so the heating rate has to be kept well below 1 °C/min. A similar procedure is adopted for the monarsenide SmAs, but the synthesis temperature is higher. Numerous crystalline binary compounds are known for samarium and one of the group 14, 15, or 16 elements X, where X is Si, Ge, Sn, Pb, Sb or Te, and metallic alloys of samarium form another large group. They are all prepared by annealing mixed powders of the corresponding elements. Many of the resulting compounds are non-stoichiometric and have nominal compositions SmaXb, where the b/a ratio varies between 0.5 and 3.
Organometallic compounds
Samarium forms a cyclopentadienide, Sm(C5H5)3, and its chloroderivatives Sm(C5H5)2Cl and Sm(C5H5)Cl2. They are prepared by reacting samarium trichloride with sodium cyclopentadienide (NaC5H5) in tetrahydrofuran. Contrary to the cyclopentadienides of most other lanthanides, in Sm(C5H5)3 some rings bridge each other by forming ring vertexes η1 or edges η2 toward another neighboring samarium, thus creating polymeric chains. The chloroderivative has a dimeric structure in which the samarium centres are linked by bridging chlorine atoms. These chlorine bridges can be replaced, for instance, by iodine, hydrogen or nitrogen atoms or by CN groups. The cyclopentadienide (C5H5)− ion in samarium cyclopentadienides can be replaced by the indenide (C9H7)− or cyclooctatetraenide (C8H8)2− ring, resulting in the corresponding indenyl or cyclooctatetraenyl complex; the latter has a structure similar to that of uranocene. There is also a cyclopentadienide of divalent samarium, Sm(C5H5)2, a solid that sublimes on heating. Contrary to ferrocene, the rings in Sm(C5H5)2 are not parallel but are tilted by 40°. A metathesis reaction of samarium halides with lithium alkyls (LiR) in tetrahydrofuran or ether gives alkyls and aryls of samarium, for example SmCl3 + 3 LiR → SmR3 + 3 LiCl; here R is a hydrocarbon group such as CH(SiMe3)2, and Me = methyl.
Isotopes
Naturally occurring samarium is composed of five stable isotopes: 144Sm, 149Sm, 150Sm, 152Sm and 154Sm, and two extremely long-lived radioisotopes, 147Sm (half-life t1/2 = 1.06×10¹¹ years) and 148Sm (7×10¹⁵ years), with 152Sm being the most abundant (26.75%). 149Sm is listed by various sources as being stable, but some sources state that it is radioactive, with only a lower bound on its half-life established. Some observationally stable samarium isotopes are predicted to decay to isotopes of neodymium. The long-lived isotopes 146Sm, 147Sm, and 148Sm undergo alpha decay to neodymium isotopes. Lighter unstable isotopes of samarium mainly decay by electron capture to promethium, while heavier ones beta decay to europium. The known isotopes range from 129Sm to 168Sm. The half-lives of 151Sm and 145Sm are 90 years and 340 days, respectively. All remaining radioisotopes have half-lives of less than 2 days, and most of these have half-lives of less than 48 seconds. Samarium also has twelve known nuclear isomers, the most stable of which are 141mSm (half-life 22.6 minutes), 143m1Sm (t1/2 = 66 seconds), and 139mSm (t1/2 = 10.7 seconds). Natural samarium has a radioactivity of 127 Bq/g, mostly due to 147Sm, which alpha decays to 143Nd with a half-life of 1.06×10¹¹ years and is used in samarium–neodymium dating. 146Sm is an extinct radionuclide with a half-life of 9.20×10⁷ years. There have been searches for samarium-146 as a primordial nuclide, because its half-life is long enough that minute quantities of the element should persist today. It can be used in radiometric dating.
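As a rough consistency check on the 127 Bq/g figure quoted above, the specific activity of natural samarium can be estimated from the 147Sm half-life. The following Python sketch is illustrative only; the 147Sm natural abundance (about 15%), the molar mass of samarium and Avogadro's number are standard reference values rather than figures taken from this article.

import math

# Estimate the specific activity of natural samarium (Bq/g),
# which is dominated by the alpha decay of 147Sm.
AVOGADRO = 6.022e23                       # atoms per mole
MOLAR_MASS_SM = 150.36                    # g/mol, natural samarium
ABUNDANCE_147SM = 0.15                    # natural abundance of 147Sm (standard value)
HALF_LIFE_147SM_S = 1.06e11 * 3.156e7     # half-life in seconds (1.06e11 years)

atoms_147sm_per_gram = AVOGADRO / MOLAR_MASS_SM * ABUNDANCE_147SM
decay_constant = math.log(2) / HALF_LIFE_147SM_S        # per second
activity = decay_constant * atoms_147sm_per_gram        # decays per second per gram

print(f"estimated specific activity: {activity:.0f} Bq/g")   # about 124 Bq/g, close to the quoted 127 Bq/g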
Samarium-149 is an observationally stable isotope of samarium (predicted to decay, but no decays have ever been observed, giving it a half-life at least several orders of magnitude longer than the age of the universe), and a product of the decay chain from the fission product 149Nd (yield 1.0888%). 149Sm is a decay product and neutron-absorber in nuclear reactors, with a neutron poison effect that is second in importance for reactor design and operation only to 135Xe. Its neutron cross section is 41000 barns for thermal neutrons. Because samarium-149 is not radioactive and is not removed by decay, it presents problems somewhat different from those encountered with xenon-135. The equilibrium concentration (and thus the poisoning effect) builds to an equilibrium value during reactor operations in about 500 hours (about three weeks), and since samarium-149 is stable, its concentration remains essentially constant during reactor operation. Samarium-153 is a beta emitter with a half-life of 46.3 hours. It is used to kill cancer cells in lung cancer, prostate cancer, breast cancer, and osteosarcoma. For this purpose, samarium-153 is chelated with ethylene diamine tetramethylene phosphonate (EDTMP) and injected intravenously. The chelation prevents accumulation of radioactive samarium in the body that would result in excessive irradiation and generation of new cancer cells. The corresponding drug has several names including samarium (153Sm) lexidronam; its trade name is Quadramet. History Detection of samarium and related elements was announced by several scientists in the second half of the 19th century; however, most sources give priority to French chemist Paul-Émile Lecoq de Boisbaudran. Boisbaudran isolated samarium oxide and/or hydroxide in Paris in 1879 from the mineral samarskite ) and identified a new element in it via sharp optical absorption lines. Swiss chemist Marc Delafontaine announced a new element decipium (from meaning "deceptive, misleading") in 1878, but later in 1880–1881 demonstrated that it was a mix of several elements, one being identical to Boisbaudran's samarium. Though samarskite was first found in the Ural Mountains in Russia, by the late 1870s it had been found in other places, making it available to many researchers. In particular, it was found that the samarium isolated by Boisbaudran was also impure and had a comparable amount of europium. The pure element was produced only in 1901 by Eugène-Anatole Demarçay. Boisbaudran named his element samarium after the mineral samarskite, which in turn honored Vassili Samarsky-Bykhovets (1803–1870). Samarsky-Bykhovets, as the Chief of Staff of the Russian Corps of Mining Engineers, had granted access for two German mineralogists, the brothers Gustav and Heinrich Rose, to study the mineral samples from the Urals. Samarium was thus the first chemical element to be named after a person. The word samaria is sometimes used to mean samarium(III) oxide, by analogy with yttria, zirconia, alumina, ceria, holmia, etc. The symbol Sm was suggested for samarium, but an alternative Sa was often used instead until the 1920s. Before the advent of ion-exchange separation technology in the 1950s, pure samarium had no commercial uses. However, a by-product of fractional crystallization purification of neodymium was a mix of samarium and gadolinium that got the name "Lindsay Mix" after the company that made it, and was used for nuclear control rods in some early nuclear reactors. 
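The build-up of samarium-149 poisoning towards its equilibrium level, described above as taking roughly 500 hours of reactor operation, can be illustrated with a toy model of the 149Pm → 149Sm chain under a constant neutron flux. In the Python sketch below, the assumed thermal flux and the 149Pm half-life (about 53 hours) are outside reference values, the short-lived 149Nd precursor is ignored, and the production rate is in arbitrary units; only the 41,000-barn capture cross-section comes from the text.

import math

# Toy model: fission produces 149Pm, which beta-decays to stable 149Sm;
# 149Sm is removed only by neutron capture.
PHI = 1e14                                    # assumed thermal flux, n/(cm^2*s)
SIGMA_SM149 = 41_000e-24                      # capture cross-section, cm^2 (41,000 barns)
LAMBDA_PM149 = math.log(2) / (53.1 * 3600)    # 149Pm decay constant, 1/s (53.1 h half-life, assumed)
PRODUCTION = 1.0                              # 149Pm production rate, arbitrary units

dt = 60.0                                     # time step, seconds
steps_per_hour = int(3600 / dt)
equilibrium_sm = PRODUCTION / (SIGMA_SM149 * PHI)   # steady-state 149Sm inventory in these units

n_pm = n_sm = 0.0
for step in range(800 * steps_per_hour):      # simulate 800 hours of steady operation
    n_pm += (PRODUCTION - LAMBDA_PM149 * n_pm) * dt
    n_sm += (LAMBDA_PM149 * n_pm - SIGMA_SM149 * PHI * n_sm) * dt
    if (step + 1) % (100 * steps_per_hour) == 0:
        hours = (step + 1) // steps_per_hour
        print(f"{hours:4d} h: 149Sm at {100 * n_sm / equilibrium_sm:5.1f}% of equilibrium")
# With these assumptions the 149Sm inventory reaches ~99% of equilibrium by about 500 hours,
# consistent with the figure quoted in the text.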
Nowadays, a similar commodity product has the name "samarium-europium-gadolinium" (SEG) concentrate. It is prepared by solvent extraction from the mixed lanthanides isolated from bastnäsite (or monazite). Since heavier lanthanides have a greater affinity for the solvent used, they are easily extracted from the bulk using relatively small proportions of solvent. Not all rare-earth producers who process bastnäsite do so on a large enough scale to continue by separating the components of SEG, which typically makes up only 12% of the original ore. Such producers therefore make SEG with a view to marketing it to the specialized processors. In this manner, the valuable europium content of the ore is rescued for use in making phosphors. Samarium purification follows the removal of the europium. Being in oversupply, samarium oxide is cheaper on a commercial scale than its relative abundance in the ore might suggest.
Occurrence and production
Samarium concentration in soils varies between 2 and 23 ppm, and oceans contain about 0.5–0.8 parts per trillion. The median value used by the CRC Handbook for samarium's abundance in the Earth's crust is 7 parts per million (ppm), making it the 40th most abundant element. The distribution of samarium in soils strongly depends on its chemical state and is very inhomogeneous: in sandy soils, the samarium concentration is about 200 times higher at the surface of soil particles than in the water trapped between them, and this ratio can exceed 1,000 in clays. Samarium is not found free in nature, but, like other rare earth elements, is contained in many minerals, including monazite, bastnäsite, cerite, gadolinite and samarskite; monazite (in which samarium occurs at concentrations of up to 2.8%) and bastnäsite are mostly used as commercial sources. World resources of samarium are estimated at two million tonnes; they are mostly located in China, the United States, Brazil, India, Sri Lanka and Australia, and the annual production is about 700 tonnes. Country production reports are usually given for all rare-earth metals combined. By far, China has the largest production, with 120,000 tonnes mined per year; it is followed by the United States (about 5,000 tonnes) and India (2,700 tonnes). Samarium is usually sold as the oxide, which at a price of about US$30/kg is one of the cheapest lanthanide oxides. Whereas mischmetal – a mixture of rare earth metals containing about 1% samarium – has long been used, relatively pure samarium has been isolated only recently, through ion exchange processes, solvent extraction techniques, and electrochemical deposition. The metal is often prepared by electrolysis of a molten mixture of samarium(III) chloride with sodium chloride or calcium chloride. Samarium can also be obtained by reducing its oxide with lanthanum; the product is then distilled to separate samarium (boiling point 1794 °C) from lanthanum (b.p. 3464 °C). Very few minerals have samarium as their dominant element. Minerals with essential (dominant) samarium include monazite-(Sm) and florencite-(Sm). These minerals are very rare and usually also contain other lanthanides, most often cerium or neodymium. Samarium is also formed by neutron capture in samarium-149, which is added to the control rods of nuclear reactors; samarium is therefore present in spent nuclear fuel and radioactive waste.
Applications
Magnets
An important use of samarium is in samarium–cobalt magnets, which are nominally SmCo5 or Sm2Co17. They have high permanent magnetization, about 10,000 times that of iron and second only to neodymium magnets.
However, samarium magnets resist demagnetization better; they remain stable at considerably higher temperatures than neodymium magnets (which are limited to about 300–400 °C). These magnets are found in small motors, headphones, and high-end magnetic pickups for guitars and related musical instruments. For example, they are used in the motors of a solar-powered electric aircraft, the Solar Challenger, and in the Samarium Cobalt Noiseless electric guitar and bass pickups. Chemical reagent Samarium and its compounds are important as catalysts and chemical reagents. Samarium catalysts help the decomposition of plastics, dechlorination of pollutants such as polychlorinated biphenyls (PCB), as well as dehydration and dehydrogenation of ethanol. Samarium(III) triflate, Sm(OTf)3, is one of the most efficient Lewis acid catalysts for a halogen-promoted Friedel–Crafts reaction with alkenes. Samarium(II) iodide is a very common reducing and coupling agent in organic synthesis, for example in desulfonylation reactions; annulation; Danishefsky, Kuwajima, Mukaiyama and Holton Taxol total syntheses; strychnine total synthesis; Barbier reaction and other reductions with samarium(II) iodide. In its usual oxidized form, samarium is added to ceramics and glasses where it increases absorption of infrared light. As a (minor) part of mischmetal, samarium is found in the "flint" ignition devices of many lighters and torches. Neutron absorber Samarium-149 has a high cross section for neutron capture (41,000 barns) and so is used in control rods of nuclear reactors. Its advantage compared to competing materials, such as boron and cadmium, is stability of absorption – most of the neutron-capture products of 149Sm are other isotopes of samarium that are also good neutron absorbers. For example, the cross section of samarium-151 is 15,000 barns; it is on the order of hundreds of barns for several other samarium isotopes, and 6,800 barns for natural (mixed-isotope) samarium. Lasers Samarium-doped calcium fluoride crystals were used as an active medium in one of the first solid-state lasers designed and built by Peter Sorokin (co-inventor of the dye laser) and Mirek Stevenson at IBM research labs in early 1961. This samarium laser gave pulses of red light at 708.5 nm. It had to be cooled by liquid helium and so did not find practical applications. Another samarium-based laser became the first saturated X-ray laser operating at wavelengths shorter than 10 nanometers. It gave 50-picosecond pulses at 7.3 and 6.8 nm suitable for uses in holography, high-resolution microscopy of biological specimens, deflectometry, interferometry, and radiography of dense plasmas related to confinement fusion and astrophysics. Saturated operation meant that the maximum possible power was extracted from the lasing medium, resulting in the high peak energy of 0.3 mJ. The active medium was samarium plasma produced by irradiating samarium-coated glass with a pulsed infrared Nd-glass laser (wavelength ~1.05 μm). Storage phosphor In 2007 it was shown that nanocrystalline BaFCl:Sm as prepared by co-precipitation can serve as a very efficient X-ray storage phosphor. The co-precipitation leads to nanocrystallites of the order of 100–200 nm in size and their sensitivity as X-ray storage phosphors is increased a remarkable ~500,000 times because of the specific arrangements and density of defect centers in comparison with microcrystalline samples prepared by sintering at high temperature. 
The mechanism is based on reduction of Sm3+ to Sm2+ by trapping electrons that are created upon exposure to ionizing radiation in the BaFCl host. The 5D→7F f–f luminescence lines of Sm2+ can be very efficiently excited via the parity-allowed 4f6 → 4f5 5d transition at ~417 nm. The latter wavelength is ideal for efficient excitation by blue-violet laser diodes as the transition is electric dipole allowed and thus relatively intense (400 L/(mol⋅cm)). The phosphor has potential applications in personal dosimetry, dosimetry and imaging in radiotherapy, and medical imaging. Non-commercial and potential uses The change in electrical resistivity in samarium monochalcogenides can be used in a pressure sensor or in a memory device triggered between a low-resistance and high-resistance state by external pressure, and such devices are being developed commercially. Samarium monosulfide also generates electric voltage upon moderate heating, and this can be applied in thermoelectric power converters. Analysis of the relative concentrations of the samarium and neodymium isotopes 147Sm, 144Nd, and 143Nd allows determination of the age and origin of rocks and meteorites in samarium–neodymium dating. Both elements are lanthanides and are very similar physically and chemically. Thus, Sm–Nd dating is either insensitive to partitioning of the marker elements during various geologic processes, or such partitioning can be well understood and modeled from the ionic radii of these elements. The Sm3+ ion is a potential activator for use in warm-white light emitting diodes. It offers high luminous efficacy due to narrow emission bands; however, the generally low quantum efficiency and weak absorption in the UV-A to blue spectral region hinder commercial application. Samarium is used for ionosphere testing. A rocket spreads samarium monoxide as a red vapor at high altitude, and researchers test how the atmosphere disperses it and how it impacts radio transmissions. Samarium hexaboride, SmB6, has recently been shown to be a topological insulator with potential uses in quantum computing. Biological role and precautions Samarium salts stimulate metabolism, but it is unclear whether this is from samarium or other lanthanides present with it. The total amount of samarium in adults is about 50 μg, mostly in liver and kidneys and with ~8 μg/L being dissolved in blood. Samarium is not absorbed by plants to a measurable concentration and so is normally not part of human diet. However, a few plants and vegetables may contain up to 1 part per million of samarium. Insoluble salts of samarium are non-toxic and the soluble ones are only slightly toxic. When ingested, only 0.05% of samarium salts are absorbed into the bloodstream and the remainder are excreted. From the blood, 45% goes to the liver and 45% is deposited on the surface of the bones where it remains for 10 years; the remaining 10% is excreted.
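As an illustration of the samarium–neodymium dating described above, the sketch below solves the standard age equation. The 147Sm decay constant (about 6.54e-12 per year, corresponding to a half-life of roughly 106 billion years) and the example isotope ratios are assumed illustrative values, not figures from this article.

```python
# Illustrative sketch of the Sm-Nd age equation:
#   143Nd/144Nd(now) = 143Nd/144Nd(initial) + 147Sm/144Nd * (exp(lambda * t) - 1)
# solved for t. Constants and ratios below are assumed example values.
import math

LAMBDA_147SM = 6.54e-12  # per year (assumed literature value, not from the article)

def sm_nd_age(nd143_nd144_now: float,
              nd143_nd144_initial: float,
              sm147_nd144_now: float) -> float:
    """Return the age in years implied by the measured and assumed initial ratios."""
    growth = (nd143_nd144_now - nd143_nd144_initial) / sm147_nd144_now
    return math.log(1.0 + growth) / LAMBDA_147SM

if __name__ == "__main__":
    # Hypothetical whole-rock measurement, for illustration only.
    age_years = sm_nd_age(nd143_nd144_now=0.5130,
                          nd143_nd144_initial=0.5075,
                          sm147_nd144_now=0.20)
    print(f"model age ≈ {age_years / 1e9:.2f} billion years")
```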
https://en.wikipedia.org/wiki/Sulfur
Sulfur
Sulfur (also spelled sulphur in British English) is a chemical element; it has symbol S and atomic number 16. It is abundant, multivalent and nonmetallic. Under normal conditions, sulfur atoms form cyclic octatomic molecules with the chemical formula S8. Elemental sulfur is a bright yellow, crystalline solid at room temperature. Sulfur is the tenth most abundant element by mass in the universe and the fifth most common on Earth. Though sometimes found in pure, native form, sulfur on Earth usually occurs as sulfide and sulfate minerals. Being abundant in native form, sulfur was known in ancient times, being mentioned for its uses in ancient India, ancient Greece, China, and ancient Egypt. Historically and in literature sulfur is also called brimstone, which means "burning stone". Almost all elemental sulfur is produced as a byproduct of removing sulfur-containing contaminants from natural gas and petroleum. The greatest commercial use of the element is the production of sulfuric acid for sulfate and phosphate fertilizers, and other chemical processes. Sulfur is used in matches, insecticides, and fungicides. Many sulfur compounds are odoriferous, and the smells of odorized natural gas, skunk scent, bad breath, grapefruit, and garlic are due to organosulfur compounds. Hydrogen sulfide gives the characteristic odor to rotting eggs and other biological processes. Sulfur is an essential element for all life, almost always in the form of organosulfur compounds or metal sulfides. Amino acids (two proteinogenic ones, cysteine and methionine, and many other non-coded ones such as cystine and taurine) and two vitamins (biotin and thiamine) are organosulfur compounds crucial for life. Many cofactors also contain sulfur, including glutathione and iron–sulfur proteins. Disulfides (S–S bonds) confer mechanical strength and insolubility on proteins such as keratin, found in outer skin, hair, and feathers. Sulfur is one of the core chemical elements needed for biochemical functioning and is an elemental macronutrient for all living organisms. Characteristics Physical properties Sulfur forms several polyatomic molecules. The best-known allotrope is octasulfur, cyclo-S8. The point group of cyclo-S8 is D4d and its dipole moment is 0 D. Octasulfur is a soft, bright-yellow solid that is odorless. It melts at about 115 °C and boils at about 445 °C. At about 95 °C, below its melting temperature, cyclo-octasulfur begins slowly changing from α-octasulfur to the β-polymorph. The structure of the S8 ring is virtually unchanged by this phase transition, which affects the intermolecular interactions. When molten sulfur is cooled, it freezes at about 119 °C, as it then predominantly consists of β-S8 molecules. Between its melting and boiling temperatures, octasulfur changes its allotrope again, turning from β-octasulfur to γ-sulfur, again accompanied by a lower density but increased viscosity due to the formation of polymers. At higher temperatures, the viscosity decreases as depolymerization occurs. Molten sulfur assumes a dark red color above about 200 °C. The density of sulfur is about 2 g/cm3, depending on the allotrope; all of the stable allotropes are excellent electrical insulators. Sulfur sublimes appreciably even at relatively low temperatures. Sulfur is insoluble in water but soluble in carbon disulfide and, to a lesser extent, in other nonpolar organic solvents, such as benzene and toluene. 
Chemical properties Under normal conditions, sulfur hydrolyzes very slowly, mainly forming hydrogen sulfide and sulfuric acid. The reaction involves adsorption of protons onto S8 clusters, followed by disproportionation into the reaction products. The second, fourth and sixth ionization energies of sulfur are 2252 kJ/mol, 4556 kJ/mol and 8495.8 kJ/mol, respectively. The composition of the reaction products of sulfur with oxidants (and its oxidation state) depends on whether the release of reaction energy overcomes these thresholds. Applying catalysts and/or supplying external energy may vary sulfur's oxidation state and the composition of reaction products. While the reaction between sulfur and oxygen under normal conditions gives sulfur dioxide (oxidation state +4), formation of sulfur trioxide (oxidation state +6) requires elevated temperature and the presence of a catalyst. In reactions with elements of lesser electronegativity, it reacts as an oxidant and forms sulfides, where it has oxidation state −2. Sulfur reacts with nearly all other elements except the noble gases, even with the notoriously unreactive metal iridium (yielding iridium disulfide). Some of those reactions require elevated temperatures. Allotropes Sulfur forms over 30 solid allotropes, more than any other element. Besides S8, several other rings are known. Removing one atom from the crown gives S7, which is of a deeper yellow than S8. HPLC analysis of "elemental sulfur" reveals an equilibrium mixture of mainly S8, but with S7 and small amounts of S6. Larger rings have been prepared, including S12 and S18. Amorphous or "plastic" sulfur is produced by rapid cooling of molten sulfur, for example by pouring it into cold water. X-ray crystallography studies show that the amorphous form may have a helical structure with eight atoms per turn. The long coiled polymeric molecules make the brownish substance elastic, and in bulk it has the feel of crude rubber. This form is metastable at room temperature and gradually reverts to the crystalline molecular allotrope, which is no longer elastic. This process happens over a matter of hours to days, but can be rapidly catalyzed. Isotopes Sulfur has 23 known isotopes, four of which are stable: 32S, 33S, 34S, and 36S. Other than 35S, with a half-life of 87 days, the radioactive isotopes of sulfur have half-lives less than 3 hours. The preponderance of 32S is explained by its production in the so-called alpha-process (one of the main classes of nuclear fusion reactions) in exploding stars. Other stable sulfur isotopes are produced in bypass processes related to 34Ar, and their composition depends on the type of stellar explosion. For example, proportionally more 33S comes from novae than from supernovae. On the planet Earth the sulfur isotopic composition was determined by the Sun. Though it was assumed that the distribution of different sulfur isotopes would be more or less equal, it has been found that the proportions of the two most abundant sulfur isotopes, 32S and 34S, vary between samples. Assaying the isotope ratio (δ34S) in samples suggests their chemical history, and with support from other methods, it allows researchers to age-date samples, estimate the temperature of equilibrium between ore and water, determine pH and oxygen fugacity, identify the activity of sulfate-reducing bacteria at the time the sample formed, or suggest the main sources of sulfur in ecosystems. 
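To make the δ34S notation used in this section concrete, the sketch below shows the standard delta calculation. The Vienna-CDT reference 34S/32S ratio used in it is an assumed approximate literature value, not a figure from this article, and the sample ratio is hypothetical.

```python
# Illustrative sketch of the delta notation (δ34S) used above:
#   δ34S = (R_sample / R_standard - 1) * 1000, reported in permil.
R_VCDT = 0.0441626  # assumed approximate 34S/32S ratio of the VCDT standard

def delta_34s_permil(r_sample: float, r_standard: float = R_VCDT) -> float:
    """Return the delta value in permil for a measured 34S/32S ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

if __name__ == "__main__":
    # A sample slightly enriched in 34S relative to the standard (hypothetical numbers).
    print(f"δ34S ≈ {delta_34s_permil(0.0450):+.1f} permil")
```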
However, there are ongoing discussions over the real reason for the δ34S shifts: biological activity or post-depositional alteration. For example, when sulfide minerals are precipitated, isotopic equilibration among solids and liquid may cause small differences in the δ34S values of co-genetic minerals. The differences between minerals can be used to estimate the temperature of equilibration. The δ13C and δ34S of coexisting carbonate minerals and sulfides can be used to determine the pH and oxygen fugacity of the ore-bearing fluid during ore formation. Scientists measure the sulfur isotopes of minerals in rocks and sediments to study the redox conditions in past oceans. Sulfate-reducing bacteria in marine sediment fractionate sulfur isotopes as they take in sulfate and produce sulfide. Prior to the 2010s, it was thought that sulfate reduction could fractionate sulfur isotopes by up to 46 permil, and that fractionation larger than 46 permil recorded in sediments must be due to disproportionation of sulfur compounds in the sediment. This view has changed since the 2010s, as experiments showed that sulfate-reducing bacteria can fractionate by up to 66 permil. As substrates for disproportionation are limited by the product of sulfate reduction, the isotopic effect of disproportionation should be less than 16 permil in most sedimentary settings. In forest ecosystems, sulfate is derived mostly from the atmosphere; weathering of ore minerals and evaporites contributes some sulfur. Sulfur with a distinctive isotopic composition has been used to identify pollution sources, and enriched sulfur has been added as a tracer in hydrologic studies. Differences in the natural abundances can be used in systems where there is sufficient variation in the 34S of ecosystem components. Rocky Mountain lakes thought to be dominated by atmospheric sources of sulfate have been found to have measurably different 34S values than lakes believed to be dominated by watershed sources of sulfate. The radioactive 35S is formed in cosmic ray spallation of the atmospheric 40Ar. This fact may be used to verify the presence of recent (up to one year old) atmospheric deposits in various materials. This isotope may be obtained artificially in several ways. In practice, the reaction 35Cl + n → 35S + p is used, by irradiating potassium chloride with neutrons. The isotope 35S is used in various sulfur-containing compounds as a radioactive tracer for many biological studies, for example the Hershey-Chase experiment. Because of the weak beta activity of 35S, its compounds are relatively safe as long as they are not ingested or absorbed by the body. Natural occurrence 32S is created inside massive stars, at a depth where the temperature exceeds 2.5×109 K, by the fusion of one nucleus of silicon plus one nucleus of helium. As this nuclear reaction is part of the alpha process that produces elements in abundance, sulfur is the 10th most common element in the universe. Sulfur, usually as sulfide, is present in many types of meteorites. Ordinary chondrites contain on average 2.1% sulfur, and carbonaceous chondrites may contain as much as 6.6%. It is normally present as troilite (FeS), but there are exceptions, with carbonaceous chondrites containing free sulfur, sulfates and other sulfur compounds. The distinctive colors of Jupiter's volcanic moon Io are attributed to various forms of molten, solid, and gaseous sulfur. 
In July 2024, elemental sulfur was accidentally discovered to exist on Mars after the Curiosity rover drove over and crushed a rock, revealing sulfur crystals inside it. Sulfur is the fifth most common element by mass in the Earth. Elemental sulfur can be found near hot springs and volcanic regions in many parts of the world, especially along the Pacific Ring of Fire; such volcanic deposits are mined in Indonesia, Chile, and Japan. These deposits are polycrystalline, and exceptionally large single crystals have been documented. Historically, Sicily was a major source of sulfur during the Industrial Revolution. Lakes of molten sulfur have been found on the sea floor, associated with submarine volcanoes, at depths where the boiling point of water is higher than the melting point of sulfur. Native sulfur is synthesized by anaerobic bacteria acting on sulfate minerals such as gypsum in salt domes. Significant deposits in salt domes occur along the coast of the Gulf of Mexico, and in evaporites in eastern Europe and western Asia. Native sulfur may be produced by geological processes alone. Fossil-based sulfur deposits from salt domes were once the basis for commercial production in the United States, Russia, Turkmenistan, and Ukraine. Such sources have become of secondary commercial importance, and most are no longer worked, but commercial production is still carried out at the Osiek mine in Poland. Common naturally occurring sulfur compounds include the sulfide minerals, such as pyrite (iron sulfide), cinnabar (mercury sulfide), galena (lead sulfide), sphalerite (zinc sulfide), and stibnite (antimony sulfide); and the sulfate minerals, such as gypsum (calcium sulfate), alunite (potassium aluminium sulfate), and barite (barium sulfate). On Earth, just as upon Jupiter's moon Io, elemental sulfur occurs naturally in volcanic emissions, including emissions from hydrothermal vents. The main industrial source of sulfur has become petroleum and natural gas. Compounds Common oxidation states of sulfur range from −2 to +6. Sulfur forms stable compounds with all elements except the noble gases. Electron transfer reactions Sulfur polycations such as S82+, S42+, and S192+ are produced when sulfur is reacted with oxidizing agents in a strongly acidic solution. The colored solutions produced by dissolving sulfur in oleum were first reported as early as 1804 by C. F. Bucholz, but the cause of the color and the structure of the polycations involved were only determined in the late 1960s. S82+ is deep blue, S42+ is yellow and S192+ is red. Reduction of sulfur gives various polysulfides with the formula Sx2−, many of which have been obtained in crystalline form. Illustrative is the production of sodium tetrasulfide: 4 Na + S8 → 2 Na2S4. Some of these dianions dissociate to give radical anions, such as S3−, which gives the blue color of the rock lapis lazuli. This reaction highlights a distinctive property of sulfur: its ability to catenate (bind to itself by formation of chains). Protonation of these polysulfide anions produces the polysulfanes, H2Sx, where x = 2, 3, and 4. Ultimately, reduction of sulfur produces sulfide salts: 16 Na + S8 → 8 Na2S. The interconversion of these species is exploited in the sodium–sulfur battery. Hydrogenation Treatment of sulfur with hydrogen gives hydrogen sulfide. 
When dissolved in water, hydrogen sulfide is mildly acidic, dissociating slightly: H2S ⇌ HS− + H+. Hydrogen sulfide gas and the hydrosulfide anion are extremely toxic to mammals, due to their inhibition of the oxygen-carrying capacity of hemoglobin and certain cytochromes in a manner analogous to cyanide and azide (see below, under precautions). Combustion The two principal sulfur oxides are obtained by burning sulfur: S + O2 → SO2 and, on further oxidation, 2 SO2 + O2 → 2 SO3. Many other sulfur oxides are observed, including the sulfur-rich oxides sulfur monoxide, disulfur monoxide, disulfur dioxide, and higher oxides containing peroxo groups. Halogenation Sulfur reacts with fluorine to give the highly reactive sulfur tetrafluoride and the highly inert sulfur hexafluoride. Whereas fluorine gives S(IV) and S(VI) compounds, chlorine gives S(II) and S(I) derivatives. Thus, sulfur dichloride, disulfur dichloride, and higher chlorosulfanes arise from the chlorination of sulfur. Sulfuryl chloride and chlorosulfuric acid are derivatives of sulfuric acid; thionyl chloride (SOCl2) is a common reagent in organic synthesis. Bromine also oxidizes sulfur to form sulfur dibromide and disulfur dibromide. Pseudohalides Sulfur oxidizes cyanide and sulfite to give thiocyanate and thiosulfate, respectively. Metal sulfides Sulfur reacts with many metals. Electropositive metals give polysulfide salts. Copper, zinc, and silver are attacked by sulfur; see tarnishing. Although many metal sulfides are known, most are prepared by high temperature reactions of the elements. Geoscientists also examine the isotopes of metal sulfides in rocks and sediment to study environmental conditions in the Earth's past. Organic compounds Some of the main classes of sulfur-containing organic compounds include the following: Thiols or mercaptans (so called because they capture mercury as chelators) are the sulfur analogs of alcohols; treatment of thiols with base gives thiolate ions. Thioethers are the sulfur analogs of ethers. Sulfonium ions have three groups attached to a cationic sulfur center. Dimethylsulfoniopropionate (DMSP) is one such compound, important in the marine organic sulfur cycle. Sulfoxides and sulfones are thioethers with one and two oxygen atoms attached to the sulfur atom, respectively. The simplest sulfoxide, dimethyl sulfoxide, is a common solvent; a common sulfone is sulfolane. Sulfonic acids are used in many detergents. Compounds with carbon–sulfur multiple bonds are uncommon, an exception being carbon disulfide, a volatile colorless liquid that is structurally similar to carbon dioxide. It is used as a reagent to make the polymer rayon and many organosulfur compounds. Unlike carbon monoxide, carbon monosulfide is stable only as an extremely dilute gas, found in interstellar space. Organosulfur compounds are responsible for some of the unpleasant odors of decaying organic matter. They are widely known as the odorant in domestic natural gas, garlic odor, and skunk spray, as well as a component of bad breath odor. Not all organic sulfur compounds smell unpleasant at all concentrations: the sulfur-containing monoterpenoid grapefruit mercaptan in small concentrations is the characteristic scent of grapefruit, but has a generic thiol odor at larger concentrations. Sulfur mustard, a potent vesicant, was used in World War I as a disabling agent. Sulfur–sulfur bonds are a structural component used to stiffen rubber, similar to the disulfide bridges that rigidify proteins (see biological below). 
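As a small arithmetic illustration of the combustion equation S + O2 → SO2 shown above, the following sketch computes the mass of sulfur dioxide produced from a given mass of sulfur. The standard atomic masses used are common reference values, not figures quoted in this article.

```python
# Illustrative arithmetic for the combustion equation S + O2 -> SO2 shown above.
# Standard atomic masses (g/mol) are assumed reference values, not from the article.
M_S = 32.06
M_O = 16.00
M_SO2 = M_S + 2 * M_O  # 64.06 g/mol

def so2_mass_from_sulfur(sulfur_kg: float) -> float:
    """Each mole of sulfur burned gives one mole of SO2, so masses scale by M_SO2 / M_S."""
    return sulfur_kg * M_SO2 / M_S

if __name__ == "__main__":
    print(f"burning 1.00 kg of sulfur yields about {so2_mass_from_sulfur(1.0):.2f} kg of SO2")
```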
In the most common type of industrial "curing" or hardening and strengthening of natural rubber, elemental sulfur is heated with the rubber to the point that chemical reactions form disulfide bridges between isoprene units of the polymer. This process, patented in 1843, made rubber a major industrial product, especially in automobile tires. Because of the heat and sulfur, the process was named vulcanization, after the Roman god of the forge and volcanism. History Antiquity Being abundantly available in native form, sulfur was known in ancient times and is referred to in the Torah (Genesis). English translations of the Christian Bible commonly referred to burning sulfur as "brimstone", giving rise to the term "fire-and-brimstone" sermons, in which listeners are reminded of the fate of eternal damnation that awaits the unbelieving and unrepentant. It is from this part of the Bible that Hell is implied to "smell of sulfur" (likely due to its association with volcanic activity). According to the Ebers Papyrus, a sulfur ointment was used in ancient Egypt to treat granular eyelids. Sulfur was used for fumigation in preclassical Greece; this is mentioned in the Odyssey. Pliny the Elder discusses sulfur in book 35 of his Natural History, saying that its best-known source is the island of Melos. He mentions its use for fumigation, medicine, and bleaching cloth. A natural form of sulfur was known in China since the 6th century BC and was found in Hanzhong. By the 3rd century, the Chinese had discovered that sulfur could be extracted from pyrite. Chinese Daoists were interested in sulfur's flammability and its reactivity with certain metals, yet its earliest practical uses were found in traditional Chinese medicine. The Wujing Zongyao of 1044 AD described various formulas for Chinese black powder, which is a mixture of potassium nitrate (KNO3), charcoal, and sulfur. Indian alchemists, practitioners of the "science of chemicals", wrote extensively about the use of sulfur in alchemical operations with mercury, from the eighth century AD onwards. In this tradition, sulfur is called "the smelly". Early European alchemists gave sulfur a unique alchemical symbol, a triangle atop a cross (🜍). (This is sometimes confused with the astronomical crossed-spear symbol ⚴ for 2 Pallas.) The variation known as brimstone has a symbol combining a two-barred cross atop a lemniscate (🜏). In traditional skin treatment, elemental sulfur was used (mainly in creams) to alleviate such conditions as scabies, ringworm, psoriasis, eczema, and acne. The mechanism of action is unknown, though elemental sulfur does oxidize slowly to sulfurous acid, which is (through the action of sulfite) a mild reducing and antibacterial agent. Modern times Sulfur appears in a column of fixed (non-acidic) alkali in a chemical table of 1718. Antoine Lavoisier used sulfur in combustion experiments, writing of some of these in 1777. Sulfur deposits in Sicily were the dominant source for more than a century. By the late 18th century, about 2,000 tonnes per year of sulfur were imported into Marseille, France, for the production of sulfuric acid for use in the Leblanc process. In industrializing Britain, with the repeal of tariffs on salt in 1824, demand for sulfur from Sicily surged. 
The increasing British control and exploitation of the mining, refining, and transportation of sulfur, coupled with the failure of this lucrative export to transform Sicily's backward and impoverished economy, led to the Sulfur Crisis of 1840, when King Ferdinand II gave a monopoly of the sulfur industry to a French firm, violating an earlier 1816 trade agreement with Britain. A peaceful solution was eventually negotiated by France. In 1867, elemental sulfur was discovered in underground deposits in Louisiana and Texas. The highly successful Frasch process was developed to extract this resource. In the late 18th century, furniture makers used molten sulfur to produce decorative inlays. Molten sulfur is sometimes still used for setting steel bolts into drilled concrete holes where high shock resistance is desired for floor-mounted equipment attachment points. Pure powdered sulfur was used as a medicinal tonic and laxative. Since the advent of the contact process, the majority of sulfur is used to make sulfuric acid for a wide range of uses, particularly fertilizer. In recent times, the main source of sulfur has become petroleum and natural gas. This is due to the requirement to remove sulfur from fuels in order to prevent acid rain, and has resulted in a surplus of sulfur. Spelling and etymology Sulfur is derived from the Latin word sulpur, which was Hellenized to sulphur in the erroneous belief that the Latin word came from Greek. This spelling was later reinterpreted as representing an /f/ sound and resulted in the spelling sulfur, which appears in Latin toward the end of the Classical period. The true Ancient Greek word for sulfur, θεῖον (theîon, from earlier théeion), is the source of the international chemical prefix thio-. The Modern Standard Greek word for sulfur is θείο, theío. In 12th-century Anglo-French, it was sulfre. In the 14th century, the erroneously Hellenized Latin spelling was restored in Middle English. By the 15th century, both full Latin spelling variants sulfur and sulphur became common in English. The parallel f~ph spellings continued in Britain until the 19th century, when the word was standardized as sulphur. On the other hand, sulfur was the form eventually chosen in the United States, though multiple place names (such as White Sulphur Springs) use -ph-. Canada uses both spellings. IUPAC adopted the spelling sulfur in 1990, as did the Nomenclature Committee of the Royal Society of Chemistry in 1992, restoring the spelling sulfur to Britain. Oxford Dictionaries note that "in chemistry and other technical uses ... the -f- spelling is now the standard form for this and related words in British as well as US contexts, and is increasingly used in general contexts as well." Production Sulfur may be found by itself and historically was usually obtained in this form; pyrite has also been a source of sulfur. In volcanic regions in Sicily, in ancient times, it was found on the surface of the Earth, and the "Sicilian process" was used: sulfur deposits were piled and stacked in brick kilns built on sloping hillsides, with airspaces between them. Then, some sulfur was pulverized, spread over the stacked ore and ignited, causing the free sulfur to melt down the hills. Eventually the surface-borne deposits played out, and miners excavated veins that ultimately dotted the Sicilian landscape with labyrinthine mines. Mining was unmechanized and labor-intensive, with pickmen freeing the ore from the rock, and mine-boys or carusi carrying baskets of ore to the surface, often through a mile or more of tunnels. 
Once the ore was at the surface, it was reduced and extracted in smelting ovens. The conditions in Sicilian sulfur mines were horrific, prompting Booker T. Washington to write "I am not prepared just now to say to what extent I believe in a physical hell in the next world, but a sulfur mine in Sicily is about the nearest thing to hell that I expect to see in this life." Sulfur is still mined from surface deposits in poorer nations with volcanoes, such as Indonesia, and problems with working conditions still exist. Elemental sulfur was extracted from salt domes (where it sometimes occurs in nearly pure form) until the late 20th century, when elemental sulfur instead became a by-product of other industrial processes, such as oil refining, in which sulfur is undesirable. As a mineral, native sulfur under salt domes is thought to be a fossil mineral resource, produced by the action of anaerobic bacteria on sulfate deposits. It was removed from such salt-dome mines mainly by the Frasch process. In this method, superheated water was pumped into a native sulfur deposit to melt the sulfur, and then compressed air returned the 99.5% pure melted product to the surface. Throughout the 20th century this procedure produced elemental sulfur that required no further purification. Due to a limited number of such sulfur deposits and the high cost of working them, this process for mining sulfur has not had significant use anywhere in the world since 2002. Since then, sulfur has typically been produced from petroleum, natural gas, and related fossil resources, from which it is obtained mainly as hydrogen sulfide. Organosulfur compounds, undesirable impurities in petroleum, may be upgraded by subjecting them to hydrodesulfurization, which cleaves the C–S bonds: R–S–R′ + 2 H2 → R–H + R′–H + H2S. The hydrogen sulfide resulting from this process, and also as it occurs in natural gas, is converted into elemental sulfur by the Claus process, which entails oxidation of some hydrogen sulfide to sulfur dioxide and then the comproportionation of the two: 2 H2S + 3 O2 → 2 SO2 + 2 H2O, followed by 2 H2S + SO2 → 3 S + 2 H2O. Due to the high sulfur content of the Athabasca Oil Sands, stockpiles of elemental sulfur from this process exist throughout Alberta, Canada. Another way of storing sulfur is as a binder for concrete, the resulting product having some desirable properties (see sulfur concrete). The world production of sulfur in 2011 amounted to 69 million tonnes (Mt), with more than 15 countries contributing more than 1 Mt each. Countries producing more than 5 Mt are China (9.6), the United States (8.8), Canada (7.1) and Russia (7.1). Production has been slowly increasing from 1900 to 2010; the price was unstable in the 1980s and around 2010. Applications Sulfuric acid Elemental sulfur is used mainly as a precursor to other chemicals. Approximately 85% (1989) is converted to sulfuric acid (H2SO4), the overall reaction being 2 S + 3 O2 + 2 H2O → 2 H2SO4. In 2010, the United States produced more sulfuric acid than any other inorganic industrial chemical. The principal use for the acid is the extraction of phosphate ores for fertilizer manufacturing. Other applications of sulfuric acid include oil refining, wastewater processing, and mineral extraction. Other important sulfur chemistry Sulfur reacts directly with methane to give carbon disulfide, which is used to manufacture cellophane and rayon. One of the uses of elemental sulfur is in vulcanization of rubber, where polysulfide chains crosslink organic polymers. Large quantities of sulfites are used to bleach paper and to preserve dried fruit. Many surfactants and detergents (e.g. 
sodium lauryl sulfate) are sulfate derivatives. Calcium sulfate, gypsum (CaSO4·2H2O), is mined on the scale of 100 million tonnes each year for use in Portland cement and fertilizers. When silver-based photography was widespread, sodium and ammonium thiosulfate were widely used as "fixing agents". Sulfur is a component of gunpowder ("black powder"). Fertilizer Amino acids synthesized by living organisms, such as methionine and cysteine, contain organosulfur groups (thioether and thiol, respectively). The antioxidant glutathione, which protects many living organisms against free radicals and oxidative stress, also contains organic sulfur. Some crops such as onion and garlic also produce different organosulfur compounds, such as syn-propanethial-S-oxide, responsible for lachrymal irritation (onions), or diallyl disulfide and allicin (garlic). Sulfates, commonly found in soils and groundwaters, are often a sufficient natural source of sulfur for plants and bacteria. Atmospheric deposition of sulfur dioxide (SO2), mainly from coal combustion, is also a common artificial source of sulfur for soils. Under normal circumstances, in most agricultural soils, sulfur is not a limiting nutrient for plants and microorganisms (see Liebig's barrel). However, in some circumstances, soils can be depleted in sulfate, e.g. if the latter is leached by meteoric water (rain) or if the sulfur requirements of some types of crops are high. This explains why sulfur is increasingly recognized and used as a component of fertilizers. The most important form of sulfur for fertilizer is calcium sulfate, commonly found in nature as the mineral gypsum (CaSO4·2H2O). Elemental sulfur is hydrophobic (not soluble in water) and cannot be used directly by plants. Elemental sulfur (ES) is sometimes mixed with bentonite to amend depleted soils for crops with a high sulfur requirement. Over time, abiotic oxidation by atmospheric oxygen and the action of soil bacteria can convert elemental sulfur to soluble derivatives, which can then be used by microorganisms and plants. Sulfur improves the efficiency of other essential plant nutrients, particularly nitrogen and phosphorus. Biologically produced sulfur particles are naturally hydrophilic due to a biopolymer coating and are easier to disperse over the land in a spray of diluted slurry, resulting in a faster uptake by plants. Plants' requirement for sulfur equals or exceeds their requirement for phosphorus. It is an essential nutrient for plant growth, root nodule formation of legumes, and immunity and defense systems. Sulfur deficiency has become widespread in many countries in Europe. Because atmospheric inputs of sulfur continue to decrease, the deficit in the sulfur input/output is likely to increase unless sulfur fertilizers are used. Atmospheric inputs of sulfur are decreasing because of actions taken to limit acid rain. Fungicide and pesticide Elemental sulfur is one of the oldest fungicides and pesticides. "Dusting sulfur", elemental sulfur in powdered form, is a common fungicide for grapes, strawberries, many vegetables, and several other crops. It has good efficacy against a wide range of powdery mildew diseases as well as black spot. In organic production, sulfur is the most important fungicide. It is the only fungicide used in organically farmed apple production against the main disease apple scab under colder conditions. Biosulfur (biologically produced elemental sulfur with hydrophilic characteristics) can also be used for these applications. 
Standard-formulation dusting sulfur is applied to crops with a sulfur duster or from a dusting plane. Wettable sulfur is the commercial name for dusting sulfur formulated with additional ingredients to make it water miscible. It has similar applications and is used as a fungicide against mildew and other mold-related problems with plants and soil. Elemental sulfur powder is used as an "organic" (i.e., "green") insecticide (actually an acaricide) against ticks and mites. A common method of application is dusting the clothing or limbs with sulfur powder. A diluted solution of lime sulfur (made by combining calcium hydroxide with elemental sulfur in water) is used as a dip for pets to destroy ringworm (fungus), mange, and other dermatoses and parasites. Sulfur candles of almost pure sulfur were burned to fumigate structures and wine barrels, but are now considered too toxic for residences. Pharmaceuticals Sulfur (specifically octasulfur, S8) is used in pharmaceutical skin preparations for the treatment of acne and other conditions. It acts as a keratolytic agent and also kills bacteria, fungi, scabies mites, and other parasites. Precipitated sulfur and colloidal sulfur are used, in the form of lotions, creams, powders, soaps, and bath additives, for the treatment of acne vulgaris, acne rosacea, and seborrhoeic dermatitis. Many drugs contain sulfur. Early examples include antibacterial sulfonamides, known as sulfa drugs. A more recent example is the mucolytic acetylcysteine. Sulfur is a part of many bacterial defense molecules. Most β-lactam antibiotics, including the penicillins, cephalosporins and monobactams, contain sulfur. Batteries Due to their high energy density and the availability of sulfur, there is ongoing research into rechargeable lithium–sulfur batteries. Previously, carbonate electrolytes had caused failures in such batteries after a single cycle. In February 2022, researchers at Drexel University not only created a prototype battery that lasted 4,000 recharge cycles, but also found the first monoclinic gamma sulfur that remained stable below 95 degrees Celsius. Biological role Sulfur is an essential component of all living cells. It is the eighth most abundant element in the human body by weight, about equal in abundance to potassium, and slightly greater than sodium and chlorine. A human body contains about 140 g of sulfur. The main dietary source of sulfur for humans is sulfur-containing amino acids, which can be found in plant and animal proteins. Transferring sulfur between inorganic and biomolecules In the 1880s, while studying Beggiatoa (a bacterium living in a sulfur-rich environment), Sergei Winogradsky found that it oxidized hydrogen sulfide (H2S) as an energy source, forming intracellular sulfur droplets. Winogradsky referred to this form of metabolism as inorgoxidation (oxidation of inorganic compounds). Another contributor who continued to study it was Selman Waksman. Primitive bacteria that live around deep ocean volcanic vents oxidize hydrogen sulfide for their nutrition, as discovered by Robert Ballard. Sulfur oxidizers can use reduced sulfur compounds as energy sources, including hydrogen sulfide, elemental sulfur, sulfite, thiosulfate, and various polythionates (e.g., tetrathionate). They depend on enzymes such as sulfur oxygenase and sulfite oxidase to oxidize sulfur to sulfate. Some lithotrophs can even use the energy contained in sulfur compounds to produce sugars, a process known as chemosynthesis. 
Some bacteria and archaea use hydrogen sulfide in place of water as the electron donor in chemosynthesis, a process similar to photosynthesis that produces sugars and uses oxygen as the electron acceptor. Sulfur-based chemosynthesis may be compared, in simplified form, with photosynthesis. There are bacteria combining these two ways of nutrition: green sulfur bacteria and purple sulfur bacteria. Sulfur-oxidizing bacteria can also enter into symbiosis with larger organisms, enabling the latter to use hydrogen sulfide as food to be oxidized; an example is the giant tube worm. There are sulfate-reducing bacteria that, by contrast, "breathe sulfate" instead of oxygen. They use organic compounds or molecular hydrogen as the energy source. They use oxidized sulfur compounds as the electron acceptor and reduce them back into sulfide, often hydrogen sulfide. They can grow on other partially oxidized sulfur compounds (e.g. thiosulfates, thionates, polysulfides, sulfites). There are studies indicating that many deposits of native sulfur in places that were once the bottom of ancient oceans have a biological origin. These studies indicate that this native sulfur was produced through biological activity, but whether sulfur-oxidizing bacteria or sulfate-reducing bacteria were responsible is still not known for certain. Sulfur is absorbed by plant roots from soil as sulfate and transported as a phosphate ester. Sulfate is reduced to sulfide via sulfite before it is incorporated into cysteine and other organosulfur compounds. While the plants' role in transferring sulfur to animals by food chains is more or less understood, the role of sulfur bacteria is only beginning to be investigated. Protein and organic metabolites In all forms of life, most of the sulfur is contained in two proteinogenic amino acids (cysteine and methionine); thus the element is present in all proteins that contain these amino acids, as well as in the corresponding peptides. Some of the sulfur is contained in certain metabolites (many of which are cofactors) and in the sulfated polysaccharides of connective tissue (chondroitin sulfates, heparin). To execute their biological function, proteins need to have a specific spatial geometry. This geometry is formed in a process called protein folding and is maintained by intra- and intermolecular bonds. The process has several stages. While in the earliest stages a polypeptide chain folds due to hydrogen bonds, in later stages folding is maintained (apart from hydrogen bonds) by covalent bonds between the sulfur atoms of two cysteine residues (so-called disulfide bridges) at different places in a chain (tertiary protein structure) as well as between two cysteine residues in two separate protein subunits (quaternary protein structure). Both structures may easily be seen in insulin. As the bond energy of a covalent disulfide bridge is higher than the energy of a coordinate bond or hydrophobic interaction, a higher disulfide-bridge content means more energy is needed for protein denaturation. In general, disulfide bonds are necessary in proteins functioning outside the cellular space; they do not change the protein's conformation (geometry) but serve as its stabilizers. Within the cytoplasm, cysteine residues of proteins are kept in the reduced state (i.e., in the -SH form) by thioredoxins. This property manifests itself in the following examples. Lysozyme is stable enough to be applied as a drug. Feathers and hair are relatively strong, and the keratin they consist of is considered indigestible by most organisms. 
However, there are fungi and bacteria that contain keratinase and are able to break down keratin. Many important cellular enzymes use prosthetic groups ending with -SH moieties to handle reactions involving acyl-containing biochemicals: two common examples from basic metabolism are coenzyme A and alpha-lipoic acid. The cysteine-related metabolites homocysteine and taurine are other sulfur-containing amino acids that are similar in structure but are not coded by DNA and are not part of the primary structure of proteins; they take part in various aspects of mammalian physiology. Two of the 13 classical vitamins, biotin and thiamine, contain sulfur, and serve as cofactors to several enzymes. In intracellular chemistry, sulfur operates as a carrier of reducing hydrogen and its electrons for cellular repair of oxidation. Reduced glutathione, a sulfur-containing tripeptide, is a reducing agent through its sulfhydryl (–SH) moiety derived from cysteine. Methanogenesis, the route to most of the world's methane, is a multistep biochemical transformation of carbon dioxide. This conversion requires several organosulfur cofactors. These include coenzyme M, the immediate precursor to methane. Metalloproteins and inorganic cofactors Metalloproteins, in which the active site is a transition metal ion (or metal-sulfide cluster) often coordinated by the sulfur atoms of cysteine residues, are essential components of enzymes involved in electron transfer processes. Examples include plastocyanin (Cu2+) and nitrous oxide reductase (Cu–S). The function of these enzymes is dependent on the fact that the transition metal ion can undergo redox reactions. Other examples include many zinc proteins, as well as iron–sulfur clusters. Most pervasive are the ferredoxins, which serve as electron shuttles in cells. In bacteria, the important nitrogenase enzymes contain an Fe–Mo–S cluster and act as catalysts performing the important function of nitrogen fixation, converting atmospheric nitrogen to ammonia that can be used by microorganisms and plants to make proteins, DNA, RNA, alkaloids, and the other organic nitrogen compounds necessary for life. Sulfur is also present in the molybdenum cofactor. Sulfate Deficiency In humans, methionine is an essential amino acid; cysteine is conditionally essential and may be synthesized from non-essential serine (with methionine as the sulfur donor in this case). Dietary deficiency rarely occurs under normal conditions. Artificially induced methionine deficiency has been tried in cancer treatment, but the method is still potentially dangerous. Isolated sulfite oxidase deficiency is a rare, fatal genetic disease preventing production of sulfite oxidase, needed to metabolize sulfites to sulfates. Precautions Though elemental sulfur is only minimally absorbed through the skin and is of low toxicity to humans, inhalation of sulfur dust or contact with eyes or skin may cause irritation. Excessive ingestion of sulfur can cause a burning sensation or diarrhea, and cases of life-threatening metabolic acidosis have been reported after patients deliberately consumed sulfur as a folk remedy. Toxicity of sulfur compounds When sulfur burns in air, it produces sulfur dioxide. In water, this gas produces sulfurous acid and sulfites; sulfites are antioxidants that inhibit growth of aerobic bacteria and are a useful food additive in small amounts. At high concentrations these acids harm the lungs, eyes, or other tissues. In organisms without lungs such as insects, sulfite in high concentration prevents respiration.
Sulfur trioxide (made by catalysis from sulfur dioxide) and sulfuric acid are similarly highly acidic and corrosive in the presence of water. Concentrated sulfuric acid is a strong dehydrating agent that can strip available water molecules and water components from sugar and organic tissue. The burning of coal and/or petroleum by industry and power plants generates sulfur dioxide (SO2), which reacts with atmospheric water and oxygen to produce sulfurous acid (H2SO3) and sulfuric acid (H2SO4). These acids are components of acid rain, lowering the pH of soil and freshwater bodies, sometimes resulting in substantial damage to the environment and chemical weathering of statues and structures. Fuel standards increasingly require that fuel producers extract sulfur from fossil fuels to prevent acid rain formation. This extracted and refined sulfur represents a large portion of sulfur production. In coal-fired power plants, flue gases are sometimes purified. More modern power plants that use synthesis gas extract the sulfur before they burn the gas. Hydrogen sulfide is about one-half as toxic as hydrogen cyanide, and intoxicates by the same mechanism (inhibition of the respiratory enzyme cytochrome oxidase), though hydrogen sulfide is less likely to cause sudden poisonings from small inhaled amounts (near its permissible exposure limit (PEL) of 20 ppm) because of its disagreeable odor. However, its presence in ambient air at concentrations over 100–150 ppm quickly deadens the sense of smell, and a victim may breathe increasing quantities without noticing until severe symptoms cause death. Dissolved sulfide and hydrosulfide salts are toxic by the same mechanism.
https://en.wikipedia.org/wiki/Seizure
Seizure
A seizure is a sudden change in behavior, movement or consciousness due to abnormal electrical activity in the brain. Seizures can look different in different people. A seizure can be uncontrolled shaking of the whole body (a tonic-clonic seizure) or a person spacing out for a few seconds (an absence seizure). Most seizures last less than two minutes. They are then followed by confusion/drowsiness before the person returns to normal. If a seizure lasts longer than 5 minutes, it is a medical emergency (status epilepticus) and needs immediate treatment. Seizures can be classified as provoked or unprovoked. Provoked seizures have a cause that can be fixed, such as low blood sugar, alcohol withdrawal, high fever, recent stroke, and recent head trauma. Unprovoked seizures have no clear or fixable cause; examples include past strokes, brain tumors, brain vessel malformations, and genetic disorders. If no cause is found, it is called an idiopathic seizure. After a first unprovoked seizure, the chance of experiencing a second one is about 40% within 2 years. People with repeated unprovoked seizures are diagnosed with epilepsy. Doctors assess a seizure by first ruling out other conditions that look similar to seizures, such as fainting and strokes. This includes taking a detailed history and ordering blood tests. They may also order an electroencephalogram (EEG) and brain imaging (CT, MRI or both). If it is a person's first seizure and it was "provoked", or caused by another condition, treatment of the cause is usually enough to treat the seizure. If the seizure is "unprovoked", brain imaging is abnormal, and/or EEG is abnormal, starting anti-seizure medications is generally recommended. Signs and symptoms A seizure can last from a few seconds to 5 minutes. Once it reaches and passes 5 minutes, it is known as status epilepticus. Accidental urination (urinary incontinence), stool leaking (fecal incontinence), tongue biting, foaming at the mouth, and turning blue due to inability to breathe are commonly seen in seizures. A period of confusion, lasting from seconds to hours, typically follows the seizure before a person returns to normal. This period is called a postictal period. Other symptoms during this period include drowsiness, headache, difficulty speaking, psychosis, and weakness. Observable signs and symptoms of seizures vary depending on the type. Seizures can be classified into generalized seizures and focal seizures, depending on what part of the brain is involved. Focal seizures Focal seizures affect a specific area of the brain, not both sides. A focal seizure may turn into a generalized seizure if it spreads through the brain. Consciousness may or may not be impaired. The signs and symptoms of these seizures depend on the area of the brain that is affected. Focal seizures usually consist of motor symptoms or sensory symptoms. Sensory symptoms: Auras are subjective sensations that occur before focal seizures. Auras include changes in vision, hearing, or smell (for example, smelling rubber). Feelings of déjà vu or abdominal discomfort are also examples of auras. A person who experiences focal weakness of a limb may also have just experienced a focal seizure. This is known as Todd's paralysis. Motor symptoms: Head turning and eyes moving to one side, with contraction of limbs on one side, is a common presentation. Automatisms are also an indicator that a seizure is focal. These are repetitive movements. 
These can be lip smacking, chewing, swallowing, eyelid fluttering, feet shuffling, or picking movements. A Jacksonian march is also a motor presentation of a focal seizure, with contractions spreading from one muscle to the next on one side of the body. Generalized seizures Generalized seizures affect both sides of the brain and typically involve both sides of the body. They all involve a loss of consciousness and usually happen without warning. There are six main types of generalized seizures: tonic-clonic, tonic, clonic, myoclonic, absence, and atonic seizures. Tonic-clonic seizures, also known as grand mal seizures, present with continuous stiffening of the body for 10–20 seconds followed by rhythmic jerking. They may be accompanied by an increase in blood pressure, an increase in heart rate, and urinary incontinence. The person may turn blue if breathing is impaired. Shoulder dislocation and tongue biting are also possible. Tonic seizures produce constant contractions of the muscles. The body stiffens for a prolonged period of time. The muscles most commonly affected are the neck, shoulders, hips, and trunk. Clonic seizures involve rhythmic jerking of the muscles. Myoclonic seizures involve short contractions of muscles in either a few areas of the body or throughout the whole body. They are not typically rhythmic. Absence seizures usually last 10–15 seconds. They are characterized by a sudden, brief episode during which a person is unaware of what is happening and does not respond. The person stops in the middle of activity. The person often does not fall over. They may return to normal right after the seizure ends, with no postictal state. The person is usually unaware of what just happened. Atonic seizures involve a loss of muscle activity, causing a person to drop abruptly with their muscles limp. This is called a drop attack. Causes Seizures have a number of causes. Seizures can be classified into provoked or unprovoked. Provoked seizures have a cause that is temporary and reversible. They are also known as acute symptomatic seizures, as they occur soon after the injury. Unprovoked seizures do not have a known cause or the cause is not reversible. Unprovoked seizures are typically considered epilepsy and treated as epilepsy. Of those who have a seizure, about 25% have epilepsy. Those with epilepsy may have certain triggers that they know cause seizures to occur, including emotional stress, sleep deprivation, and flickering lights. Causes of provoked seizures Metabolic Dehydration can trigger epileptic seizures by changing electrolyte balances. Low blood sugar, low blood sodium, high blood sugar, high blood sodium, low blood calcium, high blood urea, and low blood magnesium levels may cause seizures. Medications Up to 9% of status epilepticus cases occur due to drug intoxication. Common drugs involved include antidepressants, stimulants (cocaine), and antihistamines. Withdrawal seizures commonly occur after prolonged alcohol or sedative use. In people who are at risk of developing epileptic seizures, common herbal medicines such as ephedra, ginkgo biloba and wormwood can provoke seizures. Acute infections Systemic infection with high fever is a common cause of seizures, especially in children. These are called febrile seizures and occur in 2–5% of children between the ages of six months and five years. Acute infections of the brain, such as encephalitis or meningitis, are also causes of seizures. Acute brain trauma Acute stroke or brain bleed may lead to seizures. 
Stroke is the most common cause of seizures in the elderly population. Post-stroke seizures occur in 5–7% of those with ischemic strokes. The risk is higher in those who experienced brain bleeds, at 10–16%. Recent traumatic brain injury may also lead to seizures. Between 1 and 5 of every 10 people who have had a traumatic brain injury experience at least one seizure. Seizures may occur within 7 days of the injury (early posttraumatic seizure) or after 7 days have passed (late posttraumatic seizure). Causes of unprovoked seizures Structural Space-occupying lesions in the brain (abscesses, tumors) are one cause of unprovoked seizures. In people with brain tumors, the frequency of epilepsy depends on the location of the tumor in the cortical region. Abnormalities in blood vessels of the brain (arteriovenous malformations) can also cause epilepsy. In babies and children, congenital brain abnormalities, such as lissencephaly or polymicrogyria, can also result in epilepsy. Hypoxic-ischemic encephalopathy in newborns also predisposes the newborn to epilepsy. Prior brain trauma Strokes, brain bleeds, and traumatic brain injury can all also lead to epilepsy if seizures recur. If the first seizure occurs more than 7 days following a stroke, there is a higher chance of the person developing epilepsy. Post-stroke epilepsy accounts for 30–50% of new epilepsy cases. This is also the case for traumatic brain injury, with 80% of people with late posttraumatic seizures having another seizure occur, classifying it as epilepsy. Prior brain infections Infections of newborns that occur before or during birth, such as herpes simplex virus, rubella, and cytomegalovirus, all carry a risk of causing epilepsy. Infection with the pork tapeworm, which can cause neurocysticercosis, is the cause of up to half of epilepsy cases in areas of the world where the parasite is common. Meningitis and encephalitis also carry a risk of causing long-term epilepsy. Genetic epilepsy syndromes During childhood, well-defined epilepsy syndromes are generally seen. Examples include Dravet syndrome, Lennox-Gastaut syndrome, and juvenile myoclonic epilepsy. Mechanism Neurons function by either being excited or inhibited. Excited neurons fire electrical charges while inhibited neurons are prevented from firing. The balance of the two maintains normal central nervous system function. In those with seizures, neurons are both hyperexcitable and hypersynchronous, with many neurons firing repeatedly at the same time. This may be due to an imbalance of excitation and inhibition of neurons. γ-aminobutyric acid (GABA) and glutamate are chemicals called neurotransmitters that work by opening or closing ion channels on neurons to cause inhibition or excitability. GABA serves to inhibit neurons from firing. It has been found to be decreased in epilepsy patients. This may explain the lack of inhibition of neurons resulting in seizures. Glutamate serves to excite neurons into firing when appropriate. It was found to be increased in those with epilepsy. This is a possible mechanism for why there is hyper-excitability of neurons in seizures. Seizures that occur after brain injury may be due to the brain adapting to injury (neuroplasticity). This process is known as epileptogenesis. There is loss of inhibitory neurons because they die due to the injury. The brain may also adapt and make new neuron connections that may be hyper-excitatory. 
Brief seizures, such as absence seizures lasting 5–10 seconds, do not cause observable brain damage. More prolonged seizures carry a higher risk of neuronal death. Prolonged and recurrent seizures, such as status epilepticus, typically cause brain damage. Scarring of brain tissue (gliosis), neuronal death, and shrinking of areas of the brain (atrophy) are linked to recurrent seizures. These changes may lead to the development of epilepsy. Diagnosis Diagnosis of seizures involves gathering a history, performing a physical exam, and ordering tests. These are done to classify the seizure and to determine whether it was provoked or unprovoked. History and physical examination The events leading up to the seizure and the movements that occurred during it are important in classifying the type of seizure. The person's memory of what happened before and during the seizure is also important. However, since most people who experience seizures do not remember what happened, it is best to obtain the history from a witness when possible. A video recording of the seizure is also helpful in diagnosis. Events that occurred after the seizure are another important part of the history. Past medical history, such as previous head trauma, strokes, febrile seizures, or infections, is helpful. In babies and children, information about developmental milestones, birth history, and previous illnesses is important, as these are potential epilepsy risk factors. A family history of seizures is also important in evaluating the risk for epilepsy. History regarding medication use, substance use, and alcohol use is important in determining a cause of the seizure. Most people are in a postictal state (drowsy or confused) following a seizure. A bite mark on the side of the tongue or bleeding from the mouth strongly indicates that a seizure happened, but only about a third of people who have had a seizure have such a bite. Weakness of one limb or asymmetric reflexes are also signs that a seizure has just occurred. The presence of urinary or fecal incontinence also strongly suggests a seizure occurred. However, most people who have had a seizure will have a normal physical exam. Tests Blood tests can determine whether there are any reversible causes of the seizure (provoked seizures). These include a complete blood count, which may show infection, and a comprehensive metabolic panel, which is ordered to rule out abnormal sugar levels (hypoglycemia or hyperglycemia) or electrolyte abnormalities (such as hyponatremia) as a cause. A lumbar puncture is mainly done if there is reason to believe infection or inflammation of the nervous system is occurring. Toxicology screening is likewise mainly done if the history is suggestive. Brain imaging by CT scan and MRI is recommended after a first seizure, especially if no provoking factors are discovered. It is done to detect structural problems inside the brain, such as tumors. MRI is generally the better imaging test, but a CT scan is preferred when intracranial bleeding is suspected. Imaging may be done at a later point in those who return to normal while in the emergency room. Electroencephalography (EEG) measures the brain's electrical activity. It is used in cases of a first seizure that has no provoking factor, normal head imaging, and no prior history of head trauma. It helps determine the type of seizure or epilepsy syndrome present, as well as where the seizures are coming from if they are focal. It is also used when a person has not returned to baseline for a prolonged time after a seizure. 
Differential diagnosis Other conditions that are commonly mistaken for a seizure include syncope, psychogenic nonepileptic seizures, cardiac arrhythmias, migraine headaches, and stroke/transient ischemic attacks. Prevention Anti-seizure medications are sometimes started in people who have never had a seizure in order to prevent seizures in those at risk. Following traumatic brain injury, anti-seizure medications decrease the risk of early seizures but not of late seizures. However, there is no clear evidence that anti-seizure medications are effective at preventing seizures following brain surgery (craniotomy), a brain bleed, or a stroke. Preventing seizures from recurring after a first seizure depends on many factors. If it was an unprovoked seizure with abnormal brain imaging or an abnormal EEG, then starting an anti-seizure medication is recommended. If a person has an unprovoked seizure but the physical exam, EEG, and brain imaging are normal, then anti-seizure medication may not be needed. The decision to start anti-seizure medications should be made after a discussion between the patient and doctor. In children with a single simple febrile seizure, starting anti-seizure medications is not recommended: while both fever medications (antipyretics) and anti-seizure medications reduce recurrence, the harmless nature of simple febrile seizures means the risks of these medications outweigh the benefit. However, if it was a complex febrile seizure, an EEG should be done, and if the EEG is abnormal, starting prophylactic anti-seizure medications is recommended. Management During an active seizure, the person seizing should be slowly laid on the floor. Witnesses should not try to stop the convulsions or other movements. Potentially sharp or dangerous objects should be moved from the area around a person experiencing a seizure so that the individual is not hurt. Nothing should be placed in the person's mouth, as it is a choking hazard. After the seizure, if the person is not fully conscious and alert, they should be turned onto their side to prevent choking; this is called the recovery position. Timing the seizure is also important: if a seizure lasts longer than five minutes, or two or more seizures occur within five minutes, it is a medical emergency known as status epilepticus, and emergency services should be called. Medication The first-line medication for an actively seizing person is a benzodiazepine, with most guidelines recommending lorazepam. Diazepam and midazolam are alternatives. It may be given intravenously if emergency services are present. Rectal and intranasal forms also exist and may be used if a child has had seizures previously and has been prescribed a rescue medication. If seizures continue, second-line therapy includes phenytoin, fosphenytoin, and phenobarbital. Levetiracetam or valproate may also be used. Starting long-term anti-seizure medications is not typically recommended if the seizure was provoked by a cause that can be corrected. Examples of correctable causes of provoked seizures include low blood sugar, low blood sodium, febrile seizures in children, and substance or medication use. Starting anti-seizure medications is usually reserved for those with a medium to high risk of seizure recurrence. This includes people with unprovoked seizures and abnormal brain imaging or an abnormal EEG, as well as those who have had more than one unprovoked seizure more than 24 hours apart. It is recommended to start with one anti-seizure medication; another may be added if one is not enough to control the seizures. 
Approximately 70% of people can obtain full seizure control with continuous use of medication. The type of medication used is based on the type of seizure. Anti-seizure medications may be slowly stopped after a period of time if a person has experienced only one seizure and has not had any more. The decision to stop anti-seizure medications should be discussed between the doctor and patient, weighing the benefits and risks. Surgery In severe cases where seizures are uncontrolled by at least two anti-seizure medications, brain surgery can be a treatment option. Epilepsy surgery is especially useful for those with focal seizures, where the seizures arise from a specific part of the brain. The amount of brain removed during the surgery depends on the extent of the brain involved in the seizures. It can range from removing just one lobe of the brain (temporal lobectomy) to disconnecting an entire side of the brain (hemispherectomy). The procedure can be curative, with seizures eliminated completely; if it is not curative, it can be palliative, reducing the frequency of seizures without eliminating them. Other Helmets may be used to provide protection to the head during a seizure. Some claim that seizure response dogs, a form of service dog, can predict seizures; the evidence for this, however, is poor. Cannabis has also been used for the management of seizures that do not respond to anti-seizure medications. Research on its effectiveness is ongoing, and current evidence suggests that it can reduce seizure frequency. A ketogenic diet or modified Atkins diet may help those with epilepsy who do not improve following typical treatments, with evidence for their effectiveness growing. Precautions Following a first seizure, a person is legally not allowed to drive until they have been seizure-free for a period of time. This period varies between states but is usually between 6 and 12 months. People are also cautioned against working at heights and against swimming alone in case a seizure occurs. Prognosis Following a first unprovoked seizure, the risk of further seizures in the next two years is around 40%. Starting anti-seizure medications reduces the recurrence of seizures by 35% within the first two years. The greatest predictors of further seizures are abnormalities on either the EEG or brain imaging. Those with a normal EEG and a normal physical exam following a first unprovoked seizure have a lower risk of recurrence in the next two years, at about 25%. In adults, after 6 months of being seizure-free following a first seizure, the risk of a subsequent seizure in the next year is less than 20% regardless of treatment. Those who have a provoked seizure have a low risk of recurrence but a higher risk of death compared with those with epilepsy. Epidemiology Approximately 8–10% of people will experience an epileptic seizure during their lifetime. In adults, the risk of seizure recurrence within the five years following a new-onset seizure is 35%; the risk rises to 75% in persons who have had a second seizure. In children, the risk of seizure recurrence within the five years following a single unprovoked seizure is about 50%; the risk rises to about 80% after two unprovoked seizures. In the United States in 2011, seizures resulted in an estimated 1.6 million emergency department visits; approximately 400,000 of these visits were for new-onset seizures. History Epileptic seizures were first described in an Akkadian text from 2000 BC. 
Early reports of epilepsy often saw seizures and convulsions as the work of "evil spirits". The perception of epilepsy, however, began to change in the era of ancient Greek medicine. The term "epilepsy" itself is a Greek word, derived from the verb "epilambanein", meaning "to seize, possess, or afflict". Although the ancient Greeks referred to epilepsy as the "sacred disease", this perception of epilepsy as a "spiritual" disease was challenged by Hippocrates, who proposed in his work On the Sacred Disease that the source of epilepsy was natural rather than supernatural. Early surgical treatment of epilepsy was primitive in ancient Greek, Roman and Egyptian medicine. The 19th century saw the rise of targeted surgery for the treatment of epileptic seizures, beginning in 1886 with localized resections performed by Sir Victor Horsley, a neurosurgeon in London. Another advancement was the development of the Montreal procedure by Canadian neurosurgeon Wilder Penfield, which involved the use of electrical stimulation in conscious patients to more accurately identify and resect the epileptic areas of the brain. Society and culture Economics Seizures result in direct economic costs of about one billion dollars in the United States. Epilepsy resulted in economic costs in Europe of around €15.5 billion in 2004. In India, epilepsy is estimated to result in costs of US$1.7 billion, or 0.5% of GDP. Seizures make up about 1% of emergency department visits (2% at children's emergency departments) in the United States. Research Scientific work into the prediction of epileptic seizures began in the 1970s. Several techniques and methods have been proposed, but evidence regarding their usefulness is still lacking. Two promising areas are (1) gene therapy, and (2) seizure detection and seizure prediction. Gene therapy for epilepsy consists of employing vectors to deliver pieces of genetic material to areas of the brain involved in seizure onset. Seizure prediction is a special case of seizure detection in which the developed system is able to issue a warning before the clinical onset of an epileptic seizure. Computational neuroscience has brought a new point of view on seizures by considering their dynamical aspects.
Biology and health sciences
Symptoms and signs
Health
27488
https://en.wikipedia.org/wiki/Software%20documentation
Software documentation
Software documentation is written text or illustration that accompanies computer software or is embedded in the source code. The documentation either explains how the software operates or how to use it, and may mean different things to people in different roles. Documentation is an important part of software engineering. Types of documentation include: Requirements – Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what will be or has been implemented. Architecture/Design – Overview of software. Includes relations to an environment and construction principles to be used in design of software components. Technical – Documentation of code, algorithms, interfaces, and APIs. End user – Manuals for the end-user, system administrators and support staff. Marketing – How to market the product and analysis of the market demand. Types Requirements documentation Requirements documentation is the description of what a particular software does or should do. It is used throughout development to communicate how the software functions or how it is intended to operate. It is also used as an agreement or as the foundation for agreement on what the software will do. Requirements are produced and consumed by everyone involved in the production of software, including: end users, customers, project managers, sales, marketing, software architects, usability engineers, interaction designers, developers, and testers. Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, or as a combination of them all. The variation and complexity of requirement documentation make it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed and how much can be left to the architecture and design documentation, and it is difficult to know how to document requirements considering the variety of people who shall read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult — and therefore more error prone (decreased software quality) and time-consuming (expensive). The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help better communicate what to achieve. If the software is safety-critical and can have a negative impact on human life (e.g., nuclear power systems, medical equipment, mechanical equipment), more formal requirements documentation is often required. If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign) very little requirements documentation may be needed. If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified. 
Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special-purpose requirements management tools are advocated. In Agile software development, requirements are often expressed as user stories with accompanying acceptance criteria. User stories are typically part of a feature, or an epic, which is a broader functionality or set of related functionalities that deliver a specific value to the user based on the business requirements. Architecture design documentation Architecture documentation (also known as software architecture description) is a special type of design document. In a way, architecture documents are third derivative from the code (design document being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lays out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents. Another type of design document is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on one specific aspect of the system and suggests alternate approaches. It could be at the user interface, code, design, or even architectural level. It will outline what the situation is, describe one or more alternatives, and enumerate the pros and cons of each. A good trade study document is heavy on research, expresses its idea clearly (without relying heavily on obtuse jargon to dazzle the reader), and most importantly is impartial. It should honestly and clearly explain the costs of whatever solution it offers as best. The objective of a trade study is to devise the best solution, rather than to push a particular point of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the alternatives are sufficiently better than the baseline to warrant a change. It should be approached as a scientific endeavor, not as a marketing technique. A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need. The purpose of preparing it is to create a common source to be used by all players within the scene. 
The potential users are: Database designer Database developer Database administrator Application designer Application developer When talking about Relational Database Systems, the document should include following parts: Entity - Relationship Schema (enhanced or not), including following information and their clear definitions: Entity Sets and their attributes Relationships and their attributes Candidate keys for each entity set Attribute and Tuple based constraints Relational Schema, including following information: Tables, Attributes, and their properties Views Constraints such as primary keys, foreign keys, Cardinality of referential constraints Cascading Policy for referential constraints Primary keys It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database as well. Technical documentation It is important for the code documents associated with the source code (which may include README files and API documentation) to be thorough, but not so verbose that it becomes overly time-consuming or difficult to maintain them. Various how-to and overview documentation guides are commonly found specific to the software application or software product being documented by API writers. This documentation may be used by developers, testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power, energy, transportation, networks, aerospace, safety, security, industry automation, and a variety of other domains. Technical documentation has become important within such organizations as the basic and advanced level of information may change over a period of time with architecture changes. There is evidence that the existence of good code documentation actually reduces maintenance costs for software. Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class. Technical documentation embedded in source code Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, JSDoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code documents—that is, they extract the comments and software contracts, where available, from the source code and create reference manuals in such forms as text or HTML files. The idea of auto-generating documentation is attractive to programmers for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date. A possible downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for example, by running a cron job to update the documents nightly). Some would characterize this as a pro rather than a con. Literate programming Respected computer scientist Donald Knuth has noted that documentation can be a very difficult afterthought process and has advocated literate programming, written at the same time and location as the source code and extracted by automatic means. The programming languages Haskell and CoffeeScript have built-in support for a simple form of literate programming, but this support is not widely used. 
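As a minimal sketch of the embedded-documentation approach described above, the following Python function carries a docstring that extraction tools (for instance the standard-library pydoc module, or Sphinx-style generators) can turn into reference documentation; the function itself and its behaviour are illustrative assumptions rather than part of any particular project:

    def moving_average(values, window):
        """Return the simple moving averages of ``values``.

        Args:
            values: A sequence of numbers.
            window: How many trailing items to average; must be positive.

        Returns:
            A list with one average per position once ``window`` items are available.

        Raises:
            ValueError: If ``window`` is not positive.
        """
        if window <= 0:
            raise ValueError("window must be positive")
        averages = []
        for i in range(window - 1, len(values)):
            chunk = values[i - window + 1 : i + 1]  # the trailing 'window' items
            averages.append(sum(chunk) / window)
        return averages

Running a documentation generator over the module containing this function (for example, python -m pydoc module_name) would render the docstring as browsable reference material without a separately maintained manual, which is the maintenance advantage the section describes.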
Elucidative programming Elucidative programming is the result of practical applications of literate programming in real programming contexts. The elucidative paradigm proposes that source code and documentation be stored separately. Often, software developers need to be able to create and access information that is not going to be part of the source file itself. Such annotations are usually part of several software development activities, such as code walkthroughs and porting, where third-party source code is analysed in a functional way. Annotations can therefore help the developer during any stage of software development where a formal documentation system would hinder progress. User documentation Unlike code documents, user documents simply describe how a program is used. In the case of a software library, the code documents and user documents could in some cases be effectively equivalent and worth conjoining, but for a general application this is not often true. Typically, the user documentation describes each feature of the program and assists the user in making use of these features. It is very important for user documents not to be confusing, and for them to be up to date. User documents do not need to be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are well placed to produce good user documents, as they are familiar with the software architecture and the programming techniques used.
Technology
Software development: General
null
27529
https://en.wikipedia.org/wiki/September
September
September is the ninth month of the year in the Julian and Gregorian calendars. Its length is 30 days. September in the Northern Hemisphere and March in the Southern Hemisphere are seasonally equivalent. In the Northern Hemisphere, the beginning of the meteorological autumn is on 1 September. In the Southern Hemisphere, the beginning of the meteorological spring is on 1 September. September marks the beginning of the ecclesiastical year in the Eastern Orthodox Church. It is the start of the academic year in many countries of the Northern Hemisphere, in which children go back to school after the summer break, sometimes on the first day of the month. In astrology, those born from September 1 through September 22 are Virgos, while those born from September 23 through September 30 are Libras. September (from Latin septem, "seven") was originally the seventh month in the oldest known Roman calendar, the calendar of Romulus, in which March (Latin Martius) was the first month of the year, until perhaps as late as 451 BC. After the calendar reform that added January and February to the beginning of the year, September became the ninth month but retained its name. It had 29 days until the Julian reform, which added a day. Events Ancient Roman observances for September include Ludi Romani, originally celebrated from September 12 to September 14, later extended to September 5 to September 19. In the 1st century BC, an extra day was added on September 4 in honor of the deified Julius Caesar. Epulum Jovis was held on September 13. Ludi Triumphales was held from September 18–22. The Septimontium was celebrated in September, and on December 11 on later calendars. These dates do not correspond to the modern Gregorian calendar. September was called the "harvest month" in Charlemagne's calendar. September corresponds partly to the Fructidor and partly to the Vendémiaire of the first French Republic. September is called Herbstmonat, harvest month, in Switzerland. The Anglo-Saxons called the month Gerstmonath, barley month, that crop being then usually harvested. In 1752, the British Empire adopted the Gregorian calendar. In the British Empire that year, September 2 was immediately followed by September 14. On Usenet, it is said that September 1993 (Eternal September) never ended. In the United States, September is one of the most common birth months (third most popular after August and July, which both have 31 days), as all but one of the top 10 most common birthdays fall in September, based on National Center for Health Statistics data on births between 1994 and 2014. The most common birthday is September 9 (ranked first), while the least common September birthday is September 1 (ranked 250th). Astronomy and astrology The September equinox takes place in this month, and certain observances are organized around it. It is the autumnal equinox in the Northern Hemisphere, and the vernal equinox in the Southern Hemisphere. The dates can vary from 21 September to 24 September (in UTC). September falls mostly in the sixth month of the astrological calendar (and partly in the seventh), which begins at the end of March with Aries (Mars). Symbols September's birthstone is the sapphire. The birth flowers are the forget-me-not, morning glory and aster. The zodiac signs are Virgo (until September 22) and Libra (September 23 onward). Observances This list does not necessarily imply either official status or general observance. 
Non-Gregorian List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Month-long Amerindian Heritage Month (Guyana) Childhood Cancer Awareness Month (United Kingdom) Gynecologic Cancer Awareness Month Leukemia and Lymphoma Awareness Month Ovarian Cancer Awareness Month Thyroid Cancer Awareness Month National Suicide Prevention Month Vegetable Month United States Better Breakfast Month Food Safety Education Month National Childhood Obesity Awareness Month Hydrocephalus Awareness Month Pain Awareness Month National Preparedness Month National Prostate Health Month National Sickle Cell Awareness Month National Yoga Month Food months National Bourbon Heritage Month California Wine Month National Chicken Month National Honey Month National Mushroom Month National Italian Cheese Month National Papaya Month National Potato Month National Rice Month National Whole Grains Month National Wild Rice Month Movable Gregorian Engineering Day (Egypt) White Balloon Day Day of the Programmer Te Wiki o te Reo Māori (Māori Language Week) (New Zealand)
Technology
Months
null
27553
https://en.wikipedia.org/wiki/Set%20theory
Set theory
Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole. The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradox, Cantor's paradox and the Burali-Forti paradox), various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied. Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Besides its foundational role, set theory also provides the framework to develop a mathematical theory of infinity, and has various applications in computer science (such as in the theory of relational algebra), philosophy, formal semantics, and evolutionary dynamics. Its foundational appeal, together with its paradoxes, its implications for the concept of infinity, and its multiple applications, has made set theory an area of major interest for logicians and philosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. History Early history The basic notion of grouping objects has existed since at least the emergence of numbers, and the notion of treating sets as their own objects has existed since at least the Tree of Porphyry in the 3rd century AD. The simplicity and ubiquity of sets make it hard to determine the origin of sets as now used in mathematics; however, Bernard Bolzano's Paradoxes of the Infinite (Paradoxien des Unendlichen, 1851) is generally considered the first rigorous introduction of sets to mathematics. In this work he, among other things, expanded on Galileo's paradox and introduced one-to-one correspondence between infinite sets, for example between two real intervals of different lengths by means of a linear relation. However, he resisted saying these sets were equinumerous, and his work is generally considered to have been uninfluential in the mathematics of his time. Before mathematical set theory, basic concepts of infinity were considered to be solidly in the domain of philosophy (see: Infinity (philosophy)). Since the 5th century BC, beginning with the Greek philosopher Zeno of Elea in the West (and early Indian mathematicians in the East), mathematicians had struggled with the concept of infinity. With the development of calculus in the late 17th century, philosophers began to generally distinguish between actual and potential infinity, with mathematics considered to deal only with the latter. Carl Friedrich Gauss famously stated: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics." The development of mathematical set theory was motivated by the work of several mathematicians. 
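Before turning to those mathematicians, the one-to-one correspondences just described can be made concrete with a standard worked example (the specific sets here are chosen for illustration; Bolzano's own intervals are not reproduced in this text). Galileo's paradox pairs each natural number with its square, and a linear map pairs a shorter interval with a longer one:

\[ n \mapsto n^2 \qquad (0 \mapsto 0,\ 1 \mapsto 1,\ 2 \mapsto 4,\ 3 \mapsto 9,\ \dots) \]
\[ f\colon [0,1] \to [0,2], \qquad f(x) = 2x . \]

In each case an infinite set is placed in one-to-one correspondence with a set that intuition suggests should be strictly larger, which is exactly the tension Bolzano and, later, Cantor confronted.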
Bernhard Riemann's lecture On the Hypotheses which lie at the Foundations of Geometry (1854) proposed new ideas about topology and about basing mathematics (especially geometry) on sets or manifolds in the sense of a class (which he called Mannigfaltigkeit), in what is now called point-set topology. The lecture was published by Richard Dedekind in 1868, along with Riemann's paper on trigonometric series (which presented the Riemann integral). The latter was the starting point of a movement in real analysis toward the study of "seriously" discontinuous functions. A young Georg Cantor entered this area, which led him to the study of point-sets. Around 1871, influenced by Riemann, Dedekind began working with sets in his publications, which dealt very clearly and precisely with equivalence relations, partitions of sets, and homomorphisms. Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work. However, he did not publish a formal explanation of his set theory until 1888. Naive set theory Set theory, as understood by modern mathematicians, is generally considered to be founded by a single paper in 1874 by Georg Cantor titled On a Property of the Collection of All Real Algebraic Numbers. In this paper he developed the notion of cardinality, comparing the sizes of two sets by setting them in one-to-one correspondence. His "revolutionary discovery" was that the set of all real numbers is uncountable, that is, one cannot put all real numbers in a list. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. Cantor introduced fundamental constructions in set theory, such as the power set of a set A, which is the set of all possible subsets of A. He later proved that the size of the power set of A is strictly larger than the size of A, even when A is an infinite set; this result soon became known as Cantor's theorem. Cantor developed a theory of transfinite numbers, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter ℵ (aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω (omega). Set theory was beginning to become an essential ingredient of the new "modern" approach to mathematics. Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections (see: Controversy over Cantor's theory). Dedekind's algebraic style only began to find followers in the 1890s. Despite the controversy, Cantor's set theory gained remarkable ground around the turn of the 20th century with the work of several notable mathematicians and philosophers. Richard Dedekind, around the same time, began working with sets in his publications and famously constructed the real numbers using Dedekind cuts. He also worked with Giuseppe Peano in developing the Peano axioms, which formalized natural-number arithmetic using set-theoretic ideas; this work also introduced the epsilon symbol (∈) for set membership. Possibly most prominently, Gottlob Frege began to develop his Foundations of Arithmetic. In this work, Frege tried to ground all mathematics in logical axioms using Cantor's notion of cardinality. 
For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept horse in the barn. Frege attempted to explain our grasp of numbers through cardinality ('the number of...', or ), relying on Hume's principle. However, Frege's work was short-lived, as it was found by Bertrand Russell that his axioms lead to a contradiction. Specifically, Frege's Basic Law V (now known as the axiom schema of unrestricted comprehension). According to Basic Law V, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. The contradiction, called Russell's paradox, is shown as follows: Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: This came around a time of several paradoxes or counter-intuitive results. For example, that the parallel postulate cannot be proved, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics. Basic concepts and notation Set theory begins with a fundamental binary relation between an object and a set . If is a member (or element) of , the notation is used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }. Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set are also members of set , then is a subset of , denoted . For example, is a subset of , and so is but is not. As implied by this definition, a set is a subset of itself. For cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined. is called a proper subset of if and only if is a subset of , but is not equal to . Also, 1, 2, and 3 are members (elements) of the set , but are not subsets of it; and in turn, the subsets, such as , are not members of the set . More complicated relations can exist; for example, the set is both a member and a proper subset of the set . Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. The following is a partial list of them: Union of the sets and , denoted , is the set of all objects that are a member of , or , or both. For example, the union of and is the set . Intersection of the sets and , denoted , is the set of all objects that are members of both and . For example, the intersection of and is the set . Set difference of and , denoted , is the set of all members of that are not members of . The set difference is , while conversely, the set difference is . When is a subset of , the set difference is also called the complement of in . In this case, if the choice of is clear from the context, the notation is sometimes used instead of , particularly if is a universal set as in the study of Venn diagrams. 
Symmetric difference of sets and , denoted or , is the set of all objects that are a member of exactly one of and (elements which are in one of the sets, but not in both). For instance, for the sets and , the symmetric difference set is . It is the set difference of the union and the intersection, or . Cartesian product of and , denoted , is the set whose members are all possible ordered pairs , where is a member of and is a member of . For example, the Cartesian product of {1, 2} and {red, white} is Some basic sets of central importance are the set of natural numbers, the set of real numbers and the empty set – the unique set containing no elements. The empty set is also occasionally called the null set, though this name is ambiguous and can lead to several interpretations. The power set of a set , denoted , is the set whose members are all of the possible subsets of . For example, the power set of is . Notably, contains both A and the empty set. Ontology A set is pure if all of its members are sets, all members of its members are sets, and so on. For example, the set containing only the empty set is a nonempty pure set. In modern set theory, it is common to restrict attention to the von Neumann universe of pure sets, and many systems of axiomatic set theory are designed to axiomatize the pure sets only. There are many technical advantages to this restriction, and little generality is lost, because essentially all mathematical concepts can be modeled by pure sets. Sets in the von Neumann universe are organized into a cumulative hierarchy, based on how deeply their members, members of members, etc. are nested. Each set in this hierarchy is assigned (by transfinite recursion) an ordinal number , known as its rank. The rank of a pure set is defined to be the least ordinal that is strictly greater than the rank of any of its elements. For example, the empty set is assigned rank 0, while the set containing only the empty set is assigned rank 1. For each ordinal , the set is defined to consist of all pure sets with rank less than . The entire von Neumann universe is denoted . Formalized set theory Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools using Venn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition. This assumption gives rise to paradoxes, the simplest and best known of which are Russell's paradox and the Burali-Forti paradox. Axiomatic set theory was originally devised to rid set theory of such paradoxes. The most widely studied systems of axiomatic set theory imply that all sets form a cumulative hierarchy. Such systems come in two flavors, those whose ontology consists of: Sets alone. This includes the most common axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Fragments of ZFC include: Zermelo set theory, which replaces the axiom schema of replacement with that of separation; General set theory, a small fragment of Zermelo set theory sufficient for the Peano axioms and finite sets; Kripke–Platek set theory, which omits the axioms of infinity, powerset, and choice, and weakens the axiom schemata of separation and replacement. Sets and proper classes. These include Von Neumann–Bernays–Gödel set theory, which has the same strength as ZFC for theorems about sets alone, and Morse–Kelley set theory and Tarski–Grothendieck set theory, both of which are stronger than ZFC. 
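Concrete instances of the notation introduced above may help; the example sets below are chosen here for illustration, since the article's own worked examples do not survive in this text. Taking A = {1, 2, 3} and B = {3, 4}:

\[ A \cup B = \{1,2,3,4\}, \quad A \cap B = \{3\}, \quad A \setminus B = \{1,2\}, \quad A \,\triangle\, B = \{1,2,4\}, \]
\[ A \times B = \{(1,3),(1,4),(2,3),(2,4),(3,3),(3,4)\}, \qquad \mathcal{P}(B) = \{\varnothing, \{3\}, \{4\}, \{3,4\}\}. \]

The symbolic statement of Russell's paradox referred to earlier ("In symbols:") is standardly reconstructed as

\[ R = \{\, x \mid x \notin x \,\} \ \Longrightarrow\ \big( R \in R \iff R \notin R \big), \]

a contradiction, which is why axiomatic systems such as ZFC replace unrestricted comprehension with the restricted axiom schema of separation.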
The above systems can be modified to allow urelements, objects that can be members of sets but that are not themselves sets and do not have any members. The New Foundations systems of NFU (allowing urelements) and NF (lacking them), associate with Willard Van Orman Quine, are not based on a cumulative hierarchy. NF and NFU include a "set of everything", relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which the axiom of choice does not hold. Despite NF's ontology not reflecting the traditional cumulative hierarchy and violating well-foundedness, Thomas Forster has argued that it does reflect an iterative conception of set. Systems of constructive set theory, such as CST, CZF, and IZF, embed their set axioms in intuitionistic instead of classical logic. Yet other systems accept classical logic but feature a nonstandard membership relation. These include rough set theory and fuzzy set theory, in which the value of an atomic formula embodying the membership relation is not simply True or False. The Boolean-valued models of ZFC are a related subject. An enrichment of ZFC called internal set theory was proposed by Edward Nelson in 1977. Applications Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse as graphs, manifolds, rings, vector spaces, and relational algebras can all be defined as sets satisfying various (axiomatic) properties. Equivalence and order relations are ubiquitous in mathematics, and the theory of mathematical relations can be described in set theory. Set theory is also a promising foundational system for much of mathematics. Since the publication of the first volume of Principia Mathematica, it has been claimed that most (or even all) mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, using first or second-order logic. For example, properties of the natural and real numbers can be derived within set theory, as each of these number systems can be defined by representing their elements as sets of specific forms. Set theory as a foundation for mathematical analysis, topology, abstract algebra, and discrete mathematics is likewise uncontroversial; mathematicians accept (in principle) that theorems in these areas can be derived from the relevant definitions and the axioms of set theory. However, it remains that few full derivations of complex mathematical theorems from set theory have been formally verified, since such formal derivations are often much longer than the natural language proofs mathematicians commonly present. One verification project, Metamath, includes human-written, computer-verified derivations of more than 12,000 theorems starting from ZFC set theory, first-order logic and propositional logic. Areas of study Set theory is a major area of research in mathematics with many interrelated subfields: Combinatorial set theory Combinatorial set theory concerns extensions of finite combinatorics to infinite sets. This includes the study of cardinal arithmetic and the study of extensions of Ramsey's theorem such as the Erdős–Rado theorem. Descriptive set theory Descriptive set theory is the study of subsets of the real line and, more generally, subsets of Polish spaces. It begins with the study of pointclasses in the Borel hierarchy and extends to the study of more complex hierarchies such as the projective hierarchy and the Wadge hierarchy. 
Many properties of Borel sets can be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field of effective descriptive set theory is between set theory and recursion theory. It includes the study of lightface pointclasses, and is closely related to hyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concerns Borel equivalence relations and more complicated definable equivalence relations. This has important applications to the study of invariants in many fields of mathematics. Fuzzy set theory In set theory as Cantor defined and Zermelo and Fraenkel axiomatized, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh so an object has a degree of membership in a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer and can be a real number such as 0.75. Inner model theory An inner model of Zermelo–Fraenkel set theory (ZF) is a transitive class that includes all the ordinals and satisfies all the axioms of ZF. The canonical example is the constructible universe L developed by Gödel. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a model V of ZF satisfies the continuum hypothesis or the axiom of choice, the inner model L constructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice. Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study of determinacy and large cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice. For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy (and thus not satisfying the axiom of choice). Large cardinals A large cardinal is a cardinal number with an extra property. Many such properties are studied, including inaccessible cardinals, measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable in Zermelo–Fraenkel set theory. Determinacy Determinacy refers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy. The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. 
The axiom of determinacy (AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that the Wadge degrees have an elegant structure. Forcing Paul Cohen invented the method of forcing while searching for a model of ZFC in which the continuum hypothesis fails, or a model of ZF in which the axiom of choice fails. Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e. "forced") by the construction and the original model. For example, Cohen's construction adjoins additional subsets of the natural numbers without changing any of the cardinal numbers of the original model. Forcing is also one of two methods for proving relative consistency by finitistic methods, the other method being Boolean-valued models. Cardinal invariants A cardinal invariant is a property of the real line measured by a cardinal number. For example, a well-studied invariant is the smallest cardinality of a collection of meagre sets of reals whose union is the entire real line. These are invariants in the sense that any two isomorphic models of set theory must give the same cardinal for each invariant. Many cardinal invariants have been studied, and the relationships between them are often complex and related to axioms of set theory. Set-theoretic topology Set-theoretic topology studies questions of general topology that are set-theoretic in nature or that require advanced methods of set theory for their solution. Many of these theorems are independent of ZFC, requiring stronger axioms for their proof. A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. Controversy From set theory's inception, some mathematicians have objected to it as a foundation for mathematics. The most common objection to set theory, one Kronecker voiced in set theory's earliest years, starts from the constructivist view that mathematics is loosely related to computation. If this view is granted, then the treatment of infinite sets, both in naive and in axiomatic set theory, introduces into mathematics methods and objects that are not computable even in principle. The feasibility of constructivism as a substitute foundation for mathematics was greatly increased by Errett Bishop's influential book Foundations of Constructive Analysis. A different objection put forth by Henri Poincaré is that defining sets using the axiom schemas of specification and replacement, as well as the axiom of power set, introduces impredicativity, a type of circularity, into the definitions of mathematical objects. The scope of predicatively founded mathematics, while less than that of the commonly accepted Zermelo–Fraenkel theory, is much greater than that of constructive mathematics, to the point that Solomon Feferman has said that "all of scientifically applicable analysis can be developed [using predicative methods]". Ludwig Wittgenstein condemned set theory philosophically for its connotations of mathematical platonism. He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers". 
Wittgenstein identified mathematics with algorithmic human deduction; the need for a secure foundation for mathematics seemed, to him, nonsensical. Moreover, since human effort is necessarily finite, Wittgenstein's philosophy required an ontological commitment to radical constructivism and finitism. Meta-mathematical statements – which, for Wittgenstein, included any statement quantifying over infinite domains, and thus almost all modern set theory – are not mathematics. Few modern philosophers have adopted Wittgenstein's views after a spectacular blunder in Remarks on the Foundations of Mathematics: Wittgenstein attempted to refute Gödel's incompleteness theorems after having only read the abstract. As reviewers Kreisel, Bernays, Dummett, and Goodstein all pointed out, many of his critiques did not apply to the paper in full. Only recently have philosophers such as Crispin Wright begun to rehabilitate Wittgenstein's arguments. Category theorists have proposed topos theory as an alternative to traditional axiomatic set theory. Topos theory can interpret various alternatives to that theory, such as constructivism, finite set theory, and computable set theory. Topoi also give a natural setting for forcing and discussions of the independence of choice from ZF, as well as providing the framework for pointless topology and Stone spaces. An active area of research is the univalent foundations and related to it homotopy type theory. Within homotopy type theory, a set may be regarded as a homotopy 0-type, with universal properties of sets arising from the inductive and recursive properties of higher inductive types. Principles such as the axiom of choice and the law of the excluded middle can be formulated in a manner corresponding to the classical formulation in set theory or perhaps in a spectrum of distinct ways unique to type theory. Some of these principles may be proven to be a consequence of other principles. The variety of formulations of these axiomatic principles allows for a detailed analysis of the formulations required in order to derive various mathematical results. Mathematical education As set theory gained popularity as a foundation for modern mathematics, there has been support for the idea of introducing the basics of naive set theory early in mathematics education. In the US in the 1960s, the New Math experiment aimed to teach basic set theory, among other abstract concepts, to primary school students, but was met with much criticism. The math syllabus in European schools followed this trend, and currently includes the subject at different levels in all grades. Venn diagrams are widely employed to explain basic set-theoretic relationships to primary school students (even though John Venn originally devised them as part of a procedure to assess the validity of inferences in term logic). Set theory is used to introduce students to logical operators (NOT, AND, OR), and semantic or rule description (technically intensional definition) of sets (e.g. "months starting with the letter A"), which may be useful when learning computer programming, since Boolean logic is used in various programming languages. Likewise, sets and other collection-like objects, such as multisets and lists, are common datatypes in computer science and programming. 
In addition to that, sets are commonly referred to in mathematical teaching when talking about different types of numbers (the sets of natural numbers, of integers, of real numbers, etc.), and when defining a mathematical function as a relation from one set (the domain) to another set (the range).
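Since the passage above ties naive set theory to logical operators, intensional definitions, and collection datatypes in programming, a short illustration may help. The following is a minimal Python sketch; the month names and the summer_months grouping are invented for the example and are not taken from the text.

```python
# Minimal sketch: the set operations mentioned above, as they appear in a
# programming language. The month data below are chosen purely for illustration.
months = {"January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"}

# Intensional (rule-based) definition: "months starting with the letter A".
starts_with_a = {m for m in months if m.startswith("A")}

# Extensional definition: listing elements explicitly (an arbitrary example set).
summer_months = {"June", "July", "August"}

print(starts_with_a & summer_months)   # intersection, the logical AND
print(starts_with_a | summer_months)   # union, the logical OR
print(months - starts_with_a)          # complement within months, the logical NOT
print(summer_months <= months)         # subset test
```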
Mathematics
Discrete mathematics
null
27558
https://en.wikipedia.org/wiki/Salt%20%28chemistry%29
Salt (chemistry)
In chemistry, a salt or ionic compound is a chemical compound consisting of an assembly of positively charged ions (cations) and negatively charged ions (anions), which results in a compound with no net electric charge (electrically neutral). The constituent ions are held together by electrostatic forces termed ionic bonds. The component ions in a salt can be either inorganic, such as chloride (Cl−), or organic, such as acetate (). Each ion can be either monatomic (termed simple ion), such as sodium (Na+) and chloride (Cl−) in sodium chloride, or polyatomic, such as ammonium () and carbonate () ions in ammonium carbonate. Salts containing basic ions hydroxide (OH−) or oxide (O2−) are classified as bases, such as sodium hydroxide and potassium oxide. Individual ions within a salt usually have multiple near neighbours, so they are not considered to be part of molecules, but instead part of a continuous three-dimensional network. Salts usually form crystalline structures when solid. Salts composed of small ions typically have high melting and boiling points, and are hard and brittle. As solids they are almost always electrically insulating, but when melted or dissolved they become highly conductive, because the ions become mobile. Some salts have large cations, large anions, or both. In terms of their properties, such species often are more similar to organic compounds. History of discovery In 1913 the structure of sodium chloride was determined by William Henry Bragg and his son William Lawrence Bragg. This revealed that there were six equidistant nearest-neighbours for each atom, demonstrating that the constituents were not arranged in molecules or finite aggregates, but instead as a network with long-range crystalline order. Many other inorganic compounds were also found to have similar structural features. These compounds were soon described as being constituted of ions rather than neutral atoms, but proof of this hypothesis was not found until the mid-1920s, when X-ray reflection experiments (which detect the density of electrons), were performed. Principal contributors to the development of a theoretical treatment of ionic crystal structures were Max Born, Fritz Haber, Alfred Landé, Erwin Madelung, Paul Peter Ewald, and Kazimierz Fajans. Born predicted crystal energies based on the assumption of ionic constituents, which showed good correspondence to thermochemical measurements, further supporting the assumption. Formation Many metals such as the alkali metals react directly with the electronegative halogens gases to form salts. Salts form upon evaporation of their solutions. Once the solution is supersaturated and the solid compound nucleates. This process occurs widely in nature and is the means of formation of the evaporite minerals. Insoluble salts can be precipitated by mixing two solutions, one with the cation and one with the anion in it. Because all solutions are electrically neutral, the two solutions mixed must also contain counterions of the opposite charges. To ensure that these do not contaminate the precipitated salt, it is important to ensure they do not also precipitate. If the two solutions have hydrogen ions and hydroxide ions as the counterions, they will react with one another in what is called an acid–base reaction or a neutralization reaction to form water. Alternately the counterions can be chosen to ensure that even when combined into a single solution they will remain soluble as spectator ions. 
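Whether mixing two solutions actually yields a precipitate is commonly judged by comparing the ion product with the solubility product of the candidate salt. The sketch below illustrates this for a sparingly soluble 1:1 salt such as lead(II) sulfate; the Ksp figure is an approximate literature value recalled from memory and the concentrations are invented, so both are illustrative only.

```python
# Predict precipitation of a 1:1 salt by comparing the ion product Q with Ksp.
# Ksp below is an approximate value for lead(II) sulfate; concentrations are invented.
K_SP_PBSO4 = 2.5e-8   # mol^2 L^-2, approximate

def will_precipitate(cation_molarity: float, anion_molarity: float, k_sp: float) -> bool:
    """Return True when the ion product exceeds the solubility product."""
    ion_product = cation_molarity * anion_molarity
    return ion_product > k_sp

print(will_precipitate(1e-2, 1e-2, K_SP_PBSO4))   # 1e-4 > 2.5e-8 -> True
print(will_precipitate(1e-5, 1e-5, K_SP_PBSO4))   # 1e-10 < 2.5e-8 -> False
```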
If the solvent is water in either the evaporation or precipitation method of formation, in many cases the ionic crystal formed also includes water of crystallization, so the product is known as a hydrate, and can have very different chemical properties compared to the anhydrous material. Molten salts will solidify on cooling to below their freezing point. This is sometimes used for the solid-state synthesis of complex salts from solid reactants, which are first melted together. In other cases, the solid reactants do not need to be melted, but instead can react through a solid-state reaction route. In this method, the reactants are repeatedly finely ground into a paste and then heated to a temperature where the ions in neighboring reactants can diffuse together during the time the reactant mixture remains in the oven. Other synthetic routes use a solid precursor with the correct stoichiometric ratio of non-volatile ions, which is heated to drive off other species. In some reactions between highly reactive metals (usually from Group 1 or Group 2) and highly electronegative halogen gases, or water, the atoms can be ionized by electron transfer, a process thermodynamically understood using the Born–Haber cycle. Salts are formed by salt-forming reactions A base and an acid, e.g., NH3 + HCl → NH4Cl A metal and an acid, e.g., Mg + H2SO4 → MgSO4 + H2 A metal and a non-metal, e.g., Ca + Cl2 → CaCl2 A base and an acid anhydride, e.g., 2 NaOH + Cl2O → 2 NaClO + H2O An acid and a base anhydride, e.g., 2 HNO3 + Na2O → 2 NaNO3 + H2O In the salt metathesis reaction where two different salts are mixed in water, their ions recombine, and the new salt is insoluble and precipitates. For example: Pb(NO3)2 + Na2SO4 → PbSO4↓ + 2 NaNO3 Bonding Ions in salts are primarily held together by the electrostatic forces between the charge distribution of these bodies, and in particular, the ionic bond resulting from the long-ranged Coulomb attraction between the net negative charge of the anions and net positive charge of the cations. There is also a small additional attractive force from van der Waals interactions which contributes only around 1–2% of the cohesive energy for small ions. When a pair of ions comes close enough for their outer electron shells (most simple ions have closed shells) to overlap, a short-ranged repulsive force occurs, due to the Pauli exclusion principle. The balance between these forces leads to a potential energy well with minimum energy when the nuclei are separated by a specific equilibrium distance. If the electronic structure of the two interacting bodies is affected by the presence of one another, covalent interactions (non-ionic) also contribute to the overall energy of the compound formed. Salts are rarely purely ionic, i.e. held together only by electrostatic forces. The bonds between even the most electronegative/electropositive pairs such as those in caesium fluoride exhibit a small degree of covalency. Conversely, covalent bonds between unlike atoms often exhibit some charge separation and can be considered to have a partial ionic character. The circumstances under which a compound will have ionic or covalent character can typically be understood using Fajans' rules, which use only charges and the sizes of each ion. According to these rules, compounds with the most ionic character will have large positive ions with a low charge, bonded to a small negative ion with a high charge. 
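The Born–Haber cycle mentioned above can be illustrated with a back-of-the-envelope sum for sodium chloride. The step enthalpies below are approximate textbook figures quoted from memory (kJ/mol), so the sketch is indicative rather than authoritative.

```python
# Rough Born-Haber cycle for NaCl; all values are approximate textbook figures in kJ/mol.
sublimation_na    = +107   # Na(s) -> Na(g)
ionization_na     = +496   # Na(g) -> Na+(g) + e-
half_dissoc_cl2   = +122   # 1/2 Cl2(g) -> Cl(g)
electron_affinity = -349   # Cl(g) + e- -> Cl-(g)
lattice_enthalpy  = -787   # Na+(g) + Cl-(g) -> NaCl(s)

delta_h_formation = (sublimation_na + ionization_na + half_dissoc_cl2
                     + electron_affinity + lattice_enthalpy)
print(f"Estimated ΔHf(NaCl) ≈ {delta_h_formation} kJ/mol")   # about -411 kJ/mol
```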
More generally HSAB theory can be applied, whereby the compounds with the most ionic character are those consisting of hard acids and hard bases: small, highly charged ions with a high difference in electronegativities between the anion and cation. This difference in electronegativities means that the charge separation, and resulting dipole moment, is maintained even when the ions are in contact (the excess electrons on the anions are not transferred or polarized to neutralize the cations). Although chemists classify idealized bond types as being ionic or covalent, the existence of additional types such as hydrogen bonds and metallic bonds, for example, has led some philosophers of science to suggest that alternative approaches to understanding bonding are required. This could be by applying quantum mechanics to calculate binding energies. Structure The lattice energy is the summation of the interaction of all sites with all other sites. For unpolarizable spherical ions, only the charges and distances are required to determine the electrostatic interaction energy. For any particular ideal crystal structure, all distances are geometrically related to the smallest internuclear distance. So for each possible crystal structure, the total electrostatic energy can be related to the electrostatic energy of unit charges at the nearest neighboring distance by a multiplicative constant called the Madelung constant that can be efficiently computed using an Ewald sum. When a reasonable form is assumed for the additional repulsive energy, the total lattice energy can be modelled using the Born–Landé equation, the Born–Mayer equation, or in the absence of structural information, the Kapustinskii equation. Using an even simpler approximation of the ions as impenetrable hard spheres, the arrangement of anions in these systems are often related to close-packed arrangements of spheres, with the cations occupying tetrahedral or octahedral interstices. Depending on the stoichiometry of the salt, and the coordination (principally determined by the radius ratio) of cations and anions, a variety of structures are commonly observed, and theoretically rationalized by Pauling's rules. In some cases, the anions take on a simple cubic packing and the resulting common structures observed are: Some ionic liquids, particularly with mixtures of anions or cations, can be cooled rapidly enough that there is not enough time for crystal nucleation to occur, so an ionic glass is formed (with no long-range order). Defects Within any crystal, there will usually be some defects. To maintain electroneutrality of the crystals, defects that involve loss of a cation will be associated with loss of an anion, i.e. these defects come in pairs. Frenkel defects consist of a cation vacancy paired with a cation interstitial and can be generated anywhere in the bulk of the crystal, occurring most commonly in compounds with a low coordination number and cations that are much smaller than the anions. Schottky defects consist of one vacancy of each type, and are generated at the surfaces of a crystal, occurring most commonly in compounds with a high coordination number and when the anions and cations are of similar size. If the cations have multiple possible oxidation states, then it is possible for cation vacancies to compensate for electron deficiencies on cation sites with higher oxidation numbers, resulting in a non-stoichiometric compound. 
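The Born–Landé equation mentioned in the structure discussion above can be evaluated numerically once a Madelung constant, nearest-neighbour distance, and Born exponent are chosen. A minimal sketch for rock-salt-type NaCl follows; the structural parameters are approximate values quoted from memory and should be read as illustrative.

```python
import math

# Born-Landé estimate of the lattice energy of NaCl (rock-salt structure).
# Physical constants in SI units; r0 and n are approximate literature values.
N_A  = 6.022e23    # Avogadro constant, 1/mol
E_CH = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m

madelung = 1.7476  # Madelung constant for the rock-salt structure
z_cation, z_anion = 1, 1
r0 = 282e-12       # nearest-neighbour distance, m
n_born = 9         # Born exponent

e_lattice = (-(N_A * madelung * z_cation * z_anion * E_CH**2)
             / (4 * math.pi * EPS0 * r0) * (1 - 1 / n_born))
print(f"Born-Landé lattice energy ≈ {e_lattice / 1000:.0f} kJ/mol")   # roughly -760 kJ/mol
```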
Another non-stoichiometric possibility is the formation of an F-center, a free electron occupying an anion vacancy. When the compound has three or more ionic components, even more defect types are possible. All of these point defects can be generated via thermal vibrations and have an equilibrium concentration. Because they are energetically costly but entropically beneficial, they occur in greater concentration at higher temperatures. Once generated, these pairs of defects can diffuse mostly independently of one another, by hopping between lattice sites. This defect mobility is the source of most transport phenomena within an ionic crystal, including diffusion and solid state ionic conductivity. When vacancies collide with interstitials (Frenkel), they can recombine and annihilate one another. Similarly, vacancies are removed when they reach the surface of the crystal (Schottky). Defects in the crystal structure generally expand the lattice parameters, reducing the overall density of the crystal. Defects also result in ions in distinctly different local environments, which causes them to experience a different crystal-field symmetry, especially in the case of different cations exchanging lattice sites. This results in a different splitting of d-electron orbitals, so that the optical absorption (and hence colour) can change with defect concentration. Properties Acidity/basicity Ionic compounds containing hydrogen ions (H+) are classified as acids, and those containing electropositive cations and basic anions ions hydroxide (OH−) or oxide (O2−) are classified as bases. Other ionic compounds are known as salts and can be formed by acid–base reactions. Salts that produce hydroxide ions when dissolved in water are called alkali salts, and salts that produce hydrogen ions when dissolved in water are called acid salts. If the compound is the result of a reaction between a strong acid and a weak base, the result is an acid salt. If it is the result of a reaction between a strong base and a weak acid, the result is a base salt. If it is the result of a reaction between a strong acid and a strong base, the result is a neutral salt. Weak acids reacted with weak bases can produce ionic compounds with both the conjugate base ion and conjugate acid ion, such as ammonium acetate. Some ions are classed as amphoteric, being able to react with either an acid or a base. This is also true of some compounds with ionic character, typically oxides or hydroxides of less-electropositive metals (so the compound also has significant covalent character), such as zinc oxide, aluminium hydroxide, aluminium oxide and lead(II) oxide. Melting and boiling points Electrostatic forces between particles are strongest when the charges are high, and the distance between the nuclei of the ions is small. In such cases, the compounds generally have very high melting and boiling points and a low vapour pressure. Trends in melting points can be even better explained when the structure and ionic size ratio is taken into account. Above their melting point, salts melt and become molten salts (although some salts such as aluminium chloride and iron(III) chloride show molecule-like structures in the liquid phase). Inorganic compounds with simple ions typically have small ions, and thus have high melting points, so are solids at room temperature. Some substances with larger ions, however, have a melting point below or near room temperature (often defined as up to 100 °C), and are termed ionic liquids. 
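The statement above that point defects are energetically costly but entropically favoured, and therefore more numerous at higher temperature, follows a Boltzmann form: for Schottky pairs the equilibrium fraction is roughly exp(−ΔH/2kT). The formation enthalpy used below is an illustrative number, not a value from the text.

```python
import math

# Equilibrium fraction of Schottky defect pairs, n/N ≈ exp(-ΔH / (2 k T)).
K_B_EV = 8.617e-5   # Boltzmann constant, eV/K
delta_h = 2.0       # pair formation enthalpy in eV (illustrative)

for temperature in (300, 600, 900, 1200):
    fraction = math.exp(-delta_h / (2 * K_B_EV * temperature))
    print(f"T = {temperature:4d} K  ->  defect fraction ≈ {fraction:.2e}")
```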
Ions in ionic liquids often have uneven charge distributions, or bulky substituents like hydrocarbon chains, which also play a role in determining the strength of the interactions and propensity to melt. Even when the local structure and bonding of an ionic solid is disrupted sufficiently to melt it, there are still strong long-range electrostatic forces of attraction holding the liquid together and preventing ions boiling to form a gas phase. This means that even room temperature ionic liquids have low vapour pressures, and require substantially higher temperatures to boil. Boiling points exhibit similar trends to melting points in terms of the size of ions and strength of other interactions. When vapourized, the ions are still not freed of one another. For example, in the vapour phase sodium chloride exists as diatomic "molecules". Brittleness Most salts are very brittle. Once they reach the limit of their strength, they cannot deform malleably, because the strict alignment of positive and negative ions must be maintained. Instead the material undergoes fracture via cleavage. As the temperature is elevated (usually close to the melting point) a ductile–brittle transition occurs, and plastic flow becomes possible by the motion of dislocations. Compressibility The compressibility of a salt is strongly determined by its structure, and in particular the coordination number. For example, halides with the caesium chloride structure (coordination number 8) are less compressible than those with the sodium chloride structure (coordination number 6), and less again than those with a coordination number of 4. Solubility When simple salts dissolve, they dissociate into individual ions, which are solvated and dispersed throughout the resulting solution. Salts do not exist in solution. In contrast, molecular compounds, which includes most organic compounds, remain intact in solution. The solubility of salts is highest in polar solvents (such as water) or ionic liquids, but tends to be low in nonpolar solvents (such as petrol/gasoline). This contrast is principally because the resulting ion–dipole interactions are significantly stronger than ion-induced dipole interactions, so the heat of solution is higher. When the oppositely charged ions in the solid ionic lattice are surrounded by the opposite pole of a polar molecule, the solid ions are pulled out of the lattice and into the liquid. If the solvation energy exceeds the lattice energy, the negative net enthalpy change of solution provides a thermodynamic drive to remove ions from their positions in the crystal and dissolve in the liquid. In addition, the entropy change of solution is usually positive for most solid solutes like salts, which means that their solubility increases when the temperature increases. There are some unusual salts such as cerium(III) sulfate, where this entropy change is negative, due to extra order induced in the water upon solution, and the solubility decreases with temperature. The lattice energy, the cohesive forces between these ions within a solid, determines the solubility. The solubility is dependent on how well each ion interacts with the solvent, so certain patterns become apparent. For example, salts of sodium, potassium and ammonium are usually soluble in water. Notable exceptions include ammonium hexachloroplatinate and potassium cobaltinitrite. Most nitrates and many sulfates are water-soluble. 
Exceptions include barium sulfate, calcium sulfate (sparingly soluble), and lead(II) sulfate, where the 2+/2− pairing leads to high lattice energies. For similar reasons, most metal carbonates are not soluble in water. Some soluble carbonate salts are: sodium carbonate, potassium carbonate and ammonium carbonate. Strength Strong salts or strong electrolyte salts are chemical salts composed of strong electrolytes. These salts dissociate completely or almost completely in water. They are generally odorless and nonvolatile. Strong salts start with Na__, K__, NH4__, or they end with __NO3, __ClO4, or __CH3COO. Most group 1 and 2 metals form strong salts. Strong salts are especially useful when creating conductive compounds as their constituent ions allow for greater conductivity. Weak salts or weak electrolyte salts are composed of weak electrolytes. These salts do not dissociate well in water. They are generally more volatile than strong salts. They may be similar in odor to the acid or base they are derived from. For example, sodium acetate, CH3COONa, smells similar to acetic acid CH3COOH. Electrical conductivity Salts are characteristically insulators. Although they contain charged atoms or clusters, these materials do not typically conduct electricity to any significant extent when the substance is solid. In order to conduct, the charged particles must be mobile rather than stationary in a crystal lattice. This is achieved to some degree at high temperatures when the defect concentration increases the ionic mobility and solid state ionic conductivity is observed. When the salts are dissolved in a liquid or are melted into a liquid, they can conduct electricity because the ions become completely mobile. For this reason, molten salts and solutions containing dissolved salts (e.g., sodium chloride in water) can be used as electrolytes. This conductivity gain upon dissolving or melting is sometimes used as a defining characteristic of salts. In some unusual salts: fast-ion conductors, and ionic glasses, one or more of the ionic components has a significant mobility, allowing conductivity even while the material as a whole remains solid. This is often highly temperature dependent, and may be the result of either a phase change or a high defect concentration. These materials are used in all solid-state supercapacitors, batteries, and fuel cells, and in various kinds of chemical sensors. Colour The colour of a salt is often different from the colour of an aqueous solution containing the constituent ions, or the hydrated form of the same compound. The anions in compounds with bonds with the most ionic character tend to be colorless (with an absorption band in the ultraviolet part of the spectrum). In compounds with less ionic character, their color deepens through yellow, orange, red, and black (as the absorption band shifts to longer wavelengths into the visible spectrum). The absorption band of simple cations shifts toward a shorter wavelength when they are involved in more covalent interactions. This occurs during hydration of metal ions, so colorless anhydrous salts with an anion absorbing in the infrared can become colorful in solution. Salts exist in many different colors, which arise either from their constituent anions, cations or solvates. For example: sodium chromate is made yellow by the chromate ion . potassium dichromate is made red-orange by the dichromate ion . cobalt(II) nitrate hexahydrate is made red by the chromophore of hydrated cobalt(II) . 
copper(II) sulfate pentahydrate is made blue by the hydrated copper(II) cation. potassium permanganate is made violet by the permanganate anion . nickel(II) chloride hexahydrate is made green by the hydrated nickel(II) chloride . sodium chloride NaCl and magnesium sulfate heptahydrate are colorless or white because the constituent cations and anions do not absorb light in the part of the spectrum that is visible to humans. Some minerals are salts, some of which are soluble in water. Similarly, inorganic pigments tend not to be salts, because insolubility is required for fastness. Some organic dyes are salts, but they are virtually insoluble in water. Taste and odor Salts can elicit all five basic tastes, e.g., salty (sodium chloride), sweet (lead diacetate, which will cause lead poisoning if ingested), sour (potassium bitartrate), bitter (magnesium sulfate), and umami or savory (monosodium glutamate). Salts of strong acids and strong bases ("strong salts") are non-volatile and often odorless, whereas salts of either weak acids or weak bases ("weak salts") may smell like the conjugate acid (e.g., acetates like acetic acid (vinegar) and cyanides like hydrogen cyanide (almonds)) or the conjugate base (e.g., ammonium salts like ammonia) of the component ions. That slow, partial decomposition is usually accelerated by the presence of water, since hydrolysis is the other half of the reversible reaction equation of formation of weak salts. Uses Salts have long had a wide variety of uses and applications. Many minerals are ionic. Humans have processed common salt (sodium chloride) for over 8000 years, using it first as a food seasoning and preservative, and now also in manufacturing, agriculture, water conditioning, for de-icing roads, and many other uses. Many salts are so widely used in society that they go by common names unrelated to their chemical identity. Examples of this include borax, calomel, milk of magnesia, muriatic acid, oil of vitriol, saltpeter, and slaked lime. Soluble salts can easily be dissolved to provide electrolyte solutions. This is a simple way to control the concentration and ionic strength. The concentration of solutes affects many colligative properties, including increasing the osmotic pressure, and causing freezing-point depression and boiling-point elevation. Because the solutes are charged ions they also increase the electrical conductivity of the solution. The increased ionic strength reduces the thickness of the electrical double layer around colloidal particles, and therefore the stability of emulsions and suspensions. The chemical identity of the ions added is also important in many uses. For example, fluoride containing compounds are dissolved to supply fluoride ions for water fluoridation. Solid salts have long been used as paint pigments, and are resistant to organic solvents, but are sensitive to acidity or basicity. Since 1801 pyrotechnicians have described and widely used metal-containing salts as sources of colour in fireworks. Under intense heat, the electrons in the metal ions or small molecules can be excited. These electrons later return to lower energy states, and release light with a colour spectrum characteristic of the species present. In chemical synthesis, salts are often used as precursors for high-temperature solid-state synthesis. Many metals are geologically most abundant as salts within ores. 
To obtain the elemental materials, these ores are processed by smelting or electrolysis, in which redox reactions occur (often with a reducing agent such as carbon) such that the metal ions gain electrons to become neutral atoms. Nomenclature According to the nomenclature recommended by IUPAC, salts are named according to their composition, not their structure. In the most simple case of a binary salt with no possible ambiguity about the charges and thus the stoichiometry, the common name is written using two words. The name of the cation (the unmodified element name for monatomic cations) comes first, followed by the name of the anion. For example, MgCl2 is named magnesium chloride, and Na2SO4 is named sodium sulfate (, sulfate, is an example of a polyatomic ion). To obtain the empirical formula from these names, the stoichiometry can be deduced from the charges on the ions, and the requirement of overall charge neutrality. If there are multiple different cations and/or anions, multiplicative prefixes (di-, tri-, tetra-, ...) are often required to indicate the relative compositions, and cations then anions are listed in alphabetical order. For example, KMgCl3 is named magnesium potassium trichloride to distinguish it from K2MgCl4, magnesium dipotassium tetrachloride (note that in both the empirical formula and the written name, the cations appear in alphabetical order, but the order varies between them because the symbol for potassium is K). When one of the ions already has a multiplicative prefix within its name, the alternate multiplicative prefixes (bis-, tris-, tetrakis-, ...) are used. For example, Ba(BrF4)2 is named barium bis(tetrafluoridobromate). Compounds containing one or more elements which can exist in a variety of charge/oxidation states will have a stoichiometry that depends on which oxidation states are present, to ensure overall neutrality. This can be indicated in the name by specifying either the oxidation state of the elements present, or the charge on the ions. Because of the risk of ambiguity in allocating oxidation states, IUPAC prefers direct indication of the ionic charge numbers. These are written as an arabic integer followed by the sign (... , 2−, 1−, 1+, 2+, ...) in parentheses directly after the name of the cation (without a space separating them). For example, FeSO4 is named iron(2+) sulfate (with the 2+ charge on the Fe2+ ions balancing the 2− charge on the sulfate ion), whereas Fe2(SO4)3 is named iron(3+) sulfate (because the two iron ions in each formula unit each have a charge of 3+, to balance the 2− on each of the three sulfate ions). Stock nomenclature, still in common use, writes the oxidation number in Roman numerals (... , −II, −I, 0, I, II, ...). So the examples given above would be named iron(II) sulfate and iron(III) sulfate respectively. For simple ions the ionic charge and the oxidation number are identical, but for polyatomic ions they often differ. For example, the uranyl(2+) ion, , has uranium in an oxidation state of +6, so would be called a dioxouranium(VI) ion in Stock nomenclature. An even older naming system for metal cations, also still widely used, appended the suffixes -ous and -ic to the Latin root of the name, to give special names for the low and high oxidation states. For example, this scheme uses "ferrous" and "ferric", for iron(II) and iron(III) respectively, so the examples given above were classically named ferrous sulfate and ferric sulfate. 
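The rule above, that the stoichiometry of a simple salt follows from the ionic charges and the requirement of overall charge neutrality, is easy to mechanize. The helper below is hypothetical and written only for illustration.

```python
from math import gcd

def neutral_ratio(cation_charge: int, anion_charge: int) -> tuple:
    """Smallest (cations, anions) count giving zero net charge."""
    anion_charge = abs(anion_charge)
    d = gcd(cation_charge, anion_charge)
    return anion_charge // d, cation_charge // d

print(neutral_ratio(2, -1))   # Mg2+ with Cl-    -> (1, 2), i.e. MgCl2
print(neutral_ratio(1, -2))   # Na+ with SO4^2-  -> (2, 1), i.e. Na2SO4
print(neutral_ratio(3, -2))   # Fe3+ with SO4^2- -> (2, 3), i.e. Fe2(SO4)3
```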
Common salt-forming cations include: ammonium (NH4+), calcium (Ca2+), iron (Fe2+ and Fe3+), magnesium (Mg2+), potassium (K+), pyridinium (C5H5NH+), quaternary ammonium (NR4+, R being an alkyl group or an aryl group), sodium (Na+), and copper (Cu2+). Common salt-forming anions (parent acids in parentheses where available) include: acetate, CH3COO− (acetic acid); carbonate, CO32− (carbonic acid); chloride, Cl− (hydrochloric acid); citrate, C6H5O73− (citric acid); cyanide, CN− (hydrocyanic acid); fluoride, F− (hydrofluoric acid); nitrate, NO3− (nitric acid); nitrite, NO2− (nitrous acid); oxide, O2− (water); phosphate, PO43− (phosphoric acid); and sulfate, SO42− (sulfuric acid). Salts with varying numbers of hydrogen atoms replaced by cations as compared to their parent acid can be referred to as monobasic, dibasic, or tribasic, identifying that one, two, or three hydrogen atoms have been replaced; polybasic salts refer to those with more than one hydrogen atom replaced. Examples include: sodium phosphate monobasic (NaH2PO4), sodium phosphate dibasic (Na2HPO4), and sodium phosphate tribasic (Na3PO4). Non-salt Zwitterion Zwitterions contain an anionic and a cationic centre in the same molecule, but are not considered salts. Examples of zwitterions are amino acids, many metabolites, peptides, and proteins.
Physical sciences
Salts
null
27573
https://en.wikipedia.org/wiki/Superfluid%20helium-4
Superfluid helium-4
Superfluid helium-4 (helium II or He-II) is the superfluid form of helium-4, an isotope of the element helium. A superfluid is a state of matter in which matter behaves like a fluid with zero viscosity. The substance, which resembles other liquids such as helium I (conventional, non-superfluid liquid helium), flows without friction past any surface, which allows it to continue to circulate over obstructions and through pores in containers which hold it, subject only to its own inertia. The formation of the superfluid is a manifestation of the formation of a Bose–Einstein condensate of helium atoms. This condensation occurs in liquid helium-4 at a far higher temperature (2.17 K) than it does in helium-3 (2.5 mK) because each atom of helium-4 is a boson particle, by virtue of its zero spin. Helium-3, however, is a fermion particle, which can form bosons only by pairing with itself at much lower temperatures, in a weaker process that is similar to the electron pairing in superconductivity. History Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena, the superfluidity effect was discovered by Pyotr Kapitsa and John F. Allen, and Don Misener in 1937. Onnes possibly observed the superfluid phase transition on August 2, 1911, the same day that he observed superconductivity in mercury. It has since been described through phenomenological and microscopic theories. In the 1950s, Hall and Vinen performed experiments establishing the existence of quantized vortex lines in superfluid helium. In the 1960s, Rayfield and Reif established the existence of quantized vortex rings. Packard has observed the intersection of vortex lines with the free surface of the fluid, and Avenel and Varoquaux have studied the Josephson effect in superfluid helium-4. In 2006, a group at the University of Maryland visualized quantized vortices by using small tracer particles of solid hydrogen. In the early 2000s, physicists created a Fermionic condensate from pairs of ultra-cold fermionic atoms. Under certain conditions, fermion pairs form diatomic molecules and undergo Bose–Einstein condensation. At the other limit, the fermions (most notably superconducting electrons) form Cooper pairs which also exhibit superfluidity. This work with ultra-cold atomic gases has allowed scientists to study the region in between these two extremes, known as the BEC-BCS crossover. Supersolids may also have been discovered in 2004 by physicists at Penn State University. When helium-4 is cooled below about 200 mK under high pressures, a fraction (≈1%) of the solid appears to become superfluid. By quench cooling or lengthening the annealing time, thus increasing or decreasing the defect density respectively, it was shown, via torsional oscillator experiment, that the supersolid fraction could be made to range from 20% to completely non-existent. This suggested that the supersolid nature of helium-4 is not intrinsic to helium-4 but a property of helium-4 and disorder. Some emerging theories posit that the supersolid signal observed in helium-4 was actually an observation of either a superglass state or intrinsically superfluid grain boundaries in the helium-4 crystal. Applications Recently in the field of chemistry, superfluid helium-4 has been successfully used in spectroscopic techniques as a quantum solvent. 
Referred to as superfluid helium droplet spectroscopy (SHeDS), it is of great interest in studies of gas molecules, as a single molecule solvated in a superfluid medium allows a molecule to have effective rotational freedom, allowing it to behave similarly to how it would in the "gas" phase. Droplets of superfluid helium also have a characteristic temperature of about 0.4 K which cools the solvated molecule(s) to its ground or nearly ground rovibronic state. Superfluids are also used in high-precision devices such as gyroscopes, which allow the measurement of some theoretically predicted gravitational effects (for an example, see Gravity Probe B). The Infrared Astronomical Satellite IRAS, launched in January 1983 to gather infrared data was cooled by 73 kilograms of superfluid helium, maintaining a temperature of . When used in conjunction with helium-3, temperatures as low as 40 mK are routinely achieved in extreme low temperature experiments. The helium-3, in liquid state at 3.2 K, can be evaporated into the superfluid helium-4, where it acts as a gas due to the latter's properties as a Bose–Einstein condensate. This evaporation pulls energy from the overall system, which can be pumped out in a way completely analogous to normal refrigeration techniques. (See dilution refrigerator) Superfluid-helium technology is used to extend the temperature range of cryocoolers to lower temperatures. So far the limit is 1.19 K, but there is a potential to reach 0.7 K. Properties Superfluids, such as helium-4 below the lambda point (known, for simplicity, as helium II), exhibit many unusual properties. A superfluid acts as if it were a mixture of a normal component, with all the properties of a normal fluid, and a superfluid component. The superfluid component has zero viscosity and zero entropy. Application of heat to a spot in superfluid helium results in a flow of the normal component which takes care of the heat transport at relatively high velocity (up to 20 cm/s) which leads to a very high effective thermal conductivity. Film flow Many ordinary liquids, like alcohol or petroleum, creep up solid walls, driven by their surface tension. Liquid helium also has this property, but, in the case of He-II, the flow of the liquid in the layer is not restricted by its viscosity but by a critical velocity which is about 20 cm/s. This is a fairly high velocity so superfluid helium can flow relatively easily up the wall of containers, over the top, and down to the same level as the surface of the liquid inside the container, in a siphon effect. It was, however, observed, that the flow through nanoporous membrane becomes restricted if the pore diameter is less than 0.7 nm (i.e. roughly three times the classical diameter of helium atom), suggesting the unusual hydrodynamic properties of He arise at larger scale than in the classical liquid helium. Rotation Another fundamental property becomes visible if a superfluid is placed in a rotating container. Instead of rotating uniformly with the container, the rotating state consists of quantized vortices. That is, when the container is rotated at speeds below the first critical angular velocity, the liquid remains perfectly stationary. Once the first critical angular velocity is reached, the superfluid will form a vortex. The vortex strength is quantized, that is, a superfluid can only spin at certain "allowed" values. Rotation in a normal fluid, like water, is not quantized. 
If the rotation speed is increased, more and more quantized vortices are formed, which arrange themselves in regular patterns similar to the Abrikosov lattice in a superconductor. Comparison with helium-3 Although the phenomenologies of the superfluid states of helium-4 and helium-3 are very similar, the microscopic details of the transitions are very different. Helium-4 atoms are bosons, and their superfluidity can be understood in terms of the Bose–Einstein statistics that they obey. Specifically, the superfluidity of helium-4 can be regarded as a consequence of Bose–Einstein condensation in an interacting system. On the other hand, helium-3 atoms are fermions, and the superfluid transition in this system is described by a generalization of the BCS theory of superconductivity. In it, Cooper pairing takes place between atoms rather than electrons, and the attractive interaction between them is mediated by spin fluctuations rather than phonons. (See fermion condensate.) A unified description of superconductivity and superfluidity is possible in terms of gauge symmetry breaking. Macroscopic theory Thermodynamics Figure 1 is the phase diagram of 4He. It is a pressure-temperature (p-T) diagram indicating the solid and liquid regions separated by the melting curve (between the liquid and solid state) and the liquid and gas region, separated by the vapor-pressure line. The latter ends in the critical point, where the difference between gas and liquid disappears. The diagram shows the remarkable property that 4He is liquid even at absolute zero; 4He is solid only at pressures above 25 bar. Figure 1 also shows the λ-line. This is the line that separates the two fluid regions in the phase diagram indicated by He-I and He-II. In the He-I region the helium behaves like a normal fluid; in the He-II region the helium is superfluid. The name lambda-line comes from the specific heat – temperature plot, which has the shape of the Greek letter λ. See figure 2, which shows a peak at 2.172 K, the so-called λ-point of 4He. Below the lambda line the liquid can be described by the so-called two-fluid model. It behaves as if it consists of two components: a normal component, which behaves like a normal fluid, and a superfluid component with zero viscosity and zero entropy. The ratios of the respective densities ρn/ρ and ρs/ρ, with ρn (ρs) the density of the normal (superfluid) component and ρ the total density, depend on temperature and are represented in figure 3. By lowering the temperature, the fraction of the superfluid density increases from zero at Tλ to one at zero kelvin. Below 1 K the helium is almost completely superfluid. It is possible to create density waves of the normal component (and hence of the superfluid component, since ρn + ρs = constant) which are similar to ordinary sound waves. This effect is called second sound. Due to the temperature dependence of ρn (figure 3) these waves in ρn are also temperature waves. Superfluid hydrodynamics The equation of motion for the superfluid component, in a somewhat simplified form, is given by Newton's law F = M dvs/dt (1). The mass M is the molar mass of 4He, and vs is the velocity of the superfluid component. The time derivative is the so-called hydrodynamic derivative, i.e. the rate of increase of the velocity when moving with the fluid. In the case of superfluid 4He in the gravitational field the force is given by F = −∇(μ + Mgz) (2). In this expression μ is the molar chemical potential, g the gravitational acceleration, and z the vertical coordinate.
Thus we get the equation of motion M dvs/dt = −∇(μ + Mgz) (3). Eq. (3) only holds if vs is below a certain critical value, which usually is determined by the diameter of the flow channel. In classical mechanics the force is often the gradient of a potential energy. Eq. (3) shows that, in the case of the superfluid component, the force contains a term due to the gradient of the chemical potential. This is the origin of the remarkable properties of He-II such as the fountain effect. Fountain pressure In order to rewrite Eq. (3) in a more familiar form we use the general formula dμ = Vm dp − Sm dT (4). Here Sm is the molar entropy and Vm the molar volume. With Eq. (4), μ(p,T) can be found by a line integration in the p–T plane. First we integrate from the origin (0,0) to (p,0), so at T = 0. Next we integrate from (p,0) to (p,T), so at constant pressure (see figure 6). In the first integral dT = 0 and in the second dp = 0. With Eq. (4) we obtain μ(p,T) = μ(0,0) + ∫0→p Vm dp′ − ∫0→T Sm dT′ (5). We are interested only in cases where p is small, so that Vm is practically constant and ∫0→p Vm dp′ = Vm0 p (6), where Vm0 is the molar volume of the liquid at T = 0 and p = 0. The other term in Eq. (5) is also written as a product of Vm0 and a quantity pf which has the dimension of pressure: ∫0→T Sm dT′ = Vm0 pf (7). The pressure pf is called the fountain pressure. It can be calculated from the entropy of 4He which, in turn, can be calculated from the heat capacity. For T = Tλ the fountain pressure is equal to 0.692 bar. With a density of liquid helium of 125 kg/m3 and g = 9.8 m/s2 this corresponds with a liquid-helium column of 56 meter height. So, in many experiments, the fountain pressure has a bigger effect on the motion of the superfluid helium than gravity. With Eqs. (6) and (7), Eq. (3) obtains the form M dvs/dt = −Vm0∇(p − pf) − Mg∇z (8). Substitution of ρ0 = M/Vm0 gives dvs/dt = −∇(p/ρ0 + gz − pf/ρ0) (9), with ρ0 the density of liquid 4He at zero pressure and temperature. Eq. (9) shows that the superfluid component is accelerated by gradients in the pressure and in the gravitational field, as usual, but also by a gradient in the fountain pressure. So far pf has only mathematical meaning, but in special experimental arrangements it can show up as a real pressure. Figure 7 shows two vessels both containing He-II. The left vessel is supposed to be at zero kelvin (Tl = 0) and zero pressure (pl = 0). The vessels are connected by a so-called superleak. This is a tube, filled with a very fine powder, so the flow of the normal component is blocked. However, the superfluid component can flow through this superleak without any problem (below a critical velocity of about 20 cm/s). In the steady state dvs/dt = 0, so Eq. (9) implies pl/ρ0 + gzl − pfl/ρ0 = pr/ρ0 + gzr − pfr/ρ0 (10), where the indexes l and r apply to the left and right side of the superleak respectively. In this particular case pl = 0, zl = zr, and pfl = 0 (since Tl = 0). Consequently, pr = pfr. This means that the pressure in the right vessel is equal to the fountain pressure at Tr. In an experiment, arranged as in figure 8, a fountain can be created. The fountain effect is used to drive the circulation of 3He in dilution refrigerators. Heat transport Figure 9 depicts a heat-conduction experiment between two temperatures TH and TL connected by a tube filled with He-II. When heat is applied to the hot end a pressure builds up at the hot end according to Eq. (9). This pressure drives the normal component from the hot end to the cold end according to pH − pL = ηn Zn V̇n (11). Here ηn is the viscosity of the normal component, Zn some geometrical factor, and V̇n the volume flow. The normal flow is balanced by a flow of the superfluid component from the cold to the hot end. At the end sections a normal-to-superfluid conversion takes place and vice versa. So heat is transported, not by heat conduction, but by convection.
This kind of heat transport is very effective, so the thermal conductivity of He-II is very much better than the best materials. The situation is comparable with heat pipes where heat is transported via gas–liquid conversion. The high thermal conductivity of He-II is applied for stabilizing superconducting magnets such as in the Large Hadron Collider at CERN. Microscopic theory Landau two-fluid approach L. D. Landau's phenomenological and semi-microscopic theory of superfluidity of helium-4 earned him the Nobel Prize in physics, in 1962. Assuming that sound waves are the most important excitations in helium-4 at low temperatures, he showed that helium-4 flowing past a wall would not spontaneously create excitations if the flow velocity was less than the sound velocity. In this model, the sound velocity is the "critical velocity" above which superfluidity is destroyed. (Helium-4 actually has a lower flow velocity than the sound velocity, but this model is useful to illustrate the concept.) Landau also showed that the sound wave and other excitations could equilibrate with one another and flow separately from the rest of the helium-4, which is known as the "condensate". From the momentum and flow velocity of the excitations he could then define a "normal fluid" density, which is zero at zero temperature and increases with temperature. At the so-called Lambda temperature, where the normal fluid density equals the total density, the helium-4 is no longer superfluid. To explain the early specific heat data on superfluid helium-4, Landau posited the existence of a type of excitation he called a "roton", but as better data became available he considered that the "roton" was the same as a high momentum version of sound. The Landau theory does not elaborate on the microscopic structure of the superfluid component of liquid helium. The first attempts to create a microscopic theory of the superfluid component itself were done by London and subsequently, Tisza. Other microscopical models have been proposed by different authors. Their main objective is to derive the form of the inter-particle potential between helium atoms in superfluid state from first principles of quantum mechanics. To date, a number of models of this kind have been proposed, including: models with vortex rings, hard-sphere models, and Gaussian cluster theories. Vortex ring model Landau thought that vorticity entered superfluid helium-4 by vortex sheets, but such sheets have since been shown to be unstable. Lars Onsager and, later independently, Feynman showed that vorticity enters by quantized vortex lines. They also developed the idea of quantum vortex rings. Arie Bijl in the 1940s, and Richard Feynman around 1955, developed microscopic theories for the roton, which was shortly observed with inelastic neutron experiments by Palevsky. Later on, Feynman admitted that his model gives only qualitative agreement with experiment. Hard-sphere models The models are based on the simplified form of the inter-particle potential between helium-4 atoms in the superfluid phase. Namely, the potential is assumed to be of the hard-sphere type. In these models the famous Landau (roton) spectrum of excitations is qualitatively reproduced. Gaussian cluster approach This is a two-scale approach which describes the superfluid component of liquid helium-4. It consists of two nested models linked via parametric space. 
The short-wavelength part describes the interior structure of the fluid element using a non-perturbative approach based on the logarithmic Schrödinger equation; it suggests the Gaussian-like behaviour of the element's interior density and interparticle interaction potential. The long-wavelength part is the quantum many-body theory of such elements which deals with their dynamics and interactions. The approach provides a unified description of the phonon, maxon and roton excitations, and has noteworthy agreement with experiment: with one essential parameter to fit one reproduces at high accuracy the Landau roton spectrum, sound velocity and structure factor of superfluid helium-4. This model utilizes the general theory of quantum Bose liquids with logarithmic nonlinearities which is based on introducing a dissipative-type contribution to energy related to the quantum Everett–Hirschman entropy function.
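The Landau picture described above implies a critical velocity set by the minimum of ε(p)/p over the excitation spectrum, which for helium-4 comes from the roton part of the dispersion. The roton gap and momentum used in the sketch below are approximate literature values quoted from memory, so the result is indicative only.

```python
# Landau critical velocity estimated from the roton minimum, v_c ≈ Δ / p0.
K_B  = 1.381e-23    # Boltzmann constant, J/K
HBAR = 1.055e-34    # reduced Planck constant, J*s

roton_gap      = 8.7 * K_B       # Δ, J        (Δ/k_B ≈ 8.7 K, approximate)
roton_momentum = 1.9e10 * HBAR   # p0, kg*m/s  (p0/ħ ≈ 1.9 per ångström, approximate)

v_critical = roton_gap / roton_momentum
print(f"Landau (roton) critical velocity ≈ {v_critical:.0f} m/s")   # of order 60 m/s
```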
Physical sciences
s-Block
Chemistry
27577
https://en.wikipedia.org/wiki/Statistical%20inference
Statistical inference
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference. Introduction Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model. Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following: a point estimate, i.e. a particular value that best approximates some parameter of interest; an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with the probability at the stated confidence level; a credible interval, i.e. a set of values containing, for example, 95% of posterior belief; rejection of a hypothesis; clustering or classification of data points into groups. Models and assumptions Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn. Degree of models/assumptions Statisticians distinguish between three levels of modeling assumptions: Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models. Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal. 
For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling. Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions. Importance of valid models/assumptions Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified. Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. Incorrect assumptions of Normality in the population also invalidates some forms of regression-based inference. The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed. Approximate distributions Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these. With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution: For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance. 
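The claim above, that the normal approximation to the sample mean is already serviceable at modest sample sizes for populations that are not heavy-tailed, can be probed by simulation. The sketch below uses an exponential population and sample sizes chosen for illustration; neither is taken from the text.

```python
import random
import statistics

# Simulate the sampling distribution of the mean for an Exp(1) population and
# compare its spread with the CLT prediction sigma / sqrt(n) (sigma = 1 here).
random.seed(0)

def sample_mean(n: int) -> float:
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

for n in (10, 100, 1000):
    means = [sample_mean(n) for _ in range(2000)]
    observed_sd = statistics.stdev(means)
    predicted_sd = 1.0 / n ** 0.5
    print(f"n = {n:5d}   sd of sample means {observed_sd:.4f}   CLT predicts {predicted_sd:.4f}")
```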
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families). Randomization-based models For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. Statistical inference from randomized studies is also more straightforward than many other situations. In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information. Objective randomization allows properly inductive procedures. Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. However, a good observational study may be better than a bad randomized experiment. The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model. However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical. Model-based analysis of randomized experiments It is standard practice to refer to a statistical model, e.g., a linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. 
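The randomization distribution defined above, obtained by evaluating the test statistic over every assignment the design could have produced, can be made concrete with a small permutation test. The treatment and control values below are invented purely for illustration.

```python
import itertools
import statistics

# Randomization test for a difference in means under a completely randomized design.
treatment = [27.1, 30.4, 28.8, 31.0]   # invented observations
control   = [25.9, 26.3, 27.5, 26.8]   # invented observations

observed_diff = statistics.fmean(treatment) - statistics.fmean(control)
pooled = treatment + control
n_treated = len(treatment)

at_least_as_extreme = 0
total_assignments = 0
for chosen in itertools.combinations(range(len(pooled)), n_treated):
    t_group = [pooled[i] for i in chosen]
    c_group = [pooled[i] for i in range(len(pooled)) if i not in chosen]
    diff = statistics.fmean(t_group) - statistics.fmean(c_group)
    total_assignments += 1
    if diff >= observed_diff:
        at_least_as_extreme += 1

print(f"observed difference in means: {observed_diff:.2f}")
print(f"one-sided randomization p-value: {at_least_as_extreme / total_assignments:.3f}")
```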
Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units. Model-free randomization inference Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations. For example, model-free simple linear regression is based either on: a random design, where the pairs of observations are independent and identically distributed (iid), or a deterministic design, where the variables are deterministic, but the corresponding response variables are random and independent with a common conditional distribution, i.e., , which is independent of the index . In either case, the model-free randomization inference for features of the common conditional distribution relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, , can be consistently estimated via local averaging or local polynomial fitting, under the assumption that is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, . Paradigms for inference Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms. Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm. Frequentist inference This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging. Examples of frequentist inference p-value Confidence interval Null hypothesis significance testing Frequentist inference, objectivity, and decision theory One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach. 
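As a concrete illustration of the frequentist quantities listed above (a p-value and a confidence interval), the following sketch applies a one-sample t procedure to made-up measurements; the data and the null value of 5 are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

data = np.array([5.1, 4.8, 5.6, 5.0, 5.4, 4.9, 5.3, 5.2])   # made-up measurements

# Null hypothesis significance test of H0: population mean = 5.
result = stats.ttest_1samp(data, popmean=5.0)

# 95% confidence interval for the population mean.
mean = data.mean()
sem = stats.sem(data)                     # sample standard error of the mean
ci = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)

print(result.statistic, result.pvalue, ci)
```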
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss. While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'. Bayesian inference The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach. Examples of Bayesian inference Credible interval for interval estimation Bayes factors for model comparison Bayesian inference, subjectivity and decision theory Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision-theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs. Likelihood-based inference Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics through the likelihood function, denoted as , which quantifies the probability of observing the given data , assuming a specific set of parameter values .
In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data. The process of likelihood-based inference usually involves the following steps: Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects. Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters. Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted as , are the maximum likelihood estimates (MLEs). Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping. Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics. Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model. AIC-based inference The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection. AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.) Other paradigms for inference Minimum description length The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. 
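The workflow just described can be sketched in a few lines: simulate data, write the log-likelihood under an assumed normal model, maximize it numerically, and report the AIC. The model, data, and starting values below are illustrative assumptions, not prescriptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.5, size=200)   # simulated data; assumed model: Normal

def neg_log_lik(params):
    """Negative log-likelihood of the data under a Normal(mu, sigma) model."""
    mu, log_sigma = params
    return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

# Maximize the likelihood numerically (the construction and maximization steps above).
fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])

# AIC = 2k - 2 * maximized log-likelihood, here with k = 2 estimated parameters.
aic = 2 * 2 + 2 * fit.fun
print(mu_hat, sigma_hat, aic)
```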
In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining. The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory. Fiducial inference Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. However this argument is the same as that which shows that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities. Structural inference Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference based on group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist. Inference topics The topics below are usually included in the area of statistical inference. Statistical assumptions Statistical decision theory Estimation theory Statistical hypothesis testing Revising opinions in statistics Design of experiments, the analysis of variance, and regression Survey sampling Summarizing statistical data Predictive inference Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations. Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser.
https://en.wikipedia.org/wiki/Statistical%20population
Statistical population
In statistics, a population is a set of similar items or events which is of interest for some question or experiment. A statistical population can be a group of existing objects (e.g. the set of all stars within the Milky Way galaxy) or a hypothetical and potentially infinite group of objects conceived as a generalization from experience (e.g. the set of all possible hands in a game of poker). A common aim of statistical analysis is to produce information about some chosen population. In statistical inference, a subset of the population (a statistical sample) is chosen to represent the population in a statistical analysis. Moreover, the statistical sample must be unbiased and accurately model the population (every unit of the population has an equal chance of selection). The ratio of the size of this statistical sample to the size of the population is called a sampling fraction. It is then possible to estimate the population parameters using the appropriate sample statistics. Mean The population mean, or population expected value, is a measure of the central tendency either of a probability distribution or of a random variable characterized by that distribution. In a discrete probability distribution of a random variable , the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value of and its probability , and then adding all these products together, giving . An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions. For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual—divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples. The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.
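For a small discrete population these definitions can be checked directly; the values and probabilities below are made up for illustration, and the loop illustrates the law of large numbers by drawing ever larger samples.

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete population: possible values and their probabilities (illustrative only).
values = np.array([1, 2, 3, 4])
probs = np.array([0.1, 0.2, 0.3, 0.4])

population_mean = np.sum(values * probs)   # sum of each value times its probability
print(population_mean)                     # 3.0

# Law of large numbers: the sample mean approaches the population mean as n grows.
for n in (10, 1_000, 100_000):
    sample = rng.choice(values, size=n, p=probs)
    print(n, sample.mean())
```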
https://en.wikipedia.org/wiki/Standard%20deviation
Standard deviation
In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in the determination of what constitutes an outlier and what does not. Standard deviation may be abbreviated SD or std dev, and is most commonly represented in mathematical texts and equations by the lowercase Greek letter σ (sigma), for the population standard deviation, or the Latin letter s, for the sample standard deviation. The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. (For a finite population, variance is the average of the squared deviations from the mean.) A useful property of the standard deviation is that, unlike the variance, it is expressed in the same unit as the data. Standard deviation can also be used to calculate standard error for a finite sample, and to determine statistical significance. When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied to those data, or to a modified quantity that is an unbiased estimate of the population standard deviation (the standard deviation of the entire population). Relationship with standard error and statistical significance The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size. For example, a poll's standard error (what is reported as the margin of error of the poll), is the expected standard deviation of the estimated mean if the same poll were to be conducted multiple times. Thus, the standard error estimates the standard deviation of an estimate, which itself measures how much the estimate depends on the particular sample that was taken from the population. In science, it is common to report both the standard deviation of the data (as a summary statistic) and the standard error of the estimate (as a measure of potential error in the findings). By convention, only effects more than two standard errors away from a null expectation are considered "statistically significant", a safeguard against spurious conclusion that is really due to random sampling error. Basic examples Population standard deviation of grades of eight students Suppose that the entire population of interest is eight students in a particular class. For a finite set of numbers, the population standard deviation is found by taking the square root of the average of the squared deviations of the values subtracted from their average value. 
The marks of a class of eight students (that is, a statistical population) are the following eight values: These eight data points have the mean (average) of 5: First, calculate the deviations of each data point from the mean, and square the result of each: The variance is the mean of these values: and the population standard deviation is equal to the square root of the variance: This formula is valid only if the eight values with which we began form the complete population. If the values instead were a random sample drawn from some large parent population (for example, they were 8 students randomly and independently chosen from a class of 2 million), then one divides by instead of in the denominator of the last formula, and the result is In that case, the result of the original formula would be called the sample standard deviation and denoted by instead of Dividing by rather than by gives an unbiased estimate of the variance of the larger parent population. This is known as Bessel's correction. Roughly, the reason for it is that the formula for the sample variance relies on computing differences of observations from the sample mean, and the sample mean itself was constructed to be as close as possible to the observations, so just dividing by n would underestimate the variability. Standard deviation of average height for adult men If the population of interest is approximately normally distributed, the standard deviation provides information on the proportion of observations above or below certain values. For example, the average height for adult men in the United States is about , with a standard deviation of around . This means that most men (about 68%, assuming a normal distribution) have a height within 3 inches of the mean ()one standard deviationand almost all men (about 95%) have a height within of the mean ()two standard deviations. If the standard deviation were zero, then all men would share an identical height of 69 inches. Three standard deviations account for 99.73% of the sample population being studied, assuming the distribution is normal or bell-shaped (see the 68–95–99.7 rule, or the empirical rule, for more information). Definition of population values Let μ be the expected value (the average) of random variable with density : The standard deviation of is defined as which can be shown to equal Using words, the standard deviation is the square root of the variance of . The standard deviation of a probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation. If the distribution has fat tails going out to infinity, the standard deviation might not exist, because the integral might not converge. The normal distribution has tails going out to infinity, but its mean and standard deviation do exist, because the tails diminish quickly enough. The Pareto distribution with parameter has a mean, but not a standard deviation (loosely speaking, the standard deviation is infinite). The Cauchy distribution has neither a mean nor a standard deviation. Discrete random variable In the case where takes random values from a finite data set , with each value having the same probability, the standard deviation is Note: The above expression has a built-in bias. See the discussion on Bessel's correction further down below. or, by using summation notation, If, instead of having equal probabilities, the values have different probabilities, let have probability , have probability have probability . 
In this case, the standard deviation will be Continuous random variable The standard deviation of a continuous real-valued random variable with probability density function is and where the integrals are definite integrals taken for ranging over the set of possible values of the random variable . In the case of a parametric family of distributions, the standard deviation can be expressed in terms of the parameters. For example, in the case of the log-normal distribution with parameters and , the standard deviation is Estimation One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation σ is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers). Unlike in the case of estimating the population mean of a normal distribution, for which the sample mean is a simple estimator with many desirable properties (unbiased, efficient, maximum likelihood), there is no single estimator for the standard deviation with all these properties, and unbiased estimation of standard deviation is a very technically involved problem. Most often, the standard deviation is estimated using the corrected sample standard deviation (using N − 1), defined below, and this is often referred to as the "sample standard deviation", without qualifiers. However, other estimators are better in other respects: the uncorrected estimator (using N) yields lower mean squared error, while using N − 1.5 (for the normal distribution) almost completely eliminates bias. Uncorrected sample standard deviation The formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). This estimator, denoted by sN, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows: where are the observed values of the sample items, and is the mean value of these observations, while the denominator N stands for the size of the sample: this is the square root of the sample variance, which is the average of the squared deviations about the sample mean. This is a consistent estimator (it converges in probability to the population value as the number of samples goes to infinity), and is the maximum-likelihood estimate when the population is normally distributed. However, this is a biased estimator, as the estimates are generally too low. The bias decreases as sample size grows, dropping off as 1/N, and thus is most significant for small or moderate sample sizes; for the bias is below 1%. Thus for very large sample sizes, the uncorrected sample standard deviation is generally acceptable. This estimator also has a uniformly smaller mean squared error than the corrected sample standard deviation. 
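The uncorrected and corrected estimators can be compared on the eight-mark example. The marks themselves are not reproduced above; the values used below are those of the classic form of this example (an assumption), chosen so that the mean is 5 and the population standard deviation is 2.

```python
import numpy as np

# Assumed marks for the eight students (classic version of the example).
marks = np.array([2, 4, 4, 4, 5, 5, 7, 9])

pop_sd = marks.std(ddof=0)      # divide by N     (population / uncorrected form)
sample_sd = marks.std(ddof=1)   # divide by N - 1 (Bessel's correction)
print(marks.mean(), pop_sd, sample_sd)   # 5.0, 2.0, about 2.14
```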
Corrected sample standard deviation If the biased sample variance (the second central moment of the sample, which is a downward-biased estimate of the population variance) is used to compute an estimate of the population's standard deviation, the result is Here taking the square root introduces further downward bias, by Jensen's inequality, due to the square root's being a concave function. The bias in the variance is easily corrected, but the bias from the square root is more difficult to correct, and depends on the distribution in question. An unbiased estimator for the variance is given by applying Bessel's correction, using N − 1 instead of N to yield the unbiased sample variance, denoted s2: This estimator is unbiased if the variance exists and the sample values are drawn independently with replacement. N − 1 corresponds to the number of degrees of freedom in the vector of deviations from the mean, Taking square roots reintroduces bias (because the square root is a nonlinear function which does not commute with the expectation, i.e. often ), yielding the corrected sample standard deviation, denoted by s: As explained above, while s2 is an unbiased estimator for the population variance, s is still a biased estimator for the population standard deviation, though markedly less biased than the uncorrected sample standard deviation. This estimator is commonly used and generally known simply as the "sample standard deviation". The bias may still be large for small samples (N less than 10). As sample size increases, the amount of bias decreases. We obtain more information and the difference between and becomes smaller. Unbiased sample standard deviation For unbiased estimation of standard deviation, there is no formula that works across all distributions, unlike for mean and variance. Instead, is used as a basis, and is scaled by a correction factor to produce an unbiased estimate. For the normal distribution, an unbiased estimator is given by , where the correction factor (which depends on ) is given in terms of the Gamma function, and equals: This arises because the sampling distribution of the sample standard deviation follows a (scaled) chi distribution, and the correction factor is the mean of the chi distribution. An approximation can be given by replacing with , yielding: The error in this approximation decays quadratically (as ), and it is suited for all but the smallest samples or highest precision: for the bias is equal to 1.3%, and for the bias is already less than 0.1%. A more accurate approximation is to replace above with . For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation: where denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data. Confidence interval of a sampled standard deviation The standard deviation we obtain by sampling a distribution is itself not absolutely accurate, both for mathematical reasons (explained here by the confidence interval) and for practical reasons of measurement (measurement error). The mathematical effect can be described by the confidence interval or CI. To show how a larger sample will make the confidence interval narrower, consider the following examples: A small population of has only one degree of freedom for estimating the standard deviation. 
The result is that a 95% CI of the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows: where is the -th quantile of the chi-square distribution with degrees of freedom, and is the confidence level. This is equivalent to the following: With , and . The reciprocals of the square roots of these two numbers give us the factors 0.45 and 31.9 given above. A larger population of has 9 degrees of freedom for estimating the standard deviation. The same computations as above give us in this case a 95% CI running from 0.69 × SD to 1.83 × SD. So even with a sample population of 10, the actual SD can still be almost a factor 2 higher than the sampled SD. For a sample population , this is down to 0.88 × SD to 1.16 × SD. To be more certain that the sampled SD is close to the actual SD we need to sample a large number of points. These same formulae can be used to obtain confidence intervals on the variance of residuals from a least squares fit under standard normal theory, where is now the number of degrees of freedom for error. Bounds on standard deviation For a set of data spanning a range of values , an upper bound on the standard deviation is given by . An estimate of the standard deviation for data taken to be approximately normal follows from the heuristic that 95% of the area under the normal curve lies roughly two standard deviations to either side of the mean, so that, with 95% probability the total range of values represents four standard deviations so that . This so-called range rule is useful in sample size estimation, as the range of possible values is easier to estimate than the standard deviation. Other divisors of the range such that are available for other values of and for non-normal distributions. Identities and mathematical properties The standard deviation is invariant under changes in location, and scales directly with the scale of the random variable. Thus, for a constant and random variables and : The standard deviation of the sum of two random variables can be related to their individual standard deviations and the covariance between them: where and stand for variance and covariance, respectively. The calculation of the sum of squared deviations can be related to moments calculated directly from the data. In the following formula, the letter is interpreted to mean expected value, i.e., mean. The sample standard deviation can be computed as: For a finite population with equal probabilities at all points, we have which means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. See computational formula for the variance for proof, and for an analogous result for the sample standard deviation. Interpretation and application A large standard deviation indicates that the data points can spread far from the mean and a small standard deviation indicates that they are clustered closely around the mean. For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6, 8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively. The third population has a much smaller standard deviation than the other two because its values are all close to 7. These standard deviations have the same units as the data points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages of a population of four siblings in years, the standard deviation is 5 years. 
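The figures quoted in this section can be reproduced numerically. The sketch below first recomputes the population standard deviations of the three small example populations, then derives the confidence-interval multipliers for a sampled standard deviation from chi-square quantiles; the last line assumes the unstated larger sample size is 100.

```python
import numpy as np
from scipy.stats import chi2

# Population standard deviations of the three example populations (each with mean 7).
for pop in ([0, 0, 14, 14], [0, 6, 8, 14], [6, 6, 8, 8]):
    pop = np.array(pop, dtype=float)
    print(pop, pop.std(ddof=0))          # 7, 5 and 1 respectively

def sd_ci_factors(n, confidence=0.95):
    """Multipliers (f_lo, f_hi) such that the CI for sigma is (f_lo * s, f_hi * s),
    based on the chi-square distribution with n - 1 degrees of freedom."""
    df = n - 1
    alpha = 1 - confidence
    lo = np.sqrt(df / chi2.ppf(1 - alpha / 2, df))
    hi = np.sqrt(df / chi2.ppf(alpha / 2, df))
    return lo, hi

print(sd_ci_factors(2))     # about (0.45, 31.9), one degree of freedom
print(sd_ci_factors(10))    # about (0.69, 1.83)
print(sd_ci_factors(100))   # about (0.88, 1.16); assumes the unstated sample size is 100
```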
As another example, the population {1000, 1006, 1008, 1014} may represent the distances traveled by four athletes, measured in meters. It has a mean of 1007 meters, and a standard deviation of 5 meters. Standard deviation may serve as a measure of uncertainty. In physical science, for example, the reported standard deviation of a group of repeated measurements gives the precision of those measurements. When deciding whether measurements agree with a theoretical prediction, the standard deviation of those measurements is of crucial importance: if the mean of the measurements is too far away from the prediction (with the distance measured in standard deviations), then the theory being tested probably needs to be revised. This makes sense since they fall outside the range of values that could reasonably be expected to occur if the prediction were correct and the standard deviation appropriately quantified. See prediction interval. While the standard deviation does measure how far typical values tend to be from the mean, other measures are available. An example is the mean absolute deviation, which might be considered a more direct measure of average distance, compared to the root mean square distance inherent in the standard deviation. Application examples The practical value of understanding the standard deviation of a set of values is in appreciating how much variation there is from the average (mean). Experiment, industrial and hypothesis testing Standard deviation is often used to compare real-world data against a model to test the model. For example, in industrial applications the weight of products coming off a production line may need to comply with a legally required value. By weighing some fraction of the products an average weight can be found, which will always be slightly different from the long-term average. By using standard deviations, a minimum and maximum value can be calculated that the averaged weight will be within some very high percentage of the time (99.9% or more). If it falls outside the range then the production process may need to be corrected. Statistical tests such as these are particularly important when the testing is relatively expensive. For example, if the product needs to be opened and drained and weighed, or if the product was otherwise used up by the test. In experimental science, a theoretical model of reality is used. Particle physics conventionally uses a standard of "5 sigma" for the declaration of a discovery. A five-sigma level translates to one chance in 3.5 million that a random fluctuation would yield the result. This level of certainty was required in order to assert that a particle consistent with the Higgs boson had been discovered in two independent experiments at CERN, also leading to the declaration of the first observation of gravitational waves. Weather As a simple example, consider the average daily maximum temperatures for two cities, one inland and one on the coast. It is helpful to understand that the range of daily maximum temperatures for cities near the coast is smaller than for cities inland. Thus, while these two cities may each have the same average maximum temperature, the standard deviation of the daily maximum temperature for the coastal city will be less than that of the inland city as, on any particular day, the actual maximum temperature is more likely to be farther from the average maximum temperature for the inland city than for the coastal one. 
Finance In finance, standard deviation is often used as a measure of the risk associated with price-fluctuations of a given asset (stocks, bonds, property, etc.), or the risk of a portfolio of assets (actively managed mutual funds, index mutual funds, or ETFs). Risk is an important factor in determining how to efficiently manage a portfolio of investments because it determines the variation in returns on the asset or portfolio and gives investors a mathematical basis for investment decisions (known as mean-variance optimization). The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns. Standard deviation provides a quantified estimate of the uncertainty of future returns. For example, assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp) and Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return is not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return). Stock B is likely to fall short of the initial investment (but also to exceed the initial investment) more often than Stock A under the same circumstances, and is estimated to return only two percent more on average. In this example, Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30 percent to −10 percent), about two-thirds of the future year returns. When considering more extreme possible returns or outcomes in future, an investor should expect results of as much as 10 percent plus or minus 60 pp, or a range from 70 percent to −50 percent, which includes outcomes for three standard deviations from the average return (about 99.7 percent of probable returns). Calculating the average (or arithmetic mean) of the return of a security over a given period will generate the expected return of the asset. For each period, subtracting the expected return from the actual return results in the difference from the mean. Squaring the difference in each period and taking the average gives the overall variance of the return of the asset. The larger the variance, the greater risk the security carries. Finding the square root of this variance will give the standard deviation of the investment tool in question. Financial time series are known to be non-stationary series, whereas the statistical calculations above, such as standard deviation, apply only to stationary series. To apply the above statistical tools to non-stationary series, the series first must be transformed to a stationary series, enabling use of statistical tools that now have a valid basis from which to work. Geometric interpretation To gain some geometric insights and clarification, we will start with a population of three values, . This defines a point in . Consider the line . This is the "main diagonal" going through the origin. If our three given values were all equal, then the standard deviation would be zero and would lie on . 
So it is not unreasonable to assume that the standard deviation is related to the distance of to . That is indeed the case. To move orthogonally from to the point , one begins at the point: whose coordinates are the mean of the values we started out with. is on therefore for some . The line is to be orthogonal to the vector from to . Therefore: A little algebra shows that the distance between and (which is the same as the orthogonal distance between and the line ) is equal to the standard deviation of the vector , multiplied by the square root of the number of dimensions of the vector (3 in this case). Chebyshev's inequality An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table. Rules for normally distributed data The central limit theorem states that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of where is the expected value of the random variables, equals their distribution's standard deviation divided by , and is the number of random variables. The standard deviation therefore is simply a scaling variable that adjusts how broad the curve will be, though it also appears in the normalizing constant. If a data distribution is approximately normal, then the proportion of data values within standard deviations of the mean is defined by: where is the error function. The proportion that is less than or equal to a number, , is given by the cumulative distribution function: If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, , where is the arithmetic mean), about 95 percent are within two standard deviations (), and about 99.7 percent lie within three standard deviations (). This is known as the 68–95–99.7 rule, or the empirical rule. For various values of , the percentage of values expected to lie in and outside the symmetric interval, , are as follows: Standard deviation matrix The standard deviation matrix is the extension of the standard deviation to multiple dimensions. It is the symmetric square root of the covariance matrix . linearly scales a random vector in multiple dimensions in the same way that does in one dimension. A scalar random variable with variance can be written as , where has unit variance. In the same way, a random vector in several dimensions with covariance can be written as , where is a normalized variable with identity covariance . This requires that . There are then infinite solutions for , and consequently there are multiple ways to whiten the distribution. The symmetric square root of is one of the solutions. For example, a multivariate normal vector can be defined as , where is the multivariate standard normal. Properties The eigenvectors and eigenvalues of correspond to the axes of the 1 sd error ellipsoid of the multivariate normal distribution. See Multivariate normal distribution: geometric interpretation. The standard deviation of the projection of the multivariate distribution (i.e. the marginal distribution) on to a line in the direction of the unit vector equals . The standard deviation of a slice of the multivariate distribution (i.e. 
the conditional distribution) along the line in the direction of the unit vector equals . The discriminability index between two equal-covariance distributions is their Mahalanobis distance, which can also be expressed in terms of the sd matrix: , where is the mean-difference vector. Since scales a normalized variable, it can be used to invert the transformation, and make it decorrelated and unit-variance: has zero mean and identity covariance. This is called the Mahalanobis whitening transform. Relationship between standard deviation and mean The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a "natural" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point. The precise statement is the following: suppose are real numbers and define the function: Using calculus or by completing the square, it is possible to show that has a unique minimum at the mean: Variability can also be measured by the coefficient of variation, which is the ratio of the standard deviation to the mean. It is a dimensionless number. Standard deviation of the mean Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by: where is the number of observations in the sample used to estimate the mean. This can easily be proven with (see basic properties of the variance): (Statistical independence is assumed.) hence Resulting in: In order to estimate the standard deviation of the mean it is necessary to know the standard deviation of the entire population beforehand. However, in most applications this parameter is unknown. For example, if a series of 10 measurements of a previously unknown quantity is performed in a laboratory, it is possible to calculate the resulting sample mean and sample standard deviation, but it is impossible to calculate the standard deviation of the mean. However, one can estimate the standard deviation of the entire population from the sample, and thus obtain an estimate for the standard error of the mean. Rapid calculation methods The following two formulas can represent a running (repeatedly updated) standard deviation. A set of two power sums and are computed over a set of values of , denoted as : Given the results of these running summations, the values , , can be used at any time to compute the current value of the running standard deviation: Where , as mentioned above, is the size of the set of values (or can also be regarded as ). Similarly for sample standard deviation, In a computer implementation, as the two sums become large, we need to consider round-off error, arithmetic overflow, and arithmetic underflow. The method below calculates the running sums method with reduced rounding errors. This is a "one pass" algorithm for calculating variance of samples without the need to store prior data during the calculation. Applying this method to a time series will result in successive values of standard deviation corresponding to data points as grows larger with each new sample, rather than a constant-width sliding window calculation. For : where is the mean value. Note: since or . 
Sample variance: Population variance: Weighted calculation When the values are weighted with unequal weights , the power sums are each computed as: And the standard deviation equations remain unchanged. is now the sum of the weights and not the number of samples . The incremental method with reduced rounding errors can also be applied, with some additional complexity. A running sum of weights must be computed for each from 1 to : and places where is used above must be replaced by : In the final division, and or where is the total number of elements, and is the number of elements with non-zero weights. The above formulas become equal to the simpler formulas given above if weights are taken as equal to one. History The term standard deviation was first used in writing by Karl Pearson in 1894, following his use of it in lectures. This was as a replacement for earlier alternative names for the same idea: for example, Gauss used mean error. Standard deviation index The standard deviation index (SDI) is used in external quality assessments, particularly for medical laboratories. It is calculated as: Alternatives Standard deviation is algebraically simpler, though in practice less robust, than the average absolute deviation.
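The "one pass" running calculation described above is commonly implemented with Welford-style updates, which avoid storing the data and keep rounding error small. A minimal sketch of that idea (not the exact formulas given above), reusing the assumed eight marks from the earlier example:

```python
class RunningStd:
    """Welford-style one-pass accumulator for the mean and standard deviation."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0    # running sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def population_std(self):
        return (self.m2 / self.n) ** 0.5

    def sample_std(self):
        return (self.m2 / (self.n - 1)) ** 0.5


acc = RunningStd()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:   # same assumed marks as in the earlier example
    acc.update(x)
print(acc.mean, acc.population_std(), acc.sample_std())   # 5.0, 2.0, about 2.14
```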
https://en.wikipedia.org/wiki/Independence%20%28probability%20theory%29
Independence (probability theory)
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence. Definition For events Two events Two events and are independent (often written as or , where the latter symbol often is also used for conditional independence) if and only if their joint probability equals the product of their probabilities: indicates that two independent events and have common elements in their sample space so that they are not mutually exclusive (mutually exclusive iff ). Why this defines independence is made clear by rewriting with conditional probabilities as the probability at which the event occurs provided that the event has or is assumed to have occurred: and similarly Thus, the occurrence of does not affect the probability of , and vice versa. In other words, and are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if or are 0. Furthermore, the preferred definition makes clear by symmetry that when is independent of , is also independent of . Odds Stated in terms of odds, two events are independent if and only if the odds ratio of and is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds: or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring: The odds ratio can be defined as or symmetrically for odds of given , and thus is 1 if and only if the events are independent. More than two events A finite set of events is pairwise independent if every pair of events is independent—that is, if and only if for all distinct pairs of indices , A finite set of events is mutually independent if every event is independent of any intersection of the other events—that is, if and only if for every and for every k indices , This is called the multiplication rule for independent events. It is not a single condition involving only the product of all the probabilities of all single events; it must hold true for all subsets of events. For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse is not necessarily true. 
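The product definition can be verified by brute force on a small finite sample space. The sketch below uses two fair dice and exact rational arithmetic; the particular events are chosen only for illustration.

```python
from itertools import product
from fractions import Fraction

# Sample space of two fair dice; each of the 36 outcomes is equally likely.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] == 6              # first roll is a 6
B = lambda o: o[1] == 6              # second roll is a 6
C = lambda o: o[0] + o[1] == 8       # the two rolls sum to 8

# Independent: P(A and B) equals P(A) * P(B).
print(prob(lambda o: A(o) and B(o)) == prob(A) * prob(B))   # True
# Not independent: P(A and C) differs from P(A) * P(C).
print(prob(lambda o: A(o) and C(o)) == prob(A) * prob(C))   # False
```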
Log probability and information content Stated in terms of log probability, two events are independent if and only if the log probability of the joint event is the sum of the log probability of the individual events: In information theory, negative log probability is interpreted as information content, and thus two events are independent if and only if the information content of the combined event equals the sum of information content of the individual events: See for details. For real valued random variables Two random variables Two random variables and are independent if and only if (iff) the elements of the -system generated by them are independent; that is to say, for every and , the events and are independent events (as defined above in ). That is, and with cumulative distribution functions and , are independent iff the combined random variable has a joint cumulative distribution function or equivalently, if the probability densities and and the joint probability density exist, More than two random variables A finite set of random variables is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next. A finite set of random variables is mutually independent if and only if for any sequence of numbers , the events are mutually independent events (as defined above in ). This is equivalent to the following condition on the joint cumulative distribution function A finite set of random variables is mutually independent if and only if It is not necessary here to require that the probability distribution factorizes for all possible subsets as in the case for events. This is not required because e.g. implies . The measure-theoretically inclined reader may prefer to substitute events for events in the above definition, where is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed by appropriate σ-algebras). For real valued random vectors Two random vectors and are called independent if where and denote the cumulative distribution functions of and and denotes their joint cumulative distribution function. Independence of and is often denoted by . Written component-wise, and are called independent if For stochastic processes For one stochastic process The definition of independence may be extended from random vectors to a stochastic process. Therefore, it is required for an independent stochastic process that the random variables obtained by sampling the process at any times are independent random variables for any . Formally, a stochastic process is called independent, if and only if for all and for all where Independence of a stochastic process is a property within a stochastic process, not between two stochastic processes. For two stochastic processes Independence of two stochastic processes is a property between two stochastic processes and that are defined on the same probability space . Formally, two stochastic processes and are said to be independent if for all and for all , the random vectors and are independent, i.e. if Independent σ-algebras The definitions above ( and ) are both generalized by the following definition of independence for σ-algebras. 
Let be a probability space and let and be two sub-σ-algebras of . and are said to be independent if, whenever and , Likewise, a finite family of σ-algebras , where is an index set, is said to be independent if and only if and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent. The new definition relates to the previous ones very directly: Two events are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event is, by definition, Two random variables and defined over are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable taking values in some measurable space consists, by definition, of all subsets of of the form , where is any measurable subset of . Using this definition, it is easy to show that if and are random variables and is constant, then and are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra . Probability zero events cannot affect independence so independence also holds if is only Pr-almost surely constant. Properties Self-independence Note that an event is independent of itself if and only if Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws. Expectation and covariance If and are statistically independent random variables, then the expectation operator has the property and the covariance is zero, as follows from The converse does not hold: if two random variables have a covariance of 0 they still may be not independent. Similarly for two stochastic processes and : If they are independent, then they are uncorrelated. Characteristic function Two random variables and are independent if and only if the characteristic function of the random vector satisfies In particular the characteristic function of their sum is the product of their marginal characteristic functions: though the reverse implication is not true. Random variables that satisfy the latter condition are called subindependent. Examples Rolling dice The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trial is 8 are not independent. Drawing cards If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent. By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are not independent, because a deck that has had a red card removed has proportionately fewer red cards. Pairwise and mutual independence Consider the two probability spaces shown. In both cases, and . The events in the first space are pairwise independent because , , and ; but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. 
In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two: In the mutually independent case, however, Triple-independence but no pairwise-independence It is possible to create a three-event example in which P(A ∩ B ∩ C) = P(A) P(B) P(C) and yet no two of the three events are pairwise independent (and hence the set of events is not mutually independent). This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example. Conditional independence For events The events A and B are conditionally independent given an event C when P(A ∩ B | C) = P(A | C) P(B | C). For random variables Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y does not add any additional information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z (unless the errors in the two measurements are somehow connected). The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables, then we define X and Y to be conditionally independent given Z if P(X = x, Y = y | Z = z) = P(X = x | Z = z) P(Y = y | Z = z) for all x, y and z such that P(Z = z) > 0. On the other hand, if the random variables are continuous and have a joint probability density function f(x, y, z), then X and Y are conditionally independent given Z if f(x, y | z) = f(x | z) f(y | z) for all real numbers x, y and z such that f(z) > 0. If discrete X and Y are conditionally independent given Z, then P(X = x | Y = y, Z = z) = P(X = x | Z = z) for any x, y and z with P(Y = y, Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone. A similar equation holds for the conditional probability density functions in the continuous case. Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events. History Before 1933, independence, in probability theory, was defined in a verbal manner. For example, de Moivre gave the following definition: "Two events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other". If there are n independent events, the probability of the event that all of them happen was computed as the product of the probabilities of these n events. Apparently, there was the conviction that this formula was a consequence of the above definition. (Sometimes this was called the Multiplication Theorem.) Of course, a proof of this assertion cannot work without further, more formal, tacit assumptions. The definition of independence given in this article became the standard definition (now used in all books) after it appeared in 1933 as part of Kolmogorov's axiomatization of probability. Kolmogorov credited it to S. N. Bernstein, and quoted a publication which had appeared in Russian in 1927. Unfortunately, both Bernstein and Kolmogorov had been unaware of the work of Georg Bohlmann. Bohlmann had given the same definition for two events in 1901 and for n events in 1908. In the latter paper, he studied his notion in detail. For example, he gave the first example showing that pairwise independence does not imply mutual independence. Even today, Bohlmann is rarely quoted. More about his work can be found in On the contributions of Georg Bohlmann to probability theory by Ulrich Krengel.
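The measurement example in the conditional-independence discussion above can be made concrete with a small, self-contained Python sketch. The particular model (a fair bit Z observed twice with 3/4 accuracy) is a hypothetical choice made here for illustration, not part of the original text.

```python
from fractions import Fraction as F

# Hypothetical discrete model: Z is a fair bit; given Z = z, the two "measurements"
# X and Y are independent draws that equal z with probability 3/4.
pZ = {0: F(1, 2), 1: F(1, 2)}
def pXgivenZ(x, z):   # the same conditional law is assumed for Y
    return F(3, 4) if x == z else F(1, 4)

# Joint distribution P(X = x, Y = y, Z = z) built from the factorized conditional law.
joint = {(x, y, z): pZ[z] * pXgivenZ(x, z) * pXgivenZ(y, z)
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}

def P(pred):
    return sum(pr for xyz, pr in joint.items() if pred(*xyz))

# Conditionally independent given Z: P(X=1, Y=1 | Z=0) == P(X=1 | Z=0) * P(Y=1 | Z=0).
lhs = P(lambda x, y, z: x == 1 and y == 1 and z == 0) / P(lambda x, y, z: z == 0)
rhs = (P(lambda x, y, z: x == 1 and z == 0) / P(lambda x, y, z: z == 0)) ** 2
print(lhs == rhs)                                                    # True: 1/16 == (1/4)**2

# But not independent unconditionally: P(X=1, Y=1) != P(X=1) * P(Y=1).
print(P(lambda x, y, z: x == 1 and y == 1), P(lambda x, y, z: x == 1) ** 2)   # 5/16 vs 1/4
```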
Mathematics
Probability
null
27609
https://en.wikipedia.org/wiki/Skeleton
Skeleton
A skeleton is the structural frame that supports the body of most animals. There are several types of skeletons, including the exoskeleton, which is a rigid outer shell that holds up an organism's shape; the endoskeleton, a rigid internal frame to which the organs and soft tissues attach; and the hydroskeleton, a flexible internal structure supported by the hydrostatic pressure of body fluids. Vertebrates are animals with an endoskeleton centered around an axial vertebral column, and their skeletons are typically composed of bones and cartilages. Invertebrates are other animals that lack a vertebral column, and their skeletons vary, including hard-shelled exoskeleton (arthropods and most molluscs), plated internal shells (e.g. cuttlebones in some cephalopods) or rods (e.g. ossicles in echinoderms), hydrostatically supported body cavities (most), and spicules (sponges). Cartilage is a rigid connective tissue that is found in the skeletal systems of vertebrates and invertebrates. Etymology The term skeleton comes . Sceleton is an archaic form of the word. Classification Skeletons can be defined by several attributes. Solid skeletons consist of hard substances, such as bone, cartilage, or cuticle. These can be further divided by location; internal skeletons are endoskeletons, and external skeletons are exoskeletons. Skeletons may also be defined by rigidity, where pliant skeletons are more elastic than rigid skeletons. Fluid or hydrostatic skeletons do not have hard structures like solid skeletons, instead functioning via pressurized fluids. Hydrostatic skeletons are always internal. Exoskeletons An exoskeleton is an external skeleton that covers the body of an animal, serving as armor to protect an animal from predators. Arthropods have exoskeletons that encase their bodies, and have to undergo periodic moulting or ecdysis as the animals grow. The shells of molluscs are another form of exoskeleton. Exoskeletons provide surfaces for the attachment of muscles, and specialized appendanges of the exoskeleton can assist with movement and defense. In arthropods, the exoskeleton also assists with sensory perception. An external skeleton can be quite heavy in relation to the overall mass of an animal, so on land, organisms that have an exoskeleton are mostly relatively small. Somewhat larger aquatic animals can support an exoskeleton because weight is less of a consideration underwater. The southern giant clam, a species of extremely large saltwater clam in the Pacific Ocean, has a shell that is massive in both size and weight. Syrinx aruanus is a species of sea snail with a very large shell. Endoskeletons Endoskeletons are the internal support structure of an animal, composed of mineralized tissues, such as the bone skeletons found in most vertebrates. Endoskeletons are highly specialized and vary significantly between animals. They vary in complexity from functioning purely for support (as in the case of sponges), to serving as an attachment site for muscles and a mechanism for transmitting muscular forces. A true endoskeleton is derived from mesodermal tissue. Endoskeletons occur in chordates, echinoderms, and sponges. Rigidity Pliant skeletons are capable of movement; thus, when stress is applied to the skeletal structure, it deforms and then regains its original shape. This skeletal structure is used in some invertebrates, for instance in the hinge of bivalve shells or the mesoglea of cnidarians such as jellyfish. 
Pliant skeletons are beneficial because only muscle contractions are needed to bend the skeleton; upon muscle relaxation, the skeleton will return to its original shape. Cartilage is one material that a pliant skeleton may be composed of, but most pliant skeletons are formed from a mixture of proteins, polysaccharides, and water. For additional structure or protection, pliant skeletons may be supported by rigid skeletons. Organisms that have pliant skeletons typically live in water, which supports body structure in the absence of a rigid skeleton. Rigid skeletons are not capable of movement when stressed, creating a strong support system most common in terrestrial animals. Such a skeleton type used by animals that live in water are more for protection (such as barnacle and snail shells) or for fast-moving animals that require additional support of musculature needed for swimming through water. Rigid skeletons are formed from materials including chitin (in arthropods), calcium compounds such as calcium carbonate (in stony corals and mollusks) and silicate (for diatoms and radiolarians). Hydrostatic skeletons Hydrostatic skeletons are flexible cavities within an animal that provide structure through fluid pressure, occurring in some types of soft-bodied organisms, including jellyfish, flatworms, nematodes, and earthworms. The walls of these cavities are made of muscle and connective tissue. In addition to providing structure for an animal's body, hydrostatic skeletons transmit the forces of muscle contraction, allowing an animal to move by alternating contractions and expansions of muscles along the animal's length. Cytoskeleton The cytoskeleton (cyto- meaning 'cell') is used to stabilize and preserve the form of the cells. It is a dynamic structure that maintains cell shape, protects the cell, enables cellular motion using structures such as flagella, cilia and lamellipodia, and transport within cells such as the movement of vesicles and organelles, and plays a role in cellular division. The cytoskeleton is not a skeleton in the sense that it provides the structural system for the body of an animal; rather, it serves a similar function at the cellular level. Vertebrate skeletons Vertebrate skeletons are endoskeletons, and the main skeletal component is bone. Bones compose a unique skeletal system for each type of animal. Another important component is cartilage which in mammals is found mainly in the joint areas. In other animals, such as the cartilaginous fishes, which include the sharks, the skeleton is composed entirely of cartilage. The segmental pattern of the skeleton is present in all vertebrates, with basic units being repeated, such as in the vertebral column and the ribcage. Bones are rigid organs providing structural support for the body, assistance in movement by opposing muscle contraction, and the forming of a protective wall around internal organs. Bones are primarily made of inorganic minerals, such as hydroxyapatite, while the remainder is made of an organic matrix and water. The hollow tubular structure of bones provide considerable resistance against compression while staying lightweight. Most cells in bones are osteoblasts, osteoclasts, or osteocytes. Bone tissue is a type of dense connective tissue, a type of mineralized tissue that gives rigidity and a honeycomb-like three-dimensional internal structure. Bones also produce red and white blood cells and serve as calcium and phosphate storage at the cellular level. 
Other types of tissue found in bones include bone marrow, endosteum and periosteum, nerves, blood vessels and cartilage. During embryonic development, bones are developed individually from skeletogenic cells in the ectoderm and mesoderm. Most of these cells develop into separate bone, cartilage, and joint cells, and they are then articulated with one another. Specialized skeletal tissues are unique to vertebrates. Cartilage grows more quickly than bone, causing it to be more prominent earlier in an animal's life before it is overtaken by bone. Cartilage is also used in vertebrates to resist stress at points of articulation in the skeleton. Cartilage in vertebrates is usually encased in perichondrium tissue. Ligaments are elastic tissues that connect bones to other bones, and tendons are elastic tissues that connect muscles to bones. Amphibians and reptiles The skeletons of turtles have evolved to develop a shell from the ribcage, forming an exoskeleton. The skeletons of snakes and caecilians have significantly more vertebrae than other animals. Snakes often have over 300, compared to the 65 that is typical in lizards. Birds The skeletons of birds are adapted for flight. The bones in bird skeletons are hollow and lightweight to reduce the metabolic cost of flight. Several attributes of the shape and structure of the bones are optimized to endure the physical stress associated with flight, including a round and thin humeral shaft and the fusion of skeletal elements into single ossifications. Because of this, birds usually have a smaller number of bones than other terrestrial vertebrates. Birds also lack teeth or even a true jaw, instead having evolved a beak, which is far more lightweight. The beaks of many baby birds have a projection called an egg tooth, which facilitates their exit from the amniotic egg. Fish The skeleton, which forms the support structure inside the fish is either made of cartilage as in the Chondrichthyes, or bones as in the Osteichthyes. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. They are supported only by the muscles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fin (tail fin), have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. Cartilaginous fish, such as sharks, rays, skates, and chimeras, have skeletons made entirely of cartilage. The lighter weight of cartilage allows these fish to expend less energy when swimming. Mammals Marine mammals To facilitate the movement of marine mammals in water, the hind legs were either lost altogether, as in the whales and manatees, or united in a single tail fin as in the pinnipeds (seals). In the whale, the cervical vertebrae are typically fused, an adaptation trading flexibility for stability during swimming. Humans The skeleton consists of both fused and individual bones supported and supplemented by ligaments, tendons, muscles and cartilage. It serves as a scaffold which supports organs, anchors muscles, and protects organs such as the brain, lungs, heart and spinal cord. The biggest bone in the body is the femur in the upper leg, and the smallest is the stapes bone in the middle ear. In an adult, the skeleton comprises around 13.1% of the total body weight, and half of this weight is water. 
Fused bones include those of the pelvis and the cranium. Not all bones are interconnected directly: There are three bones in each middle ear called the ossicles that articulate only with each other. The hyoid bone, which is located in the neck and serves as the point of attachment for the tongue, does not articulate with any other bones in the body, being supported by muscles and ligaments. There are 206 bones in the adult human skeleton, although this number depends on whether the pelvic bones (the hip bones on each side) are counted as one or three bones on each side (ilium, ischium, and pubis), whether the coccyx or tail bone is counted as one or four separate bones, and does not count the variable wormian bones between skull sutures. Similarly, the sacrum is usually counted as a single bone, rather than five fused vertebrae. There is also a variable number of small sesamoid bones, commonly found in tendons. The patella or kneecap on each side is an example of a larger sesamoid bone. The patellae are counted in the total, as they are constant. The number of bones varies between individuals and with age – newborn babies have over 270 bones some of which fuse together. These bones are organized into a longitudinal axis, the axial skeleton, to which the appendicular skeleton is attached. The human skeleton takes 20 years before it is fully developed, and the bones contain marrow, which produces blood cells. There exist several general differences between the male and female skeletons. The male skeleton, for example, is generally larger and heavier than the female skeleton. In the female skeleton, the bones of the skull are generally less angular. The female skeleton also has wider and shorter breastbone and slimmer wrists. There exist significant differences between the male and female pelvis which are related to the female's pregnancy and childbirth capabilities. The female pelvis is wider and shallower than the male pelvis. Female pelvises also have an enlarged pelvic outlet and a wider and more circular pelvic inlet. The angle between the pubic bones is known to be sharper in males, which results in a more circular, narrower, and near heart-shaped pelvis. Invertebrate skeletons Invertebrates are defined by a lack of vertebral column, and they do not have bone skeletons. Arthropods have exoskeletons and echinoderms have endoskeletons. Some soft-bodied organisms, such as jellyfish and earthworms, have hydrostatic skeletons. Arthropods The skeletons of arthropods, including insects, crustaceans, and arachnids, are cuticle exoskeletons. They are composed of chitin secreted by the epidermis. The cuticle covers the animal's body and lines several internal organs, including parts of the digestive system. Arthropods molt as they grow through a process of ecdysis, developing a new exoskeleton, digesting part of the previous skeleton, and leaving the remainder behind. An arthropod's skeleton serves many functions, working as an integument to provide a barrier and support the body, providing appendages for movement and defense, and assisting in sensory perception. Some arthropods, such as crustaceans, absorb biominerals like calcium carbonate from the environment to strengthen the cuticle. Echinoderms The skeletons of echinoderms, such as starfish and sea urchins, are endoskeletons that consist of large, well-developed sclerite plates that adjoin or overlap to cover the animal's body. The skeletons of sea cucumbers are an exception, having a reduced size to assist in feeding and movement. 
Echinoderm skeletons are composed of stereom, made up of calcite with a monocrystal structure. They also have a significant magnesium content, forming up to 15% of the skeleton's composition. The stereom structure is porous, and the pores fill with connective stromal tissue as the animal ages. Sea urchins have as many as ten variants of stereom structure. Among extant animals, such skeletons are unique to echinoderms, though similar skeletons were used by some Paleozoic animals. The skeletons of echinoderms are mesodermal, as they are mostly encased by soft tissue. Plates of the skeleton may be interlocked or connected through muscles and ligaments. Skeletal elements in echinoderms are highly specialized and take many forms, though they usually retain some form of symmetry. The spines of sea urchins are the largest type of echinoderm skeletal structure. Molluscs Some molluscs, such as conchs, scallops, and snails, have shells that serve as exoskeletons. They are produced by proteins and minerals secreted from the animal's mantle. Sponges The skeleton of sponges consists of microscopic calcareous or siliceous spicules. The demosponges include 90% of all species of sponges. Their "skeletons" are made of spicules consisting of fibers of the protein spongin, the mineral silica, or both. Where spicules of silica are present, they have a different shape from those in the otherwise similar glass sponges. Cartilage Cartilage is a connective skeletal tissue composed of specialized cells called chondrocytes that are embedded in an extracellular matrix. This matrix is typically composed of Type II collagen fibers, proteoglycans, and water. There are many types of cartilage, including elastic cartilage, hyaline cartilage, fibrocartilage, and lipohyaline cartilage. Unlike other connective tissues, cartilage does not contain blood vessels. The chondrocytes are supplied by diffusion, helped by the pumping action generated by compression of the articular cartilage or flexion of the elastic cartilage. Thus, compared to other connective tissues, cartilage grows and repairs more slowly.
Biology and health sciences
Biology
null
27616
https://en.wikipedia.org/wiki/Sunspot
Sunspot
Sunspots are temporary spots on the Sun's surface that are darker than the surrounding area. They are one of the most recognizable Solar phenomena and despite the fact that they are mostly visible in the solar photosphere they usually affect the entire solar atmosphere. They are regions of reduced surface temperature caused by concentrations of magnetic flux that inhibit convection. Sunspots appear within active regions, usually in pairs of opposite magnetic polarity. Their number varies according to the approximately 11-year solar cycle. Individual sunspots or groups of sunspots may last anywhere from a few days to a few months, but eventually decay. Sunspots expand and contract as they move across the surface of the Sun, with diameters ranging from to . Larger sunspots can be visible from Earth without the aid of a telescope. They may travel at relative speeds, or proper motions, of a few hundred meters per second when they first emerge. Indicating intense magnetic activity, sunspots accompany other active region phenomena such as coronal loops, prominences, and reconnection events. Most solar flares and coronal mass ejections originate in these magnetically active regions around visible sunspot groupings. Similar phenomena indirectly observed on stars other than the Sun are commonly called starspots, and both light and dark spots have been measured. History The earliest record of sunspots is found in the Chinese I Ching, completed before 800 BC. The text describes that a dou and mei were observed in the sun, where both words refer to a small obscuration. The earliest record of a deliberate sunspot observation also comes from China, and dates to 364 BC, based on comments by astronomer Gan De (甘德) in a star catalogue. By 28 BC, Chinese astronomers were regularly recording sunspot observations in official imperial records. The first clear mention of a sunspot in Western literature is circa 300 BC, by ancient Greek scholar Theophrastus, student of Plato and Aristotle and successor to the latter. The earliest known drawings of sunspots were made by English monk John of Worcester in December 1128. Sunspots were first observed telescopically in December 1610 by English astronomer Thomas Harriot. His observations were recorded in his notebooks and were followed in March 1611 by observations and reports by Frisian astronomers Johannes and David Fabricius. After Johannes Fabricius' death at the age of 29, his reports remained obscure and were overshadowed by the independent discoveries of and publications about sunspots by Christoph Scheiner and Galileo Galilei. Galileo likely began telescopic sunspot observations around the same time as Harriot; however, Galileo's records did not start until 1612. During the next decades numerous astronomers of that era participated in the pursuit of sunspots. One of these was the famous astronomer Johannes Hevelius who recorded 19 sunspot groups during the period of the early Maunder Minimum (1653-1679) in the book Machina Coelestis. In the early 19th Century, William Herschel was one of the first to hypothesize a connection of sunspots with temperatures on Earth and believed that certain features of sunspots would indicate increased heating on Earth. During his recognition of solar behavior and hypothesized solar structure, he inadvertently picked up the relative absence of sunspots from July 1795 to January 1800 and was perhaps the first to construct a past record of observed or missing sunspots. 
From this he found that the absence of sunspots coincided with high wheat prices in England. The president of the Royal Society commented that the upward trend in wheat prices was due to monetary inflation. Years later scientists such as Richard Carrington in 1865 and John Henry Poynting in 1884 tried and failed to find a connection between wheat prices and sunspots, and modern analysis finds that there is no statistically significant correlation between wheat prices and sunspot numbers. Physics Morphology Sunspots have two main structures: a central umbra and a surrounding penumbra. The umbra is the darkest region of a sunspot and is where the magnetic field is strongest and approximately vertical, or normal, to the Sun's surface, or photosphere. The umbra may be surrounded completely or only partially by a brighter region known as the penumbra. The penumbra is composed of radially elongated structures known as penumbral filaments and has a more inclined magnetic field than the umbra. Within sunspot groups, multiple umbrae may be surrounded by a single, continuous penumbra. The temperature of the umbra is roughly 3000–4500 K, in contrast to the surrounding material at about 5780 K, leaving sunspots clearly visible as dark spots. This is because the luminance of a heated black body (closely approximated by the photosphere) at these temperatures varies greatly with temperature. Isolated from the surrounding photosphere, a single sunspot would shine brighter than the full moon, with a crimson-orange color. In some forming and decaying sunspots, relatively narrow regions of bright material appear penetrating into or completely dividing an umbra. These formations, referred to as light bridges, have been found to have a weaker, more tilted magnetic field compared to the umbra at the same height in the photosphere. Higher in the photosphere, the light bridge magnetic field merges and becomes comparable to that of the umbra. Gas pressure in light bridges has also been found to dominate over magnetic pressure, and convective motions have been detected. The Wilson effect implies that sunspots are depressions on the Sun's surface. Lifecycle The appearance of an individual sunspot may last anywhere from a few days to a few months, though groups of sunspots and their associated active regions tend to last weeks or months. Sunspots expand and contract as they move across the surface of the Sun, with diameters ranging from to . Formation Although the details of sunspot formation are still a matter of ongoing research, it is widely understood that they are the visible manifestations of magnetic flux tubes in the Sun's convective zone projecting through the photosphere within active regions. Their characteristic darkening occurs due to this strong magnetic field inhibiting convection in the photosphere. As a result, the energy flux from the Sun's interior decreases, and with it, surface temperature, causing the surface area through which the magnetic field passes to look dark against the bright background of photospheric granules. Sunspots initially appear in the photosphere as small darkened spots lacking a penumbra. These structures are known as solar pores. Over time, these pores increase in size and move towards one another. When a pore gets large enough, typically around in diameter, a penumbra will begin to form. Decay Magnetic pressure should tend to remove field concentrations, causing the sunspots to disperse, but sunspot lifetimes are measured in days to weeks. 
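The strong visual contrast between the umbral and photospheric temperatures quoted above can be estimated with a rough back-of-the-envelope sketch, assuming ideal black-body radiation; the specific temperatures used are representative values from the quoted ranges, not figures from the article.

```python
# Rough illustration of why sunspots look dark: total radiant exitance of a black body
# scales as T**4 (Stefan-Boltzmann law), so a modest temperature drop cuts the flux sharply.
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def exitance(T):
    """Black-body radiant exitance in W/m^2 at temperature T (kelvin)."""
    return SIGMA * T**4

photosphere = 5780.0            # typical quiet photosphere temperature, K
umbra = 4000.0                  # representative umbral temperature from the quoted range, K

ratio = exitance(umbra) / exitance(photosphere)
print(f"umbra emits {ratio:.0%} of the photospheric flux per unit area")   # roughly 23%
```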
In 2001, observations from the Solar and Heliospheric Observatory (SOHO) using sound waves traveling below the photosphere (local helioseismology) were used to develop a three-dimensional image of the internal structure below sunspots; these observations show that a powerful downdraft lies beneath each sunspot, forms a rotating vortex that sustains the concentrated magnetic field. Solar cycle Solar cycles last typically about eleven years, varying from just under 10 to just over 12 years. Over the solar cycle, sunspot populations increase quickly and then decrease more slowly. The point of highest sunspot activity during a cycle is known as solar maximum, and the point of lowest activity as solar minimum. This period is also observed in most other solar activity and is linked to a variation in the solar magnetic field that changes polarity with this period. Early in the cycle, sunspots appear at higher latitudes and then move towards the equator as the cycle approaches maximum, following Spörer's law. Spots from two sequential cycles co-exist for several years during the years near solar minimum. Spots from sequential cycles can be distinguished by direction of their magnetic field and their latitude. The Wolf number sunspot index counts the average number of sunspots and groups of sunspots during specific intervals. The 11-year solar cycles are numbered sequentially, starting with the observations made in the 1750s. George Ellery Hale first linked magnetic fields and sunspots in 1908. Hale suggested that the sunspot cycle period is 22 years, covering two periods of increased and decreased sunspot numbers, accompanied by polar reversals of the solar magnetic dipole field. Horace W. Babcock later proposed a qualitative model for the dynamics of the solar outer layers. The Babcock Model explains that magnetic fields cause the behavior described by Spörer's law, as well as other effects, which are twisted by the Sun's rotation. Longer-period trends Sunspot numbers also change over long periods. For example, during the period known as the modern maximum from 1900 to 1958 the solar maxima trend of sunspot count was upwards; for the following 60 years the trend was mostly downwards. Overall, the Sun was last as active as the modern maximum over 8,000 years ago. Sunspot number is correlated with the intensity of solar radiation over the period since 1979, when satellite measurements became available. The variation caused by the sunspot cycle to solar output is on the order of 0.1% of the solar constant (a peak-to-trough range of 1.3 W·m−2 compared with 1366 W·m−2 for the average solar constant). Modern observation Sunspots are observed with land-based and Earth-orbiting solar telescopes. These telescopes use filtration and projection techniques for direct observation, in addition to various types of filtered cameras. Specialized tools such as spectroscopes and spectrohelioscopes are used to examine sunspots and sunspot areas. Artificial eclipses allow viewing of the circumference of the Sun as sunspots rotate through the horizon. Since looking directly at the Sun with the naked eye permanently damages human vision, amateur observation of sunspots is generally conducted using projected images, or directly through protective filters. Small sections of very dark filter glass, such as a #14 welder's glass, are effective. A telescope eyepiece can project the image, without filtration, onto a white screen where it can be viewed indirectly, and even traced, to follow sunspot evolution. 
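The Wolf number mentioned above is conventionally computed as R = k(10g + s), where g is the number of sunspot groups, s the number of individual spots, and k an observer- and instrument-dependent scaling factor. The following minimal Python sketch assumes k = 1 for illustration.

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Relative (Wolf) sunspot number R = k * (10 * g + s).

    groups: number of sunspot groups, spots: number of individual spots,
    k: observer/instrument scaling factor (assumed 1.0 here).
    """
    return k * (10 * groups + spots)

# Example: 3 groups containing 11 individual spots in total.
print(wolf_number(groups=3, spots=11))   # 41.0
```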
Special purpose hydrogen-alpha narrow bandpass filters and aluminum-coated glass attenuation filters (which have the appearance of mirrors due to their extremely high optical density) on the front of a telescope provide safe observation through the eyepiece. Application Due to their correlation with other kinds of solar activity, sunspots can be used to help predict space weather, the state of the ionosphere, and conditions relevant to short-wave radio propagation or satellite communications. High sunspot activity is celebrated by members of the amateur radio community as a harbinger of excellent ionospheric propagation conditions that greatly increase radio range in the HF bands. During peaks in sunspot activity, worldwide radio communication can be achieved on frequencies as high as the 6-meter VHF band. Solar activity (and the solar cycle) have been implicated as a factor in global warming. The first possible example of this is the Maunder Minimum period of low sunspot activity which occurred during the Little Ice Age in Europe. However, detailed studies from multiple paleoclimate indicators show that the lower northern hemisphere temperatures in the Little Ice Age began while sunspot numbers were still high before the start of the Maunder Minimum, and persisted until after the Maunder Minimum had ceased. Numerical climate modelling indicates that volcanic activity was the main driver of the Little Ice Age. Sunspots themselves, in terms of the magnitude of their radiant-energy deficit, have a weak effect on solar flux. The total effect of sunspots and other magnetic processes in the solar photosphere is an increase of roughly 0.1% in brightness of the Sun in comparison with its brightness at the solar-minimum level. This is a difference in total solar irradiance at Earth over the sunspot cycle of close to . Other magnetic phenomena which correlate with sunspot activity include faculae and the chromospheric network. The combination of these magnetic factors mean that the relationship of sunspot numbers to Total Solar Irradiance (TSI) over the decadal-scale solar cycle, and their relationship for century timescales, need not be the same. The main problem with quantifying the longer-term trends in TSI lies in the stability of the absolute radiometry measurements made from space, which has improved in recent decades but remains a problem. Analysis shows that it is possible that TSI was actually higher in the Maunder Minimum compared to present-day levels, but uncertainties are high, with best estimates in the range with a uncertainty range of . Sunspots, with their intense magnetic field concentrations, facilitate the complex transfer of energy and momentum to the upper solar atmosphere. This transfer occurs through a variety of mechanisms, including generated waves in the lower solar atmosphere and magnetic reconnection events. Starspot In 1947, G. E. Kron proposed that starspots were the reason for periodic changes in brightness on red dwarfs. 
Since the mid-1990s, starspot observations have been made using increasingly powerful techniques yielding more and more detail: photometry showed starspot growth and decay and showed cyclic behavior similar to the Sun's; spectroscopy examined the structure of starspot regions by analyzing variations in spectral line splitting due to the Zeeman effect; Doppler imaging showed differential rotation of spots for several stars and distributions different from the Sun's; spectral line analysis measured the temperature range of spots and the stellar surfaces. For example, in 1999, Strassmeier reported the largest cool starspot ever seen rotating the giant K0 star XX Trianguli (HD 12545) with a temperature of , together with a warm spot of .
Physical sciences
Solar System
Astronomy
27631
https://en.wikipedia.org/wiki/Subset
Subset
In mathematics, a set A is a subset of a set B if all elements of A are also elements of B; B is then a superset of A. It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B. The relationship of one set being a subset of another is called inclusion (or sometimes containment). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B. A k-subset is a subset with k elements. When quantified, A ⊆ B is represented as ∀x (x ∈ A ⇒ x ∈ B). One can prove the statement A ⊆ B by applying a proof technique known as the element argument: Let sets A and B be given. To prove that A ⊆ B, suppose that a is a particular but arbitrarily chosen element of A, and show that a is an element of B. The validity of this technique can be seen as a consequence of universal generalization: the technique shows (c ∈ A) ⇒ (c ∈ B) for an arbitrarily chosen element c. Universal generalization then implies ∀x (x ∈ A ⇒ x ∈ B), which is equivalent to A ⊆ B, as stated above. Definition If A and B are sets and every element of A is also an element of B, then: A is a subset of B, denoted by A ⊆ B, or equivalently, B is a superset of A, denoted by B ⊇ A. If A is a subset of B, but A is not equal to B (i.e. there exists at least one element of B which is not an element of A), then: A is a proper (or strict) subset of B, denoted by A ⊊ B, or equivalently, B is a proper (or strict) superset of A, denoted by B ⊋ A. The empty set, written {} or ∅, has no elements, and therefore is vacuously a subset of any set X. Basic properties Reflexivity: Given any set A, A ⊆ A. Transitivity: If A ⊆ B and B ⊆ C, then A ⊆ C. Antisymmetry: If A ⊆ B and B ⊆ A, then A = B. Proper subset Irreflexivity: Given any set A, A ⊊ A is false. Transitivity: If A ⊊ B and B ⊊ C, then A ⊊ C. Asymmetry: If A ⊊ B then B ⊊ A is false. ⊂ and ⊃ symbols Some authors use the symbols ⊂ and ⊃ to indicate subset and superset respectively; that is, with the same meaning as, and instead of, the symbols ⊆ and ⊇. For example, for these authors, it is true of every set A that A ⊂ A (a reflexive relation). Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper (also called strict) subset and superset respectively; that is, with the same meaning as, and instead of, the symbols ⊊ and ⊋. This usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is less than y (an irreflexive relation). Similarly, using the convention that ⊂ is proper subset, if A ⊆ B, then A may or may not equal B, but if A ⊂ B, then A definitely does not equal B. Examples of subsets The set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset (but not a proper subset) of E = {1, 2, 3}, thus D ⊆ E is true, and D ⊊ E is not true (false). The set {x: x is a prime number greater than 10} is a proper subset of {x: x is an odd number greater than 10}. The set of natural numbers is a proper subset of the set of rational numbers; likewise, the set of points in a line segment is a proper subset of the set of points in a line. These are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality (the concept that corresponds to size, that is, the number of elements, of a finite set) as the whole; such cases can run counter to one's initial intuition. The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite, but the latter set has a larger cardinality than the former set. Power set The set of all subsets of a set S is called its power set, and is denoted by P(S).
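For illustration, Python's built-in set type expresses the subset and proper-subset relations directly. The following sketch mirrors the examples above with illustrative sets, and includes a simple power-set construction.

```python
from itertools import chain, combinations

A, B = {1, 2}, {1, 2, 3}
D, E = {1, 2, 3}, {1, 2, 3}

print(A <= B, A < B)    # True True   (subset and proper subset)
print(D <= E, D < E)    # True False  (subset but not a proper subset)
print(set() <= A)       # True        (the empty set is a subset of every set)

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(combinations(items, r)
                                                for r in range(len(items) + 1))]

print(power_set({1, 2}))   # [set(), {1}, {2}, {1, 2}]
```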
The inclusion relation is a partial order on the set defined by . We may also partially order by reverse set inclusion by defining For the power set of a set S, the inclusion partial order is—up to an order isomorphism—the Cartesian product of (the cardinality of S) copies of the partial order on for which This can be illustrated by enumerating , and associating with each subset (i.e., each element of ) the k-tuple from of which the ith coordinate is 1 if and only if is a member of T. The set of all -subsets of is denoted by , in analogue with the notation for binomial coefficients, which count the number of -subsets of an -element set. In set theory, the notation is also common, especially when is a transfinite cardinal number. Other properties of inclusion A set A is a subset of B if and only if their intersection is equal to A. Formally: A set A is a subset of B if and only if their union is equal to B. Formally: A finite set A is a subset of B, if and only if the cardinality of their intersection is equal to the cardinality of A. Formally: The subset relation defines a partial order on sets. In fact, the subsets of a given set form a Boolean algebra under the subset relation, in which the join and meet are given by intersection and union, and the subset relation itself is the Boolean inclusion relation. Inclusion is the canonical partial order, in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion. The ordinal numbers are a simple example: if each ordinal n is identified with the set of all ordinals less than or equal to n, then if and only if
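A short Python sketch, using illustrative sets, checks the equivalent characterizations of inclusion stated above and compares the count of k-subsets with the binomial coefficient.

```python
from itertools import combinations
from math import comb

A, B = {2, 4}, {1, 2, 3, 4}

# Equivalent characterizations of inclusion from the text.
print(A <= B)              # True
print((A & B) == A)        # True: A is a subset of B iff their intersection equals A
print((A | B) == B)        # True: A is a subset of B iff their union equals B

# The number of k-subsets of an n-element set is the binomial coefficient C(n, k).
S = {1, 2, 3, 4, 5}
k = 2
k_subsets = list(combinations(S, k))
print(len(k_subsets), comb(len(S), k))   # 10 10
```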
Mathematics
Discrete mathematics
null
27637
https://en.wikipedia.org/wiki/Structural%20geology
Structural geology
Structural geology is the study of the three-dimensional distribution of rock units with respect to their deformational histories. The primary goal of structural geology is to use measurements of present-day rock geometries to uncover information about the history of deformation (strain) in the rocks, and ultimately, to understand the stress field that resulted in the observed strain and geometries. This understanding of the dynamics of the stress field can be linked to important events in the geologic past; a common goal is to understand the structural evolution of a particular area with respect to regionally widespread patterns of rock deformation (e.g., mountain building, rifting) due to plate tectonics. Use and importance The study of geologic structures has been of prime importance in economic geology, both petroleum geology and mining geology. Folded and faulted rock strata commonly form traps that accumulate and concentrate fluids such as petroleum and natural gas. Similarly, faulted and structurally complex areas are notable as permeable zones for hydrothermal fluids, resulting in concentrated areas of base and precious metal ore deposits. Veins of minerals containing various metals commonly occupy faults and fractures in structurally complex areas. These structurally fractured and faulted zones often occur in association with intrusive igneous rocks. They often also occur around geologic reef complexes and collapse features such as ancient sinkholes. Deposits of gold, silver, copper, lead, zinc, and other metals, are commonly located in structurally complex areas. Structural geology is a critical part of engineering geology, which is concerned with the physical and mechanical properties of natural rocks. Structural fabrics and defects such as faults, folds, foliations and joints are internal weaknesses of rocks which may affect the stability of human engineered structures such as dams, road cuts, open pit mines and underground mines or road tunnels. Geotechnical risk, including earthquake risk can only be investigated by inspecting a combination of structural geology and geomorphology. In addition, areas of karst landscapes which reside atop caverns, potential sinkholes, or other collapse features are of particular importance for these scientists. In addition, areas of steep slopes are potential collapse or landslide hazards. Environmental geologists and hydrogeologists need to apply the tenets of structural geology to understand how geologic sites impact (or are impacted by) groundwater flow and penetration. For instance, a hydrogeologist may need to determine if seepage of toxic substances from waste dumps is occurring in a residential area or if salty water is seeping into an aquifer. Plate tectonics is a theory developed during the 1960s which describes the movement of continents by way of the separation and collision of crustal plates. It is in a sense structural geology on a planet scale, and is used throughout structural geology as a framework to analyze and understand global, regional, and local scale features. Methods Structural geologists use a variety of methods to (first) measure rock geometries, (second) reconstruct their deformational histories, and (third) estimate the stress field that resulted in that deformation. Geometries Primary data sets for structural geology are collected in the field. 
Structural geologists measure a variety of planar features (bedding planes, foliation planes, fold axial planes, fault planes, and joints), and linear features (stretching lineations, in which minerals are ductilely extended; fold axes; and intersection lineations, the trace of a planar feature on another planar surface). Measurement conventions The inclination of a planar structure in geology is measured by strike and dip. The strike is the line of intersection between the planar feature and a horizontal plane, taken according to the right hand convention, and the dip is the magnitude of the inclination, below horizontal, at right angles to strike. For example; striking 25 degrees East of North, dipping 45 degrees Southeast, recorded as N25E,45SE. Alternatively, dip and dip direction may be used as this is absolute. Dip direction is measured in 360 degrees, generally clockwise from North. For example, a dip of 45 degrees towards 115 degrees azimuth, recorded as 45/115. Note that this is the same as above. The term hade is occasionally used and is the deviation of a plane from vertical i.e. (90°-dip). Fold axis plunge is measured in dip and dip direction (strictly, plunge and azimuth of plunge). The orientation of a fold axial plane is measured in strike and dip or dip and dip direction. Lineations are measured in terms of dip and dip direction, if possible. Often lineations occur expressed on a planar surface and can be difficult to measure directly. In this case, the lineation may be measured from the horizontal as a rake or pitch upon the surface. Rake is measured by placing a protractor flat on the planar surface, with the flat edge horizontal and measuring the angle of the lineation clockwise from horizontal. The orientation of the lineation can then be calculated from the rake and strike-dip information of the plane it was measured from, using a stereographic projection. If a fault has lineations formed by movement on the plane, e.g.; slickensides, this is recorded as a lineation, with a rake, and annotated as to the indication of throw on the fault. Generally it is easier to record strike and dip information of planar structures in dip/dip direction format as this will match all the other structural information you may be recording about folds, lineations, etc., although there is an advantage to using different formats that discriminate between planar and linear data. Plane, fabric, fold and deformation conventions The convention for analysing structural geology is to identify the planar structures, often called planar fabrics because this implies a textural formation, the linear structures and, from analysis of these, unravel deformations. Planar structures are named according to their order of formation, with original sedimentary layering the lowest at S0. Often it is impossible to identify S0 in highly deformed rocks, so numbering may be started at an arbitrary number or given a letter (SA, for instance). In cases where there is a bedding-plane foliation caused by burial metamorphism or diagenesis this may be enumerated as S0a. If there are folds, these are numbered as F1, F2, etc. Generally the axial plane foliation or cleavage of a fold is created during folding, and the number convention should match. For example, an F2 fold should have an S2 axial foliation. Deformations are numbered according to their order of formation with the letter D denoting a deformation event. For example, D1, D2, D3. 
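As a minimal sketch of the strike-and-dip conventions described above, assuming the right-hand rule, the following Python functions convert a strike azimuth to a dip direction and compute the hade; the function names are illustrative and not part of any standard package.

```python
def dip_direction_from_strike(strike_deg: float) -> float:
    """Dip direction for a plane recorded with the right-hand rule:
    the plane dips 90 degrees clockwise from the strike azimuth."""
    return (strike_deg + 90.0) % 360.0

def hade(dip_deg: float) -> float:
    """Hade is the deviation of the plane from vertical, i.e. 90 - dip."""
    return 90.0 - dip_deg

# The example from the text: strike N25E (azimuth 025), dipping 45 degrees southeast.
strike, dip = 25.0, 45.0
print(f"{dip:.0f}/{dip_direction_from_strike(strike):.0f}")   # 45/115 in dip/dip-direction format
print(hade(dip))                                              # 45.0
```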
Folds and foliations, because they are formed by deformation events, should correlate with these events. For example, an F2 fold, with an S2 axial plane foliation would be the result of a D2 deformation. Metamorphic events may span multiple deformations. Sometimes it is useful to identify them similarly to the structural features for which they are responsible, e.g.; M2. This may be possible by observing porphyroblast formation in cleavages of known deformation age, by identifying metamorphic mineral assemblages created by different events, or via geochronology. Intersection lineations in rocks, as they are the product of the intersection of two planar structures, are named according to the two planar structures from which they are formed. For instance, the intersection lineation of a S1 cleavage and bedding is the L1-0 intersection lineation (also known as the cleavage-bedding lineation). Stretching lineations may be difficult to quantify, especially in highly stretched ductile rocks where minimal foliation information is preserved. Where possible, when correlated with deformations (as few are formed in folds, and many are not strictly associated with planar foliations), they may be identified similar to planar surfaces and folds, e.g.; L1, L2. For convenience some geologists prefer to annotate them with a subscript S, for example Ls1 to differentiate them from intersection lineations, though this is generally redundant. Stereographic projections Stereographic projection is a method for analyzing the nature and orientation of deformation stresses, lithological units and penetrative fabrics wherein linear and planar features (structural strike and dip readings, typically taken using a compass clinometer) passing through an imagined sphere are plotted on a two-dimensional grid projection, facilitating more holistic analysis of a set of measurements. Stereonet developed by Richard W. Allmendinger is widely used in the structural geology community. Rock macro-structures On a large scale, structural geology is the study of the three-dimensional interaction and relationships of stratigraphic units within terranes of rock or geological regions. This branch of structural geology deals mainly with the orientation, deformation and relationships of stratigraphy (bedding), which may have been faulted, folded or given a foliation by some tectonic event. This is mainly a geometric science, from which cross sections and three-dimensional block models of rocks, regions, terranes and parts of the Earth's crust can be generated. Study of regional structure is important in understanding orogeny, plate tectonics and more specifically in the oil, gas and mineral exploration industries as structures such as faults, folds and unconformities are primary controls on ore mineralisation and oil traps. Modern regional structure is being investigated using seismic tomography and seismic reflection in three dimensions, providing unrivaled images of the Earth's interior, its faults and the deep crust. Further information from geophysics such as gravity and airborne magnetics can provide information on the nature of rocks imaged to be in the deep crust. Rock microstructures Rock microstructure or texture of rocks is studied by structural geologists on a small scale to provide detailed information mainly about metamorphic rocks and some features of sedimentary rocks, most often if they have been folded. 
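One routine preliminary to the stereographic projections mentioned above is converting a plane to its pole before plotting. A minimal Python sketch, assuming dip/dip-direction input and a lower-hemisphere convention, is given below; the function name is illustrative.

```python
def pole_to_plane(dip_deg: float, dip_dir_deg: float) -> tuple[float, float]:
    """Plunge and trend of the pole (normal) to a plane given in dip / dip-direction.

    The pole plunges (90 - dip) degrees toward the azimuth opposite the dip direction;
    poles are what is usually plotted on an equal-area stereonet for planar fabrics.
    """
    plunge = 90.0 - dip_deg
    trend = (dip_dir_deg + 180.0) % 360.0
    return plunge, trend

# Plane from the earlier example, 45/115 in dip / dip-direction format.
print(pole_to_plane(45.0, 115.0))   # (45.0, 295.0)
```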
Textural study involves measurement and characterisation of foliations, crenulations, metamorphic minerals, and timing relationships between these structural features and mineralogical features. Usually this involves collection of hand specimens, which may be cut to provide petrographic thin sections which are analysed under a petrographic microscope. Microstructural analysis finds application also in multi-scale statistical analysis, aimed to analyze some rock features showing scale invariance. Kinematics Geologists use rock geometry measurements to understand the history of strain in rocks. Strain can take the form of brittle faulting and ductile folding and shearing. Brittle deformation takes place in the shallow crust, and ductile deformation takes place in the deeper crust, where temperatures and pressures are higher. Stress fields By understanding the constitutive relationships between stress and strain in rocks, geologists can translate the observed patterns of rock deformation into a stress field during the geologic past. The following list of features are typically used to determine stress fields from deformational structures. In perfectly brittle rocks, faulting occurs at 30° to the greatest compressional stress according to Byerlee's Law. The greatest compressive stress is normal to fold axial planes. Modeling For economic geology such as petroleum and mineral development, as well as research, modeling of structural geology is becoming increasingly important. 2D and 3D models of structural systems such as anticlines, synclines, fold and thrust belts, and other features can help better understand the evolution of a structure through time. Without modeling or interpretation of the subsurface, geologists are limited to their knowledge of the surface geological mapping. If only reliant on the surface geology, major economic potential could be missed by overlooking the structural and tectonic history of the area. Characterization of the mechanical properties of rock The mechanical properties of rock play a vital role in the structures that form during deformation deep below the earth's crust. The conditions in which a rock is present will result in different structures that geologists observe above ground in the field. The field of structural geology tries to relate the formations that humans see to the changes the rock went through to get to that final structure. Knowing the conditions of deformation that lead to such structures can illuminate the history of the deformation of the rock. Temperature and pressure play a huge role in the deformation of rock. At the conditions under the earth's crust of extreme high temperature and pressure, rocks are ductile. They can bend, fold or break. Other vital conditions that contribute to the formation of structure of rock under the earth are the stress and strain fields. Stress-strain curve Stress is a pressure, defined as a directional force over area. When a rock is subjected to stresses, it changes shape. When the stress is released, the rock may or may not return to its original shape. That change in shape is quantified by strain, the change in length over the original length of the material in one dimension. Stress induces strain which ultimately results in a changed structure. Elastic deformation refers to a reversible deformation. In other words, when stress on the rock is released, the rock returns to its original shape. Reversible, linear, elasticity involves the stretching, compressing, or distortion of atomic bonds. 
Because there is no breaking of bonds, the material springs back when the force is released. This type of deformation is modeled using a linear relationship between stress and strain, i.e. a Hookean relationship. Where σ denotes stress, denotes strain, and E is the elastic modulus, which is material dependent. The elastic modulus is, in effect, a measure of the strength of atomic bonds. Plastic deformation refers to non-reversible deformation. The relationship between stress and strain for permanent deformation is nonlinear. Stress has caused permanent change of shape in the material by involving the breaking of bonds. One mechanism of plastic deformation is the movement of dislocations by an applied stress. Because rocks are essentially aggregates of minerals, we can think of them as poly-crystalline materials. Dislocations are a type of crystallographic defect which consists of an extra or missing half plane of atoms in the periodic array of atoms that make up a crystal lattice. Dislocations are present in all real crystallographic materials. Hardness Hardness is difficult to quantify. It is a measure of resistance to deformation, specifically permanent deformation. There is precedent for hardness as a surface quality, a measure of the abrasiveness or surface-scratching resistance of a material. If the material being tested, however, is uniform in composition and structure, then the surface of the material is only a few atomic layers thick, and measurements are of the bulk material. Thus, simple surface measurements yield information about the bulk properties. Ways to measure hardness include: Mohs Scale Dorry abrasion test Deval abrasion test Indentation hardness Indentation hardness is used often in metallurgy and materials science and can be thought of as resistance to penetration by an indenter. Toughness Toughness can be described best by a material's resistance to cracking. During plastic deformation, a material absorbs energy until fracture occurs. The area under the stress-strain curve is the work required to fracture the material. The toughness modulus is defined as: Where is the ultimate tensile strength, and is the strain at failure. The modulus is the maximum amount of energy per unit volume a material can absorb without fracturing. From the equation for modulus, for large toughness, high strength and high ductility are needed. These two properties are usually mutually exclusive. Brittle materials have low toughness because low plastic deformation decreases the strain (low ductility). Ways to measure toughness include: Page impact machine and Charpy impact test. Resilience Resilience is a measure of the elastic energy absorbed of a material under stress. In other words, the external work performed on a material during deformation. The area under the elastic portion of the stress-strain curve is the strain energy absorbed per unit volume. The resilience modulus is defined as: where is the yield strength of the material and E is the elastic modulus of the material. To increase resilience, one needs increased elastic yield strength and decreased modulus of elasticity.
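The symbols in the relations referred to above did not survive extraction. The standard textbook forms are restated below in LaTeX, with the symbols defined in the comment; the trapezoidal approximation of toughness is one common variant, not necessarily the exact expression used in the original.

```latex
% Symbols: \sigma stress, \varepsilon strain, E elastic modulus, \sigma_y yield strength,
% \sigma_{UTS} ultimate tensile strength, \varepsilon_f strain at failure.
\begin{align*}
  \text{Hooke's law (elastic regime):}\quad & \sigma = E\,\varepsilon \\
  \text{Modulus of toughness:}\quad & U_T = \int_0^{\varepsilon_f} \sigma \, d\varepsilon
      \;\approx\; \frac{\sigma_y + \sigma_{UTS}}{2}\,\varepsilon_f \\
  \text{Modulus of resilience:}\quad & U_R = \frac{\sigma_y^2}{2E}
\end{align*}
```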
Physical sciences
Structural geology
Earth science
27661
https://en.wikipedia.org/wiki/Source%20code
Source code
In computing, source code, or simply code or source, is a plain text computer program written in a programming language. A programmer writes the human readable source code to control the behavior of a computer. Since a computer, at base, only understands machine code, source code must be translated before a computer can execute it. The translation process can be implemented three ways. Source code can be converted into machine code by a compiler or an assembler. The resulting executable is machine code ready for the computer. Alternatively, source code can be executed without conversion via an interpreter. An interpreter loads the source code into memory. It simultaneously translates and executes each statement. A method that combines compilation and interpretation is to first produce bytecode. Bytecode is an intermediate representation of source code that is quickly interpreted. Background The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language (simple instructions that could be directly executed by the processor). Machine language was difficult to debug and was not portable between different computer systems. Initially, hardware resources were scarce and expensive, while human resources were cheaper. As programs grew more complex, programmer productivity became a bottleneck. This led to the introduction of high-level programming languages such as Fortran in the mid-1950s. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. As instructions distinct from the underlying computer hardware, software is therefore relatively recent, dating to these early high-level programming languages such as Fortran, Lisp, and Cobol. The invention of high-level programming languages was simultaneous with the compilers needed to translate the source code automatically into machine code that can be directly executed on the computer hardware. Source code is the form of code that is modified directly by humans, typically in a high-level programming language. Object code can be directly executed by the machine and is generated automatically from the source code, often via an intermediate step, assembly language. While object code will only work on a specific platform, source code can be ported to a different machine and recompiled there. For the same source code, object code can vary significantly—not only based on the machine for which it is compiled, but also based on performance optimization from the compiler. Organization Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Software developers often use configuration management to track changes to source code files (version control). The configuration management system also keeps track of which object code file corresponds to which version of the source code file. Purposes Estimation The number of lines of source code is often used as a metric when evaluating the productivity of computer programmers, the economic value of a code base, effort estimation for projects in development, and the ongoing cost of software maintenance after release. Communication Source code is also used to communicate algorithms between people e.g., code snippets online or in books. 
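As an illustration of the bytecode route described above, Python's own compile() built-in and dis module show a source string being compiled to bytecode and then executed; the example function is arbitrary and chosen only for demonstration.

```python
import dis

# A tiny piece of source code as a string.
source = "def area(r):\n    return 3.14159 * r * r\n"

# Compiling produces a code object (bytecode), the intermediate form mentioned above.
code_obj = compile(source, filename="<example>", mode="exec")

# The dis module disassembles the bytecode into readable instructions.
dis.dis(code_obj)

# The same compiled object can then be handed to the interpreter for execution.
namespace = {}
exec(code_obj, namespace)
print(namespace["area"](2.0))   # 12.56636
```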
Computer programmers may find it helpful to review existing source code to learn about programming techniques. The sharing of source code between developers is frequently cited as a contributing factor to the maturation of their programming skills. Some people consider source code an expressive artistic medium. Source code often contains comments—blocks of text marked for the compiler to ignore. This content is not part of the program logic, but is instead intended to help readers understand the program. Companies often keep the source code confidential in order to hide algorithms considered a trade secret. Proprietary, secret source code and algorithms are widely used for sensitive government applications such as criminal justice, which results in black-box behavior with a lack of transparency into the algorithm's methodology. The result is avoidance of public scrutiny of issues such as bias. Modification Access to the source code (not just the object code) is essential to modifying it. Before modifying code, a developer must first understand how the existing code works. The rate of understanding depends on both the code base and the skill of the programmer. Experienced programmers have an easier time understanding what the code does at a high level. Software visualization is sometimes used to speed up this process. Many software programmers use an integrated development environment (IDE) to improve their productivity. IDEs typically have several features built in, including a source-code editor that can alert the programmer to common errors. Modification often includes code refactoring (improving the structure without changing functionality) and restructuring (improving structure and functionality at the same time). Nearly every change to code will introduce new bugs or unexpected ripple effects, which require another round of fixes. Code reviews by other developers are often used to scrutinize new code added to a project. The purpose of this phase is often to verify that the code meets style and maintainability standards and that it is a correct implementation of the software design. According to some estimates, code review dramatically reduces the number of bugs persisting after software testing is complete. Along with software testing, which works by executing the code, static program analysis uses automated tools to detect problems in the source code without running it. Many IDEs support code analysis tools, which might provide metrics on the clarity and maintainability of the code. Debuggers are tools that enable programmers to step through execution while keeping track of which source code corresponds to each change of state. Compilation and execution Source code files in a high-level programming language must be translated into machine code before the instructions can be carried out. After being compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter translates and executes the program at run time, which typically makes interpreted programs 10 to 100 times slower than compiled ones. Quality Software quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification.
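To make the compilation-and-execution discussion above concrete, here is a minimal sketch in Python; the snippet being compiled is an illustrative assumption, and CPython's bytecode is only one example of an intermediate representation.

```python
# Minimal sketch: source -> bytecode -> interpretation, using only the
# standard library. compile() produces a code object containing CPython
# bytecode, dis.dis() lists its instructions, and exec() asks the
# interpreter to run it.
import dis

source = "total = sum(n * n for n in range(5))\nprint(total)"

code_object = compile(source, filename="<example>", mode="exec")
dis.dis(code_object)   # show the intermediate representation
exec(code_object)      # interpret the bytecode; prints 30
```

A compiler for a language such as C would instead emit machine code for a specific processor, which is why the resulting executable is not portable in the way the source code is.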
It is usually more cost-effective to build quality into the product from the beginning than to try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Maintainability is the quality of software enabling it to be easily modified without breaking existing functionality. Following coding conventions, such as using clear function and variable names that correspond to their purpose, makes maintenance easier, as the sketch below illustrates. Using loop statements only where the code could execute more than once, and eliminating code that will never execute, can also increase understandability. Many software development organizations neglect maintainability during the development phase, even though doing so will increase long-term costs. Technical debt is incurred when programmers, often out of laziness or urgency to meet a deadline, choose quick and dirty solutions rather than build maintainability into their code. A common cause is underestimation of software development effort, leading to insufficient resources being allocated to development. A challenge with maintainability is that many software engineering courses do not emphasize it. Development engineers who know that they will not be responsible for maintaining the software have little incentive to build in maintainability. Copyright and licensing The situation varies worldwide, but in the United States before 1974, software and its source code were not copyrightable and therefore were always public domain software. In 1974, the US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright". Proprietary software is rarely distributed as source code. Although the term open-source software literally refers to public access to the source code, open-source software has additional requirements: free redistribution, permission to modify the source code and release derivative works under the same license, and nondiscrimination between different uses—including commercial use. The free reusability of open-source software can speed up development.
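Returning to the maintainability point above, the following is a small, hypothetical Python sketch: both functions produce the same result, but the descriptive names in the second make its purpose obvious to a maintainer.

```python
# Hypothetical example: identical behavior, different maintainability.
def f(a, b):
    return a + a * b

def price_with_tax(net_price: float, tax_rate: float) -> float:
    return net_price + net_price * tax_rate

# Both return 120.0 for a 100.0 price and a 20% tax rate.
assert f(100.0, 0.2) == price_with_tax(100.0, 0.2) == 120.0
```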
Technology
Software development: General
null
27667
https://en.wikipedia.org/wiki/Space
Space
Space is a three-dimensional continuum containing positions and directions. In classical physics, physical space is often conceived in three linear dimensions. Modern physicists usually consider it, with time, to be part of a boundless four-dimensional continuum known as spacetime. The concept of space is considered to be of fundamental importance to an understanding of the physical universe. However, disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework. In the 19th and 20th centuries mathematicians began to examine geometries that are non-Euclidean, in which space is conceived as curved, rather than flat, as in the Euclidean space. According to Albert Einstein's theory of general relativity, space around gravitational fields deviates from Euclidean space. Experimental tests of general relativity have confirmed that non-Euclidean geometries provide a better model for the shape of space. Philosophy of space Debates concerning the nature, essence and the mode of existence of space date back to antiquity; namely, to treatises like the Timaeus of Plato, or Socrates in his reflections on what the Greeks called khôra (i.e. "space"), or in the Physics of Aristotle (Book IV, Delta) in the definition of topos (i.e. place), or in the later "geometrical conception of place" as "space qua extension" in the Discourse on Place (Qawl fi al-Makan) of the 11th-century Arab polymath Alhazen. Many of these classical philosophical questions were discussed in the Renaissance and then reformulated in the 17th century, particularly during the early development of classical mechanics. Isaac Newton viewed space as absolute, existing permanently and independently of whether there was any matter in it. In contrast, other natural philosophers, notably Gottfried Leibniz, thought that space was in fact a collection of relations between objects, given by their distance and direction from one another. In the 18th century, the philosopher and theologian George Berkeley attempted to refute the "visibility of spatial depth" in his Essay Towards a New Theory of Vision. Later, the metaphysician Immanuel Kant said that the concepts of space and time are not empirical ones derived from experiences of the outside world—they are elements of an already given systematic framework that humans possess and use to structure all experiences. Kant referred to the experience of "space" in his Critique of Pure Reason as being a subjective "pure a priori form of intuition". Galileo Galilean and Cartesian theories about space, matter, and motion are at the foundation of the Scientific Revolution, which is understood to have culminated with the publication of Newton's Principia Mathematica in 1687. Newton's theories about space and time helped him explain the movement of objects. While his theory of space is considered the most influential in physics, it emerged from his predecessors' ideas about the same. As one of the pioneers of modern science, Galileo revised the established Aristotelian and Ptolemaic ideas about a geocentric cosmos. He backed the Copernican theory that the universe was heliocentric, with a stationary Sun at the center and the planets—including the Earth—revolving around the Sun. If the Earth moved, the Aristotelian belief that its natural tendency was to remain at rest was in question. Galileo wanted to prove instead that the Sun moved around its axis, that motion was as natural to an object as the state of rest. 
In other words, for Galileo, celestial bodies, including the Earth, were naturally inclined to move in circles. This view displaced another Aristotelian idea—that all objects gravitated towards their designated natural place-of-belonging. René Descartes Descartes set out to replace the Aristotelian worldview with a theory about space and motion as determined by natural laws. In other words, he sought a metaphysical foundation or a mechanical explanation for his theories about matter and motion. Cartesian space was Euclidean in structure—infinite, uniform and flat. It was defined as that which contained matter; conversely, matter by definition had a spatial extension so that there was no such thing as empty space. The Cartesian notion of space is closely linked to his theories about the nature of the body, mind and matter. He is famously known for his "cogito ergo sum" (I think, therefore I am), or the idea that we can only be certain of the fact that we can doubt, and therefore think and therefore exist. His theories belong to the rationalist tradition, which attributes knowledge about the world to our ability to think rather than to our experiences, as the empiricists believe. He posited a clear distinction between the body and mind, which is referred to as the Cartesian dualism. Leibniz and Newton Following Galileo and Descartes, during the seventeenth century the philosophy of space and time revolved around the ideas of Gottfried Leibniz, a German philosopher–mathematician, and Isaac Newton, who set out two opposing theories of what space is. Rather than being an entity that independently exists over and above other matter, Leibniz held that space is no more than the collection of spatial relations between objects in the world: "space is that which results from places taken together". Unoccupied regions are those that could have objects in them, and thus spatial relations with other places. For Leibniz, then, space was an idealised abstraction from the relations between individual entities or their possible locations and therefore could not be continuous but must be discrete. Space could be thought of in a similar way to the relations between family members. Although people in the family are related to one another, the relations do not exist independently of the people. Leibniz argued that space could not exist independently of objects in the world because that implies a difference between two universes exactly alike except for the location of the material world in each universe. But since there would be no observational way of telling these universes apart, then, according to the identity of indiscernibles, there would be no real difference between them. According to the principle of sufficient reason, any theory of space that implied that there could be these two possible universes must therefore be wrong. Newton took space to be more than relations between material objects and based his position on observation and experimentation. For a relationist there can be no real difference between inertial motion, in which the object travels with constant velocity, and non-inertial motion, in which the velocity changes with time, since all spatial measurements are relative to other objects and their motions. But Newton argued that since non-inertial motion generates forces, it must be absolute. He used the example of water in a spinning bucket to demonstrate his argument. Water in a bucket that is hung from a rope and set to spin starts with a flat surface.
After a while, as the bucket continues to spin, the surface of the water becomes concave. If the bucket's spinning is stopped, the surface of the water remains concave as the water continues to spin. The concave surface is therefore apparently not the result of relative motion between the bucket and the water. Instead, Newton argued, it must be a result of non-inertial motion relative to space itself. For several centuries the bucket argument was considered decisive in showing that space must exist independently of matter. Kant In the eighteenth century the German philosopher Immanuel Kant published his theory of space as "a property of our mind" by which "we represent to ourselves objects as outside us, and all as in space" in the Critique of Pure Reason. On his view, spatial predicates are "relations that only attach to the form of intuition alone, and thus to the subjective constitution of our mind, without which these predicates could not be attached to anything at all." This develops his theory of knowledge in which knowledge about space itself can be both a priori and synthetic. According to Kant, knowledge about space is synthetic because any proposition about space cannot be true merely in virtue of the meaning of the terms contained in the proposition. By contrast, the proposition "all unmarried men are bachelors" is true merely by virtue of each term's meaning. Further, space is a priori because it is the form of our receptive abilities to receive information about the external world. For example, someone without sight can still perceive spatial attributes via touch, hearing, and smell. Knowledge of space itself is a priori because it belongs to the subjective constitution of our mind as the form or manner of our intuition of external objects. Non-Euclidean geometry Euclid's Elements contained five postulates that form the basis for Euclidean geometry. One of these, the parallel postulate, has been the subject of debate among mathematicians for many centuries. It states that on any plane on which there is a straight line L1 and a point P not on L1, there is exactly one straight line L2 on the plane that passes through the point P and is parallel to the straight line L1. Until the 19th century, few doubted the truth of the postulate; instead debate centered over whether it was necessary as an axiom, or whether it was a theory that could be derived from the other axioms. Around 1830, though, the Hungarian János Bolyai and the Russian Nikolai Ivanovich Lobachevsky separately published treatises on a type of geometry that does not include the parallel postulate, called hyperbolic geometry. In this geometry, an infinite number of parallel lines pass through the point P. Consequently, the sum of angles in a triangle is less than 180° and the ratio of a circle's circumference to its diameter is greater than pi. In the 1850s, Bernhard Riemann developed an equivalent theory of elliptical geometry, in which no parallel lines pass through P. In this geometry, the angles of a triangle sum to more than 180° and circles have a ratio of circumference to diameter that is less than pi. Gauss and Poincaré Although there was a prevailing Kantian consensus at the time, once non-Euclidean geometries had been formalised, some began to wonder whether or not physical space is curved. Carl Friedrich Gauss, a German mathematician, was the first to consider an empirical investigation of the geometrical structure of space.
He thought of making a test of the sum of the angles of an enormous stellar triangle, and there are reports that he actually carried out a test, on a small scale, by triangulating mountain tops in Germany. Henri Poincaré, a French mathematician and physicist of the late 19th century, introduced an important insight in which he attempted to demonstrate the futility of any attempt to discover which geometry applies to space by experiment. He considered the predicament that would face scientists if they were confined to the surface of an imaginary large sphere with particular properties, known as a sphere-world. In this world, the temperature is taken to vary in such a way that all objects expand and contract in similar proportions in different places on the sphere. With a suitable falloff in temperature, if the scientists try to use measuring rods to determine the sum of the angles in a triangle, they can be deceived into thinking that they inhabit a plane, rather than a spherical surface. In fact, the scientists cannot in principle determine whether they inhabit a plane or sphere and, Poincaré argued, the same is true for the debate over whether real space is Euclidean or not. For him, which geometry was used to describe space was a matter of convention. Since Euclidean geometry is simpler than non-Euclidean geometry, he assumed the former would always be used to describe the 'true' geometry of the world. Einstein In 1905, Albert Einstein published his special theory of relativity, which led to the concept that space and time can be viewed as a single construct known as spacetime. In this theory, the speed of light in vacuum is the same for all observers—which has the result that two events that appear simultaneous to one particular observer will not be simultaneous to another observer if the observers are moving with respect to one another. Moreover, an observer will measure a moving clock to tick more slowly than one that is stationary with respect to them; and objects are measured to be shortened in the direction that they are moving with respect to the observer. Subsequently, Einstein worked on a general theory of relativity, which is a theory of how gravity interacts with spacetime. Instead of viewing gravity as a force field acting in spacetime, Einstein suggested that it modifies the geometric structure of spacetime itself. According to the general theory, time goes more slowly at places with lower gravitational potentials and rays of light bend in the presence of a gravitational field. Scientists have studied the behaviour of binary pulsars, confirming the predictions of Einstein's theories. Non-Euclidean geometry is usually used to describe spacetime. Mathematics In modern mathematics spaces are defined as sets with some added structure. They are typically topological spaces, in which a concept of neighbourhood is defined, frequently by means of a distance (metric spaces). The elements of a space are often called points, but they can have other names such as vectors in vector spaces and functions in function spaces. Physics Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because nothing more fundamental is known at the present. On the other hand, it can be related to other fundamental quantities. Thus, similar to other fundamental quantities (like time and mass), space can be explored via measurement and experiment. 
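As a small numerical illustration of the kind of measurement Gauss contemplated, the following Python sketch uses Girard's theorem for a sphere: the angles of a geodesic triangle exceed 180° by the triangle's area divided by the squared radius, whereas in flat Euclidean space the excess is exactly zero. The specific figures are illustrative assumptions.

```python
# Illustrative sketch: on a sphere of radius R, a geodesic triangle's angle
# sum exceeds 180 degrees by area / R^2 (Girard's theorem). A measurement
# like the one Gauss proposed would test whether such an excess exists.
import math

def angle_sum_degrees(area: float, radius: float) -> float:
    excess = area / radius**2          # spherical excess in radians
    return math.degrees(math.pi + excess)

# Hypothetical example: a triangle covering 1,000,000 km^2 on a sphere the
# size of the Earth (R ~ 6371 km) -- the excess is only about 1.4 degrees.
print(angle_sum_degrees(area=1.0e6, radius=6371.0))
```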
Today, our three-dimensional space is viewed as embedded in a four-dimensional spacetime, called Minkowski space (see special relativity). The idea behind spacetime is that time is hyperbolic-orthogonal to each of the three spatial dimensions. Relativity Before Albert Einstein's work on relativistic physics, time and space were viewed as independent dimensions. Einstein's discoveries showed that due to relativity of motion our space and time can be mathematically combined into one object: spacetime. It turns out that distances in space or in time separately are not invariant with respect to Lorentz coordinate transformations, but distances in Minkowski space along spacetime intervals are—which justifies the name. In addition, time and space dimensions should not be viewed as exactly equivalent in Minkowski space. One can freely move in space but not in time. Thus, time and space coordinates are treated differently both in special relativity (where time is sometimes considered an imaginary coordinate) and in general relativity (where different signs are assigned to the time and space components of the spacetime metric). Furthermore, in Einstein's general theory of relativity, it is postulated that spacetime is geometrically distorted – curved – near to gravitationally significant masses. One consequence of this postulate, which follows from the equations of general relativity, is the prediction of moving ripples of spacetime, called gravitational waves. While indirect evidence for these waves has been found (in the motions of the Hulse–Taylor binary system, for example), experiments attempting to directly measure these waves are ongoing at the LIGO and Virgo collaborations. LIGO scientists reported the first such direct observation of gravitational waves on 14 September 2015. Cosmology Relativity theory leads to the cosmological question of what shape the universe is, and where space came from. It appears that space was created in the Big Bang, 13.8 billion years ago, and has been expanding ever since. The overall shape of space is not known, but space is known to be expanding very rapidly due to cosmic inflation. Spatial measurement The measurement of physical space has long been important. Although earlier societies had developed measuring systems, the International System of Units (SI) is now the most common system of units used in the measurement of space, and is almost universally used. Currently, the standard space interval, called a standard meter or simply meter, is defined as the distance traveled by light in vacuum during a time interval of exactly 1/299,792,458 of a second. This definition, coupled with the present definition of the second, is based on the special theory of relativity in which the speed of light plays the role of a fundamental constant of nature. Geographical space Geography is the branch of science concerned with identifying and describing places on Earth, utilizing spatial awareness to try to understand why things exist in specific locations. Cartography is the mapping of spaces to allow better navigation, for visualization purposes and to act as a locational device. Geostatistics applies statistical concepts to collected spatial data of Earth to create an estimate for unobserved phenomena. Geographical space is often considered as land, and can have a relation to ownership usage (in which space is seen as property or territory).
While some cultures assert the rights of the individual in terms of ownership, other cultures will identify with a communal approach to land ownership, while still other cultures such as Australian Aboriginals, rather than asserting ownership rights to land, invert the relationship and consider that they are in fact owned by the land. Spatial planning is a method of regulating the use of space at land-level, with decisions made at regional, national and international levels. Space can also impact on human and cultural behavior, being an important factor in architecture, where it will impact on the design of buildings and structures, and on farming. Ownership of space is not restricted to land. Ownership of airspace and of waters is decided internationally. Other forms of ownership have been recently asserted to other spaces—for example to the radio bands of the electromagnetic spectrum or to cyberspace. Public space is a term used to define areas of land as collectively owned by the community, and managed in their name by delegated bodies; such spaces are open to all, while private property is the land culturally owned by an individual or company, for their own use and pleasure. Abstract space is a term used in geography to refer to a hypothetical space characterized by complete homogeneity. When modeling activity or behavior, it is a conceptual tool used to limit extraneous variables such as terrain. In psychology Psychologists first began to study the way space is perceived in the middle of the 19th century. Those now concerned with such studies regard it as a distinct branch of psychology. Psychologists analyzing the perception of space are concerned with how recognition of an object's physical appearance or its interactions are perceived, see, for example, visual space. Other, more specialized topics studied include amodal perception and object permanence. The perception of surroundings is important due to its necessary relevance to survival, especially with regards to hunting and self preservation as well as simply one's idea of personal space. Several space-related phobias have been identified, including agoraphobia (the fear of open spaces), astrophobia (the fear of celestial space) and claustrophobia (the fear of enclosed spaces). The understanding of three-dimensional space in humans is thought to be learned during infancy using unconscious inference, and is closely related to hand-eye coordination. The visual ability to perceive the world in three dimensions is called depth perception. In the social sciences Space has been studied in the social sciences from the perspectives of Marxism, feminism, postmodernism, postcolonialism, urban theory and critical geography. These theories account for the effect of the history of colonialism, transatlantic slavery and globalization on our understanding and experience of space and place. The topic has garnered attention since the 1980s, after the publication of Henri Lefebvre's The Production of Space . In this book, Lefebvre applies Marxist ideas about the production of commodities and accumulation of capital to discuss space as a social product. His focus is on the multiple and overlapping social processes that produce space. In his book The Condition of Postmodernity, David Harvey describes what he terms the "time-space compression." This is the effect of technological advances and capitalism on our perception of time, space and distance. 
Changes in the modes of production and consumption of capital affect and are affected by developments in transportation and technology. These advances create relationships across time and space, new markets and groups of wealthy elites in urban centers, all of which annihilate distances and affect our perception of linearity and distance. In his book Thirdspace, Edward Soja describes space and spatiality as an integral and neglected aspect of what he calls the "trialectics of being," the three modes that determine how we inhabit, experience and understand the world. He argues that critical theories in the Humanities and Social Sciences study the historical and social dimensions of our lived experience, neglecting the spatial dimension. He builds on Henri Lefebvre's work to address the dualistic way in which humans understand space—as either material/physical or as represented/imagined. Lefebvre's "lived space" and Soja's "thirdspace" are terms that account for the complex ways in which humans understand and navigate place, which "firstspace" and "Secondspace" (Soja's terms for material and imagined spaces respectively) do not fully encompass. Postcolonial theorist Homi Bhabha's concept of Third Space is different from Soja's Thirdspace, even though both terms offer a way to think outside the terms of a binary logic. Bhabha's Third Space is the space in which hybrid cultural forms and identities exist. In his theories, the term hybrid describes new cultural forms that emerge through the interaction between colonizer and colonized.
Physical sciences
Physics
null
27672
https://en.wikipedia.org/wiki/Sailing
Sailing
Sailing employs the wind—acting on sails, wingsails or kites—to propel a craft on the surface of the water (sailing ship, sailboat, raft, windsurfer, or kitesurfer), on ice (iceboat) or on land (land yacht) over a chosen course, which is often part of a larger plan of navigation. From prehistory until the second half of the 19th century, sailing craft were the primary means of maritime trade and transportation; exploration across the seas and oceans was reliant on sail for anything other than the shortest distances. Naval power in this period used sail to varying degrees depending on the current technology, culminating in the gun-armed sailing warships of the Age of Sail. Sail was slowly replaced by steam as the method of propulsion for ships over the latter part of the 19th century, as the technology of steam gradually improved through a number of developmental steps. Steam allowed scheduled services that ran at higher average speeds than sailing vessels. Large improvements in fuel economy allowed steam to progressively outcompete sail in, ultimately, all commercial situations, giving ship-owning investors a better return on capital. In the 21st century, most sailing represents a form of recreation or sport. Recreational sailing or yachting can be divided into racing and cruising. Cruising can include extended offshore and ocean-crossing trips, coastal sailing within sight of land, and daysailing. Sailing relies on the physics of sails as they derive power from the wind, generating both lift and drag. On a given course, the sails are set to an angle that optimizes the development of wind power, as determined by the apparent wind, which is the wind as sensed from a moving vessel. The forces transmitted via the sails are resisted by forces from the hull, keel, and rudder of a sailing craft, by forces from skate runners of an iceboat, or by forces from the wheels of a land sailing craft, which also serve to steer the course. This combination of forces means that it is possible to sail an upwind course as well as downwind. The course with respect to the true wind direction (as would be indicated by a stationary flag) is called a point of sail. Conventional sailing craft cannot derive wind power on a course with a point of sail that is too close to the wind. History Throughout history, sailing was a key form of propulsion that allowed for greater mobility than travel over land. This greater mobility increased capacity for exploration, trade, transport, warfare, and fishing, especially when compared to overland options. Until the significant improvements in land transportation that occurred during the 19th century, if water transport was an option, it was faster, cheaper and safer than making the same journey by land. This applied equally to sea crossings, coastal voyages and use of rivers and lakes. Examples of the consequences of this include the large grain trade in the Mediterranean during the classical period. Cities such as Rome were totally reliant on the delivery by sailing ships of the large amounts of grain needed. It has been estimated that it cost less for a sailing ship of the Roman Empire to carry grain the length of the Mediterranean than to move the same amount 15 miles by road. Rome consumed about 150,000 tons of Egyptian grain each year over the first three centuries AD. A similar but more recent trade, in coal, ran from the mines situated close to the River Tyne to London; it was already being carried out in the 14th century and grew as the city increased in size.
In 1795, 4,395 cargoes of coal were delivered to London. This would have needed a fleet of about 500 sailing colliers (making 8 or 9 trips a year). This quantity had doubled by 1839. (The first steam-powered collier was not launched until 1852 and sailing colliers continued working into the 20th century.) Exploration and research The earliest image suggesting the use of sail on a boat may be on a piece of pottery from Mesopotamia, dated to the 6th millennium BCE. The image is thought to show a bipod mast mounted on the hull of a reed boat – no sail is depicted. The earliest representation of a sail, from Egypt, is dated to circa 3100 BCE. The Nile is considered a suitable place for early use of sail for propulsion. This is because the river's current flows from south to north, whilst the prevailing wind direction is north to south. Therefore, a boat of that time could use the current to go north – an unobstructed trip of 750 miles – and sail to make the return trip. Evidence of early sailors has also been found in other locations, such as Kuwait, Turkey, Syria, Minoa, Bahrain, and India. Austronesian peoples used sails from some time before 2000 BCE. Their expansion from what is now Southern China and Taiwan started around 3000 BCE. Their technology came to include outriggers, catamarans, and crab claw sails, which enabled the Austronesian Expansion at around 3000 to 1500 BCE into the islands of Maritime Southeast Asia, and thence to Micronesia, Island Melanesia, Polynesia, and Madagascar. Since there is no commonality between the boat technology of China and the Austronesians, these distinctive characteristics must have been developed at or some time after the beginning of the expansion. They traveled vast distances of open ocean in outrigger canoes using navigation methods such as stick charts. The windward sailing capability of Austronesian boats allowed a strategy of sailing to windward on a voyage of exploration, with a return downwind either to report a discovery or if no land was found. This was well suited to the prevailing winds as Pacific islands were steadily colonized. By the time of the Age of Discovery—starting in the 15th century—square-rigged, multi-masted vessels were the norm and were guided by navigation techniques that included the magnetic compass and making sightings of the sun and stars, which allowed transoceanic voyages. During the Age of Discovery, sailing ships figured in European voyages around Africa to China and Japan; and across the Atlantic Ocean to North and South America. Later, sailing ships ventured into the Arctic to explore northern sea routes and assess natural resources. In the 18th and 19th centuries sailing vessels made hydrographic surveys to develop charts for navigation and, at times, carried scientists aboard as with the voyages of James Cook and the second voyage of HMS Beagle with naturalist Charles Darwin. Commerce In the early 1800s, fast blockade-running schooners and brigantines—Baltimore Clippers—evolved into three-masted, typically ship-rigged sailing vessels with fine lines that enhanced speed but lessened capacity, used for high-value cargo like tea from China. Masts were exceptionally tall and these ships achieved high speeds, allowing for long daily runs. Clippers yielded to bulkier, slower vessels, which became economically competitive in the mid-19th century. Sail plans with just fore-and-aft sails (schooners), or a mixture of square and fore-and-aft sails (brigantines, barques and barquentines), emerged.
Coastal top-sail schooners with a crew as small as two managing the sail handling became an efficient way to carry bulk cargo, since only the fore-sails required tending while tacking and steam-driven machinery was often available for raising the sails and the anchor. Iron-hulled sailing ships represented the final evolution of sailing ships at the end of the Age of Sail. They were built to carry bulk cargo for long distances in the nineteenth and early twentieth centuries. They were the largest of merchant sailing ships, with three to five masts and square sails, as well as other sail plans. They carried bulk cargoes between continents. Iron-hulled sailing ships were mainly built from the 1870s to 1900, when steamships began to outpace them economically because of their ability to keep a schedule regardless of the wind. Steel hulls also replaced iron hulls at around the same time. Even into the twentieth century, sailing ships could hold their own on transoceanic voyages such as Australia to Europe, since they did not require bunkerage for coal nor fresh water for steam, and they were faster than the early steamers, which usually could manage only a modest speed. Ultimately, the steamships' independence from the wind and their ability to take shorter routes, passing through the Suez and Panama Canals, made sailing ships uneconomical. Naval power Until the general adoption of carvel-built ships, which relied on an internal skeleton structure to bear the weight of the ship and allowed gun ports to be cut in the side, sailing ships were just vehicles for delivering fighters to the enemy for engagement. Early Phoenician, Greek, and Roman galleys would ram each other, then their crews would pour onto the decks of the opposing force and continue the fight hand to hand, meaning that these galleys required speed and maneuverability. This need for speed translated into longer ships with multiple rows of oars along the sides, known as biremes and triremes. Typically, the sailing ships of this period were merchant ships. By 1500, gun ports allowed sailing vessels to sail alongside an enemy vessel and fire a broadside of multiple cannon. This development allowed for naval fleets to array themselves into a line of battle, whereby warships would maintain their place in the line to engage the enemy in a parallel or perpendicular line. Modern applications While the use of sailing vessels for commerce or naval power has been supplanted by engine-driven vessels, there continue to be commercial operations that take passengers on sailing cruises. Modern navies also employ sailing vessels to train cadets in seamanship. Recreation or sport accounts for the bulk of sailing in modern boats. Recreation Recreational sailing can be divided into two categories: day-sailing, where one gets off the boat for the night, and cruising, where one stays aboard. Day-sailing primarily affords experiencing the pleasure of sailing a boat. No destination is required. It is an opportunity to share the experience with others. A variety of boats with no overnight accommodations, across a wide range of sizes, may be regarded as day sailers. Cruising on a sailing yacht may be either near-shore or passage-making out of sight of land and entails the use of sailboats that support sustained overnight use. Coastal cruising grounds include areas of the Mediterranean and Black Seas, Northern Europe, Western Europe and islands of the North Atlantic, West Africa and the islands of the South Atlantic, the Caribbean, and regions of North and Central America.
Passage-making under sail occurs on routes through oceans all over the world. Circular routes exist between the Americas and Europe, and between South Africa and South America. There are many routes from the Americas, Australia, New Zealand, and Asia to island destinations in the South Pacific. Some cruisers circumnavigate the globe. Sport Sailing as a sport is organized on a hierarchical basis, starting at the yacht club level and reaching up into national and international federations; it may entail racing yachts, sailing dinghies, or other small, open sailing craft, including iceboats and land yachts. Sailboat racing is governed by World Sailing, with most racing formats using the Racing Rules of Sailing. It entails a variety of different disciplines, including: Oceanic racing, held over long distances and in open water, which often lasts multiple days and includes world circumnavigations, such as the Vendée Globe and The Ocean Race. Fleet racing, featuring multiple boats in a regatta that comprises multiple races or heats. Match racing, in which two boats compete against each other, as in the America's Cup, vying to cross the finish line first. Team racing, between two teams of three boats each, in a format analogous to match racing. Speed sailing, to set new records for different categories of craft, with oversight by the World Sailing Speed Record Council. Sail boarding has a variety of disciplines particular to that sport. Navigation Point of sail A sailing craft's ability to derive power from the wind depends on the point of sail it is on—the direction of travel under sail in relation to the true wind direction over the surface. The principal points of sail roughly correspond to 45° segments of a circle, starting with 0° directly into the wind. For many sailing craft, the arc spanning 45° on either side of the wind is a "no-go" zone, where a sail is unable to mobilize power from the wind. Sailing on a course as close to the wind as possible—approximately 45°—is termed "close-hauled". At 90° off the wind, a craft is on a "beam reach". At 135° off the wind, a craft is on a "broad reach". At 180° off the wind (sailing in the same direction as the wind), a craft is "running downwind". In points of sail that range from close-hauled to a broad reach, sails act substantially like a wing, with lift predominantly propelling the craft. In points of sail from a broad reach to downwind, sails act substantially like a parachute, with drag predominantly propelling the craft. For craft with little forward resistance, such as ice boats and land yachts, this transition occurs further off the wind than for sailboats and sailing ships. Wind direction for points of sail always refers to the true wind—the wind felt by a stationary observer. The apparent wind—the wind felt by an observer on a moving sailing craft—determines the motive power for sailing craft. Effect on apparent wind True wind velocity (VT) combines with the sailing craft's velocity (VB) to give the apparent wind velocity (VA), the air velocity experienced by instrumentation or crew on a moving sailing craft. Apparent wind velocity provides the motive power for the sails on any given point of sail. It varies from being the true wind velocity of a stopped craft in irons in the no-go zone, to being faster than the true wind speed as the sailing craft's velocity adds to the true wind speed on a reach. It diminishes towards zero for a craft sailing dead downwind.
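A small numerical sketch (Python; the wind and boat speeds are illustrative assumptions) of how true wind and boat velocity combine into apparent wind, as described above.

```python
# Illustrative sketch (assumed numbers): combine true wind and boat speed
# into apparent wind. Angles are measured off the bow, on the side the wind
# comes from; 0 deg = wind dead ahead, 180 deg = dead astern.
import math

def apparent_wind(true_speed: float, true_angle_deg: float, boat_speed: float):
    g = math.radians(true_angle_deg)
    speed = math.sqrt(true_speed**2 + boat_speed**2
                      + 2 * true_speed * boat_speed * math.cos(g))
    angle = math.degrees(math.atan2(true_speed * math.sin(g),
                                    true_speed * math.cos(g) + boat_speed))
    return speed, angle

# A boat making 6 knots on a beam reach (true wind 10 knots at 90 deg)
# feels roughly 11.7 knots of apparent wind at about 59 deg off the bow.
print(apparent_wind(true_speed=10.0, true_angle_deg=90.0, boat_speed=6.0))
```

Setting the true wind angle to 180 degrees reproduces the statement above that the apparent wind fades toward zero as a craft running dead downwind approaches the true wind speed.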
The speed of sailboats through the water is limited by the resistance that results from hull drag in the water. Ice boats typically have the least resistance to forward motion of any sailing craft. Consequently, a sailboat experiences a wider range of apparent wind angles than does an ice boat, whose speed is typically great enough to have the apparent wind coming from a few degrees to one side of its course, necessitating sailing with the sail sheeted in for most points of sail. On conventional sailboats, the sails are set to create lift for those points of sail where it's possible to align the leading edge of the sail with the apparent wind. For a sailboat, point of sail affects lateral force significantly. The higher the boat points to the wind under sail, the stronger the lateral force, which requires resistance from a keel or other underwater foils, including daggerboard, centerboard, skeg and rudder. Lateral force also induces heeling in a sailboat, which requires resistance by weight of ballast from the crew or the boat itself and by the shape of the boat, especially with a catamaran. As the boat points off the wind, lateral force and the forces required to resist it become less important. On ice boats, lateral forces are countered by the lateral resistance of the blades on ice and their distance apart, which generally prevents heeling. Course under sail Wind and currents are important factors to plan on for both offshore and inshore sailing. Predicting the availability, strength and direction of the wind is key to using its power along the desired course. Ocean currents, tides and river currents may deflect a sailing vessel from its desired course. If the desired course is within the no-go zone, then the sailing craft must follow a zig-zag route into the wind to reach its waypoint or destination. Downwind, certain high-performance sailing craft can reach the destination more quickly by following a zig-zag route on a series of broad reaches. Negotiating obstructions or a channel may also require a change of direction with respect to the wind, necessitating changing of tack with the wind on the opposite side of the craft, from before. Changing tack is called tacking when the wind crosses over the bow of the craft as it turns and jibing (or gybing) if the wind passes over the stern. Upwind A sailing craft can sail on a course anywhere outside of its no-go zone. If the next waypoint or destination is within the arc defined by the no-go zone from the craft's current position, then it must perform a series of tacking maneuvers to get there on a zigzag route, called beating to windward. The progress along that route is called the course made good; the speed between the starting and ending points of the route is called the speed made good and is calculated by the distance between the two points, divided by the travel time. The limiting line to the waypoint that allows the sailing vessel to leave it to leeward is called the layline. Whereas some Bermuda-rigged sailing yachts can sail as close as 30° to the wind, most 20th-Century square riggers are limited to 60° off the wind. Fore-and-aft rigs are designed to operate with the wind on either side, whereas square rigs and kites are designed to have the wind come from one side of the sail only. Because the lateral wind forces are highest when sailing close-hauled, the resisting water forces around the vessel's keel, centerboard, rudder and other foils must also be highest in order to limit sideways motion or leeway. 
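Returning to the beating-to-windward arithmetic above, the following short sketch uses assumed values for boat speed, tacking angle and distance to show how the zig-zag route relates to course and speed made good.

```python
# Illustrative sketch (assumed values): beating to a waypoint dead upwind.
# Sailing at 45 degrees to the direct line, the distance actually sailed is
# about 1.41x the straight-line distance, and the speed made good equals the
# straight-line distance divided by the travel time.
import math

boat_speed_kn = 6.0          # speed through the water, knots (assumption)
tack_angle_deg = 45.0        # course relative to the direct upwind line
direct_distance_nm = 10.0    # straight-line distance to the waypoint

distance_sailed = direct_distance_nm / math.cos(math.radians(tack_angle_deg))
speed_made_good = boat_speed_kn * math.cos(math.radians(tack_angle_deg))
time_hours = direct_distance_nm / speed_made_good

print(f"distance sailed: {distance_sailed:.1f} nm")   # ~14.1 nm
print(f"speed made good: {speed_made_good:.1f} kn")   # ~4.2 kn
print(f"time to waypoint: {time_hours:.1f} h")        # ~2.4 h
```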
Ice boats and land yachts minimize lateral motion with resistance from their blades or wheels. Changing tack by tacking Tacking or coming about is a maneuver by which a sailing craft turns its bow into and through the wind (referred to as "the eye of the wind") so that the apparent wind changes from one side to the other, allowing progress on the opposite tack. The type of sailing rig dictates the procedures and constraints on achieving a tacking maneuver. Fore-and-aft rigs allow their sails to hang limp as they tack; square rigs must present the full frontal area of the sail to the wind when changing from side to side; and windsurfers have flexibly pivoting and fully rotating masts that get flipped from side to side. Downwind A sailing craft can travel directly downwind only at a speed that is less than the wind speed. However, some sailing craft such as iceboats, sand yachts, and some high-performance sailboats can achieve a higher downwind velocity made good by traveling on a series of broad reaches, punctuated by jibes in between. This technique was explored by sailing vessels starting in 1975 and now extends to high-performance skiffs, catamarans and foiling sailboats. Navigating a channel or a downwind course among obstructions may necessitate changes in direction that require a change of tack, accomplished with a jibe. Changing tack by jibing Jibing or gybing is a sailing maneuver by which a sailing craft turns its stern past the eye of the wind so that the apparent wind changes from one side to the other, allowing progress on the opposite tack. On smaller boats, this maneuver can be performed by pulling the tiller towards the helmsman (the side opposite the sail). As with tacking, the type of sailing rig dictates the procedures and constraints for jibing. Fore-and-aft sails with booms, gaffs or sprits are unstable when the free end points into the eye of the wind and must be controlled to avoid a violent change to the other side; square rigs, as they present the full area of the sail to the wind from the rear, experience little change of operation from one tack to the other; and windsurfers again have flexibly pivoting and fully rotating masts that get flipped from side to side. Wind and currents Winds and oceanic currents are both the result of the sun powering their respective fluid media. Wind powers the sailing craft and the ocean bears the craft on its course, as currents may alter the course of a sailing vessel on the ocean or a river. Wind – On a global scale, vessels making long voyages must take atmospheric circulation into account, which causes zones of westerlies, easterlies, trade winds and high-pressure zones with light winds, sometimes called horse latitudes, in between. Sailors predict wind direction and strength with knowledge of high- and low-pressure areas, and the weather fronts that accompany them. Along coastal areas, sailors contend with diurnal changes in wind direction—flowing off the shore at night and onto the shore during the day. Local temporary wind shifts are called lifts when they improve the sailing craft's ability to travel along its rhumb line in the direction of the next waypoint. Unfavorable wind shifts are called headers. Currents – On a global scale, vessels making long voyages must take major ocean current circulation into account. Major oceanic currents, like the Gulf Stream in the Atlantic Ocean and the Kuroshio Current in the Pacific Ocean, require planning for the effect that they will have on a transiting vessel's track.
Likewise, tides affect a vessel's track, especially in areas with large tidal ranges, like the Bay of Fundy or along Southeast Alaska, or where the tide flows through straits, like Deception Pass in Puget Sound. Mariners use tide and current tables to inform their navigation. Before the advent of motors, it was advantageous for sailing vessels to enter or leave port or to pass through a strait with the tide. Trimming Trimming refers to adjusting the lines that control the sails, including the sheets that control the angle of the sails with respect to the wind and the halyards that raise and tighten the sail, and to adjusting the hull's resistance to heeling, yawing or progress through the water. Sails In their most developed version, square sails are controlled by two each of: sheets, braces, clewlines, and reef tackles, plus four buntlines, each of which may be controlled by a crew member as the sail is adjusted. Towards the end of the Age of Sail, steam-powered machinery reduced the number of crew required to trim sail. Adjustment of the angle of a fore-and-aft sail with respect to the apparent wind is controlled with a line, called a "sheet". On points of sail between close-hauled and a broad reach, the goal is typically to create flow along the sail to maximize power through lift. Streamers placed on the surface of the sail, called tell-tales, indicate whether that flow is smooth or turbulent. Smooth flow on both sides indicates proper trim. A jib and mainsail are typically configured to be adjusted to create a smooth laminar flow, leading from one to the other in what is called the "slot effect". On downwind points of sail, power is achieved primarily with the wind pushing on the sail, as indicated by drooping tell-tales. Spinnakers are lightweight, large-area, highly curved sails that are adapted to sailing off the wind. In addition to using the sheets to adjust the angle with respect to the apparent wind, other lines control the shape of the sail, notably the outhaul, halyard, boom vang and backstay. These control the curvature that is appropriate to the wind speed: the higher the wind, the flatter the sail. When the wind strength is greater than these adjustments can accommodate, the craft is kept from being overpowered by reducing sail area through reefing, by substituting a smaller sail, or by other means. Reducing sail Reducing sail on square-rigged ships could be accomplished by exposing less of each sail, by tying it off higher up with reefing points. Additionally, as winds get stronger, sails can be furled or removed from the spars entirely, until the vessel is surviving hurricane-force winds under "bare poles". On fore-and-aft rigged vessels, reducing sail may be accomplished by furling the jib and by reefing or partially lowering the mainsail, that is, reducing the area of a sail without actually changing it for a smaller sail. This results not only in a reduced sail area but also in a lower centre of effort from the sails, reducing the heeling moment and keeping the boat more upright. There are three common methods of reefing the mainsail: Slab reefing, which involves lowering the sail by about one-quarter to one-third of its full length and tightening the lower part of the sail using an outhaul or a pre-loaded reef line through a cringle at the new clew, and a hook through a cringle at the new tack. In-boom roller-reefing, with a horizontal foil inside the boom. This method allows for standard- or full-length horizontal battens. In-mast (or on-mast) roller-reefing.
This method rolls the sail up around a vertical foil either inside a slot in the mast, or affixed to the outside of the mast. It requires a mainsail with either no battens, or newly developed vertical battens. Hull Hull trim has three aspects, each tied to an axis of rotation. They control: heeling (rotation about the longitudinal axis, or leaning to either port or starboard); helm force (rotation about the vertical axis); and hull drag (rotation about the horizontal axis amidships). Each is a reaction to forces on sails and is achieved either by weight distribution or by management of the center of force of the underwater foils (keel, daggerboard, etc.), compared with the center of force on the sails. Heeling A sailing vessel heels when the boat leans over to the side in reaction to wind forces on the sails. A sailing vessel's form stability (derived from the shape of the hull and the position of the center of gravity) is the starting point for resisting heeling. Catamarans and iceboats have a wide stance that makes them resistant to heeling. Additional measures for trimming a sailing craft to control heeling include: ballast in the keel, which counteracts heeling as the boat rolls; shifting of weight, which might be crew on a trapeze or moveable ballast across the boat; reducing sail; and adjusting the depth of underwater foils to control their lateral resistance force and center of resistance. Helm force The alignment of the center of force of the sails with the center of resistance of the hull and its appendages controls whether the craft will track straight with little steering input, or whether correction needs to be made to hold it away from turning into the wind (a weather helm) or turning away from the wind (a lee helm). A center of force behind the center of resistance causes a weather helm. A center of force ahead of the center of resistance causes a lee helm. When the two are closely aligned, the helm is neutral and requires little input to maintain course. Hull drag Fore-and-aft weight distribution changes the cross-section of a vessel in the water. Small sailing craft are sensitive to crew placement. They are usually designed to have the crew stationed midships to minimize hull drag in the water. Other aspects of seamanship Seamanship encompasses all aspects of taking a sailing vessel in and out of port, navigating it to its destination, and securing it at anchor or alongside a dock. Important aspects of seamanship include employing a common language aboard a sailing craft and the management of lines that control the sails and rigging. Nautical terms Nautical terms for elements of a vessel: starboard (right-hand side), port or larboard (left-hand side), forward or fore (frontward), aft or abaft (rearward), bow (forward part of the hull), stern (aft part of the hull), beam (the widest part). Spars, supporting sails, include masts, booms, yards, gaffs and poles. Moveable lines that control sails or other equipment are known collectively as a vessel's running rigging. Lines that raise sails are called halyards while those that strike them are called downhauls. Lines that adjust (trim) the sails are called sheets. These are often referred to using the name of the sail they control (such as main sheet or jib sheet). Guys are used to control the ends of other spars such as spinnaker poles. Lines used to tie a boat up when alongside are called docklines, docking cables or mooring warps. A rode is what attaches an anchored boat to its anchor.
Other than starboard and port, the sides of the boat are defined by their relationship to the wind. The terms to describe the two sides are Windward and leeward. The windward side of the boat is the side that is upwind while the leeward side is the side that is downwind. Management of lines The following knots are commonly used to handle ropes and lines on sailing craft: Bowline – forms a loop at the end of a rope or line, useful for lassoing a piling. Cleat hitch – affixes a line to a cleat, used with docking lines. Clove hitch – two half hitches, used for tying onto a post or hanging a fender. Figure-eight – a stopper knot, prevents a line from sliding past the opening in a fitting. Rolling hitch – a friction hitch onto a line or a spar that pulls in one direction and slides in the other. Sheet bend – joins two rope ends, when improvising a longer line. Reef knot or square knot – used for reefing or storing a sail by tying two ends of a line together. Lines and halyards are typically coiled neatly for stowage and reuse. Sail physics The physics of sailing arises from a balance of forces between the wind powering the sailing craft as it passes over its sails and the resistance by the sailing craft against being blown off course, which is provided in the water by the keel, rudder, underwater foils and other elements of the underbody of a sailboat, on ice by the runners of an iceboat, or on land by the wheels of a sail-powered land vehicle. Forces on sails depend on wind speed and direction and the speed and direction of the craft. The speed of the craft at a given point of sail contributes to the "apparent wind"—the wind speed and direction as measured on the moving craft. The apparent wind on the sail creates a total aerodynamic force, which may be resolved into drag—the force component in the direction of the apparent wind—and lift—the force component normal (90°) to the apparent wind. Depending on the alignment of the sail with the apparent wind (angle of attack), lift or drag may be the predominant propulsive component. Depending on the angle of attack of a set of sails with respect to the apparent wind, each sail is providing motive force to the sailing craft either from lift-dominant attached flow or drag-dominant separated flow. Additionally, sails may interact with one another to create forces that are different from the sum of the individual contributions of each sail, when used alone. Apparent wind velocity The term "velocity" refers both to speed and direction. As applied to wind, apparent wind velocity (VA) is the air velocity acting upon the leading edge of the most forward sail or as experienced by instrumentation or crew on a moving sailing craft. In nautical terminology, wind speeds are normally expressed in knots and wind angles in degrees. All sailing craft reach a constant forward velocity (VB) for a given true wind velocity (VT) and point of sail. The craft's point of sail affects its velocity for a given true wind velocity. Conventional sailing craft cannot derive power from the wind in a "no-go" zone that is approximately 40° to 50° away from the true wind, depending on the craft. Likewise, the directly downwind speed of all conventional sailing craft is limited to the true wind speed. As a sailboat sails further from the wind, the apparent wind becomes smaller and the lateral component becomes less; boat speed is highest on the beam reach. To act like an airfoil, the sail on a sailboat is sheeted further out as the course is further off the wind. 
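As a numerical sketch of how lift and drag, defined above relative to the apparent wind, translate into the forward drive and the sideways force that the hull and keel must resist, the following Python example uses assumed force values and an assumed apparent wind angle purely for illustration.

```python
# Illustrative sketch (assumed numbers): resolve a sail's lift and drag,
# defined relative to the apparent wind, into forward driving force and
# sideways (heeling/leeway) force on the craft.
import math

def drive_and_side_force(lift: float, drag: float, apparent_angle_deg: float):
    b = math.radians(apparent_angle_deg)   # apparent wind angle off the bow
    drive = lift * math.sin(b) - drag * math.cos(b)
    side = lift * math.cos(b) + drag * math.sin(b)
    return drive, side

# Close-hauled example: apparent wind 25 deg off the bow, lift 2000 N,
# drag 400 N -> roughly 480 N of drive and about 1980 N of side force,
# which is why upwind sailing demands so much lateral resistance from the keel.
print(drive_and_side_force(lift=2000.0, drag=400.0, apparent_angle_deg=25.0))
```

The same decomposition shows why the side force shrinks as the apparent wind moves aft: the lift vector rotates increasingly toward the direction of travel.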
As an iceboat sails further from the wind, the apparent wind increases slightly and the boat speed is highest on the broad reach. In order to act like an airfoil, the sail on an iceboat is sheeted in for all three points of sail. Lift and drag on sails Lift on a sail, acting as an airfoil, occurs in a direction perpendicular to the incident airstream (the apparent wind velocity for the headsail) and is a result of pressure differences between the windward and leeward surfaces and depends on the angle of attack, sail shape, air density, and speed of the apparent wind. The lift force results from the average pressure on the windward surface of the sail being higher than the average pressure on the leeward side. These pressure differences arise in conjunction with the curved airflow. As air follows a curved path along the windward side of a sail, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. To generate lift, a sail must present an "angle of attack" between the chord line of the sail and the apparent wind velocity. The angle of attack is a function of both the craft's point of sail and how the sail is adjusted with respect to the apparent wind. As the lift generated by a sail increases, so does lift-induced drag, which together with parasitic drag constitute total drag, which acts in a direction parallel to the incident airstream. This occurs as the angle of attack increases with sail trim or change of course and causes the lift coefficient to increase up to the point of aerodynamic stall along with the lift-induced drag coefficient. At the onset of stall, lift is abruptly decreased, as is lift-induced drag. Sails with the apparent wind behind them (especially going downwind) operate in a stalled condition. Lift and drag are components of the total aerodynamic force on sail, which are resisted by forces in the water (for a boat) or on the traveled surface (for an iceboat or land sailing craft). Sails act in two basic modes; under the lift-predominant mode, the sail behaves in a manner analogous to a wing with airflow attached to both surfaces; under the drag-predominant mode, the sail acts in a manner analogous to a parachute with airflow in detached flow, eddying around the sail. Lift predominance (wing mode) Sails allow progress of a sailing craft to windward, thanks to their ability to generate lift (and the craft's ability to resist the lateral forces that result). Each sail configuration has a characteristic coefficient of lift and attendant coefficient of drag, which can be determined experimentally and calculated theoretically. Sailing craft orient their sails with a favorable angle of attack between the entry point of the sail and the apparent wind even as their course changes. The ability to generate lift is limited by sailing too close to the wind when no effective angle of attack is available to generate lift (causing luffing) and sailing sufficiently off the wind that the sail cannot be oriented at a favorable angle of attack to prevent the sail from stalling with flow separation. Drag predominance (parachute mode) When sailing craft are on a course where the angle between the sail and the apparent wind (the angle of attack) exceeds the point of maximum lift, separation of flow occurs. 
Drag increases and lift decreases with increasing angle of attack as the separation becomes progressively pronounced until the sail is perpendicular to the apparent wind, when lift becomes negligible and drag predominates. In addition to the sails used upwind, spinnakers provide area and curvature appropriate for sailing with separated flow on downwind points of sail, analogous to parachutes, which provide both lift and drag. Wind variation with height and time Wind speed increases with height above the surface; at the same time, wind speed may vary over short periods of time as gusts. Wind shear affects sailing craft in motion by presenting a different wind speed and direction at different heights along the mast. Wind shear occurs because friction above a water surface slows the flow of air. The ratio of wind at the surface to wind at a height above the surface varies by a power law with an exponent of 0.11–0.13 over the ocean. This means that the wind measured a few metres above the water is noticeably weaker than the wind higher up the mast, a difference that becomes larger still in hurricane-force winds. It also means that sails that reach higher above the surface can be subject to stronger wind forces that move the center of effort higher above the surface and increase the heeling moment. Additionally, apparent wind direction moves aft with height above water, which may necessitate a corresponding twist in the shape of the sail to achieve attached flow with height. Gusts may be predicted by the same value that serves as an exponent for wind shear, used as a gust factor; one can expect gusts to be about 1.5 times as strong as the prevailing wind speed (a 10-knot wind might gust up to 15 knots). This, combined with changes in wind direction, suggests the degree to which a sailing craft must adjust sail angle to wind gusts on a given course. Hull physics Waterborne sailing craft rely on the design of the hull and keel to provide minimal forward drag in opposition to the sails' propulsive power and maximum resistance to the sails' lateral forces. In modern sailboats, drag is minimized by control of the hull's shape (blunt or fine), appendages, and slipperiness. The keel or other underwater foils provide the lateral resistance to forces on the sails. Heeling increases drag and diminishes the boat's ability to track along its desired course. Wave generation for a displacement hull is another important limitation on boat speed. Drag Drag from the hull's form is described by a prismatic coefficient, Cp: the displaced volume of the vessel divided by the product of waterline length and maximum displaced cross-section area. The maximum value, Cp = 1.0, corresponds to a constant displaced cross-section area, as would be found on a barge. For modern sailboats, values of 0.53 ≤ Cp ≤ 0.6 are likely because of the tapered shape of the submerged hull towards both ends. Reducing interior volume allows a finer hull with less drag. Because a keel or other underwater foil produces lift, it also produces drag, which increases as the boat heels. The wetted area of the hull affects the total amount of friction between the water and the hull's surface, creating another component of drag. Lateral resistance Sailboats use some sort of underwater foil to generate lift that maintains the forward direction of the boat under sail. Whereas sails operate at angles of attack between 10° and 90° incident to the wind, underwater foils operate at angles of attack between 0° and 10° incident to the water passing by. 
Neither their angle of attack nor their surface area is adjustable (except for moveable foils), and they are never intentionally stalled while making way through the water. Heeling the vessel away from vertical significantly degrades the boat's ability to point into the wind. Hull speed and beyond Hull speed is the speed at which the wavelength of a vessel's bow wave is equal to its waterline length; it is proportional to the square root of the vessel's length at the waterline. Applying more power does not significantly increase the speed of a displacement vessel beyond hull speed, because the added power drives the vessel up an increasingly steep bow wave that does not propagate forward any faster. Planing and foiling vessels are not limited by hull speed, because applied power lifts them out of the water rather than building a larger bow wave. Long narrow hulls, such as those of catamarans, surpass hull speed by piercing through the bow wave. Hull speed does not apply to sailing craft on ice runners or wheels because they do not displace water.
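Because hull speed depends only on waterline length, it can be estimated directly. The sketch below is an illustration, not taken from the source; it uses the standard deep-water wave relation implied by the definition above, namely that the bow-wave wavelength equals the waterline length.

```python
import math

KNOTS_PER_MPS = 1 / 0.5144  # metres per second to knots

def hull_speed_knots(waterline_length_m):
    """Approximate displacement hull speed from waterline length.

    Derived from the deep-water wave speed for a wavelength equal to the
    waterline length: v = sqrt(g * L / (2 * pi)), converted to knots.
    """
    g = 9.81  # m/s^2
    v_mps = math.sqrt(g * waterline_length_m / (2 * math.pi))
    return v_mps * KNOTS_PER_MPS

# Illustrative: a 9 m waterline gives a hull speed of roughly 7.3 knots.
print(round(hull_speed_knots(9.0), 1))
```

This reproduces the square-root dependence on waterline length stated above; the familiar rule of thumb of about 1.34 times the square root of the waterline length in feet follows from the same relation.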
Technology
Maritime transport
null
27675
https://en.wikipedia.org/wiki/Simple%20Mail%20Transfer%20Protocol
Simple Mail Transfer Protocol
The Simple Mail Transfer Protocol (SMTP) is an Internet standard communication protocol for electronic mail transmission. Mail servers and other message transfer agents use SMTP to send and receive mail messages. User-level email clients typically use SMTP only for sending messages to a mail server for relaying, and typically submit outgoing email to the mail server on port 587 or 465. For retrieving messages, IMAP (which replaced the older POP3) is standard, but proprietary servers also often implement proprietary protocols, e.g., Exchange ActiveSync. SMTP's origins began in 1980, building on concepts implemented on the ARPANET since 1971. It has been updated, modified and extended multiple times. The protocol version in common use today has an extensible structure with various extensions for authentication, encryption, binary data transfer, and internationalized email addresses. SMTP servers commonly use the Transmission Control Protocol on port number 25 (between servers) and 587 (for submission from authenticated clients), both with or without encryption. History Predecessors to SMTP Various forms of one-to-one electronic messaging were used in the 1960s. Users communicated using systems developed for specific mainframe computers. As more computers were interconnected, especially in the U.S. Government's ARPANET, standards were developed to permit exchange of messages between different operating systems. Mail on the ARPANET traces its roots to 1971: the Mail Box Protocol, which was proposed but not implemented, and the SNDMSG program, which Ray Tomlinson of BBN adapted that year to send messages across two computers on the ARPANET. A further proposal for a Mail Protocol was made in RFC 524 in June 1973, which was not implemented. The use of the File Transfer Protocol (FTP) for "network mail" on the ARPANET was proposed in RFC 469 in March 1973. Through RFC 561, RFC 680, RFC 724, and finally RFC 733 in November 1977, a standardized framework for "electronic mail" using FTP mail servers on the ARPANET was developed. SMTP grew out of these standards developed during the 1970s. Ray Tomlinson discussed network mail among the International Network Working Group in INWG Protocol note 2, written in September 1974. INWG discussed protocols for electronic mail in 1979, which was referenced by Jon Postel in his early work on Internet email. Postel first proposed an Internet Message Protocol in 1979 as part of the Internet Experiment Note (IEN) series. Original SMTP In 1980, Postel and Suzanne Sluizer published a proposal for the Mail Transfer Protocol as a replacement for the use of FTP for mail. A revision of May 1981 removed all references to FTP and allocated port 57 for TCP and UDP, an allocation that has since been removed by IANA. In November 1981, Postel published "Simple Mail Transfer Protocol". The SMTP standard was developed around the same time as Usenet, a one-to-many communication network with some similarities. SMTP became widely used in the early 1980s. At the time, it was a complement to the Unix to Unix Copy Program (UUCP), which was better suited for handling email transfers between machines that were intermittently connected. SMTP, on the other hand, works best when both the sending and receiving machines are connected to the network all the time. Both used a store and forward mechanism and are examples of push technology. 
Though Usenet's newsgroups were still propagated with UUCP between servers, UUCP as a mail transport has virtually disappeared along with the "bang paths" it used as message routing headers. Sendmail, released with 4.1cBSD in 1983, was one of the first mail transfer agents to implement SMTP. Over time, as BSD Unix became the most popular operating system on the Internet, Sendmail became the most common MTA (mail transfer agent). The original SMTP protocol supported only unauthenticated, unencrypted 7-bit ASCII text communications, susceptible to trivial man-in-the-middle attacks, spoofing, and spamming, and requiring any binary data to be encoded to readable text before transmission. Due to the absence of a proper authentication mechanism, by design every SMTP server was an open mail relay. The Internet Mail Consortium (IMC) reported that 55% of mail servers were open relays in 1998, but less than 1% in 2002. Because of spam concerns, most email providers blocklist open relays, making original SMTP essentially impractical for general use on the Internet. Modern SMTP In November 1995, the Extended Simple Mail Transfer Protocol (ESMTP) was defined, establishing a general structure for all existing and future extensions aimed at adding the features missing from the original SMTP. ESMTP defines consistent and manageable means by which ESMTP clients and servers can be identified and servers can indicate supported extensions. Message submission and SMTP-AUTH were introduced in 1998 and 1999, both describing new trends in email delivery. Originally, SMTP servers were typically internal to an organization, receiving mail for the organization from the outside, and relaying messages from the organization to the outside. But as time went on, SMTP servers (mail transfer agents), in practice, were expanding their roles to become message submission agents for mail user agents, some of which were now relaying mail from outside an organization (e.g., a company executive who wishes to send email while on a trip using the corporate SMTP server). This issue, a consequence of the rapid expansion and popularity of the World Wide Web, meant that SMTP had to include specific rules and methods for relaying mail and authenticating users to prevent abuses such as relaying of unsolicited email (spam). Work on message submission was originally started because popular mail servers would often rewrite mail in an attempt to fix problems in it, for example, adding a domain name to an unqualified address. This behavior is helpful when the message being fixed is an initial submission, but dangerous and harmful when the message originated elsewhere and is being relayed. Cleanly separating mail into submission and relay was seen as a way to permit and encourage rewriting submissions while prohibiting the rewriting of relayed mail. As spam became more prevalent, it was also seen as a way to provide authorization for mail being sent out from an organization, as well as traceability. This separation of relay and submission quickly became a foundation for modern email security practices. As this protocol started out purely ASCII text-based, it did not deal well with binary files or characters in many non-English languages. Standards such as Multipurpose Internet Mail Extensions (MIME) were developed to encode binary files for transfer through SMTP. 
Mail transfer agents (MTAs) developed after Sendmail also tended to be implemented 8-bit clean, so that the alternate "just send eight" strategy could be used to transmit arbitrary text data (in any 8-bit ASCII-like character encoding) via SMTP. Mojibake was still a problem due to differing character set mappings between vendors, although the email addresses themselves still allowed only ASCII. 8-bit-clean MTAs today tend to support the 8BITMIME extension, permitting some binary files to be transmitted almost as easily as plain text (limits on line length and permitted octet values still apply, so that MIME encoding is needed for most non-text data and some text formats). In 2012, the SMTPUTF8 extension was created to support UTF-8 text, allowing international content and addresses in non-Latin scripts like Cyrillic or Chinese. Many people contributed to the core SMTP specifications, among them Jon Postel, Eric Allman, Dave Crocker, Ned Freed, Randall Gellens, John Klensin, and Keith Moore. Mail processing model Email is submitted by a mail client (mail user agent, MUA) to a mail server (mail submission agent, MSA) using SMTP on TCP port 587. Most mailbox providers still allow submission on traditional port 25. The MSA delivers the mail to its mail transfer agent (MTA). Often, these two agents are instances of the same software launched with different options on the same machine. Local processing can be done either on a single machine or split among multiple machines; mail agent processes on one machine can share files, but if processing is on multiple machines, they transfer messages between each other using SMTP, where each machine is configured to use the next machine as a smart host. Each process is an MTA (an SMTP server) in its own right. The boundary MTA uses DNS to look up the MX (mail exchanger) record for the recipient's domain (the part of the email address on the right of @). The MX record contains the name of the target MTA. Based on the target host and other factors, the sending MTA selects a recipient server and connects to it to complete the mail exchange. Message transfer can occur in a single connection between two MTAs, or in a series of hops through intermediary systems. A receiving SMTP server may be the ultimate destination, an intermediate "relay" (that is, it stores and forwards the message) or a "gateway" (that is, it may forward the message using some protocol other than SMTP). Under the SMTP model, each hop is a formal handoff of responsibility for the message, whereby the receiving server must either deliver the message or properly report the failure to do so. Once the final hop accepts the incoming message, it hands it to a mail delivery agent (MDA) for local delivery. An MDA saves messages in the relevant mailbox format. As with sending, this reception can be done using one or multiple computers. An MDA may deliver messages directly to storage, or forward them over a network using SMTP or another protocol such as Local Mail Transfer Protocol (LMTP), a derivative of SMTP designed for this purpose. Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). 
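To make the submission step concrete, the sketch below uses Python's standard smtplib to hand a message to a submission server on port 587, roughly as an MUA would. The host name, addresses, and credentials are placeholders, and real providers differ in the authentication and TLS settings they require.

```python
import smtplib
from email.message import EmailMessage

# Placeholder host and credentials; substitute your provider's submission server.
SMTP_HOST = "mail.example.com"
SMTP_PORT = 587  # message submission port, as described above

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Test message"
msg.set_content("Hello via SMTP submission.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()                       # upgrade the connection to TLS
    server.login("alice", "app-password")   # SMTP-AUTH before the server will relay
    server.send_message(msg)                # issues MAIL FROM / RCPT TO / DATA
```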
Mail is retrieved by end-user applications, called email clients, using Internet Message Access Protocol (IMAP), a protocol that both facilitates access to mail and manages stored mail, or the Post Office Protocol (POP) which typically uses the traditional mbox mail file format or a proprietary system such as Microsoft Exchange/Outlook or Lotus
Technology
Networks
null
27680
https://en.wikipedia.org/wiki/Supernova
Supernova
A supernova (plural: supernovae or supernovas) is a powerful and luminous explosion of a star. A supernova occurs during the last evolutionary stages of a massive star, or when a white dwarf is triggered into runaway nuclear fusion. The original object, called the progenitor, either collapses to a neutron star or black hole, or is completely destroyed to form a diffuse nebula. The peak optical luminosity of a supernova can be comparable to that of an entire galaxy before fading over several weeks or months. The last supernova directly observed in the Milky Way was Kepler's Supernova in 1604, appearing not long after Tycho's Supernova in 1572, both of which were visible to the naked eye. The remnants of more recent supernovae have been found, and observations of supernovae in other galaxies suggest they occur in the Milky Way on average about three times every century. A supernova in the Milky Way would almost certainly be observable through modern astronomical telescopes. The most recent naked-eye supernova was SN 1987A, which was the explosion of a blue supergiant star in the Large Magellanic Cloud, a satellite galaxy of the Milky Way. Theoretical studies indicate that most supernovae are triggered by one of two basic mechanisms: the sudden re-ignition of nuclear fusion in a white dwarf, or the sudden gravitational collapse of a massive star's core. In the re-ignition of a white dwarf, the object's temperature is raised enough to trigger runaway nuclear fusion, completely disrupting the star. Possible causes are the accumulation of material from a binary companion through accretion, or a stellar merger. In the case of a massive star's sudden implosion, the core undergoes sudden collapse once it is unable to produce sufficient energy from fusion to counteract the star's own gravity; this must happen once the star begins fusing iron, but may happen during an earlier stage of metal fusion. Supernovae can expel several solar masses of material at speeds up to several percent of the speed of light. This drives an expanding shock wave into the surrounding interstellar medium, sweeping up an expanding shell of gas and dust observed as a supernova remnant. Supernovae are a major source of elements in the interstellar medium from oxygen to rubidium. The expanding shock waves of supernovae can trigger the formation of new stars. Supernovae are a major source of cosmic rays. They might also produce gravitational waves. Etymology The word supernova has the plural form supernovae or supernovas and is often abbreviated as SN or SNe. It is derived from the Latin word nova, meaning "new", which refers to what appears to be a temporary new bright star. Adding the prefix "super-" distinguishes supernovae from ordinary novae, which are far less luminous. The word supernova was coined by Walter Baade and Fritz Zwicky, who began using it in astrophysics lectures in 1931. Its first use in a journal article came the following year in a publication by Knut Lundmark, who may have coined it independently. Observation history Compared to a star's entire history, the visual appearance of a supernova is very brief, sometimes spanning several months, so that the chances of observing one with the naked eye are roughly once in a lifetime. Only a tiny fraction of the 100 billion stars in a typical galaxy have the capacity to become a supernova, the ability being restricted to those having high mass and those in rare kinds of binary star systems with at least one white dwarf. 
Early discoveries The earliest record of a possible supernova, known as HB9, was likely viewed by an unknown prehistoric people of the Indian subcontinent and recorded on a rock carving in the Burzahama region of Kashmir. Later, SN 185 was documented by Chinese astronomers in 185 AD. The brightest recorded supernova was SN 1006, which was observed in AD 1006 in the constellation of Lupus. This event was described by observers in China, Japan, Iraq, Egypt and Europe. The widely observed supernova SN 1054 produced the Crab Nebula. Supernovae SN 1572 and SN 1604, the latest Milky Way supernovae to be observed with the naked eye, had a notable influence on the development of astronomy in Europe because they were used to argue against the Aristotelian idea that the universe beyond the Moon and planets was static and unchanging. Johannes Kepler began observing SN 1604 at its peak on 17 October 1604, and continued to make estimates of its brightness until it faded from naked-eye view a year later. It was the second supernova to be observed in a generation, after Tycho Brahe observed SN 1572 in Cassiopeia. There is some evidence that the youngest known supernova in our galaxy, G1.9+0.3, occurred in the late 19th century, considerably more recently than Cassiopeia A from around 1680. Neither was noted at the time. In the case of G1.9+0.3, high extinction from dust along the plane of the galactic disk could have dimmed the event sufficiently for it to go unnoticed. The situation for Cassiopeia A is less clear; infrared light echoes have been detected showing that it was not in a region of especially high extinction. Telescope findings With the development of the astronomical telescope, observation and discovery of fainter and more distant supernovae became possible. The first such observation was of SN 1885A in the Andromeda Galaxy. A second supernova, SN 1895B, was discovered in NGC 5253 a decade later. Early work on what was originally believed to be simply a new category of novae was performed during the 1920s. These were variously called "upper-class Novae", "Hauptnovae", or "giant novae". The name "supernovae" is thought to have been coined by Walter Baade and Zwicky in lectures at Caltech in 1931. It was used, as "super-Novae", in a journal paper published by Knut Lundmark in 1933, and in a 1934 paper by Baade and Zwicky. By 1938, the hyphen was no longer used and the modern name was in use. American astronomers Rudolph Minkowski and Fritz Zwicky developed the modern supernova classification scheme beginning in 1941. During the 1960s, astronomers found that the maximum intensities of supernovae could be used as standard candles, hence indicators of astronomical distances. Some of the most distant supernovae observed in 2003 appeared dimmer than expected. This supports the view that the expansion of the universe is accelerating. Techniques were developed for reconstructing supernova events that have no written records of being observed. The date of the Cassiopeia A supernova event was determined from light echoes off nebulae, while the age of the supernova remnant RX J0852.0-4622 was estimated from temperature measurements and the gamma ray emissions from the radioactive decay of titanium-44. The most luminous supernova ever recorded is ASASSN-15lh, at a distance of 3.82 gigalight-years. It was first detected in June 2015 and peaked at twice the bolometric luminosity of any other known supernova. 
The nature of this supernova is debated and several alternative explanations, such as tidal disruption of a star by a black hole, have been suggested. SN 2013fs was recorded three hours after the supernova event on 6 October 2013, by the Intermediate Palomar Transient Factory. This is among the earliest supernovae caught after detonation, and it is the earliest for which spectra have been obtained, beginning six hours after the actual explosion. The star is located in a spiral galaxy named NGC 7610, 160 million light-years away in the constellation of Pegasus. The supernova SN 2016gkg was detected by amateur astronomer Victor Buso from Rosario, Argentina, on 20 September 2016. It was the first time that the initial "shock breakout" from an optical supernova had been observed. The progenitor star has been identified in Hubble Space Telescope images from before its collapse. Astronomer Alex Filippenko noted: "Observations of stars in the first moments they begin exploding provide information that cannot be directly obtained in any other way." The James Webb Space Telescope (JWST) has significantly advanced our understanding of supernovae by identifying around 80 new instances through its JWST Advanced Deep Extragalactic Survey (JADES) program. This includes the most distant spectroscopically confirmed supernova at a redshift of 3.6, indicating its explosion occurred when the universe was merely 1.8 billion years old. These findings offer crucial insights into the early universe's stellar evolution and the frequency of supernovae during its formative years. Discovery programs Because supernovae are relatively rare events within a galaxy, occurring about three times a century in the Milky Way, obtaining a good sample of supernovae to study requires regular monitoring of many galaxies. Today, amateur and professional astronomers are finding about two thousand every year, some when near maximum brightness, others on old astronomical photographs or plates. Supernovae in other galaxies cannot be predicted with any meaningful accuracy. Normally, when they are discovered, they are already in progress. To use supernovae as standard candles for measuring distance, observation of their peak luminosity is required. It is therefore important to discover them well before they reach their maximum. Amateur astronomers, who greatly outnumber professional astronomers, have played an important role in finding supernovae, typically by looking at some of the closer galaxies through an optical telescope and comparing them to earlier photographs. Toward the end of the 20th century, astronomers increasingly turned to computer-controlled telescopes and CCDs for hunting supernovae. While such systems are popular with amateurs, there are also professional installations such as the Katzman Automatic Imaging Telescope. The Supernova Early Warning System (SNEWS) project uses a network of neutrino detectors to give early warning of a supernova in the Milky Way galaxy. Neutrinos are subatomic particles that are produced in great quantities by a supernova, and they are not significantly absorbed by the interstellar gas and dust of the galactic disk. Supernova searches fall into two classes: those focused on relatively nearby events and those looking farther away. 
Because of the expansion of the universe, the distance to a remote object with a known emission spectrum can be estimated by measuring its Doppler shift (or redshift); on average, more-distant objects recede with greater velocity than those nearby, and so have a higher redshift. Thus the search is split between high redshift and low redshift, with the boundary falling around a redshift range of z=0.1–0.3, where z is a dimensionless measure of the spectrum's frequency shift. High redshift searches for supernovae usually involve the observation of supernova light curves. These are useful for standard or calibrated candles to generate Hubble diagrams and make cosmological predictions. Supernova spectroscopy, used to study the physics and environments of supernovae, is more practical at low than at high redshift. Low redshift observations also anchor the low-distance end of the Hubble curve, which is a plot of distance versus redshift for visible galaxies. As survey programmes rapidly increase the number of detected supernovae, collated collections of observations (light decay curves, astrometry, pre-supernova observations, spectroscopy) have been assembled. The Pantheon data set, assembled in 2018, detailed 1048 supernovae. In 2021, this data set was expanded to 1701 light curves for 1550 supernovae taken from 18 different surveys, a 50% increase in under 3 years. Naming convention Supernova discoveries are reported to the International Astronomical Union's Central Bureau for Astronomical Telegrams, which sends out a circular with the name it assigns to that supernova. The name is formed from the prefix SN, followed by the year of discovery, suffixed with a one or two-letter designation. The first 26 supernovae of the year are designated with a capital letter from A to Z. Next, pairs of lower-case letters are used: aa, ab, and so on. Hence, for example, SN 2003C designates the third supernova reported in the year 2003. The last supernova of 2005, SN 2005nc, was the 367th (14 × 26 + 3 = 367). Since 2000, professional and amateur astronomers have been finding several hundred supernovae each year (572 in 2007, 261 in 2008, 390 in 2009; 231 in 2013). Historical supernovae are known simply by the year they occurred: SN 185, SN 1006, SN 1054, SN 1572 (called Tycho's Nova) and SN 1604 (Kepler's Star). Since 1885 the additional letter notation has been used, even if there was only one supernova discovered that year (for example, SN 1885A, SN 1907A, etc.); this last happened with SN 1947A. SN, for SuperNova, is a standard prefix. Until 1987, two-letter designations were rarely needed; since 1988, they have been needed every year. Since 2016, the increasing number of discoveries has regularly led to the additional use of three-letter designations. After zz comes aaa, then aab, aac, and so on. For example, the last supernova retained in the Asiago Supernova Catalogue  when it was terminated on 31 December 2017 bears the designation SN 2017jzp. Classification Astronomers classify supernovae according to their light curves and the absorption lines of different chemical elements that appear in their spectra. If a supernova's spectrum contains lines of hydrogen (known as the Balmer series in the visual portion of the spectrum) it is classified Type II; otherwise it is Type I. In each of these two types there are subdivisions according to the presence of lines from other elements or the shape of the light curve (a graph of the supernova's apparent magnitude as a function of time). 
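The lettering scheme described above under the naming convention can be expressed as a short routine. The sketch below is illustrative only (the function name is arbitrary); it maps the ordinal number of a discovery within a year to its letter suffix, handling the single capital letters, the two-letter pairs, and the three-letter designations that follow zz.

```python
import string

def sn_suffix(n):
    """Return the designation suffix for the n-th supernova of a year (1-based)."""
    if n <= 26:
        return string.ascii_uppercase[n - 1]   # 1-26 -> A-Z
    n -= 27                                     # 0-based index into the lower-case series
    length = 2
    while n >= 26 ** length:                    # skip past aa-zz, then aaa-zzz, and so on
        n -= 26 ** length
        length += 1
    letters = ""
    for _ in range(length):
        n, r = divmod(n, 26)
        letters = string.ascii_lowercase[r] + letters
    return letters

print(sn_suffix(367))  # -> "nc", matching SN 2005nc as the 367th supernova of 2005
print(sn_suffix(703))  # -> "aaa", the first three-letter designation after "zz"
```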
Type I Type I supernovae are subdivided on the basis of their spectra, with type Ia showing a strong ionised silicon absorption line. Type I supernovae without this strong line are classified as type Ib and Ic, with type Ib showing strong neutral helium lines and type Ic lacking them. Historically, the light curves of type I supernovae were seen as all broadly similar, too much so to make useful distinctions. While variations in light curves have been studied, classification continues to be made on spectral grounds rather than light-curve shape. A small number of type Ia supernovae exhibit unusual features, such as non-standard luminosity or broadened light curves, and these are typically categorised by referring to the earliest example showing similar features. For example, the sub-luminous SN 2008ha is often referred to as SN 2002cx-like or class Ia-2002cx. A small proportion of type Ic supernovae show highly broadened and blended emission lines which are taken to indicate very high expansion velocities for the ejecta. These have been classified as type Ic-BL or Ic-bl. Calcium-rich supernovae are a rare type of very fast supernova with unusually strong calcium lines in their spectra. Models suggest they occur when material is accreted from a helium-rich companion rather than a hydrogen-rich star. Because of helium lines in their spectra, they can resemble type Ib supernovae, but are thought to have very different progenitors. Type II The supernovae of type II can also be sub-divided based on their spectra. While most type II supernovae show very broad emission lines which indicate expansion velocities of many thousands of kilometres per second, some, such as SN 2005gl, have relatively narrow features in their spectra. These are called type IIn, where the "n" stands for "narrow". A few supernovae, such as SN 1987K and SN 1993J, appear to change types: they show lines of hydrogen at early times, but, over a period of weeks to months, become dominated by lines of helium. The term "type IIb" is used to describe the combination of features normally associated with types II and Ib. Type II supernovae with normal spectra dominated by broad hydrogen lines that remain for the life of the decline are classified on the basis of their light curves. The most common type shows a distinctive "plateau" in the light curve shortly after peak brightness where the visual luminosity stays relatively constant for several months before the decline resumes. These are called type II-P referring to the plateau. Less common are type II-L supernovae that lack a distinct plateau. The "L" signifies "linear" although the light curve is not actually a straight line. Supernovae that do not fit into the normal classifications are designated peculiar, or "pec". Types III, IV and V Zwicky defined additional supernovae types based on a very few examples that did not cleanly fit the parameters for type I or type II supernovae. SN 1961i in NGC 4303 was the prototype and only member of the type III supernova class, noted for its broad light curve maximum and broad hydrogen Balmer lines that were slow to develop in the spectrum. SN 1961f in NGC 3003 was the prototype and only member of the type IV class, with a light curve similar to a type II-P supernova, with hydrogen absorption lines but weak hydrogen emission lines. The type V class was coined for SN 1961V in NGC 1058, an unusual faint supernova or supernova impostor with a slow rise to brightness, a maximum lasting many months, and an unusual emission spectrum. 
The similarity of SN 1961V to the Eta Carinae Great Outburst was noted. Supernovae in M101 (1909) and M83 (1923 and 1957) were also suggested as possible type IV or type V supernovae. These types would now all be treated as peculiar type II supernovae (IIpec), of which many more examples have been discovered, although it is still debated whether SN 1961V was a true supernova following an LBV outburst or an impostor. Current models Supernova type codes, as summarised in the table above, are taxonomic: the type number is based on the light observed from the supernova, not necessarily its cause. For example, type Ia supernovae are produced by runaway fusion ignited on degenerate white dwarf progenitors, while the spectrally similar type Ib/c are produced from massive stripped progenitor stars by core collapse. Thermal runaway A white dwarf star may accumulate sufficient material from a stellar companion to raise its core temperature enough to ignite carbon fusion, at which point it undergoes runaway nuclear fusion, completely disrupting it. There are three avenues by which this detonation is theorised to happen: stable accretion of material from a companion, the collision of two white dwarfs, or accretion that causes ignition in a shell that then ignites the core. The dominant mechanism by which type Ia supernovae are produced remains unclear. Despite this uncertainty, type Ia supernovae have very uniform properties and are useful standard candles over intergalactic distances. Some calibrations are required to compensate for the gradual change in properties or different frequencies of abnormal luminosity supernovae at high redshift, and for small variations in brightness identified by light curve shape or spectrum. Normal type Ia There are several means by which a supernova of this type can form, but they share a common underlying mechanism. If a carbon-oxygen white dwarf accreted enough matter to reach the Chandrasekhar limit of about 1.44 solar masses (for a non-rotating star), it would no longer be able to support the bulk of its mass through electron degeneracy pressure and would begin to collapse. However, the current view is that this limit is not normally attained; increasing temperature and density inside the core ignite carbon fusion as the star approaches the limit (to within about 1%) before collapse is initiated. In contrast, for a core primarily composed of oxygen, neon and magnesium, the collapsing white dwarf will typically form a neutron star. In this case, only a fraction of the star's mass will be ejected during the collapse. Within a few seconds of the collapse process, a substantial fraction of the matter in the white dwarf undergoes nuclear fusion, releasing enough energy to unbind the star in a supernova. An outwardly expanding shock wave is generated, with matter reaching velocities on the order of 5,000–20,000 km/s, or roughly 3% of the speed of light. There is also a significant increase in luminosity, reaching an absolute magnitude of −19.3 (or 5 billion times brighter than the Sun), with little variation. The model for the formation of this category of supernova is a close binary star system. The larger of the two stars is the first to evolve off the main sequence, and it expands to form a red giant. The two stars now share a common envelope, causing their mutual orbit to shrink. The giant star then sheds most of its envelope, losing mass until it can no longer continue nuclear fusion. 
At this point, it becomes a white dwarf star, composed primarily of carbon and oxygen. Eventually, the secondary star also evolves off the main sequence to form a red giant. Matter from the giant is accreted by the white dwarf, causing the latter to increase in mass. The exact details of initiation and of the heavy elements produced in the catastrophic event remain unclear. Type Ia supernovae produce a characteristic light curve—the graph of luminosity as a function of time—after the event. This luminosity is generated by the radioactive decay of nickel-56 through cobalt-56 to iron-56. The peak luminosity of the light curve is extremely consistent across normal type Ia supernovae, having a maximum absolute magnitude of about −19.3. This is because typical type Ia supernovae arise from a consistent type of progenitor star by gradual mass acquisition, and explode when they acquire a consistent typical mass, giving rise to very similar supernova conditions and behaviour. This allows them to be used as a secondary standard candle to measure the distance to their host galaxies. A second model for the formation of type Ia supernovae involves the merger of two white dwarf stars, with the combined mass momentarily exceeding the Chandrasekhar limit. This is sometimes referred to as the double-degenerate model, as both stars are degenerate white dwarfs. Due to the possible combinations of mass and chemical composition of the pair there is much variation in this type of event, and, in many cases, there may be no supernova at all, in which case they will have a less luminous light curve than the more normal SN type Ia. Non-standard type Ia Abnormally bright type Ia supernovae occur when the white dwarf already has a mass higher than the Chandrasekhar limit, possibly enhanced further by asymmetry, but the ejected material will have less than normal kinetic energy. This super-Chandrasekhar-mass scenario can occur, for example, when the extra mass is supported by differential rotation. There is no formal sub-classification for non-standard type Ia supernovae. It has been proposed that a group of sub-luminous supernovae that occur when helium accretes onto a white dwarf should be classified as type Iax. This type of supernova may not always completely destroy the white dwarf progenitor and could leave behind a zombie star. One specific type of supernova originates from exploding white dwarfs, like type Ia, but contains hydrogen lines in their spectra, possibly because the white dwarf is surrounded by an envelope of hydrogen-rich circumstellar material. These supernovae have been dubbed type Ia/IIn, type Ian, type IIa and type IIan. The quadruple star HD 74438, belonging to the open cluster IC 2391 the Vela constellation, has been predicted to become a non-standard type Ia supernova. Core collapse Very massive stars can undergo core collapse when nuclear fusion becomes unable to sustain the core against its own gravity; passing this threshold is the cause of all types of supernova except type Ia. The collapse may cause violent expulsion of the outer layers of the star resulting in a supernova. However, if the release of gravitational potential energy is insufficient, the star may instead collapse into a black hole or neutron star with little radiated energy. Core collapse can be caused by several different mechanisms: exceeding the Chandrasekhar limit; electron capture; pair-instability; or photodisintegration. 
When a massive star develops an iron core larger than the Chandrasekhar mass it will no longer be able to support itself by electron degeneracy pressure and will collapse further to a neutron star or black hole. Electron capture by magnesium in a degenerate O/Ne/Mg core (8–10 solar mass progenitor star) removes support and causes gravitational collapse followed by explosive oxygen fusion, with very similar results. Electron-positron pair production in a large post-helium burning core removes thermodynamic support and causes initial collapse followed by runaway fusion, resulting in a pair-instability supernova. A sufficiently large and hot stellar core may generate gamma-rays energetic enough to initiate photodisintegration directly, which will cause a complete collapse of the core. The table below lists the known reasons for core collapse in massive stars, the types of stars in which they occur, their associated supernova type, and the remnant produced. The metallicity is the proportion of elements other than hydrogen or helium, as compared to the Sun. The initial mass is the mass of the star prior to the supernova event, given in multiples of the Sun's mass, although the mass at the time of the supernova may be much lower. Type IIn supernovae are not listed in the table. They can be produced by various types of core collapse in different progenitor stars, possibly even by type Ia white dwarf ignitions, although it seems that most will be from iron core collapse in luminous supergiants or hypergiants (including LBVs). The narrow spectral lines for which they are named occur because the supernova is expanding into a small dense cloud of circumstellar material. It appears that a significant proportion of supposed type IIn supernovae are supernova impostors, massive eruptions of LBV-like stars similar to the Great Eruption of Eta Carinae. In these events, material previously ejected from the star creates the narrow absorption lines and causes a shock wave through interaction with the newly ejected material. Detailed process When a stellar core is no longer supported against gravity, it collapses in on itself with velocities reaching 70,000 km/s (0.23c), resulting in a rapid increase in temperature and density. What follows depends on the mass and structure of the collapsing core, with low-mass degenerate cores forming neutron stars, higher-mass degenerate cores mostly collapsing completely to black holes, and non-degenerate cores undergoing runaway fusion. The initial collapse of degenerate cores is accelerated by beta decay, photodisintegration and electron capture, which causes a burst of electron neutrinos. As the density increases, neutrino emission is cut off as they become trapped in the core. The inner core eventually reaches typically 30 km in diameter with a density comparable to that of an atomic nucleus, and neutron degeneracy pressure tries to halt the collapse. If the core mass is more than about 15 solar masses then neutron degeneracy is insufficient to stop the collapse and a black hole forms directly with no supernova. In lower mass cores the collapse is stopped and the newly formed neutron core has an initial temperature of about 100 billion kelvin, 6,000 times the temperature of the Sun's core. At this temperature, neutrino-antineutrino pairs of all flavours are efficiently formed by thermal emission. These thermal neutrinos are several times more abundant than the electron-capture neutrinos. 
About 10⁴⁶ joules, approximately 10% of the star's rest mass, is converted into a ten-second burst of neutrinos, which is the main output of the event. The suddenly halted core collapse rebounds and produces a shock wave that stalls in the outer core within milliseconds as energy is lost through the dissociation of heavy elements. A process that is not fully understood is necessary to allow the outer layers of the core to reabsorb around 10⁴⁴ joules (1 foe) from the neutrino pulse, producing the visible brightness, although there are other theories that could power the explosion. Some material from the outer envelope falls back onto the neutron star, and, for cores beyond about , there is sufficient fallback to form a black hole. This fallback will reduce the kinetic energy created and the mass of expelled radioactive material, but in some situations, it may also generate relativistic jets that result in a gamma-ray burst or an exceptionally luminous supernova. The collapse of a massive non-degenerate core will ignite further fusion. When the core collapse is initiated by pair instability (photons turning into electron-positron pairs, thereby reducing the radiation pressure), oxygen fusion begins and the collapse may be halted. For core masses of , the collapse halts and the star remains intact, but collapse will occur again when a larger core has formed. For cores of around , the fusion of oxygen and heavier elements is so energetic that the entire star is disrupted, causing a supernova. At the upper end of the mass range, the supernova is unusually luminous and extremely long-lived due to many solar masses of ejected 56Ni. For even larger core masses, the core temperature becomes high enough to allow photodisintegration and the core collapses completely into a black hole. Type II Stars with initial masses less than about never develop a core large enough to collapse and they eventually lose their atmospheres to become white dwarfs. Stars with at least (possibly as much as ) evolve in a complex fashion, progressively burning heavier elements at hotter temperatures in their cores. The star becomes layered like an onion, with the burning of more easily fused elements occurring in larger shells. Although popularly described as an onion with an iron core, the least massive supernova progenitors only have oxygen-neon(-magnesium) cores. These super-AGB stars may form the majority of core collapse supernovae, although less luminous and so less commonly observed than those from more massive progenitors. If core collapse occurs during a supergiant phase when the star still has a hydrogen envelope, the result is a type II supernova. The rate of mass loss for luminous stars depends on the metallicity and luminosity. Extremely luminous stars at near solar metallicity will lose all their hydrogen before they reach core collapse and so will not form a supernova of type II. At low metallicity, all stars will reach core collapse with a hydrogen envelope but sufficiently massive stars collapse directly to a black hole without producing a visible supernova. Stars with an initial mass up to about 90 times the Sun, or a little less at high metallicity, result in a type II-P supernova, which is the most commonly observed type. At moderate to high metallicity, stars near the upper end of that mass range will have lost most of their hydrogen when core collapse occurs and the result will be a type II-L supernova. 
At very low metallicity, stars of around will reach core collapse by pair instability while they still have a hydrogen atmosphere and an oxygen core and the result will be a supernova with type II characteristics but a very large mass of ejected 56Ni and high luminosity. Type Ib and Ic These supernovae, like those of type II, are massive stars that undergo core collapse. Unlike the progenitors of type II supernovae, the stars which become types Ib and Ic supernovae have lost most of their outer (hydrogen) envelopes due to strong stellar winds or else from interaction with a companion. These stars are known as Wolf–Rayet stars, and they occur at moderate to high metallicity where continuum driven winds cause sufficiently high mass-loss rates. Observations of type Ib/c supernova do not match the observed or expected occurrence of Wolf–Rayet stars. Alternate explanations for this type of core collapse supernova involve stars stripped of their hydrogen by binary interactions. Binary models provide a better match for the observed supernovae, with the proviso that no suitable binary helium stars have ever been observed. Type Ib supernovae are the more common and result from Wolf–Rayet stars of type WC which still have helium in their atmospheres. For a narrow range of masses, stars evolve further before reaching core collapse to become WO stars with very little helium remaining, and these are the progenitors of type Ic supernovae. A few percent of the type Ic supernovae are associated with gamma-ray bursts (GRB), though it is also believed that any hydrogen-stripped type Ib or Ic supernova could produce a GRB, depending on the circumstances of the geometry. The mechanism for producing this type of GRB is the jets produced by the magnetic field of the rapidly spinning magnetar formed at the collapsing core of the star. The jets would also transfer energy into the expanding outer shell, producing a super-luminous supernova. Ultra-stripped supernovae occur when the exploding star has been stripped (almost) all the way to the metal core, via mass transfer in a close binary. As a result, very little material is ejected from the exploding star (c. ). In the most extreme cases, ultra-stripped supernovae can occur in naked metal cores, barely above the Chandrasekhar mass limit. SN 2005ek might be the first observational example of an ultra-stripped supernova, giving rise to a relatively dim and fast decaying light curve. The nature of ultra-stripped supernovae can be both iron core-collapse and electron capture supernovae, depending on the mass of the collapsing core. Ultra-stripped supernovae are believed to be associated with the second supernova explosion in a binary system, producing for example a tight double neutron star system. In 2022 a team of astronomers led by researchers from the Weizmann Institute of Science reported the first supernova explosion showing direct evidence for a Wolf-Rayet progenitor star. SN 2019hgp was a type Icn supernova and is also the first in which the element neon has been detected. Electron-capture supernovae In 1980, a "third type" of supernova was predicted by Ken'ichi Nomoto of the University of Tokyo, called an electron-capture supernova. 
It would arise when a star "in the transitional range (~8 to 10 solar masses) between white dwarf formation and iron core-collapse supernovae", and with a degenerate O+Ne+Mg core, imploded after its core ran out of nuclear fuel, causing gravity to compress the electrons in the star's core into their atomic nuclei, leading to a supernova explosion and leaving behind a neutron star. In June 2021, a paper in the journal Nature Astronomy reported that the 2018 supernova SN 2018zd (in the galaxy NGC 2146, about 31 million light-years from Earth) appeared to be the first observation of an electron-capture supernova. The 1054 supernova explosion that created the Crab Nebula in our galaxy had been thought to be the best candidate for an electron-capture supernova, and the 2021 paper makes it more likely that this was correct. Failed supernovae The core collapse of some massive stars may not result in a visible supernova. This happens if the initial core collapse cannot be reversed by the mechanism that produces an explosion, usually because the core is too massive. These events are difficult to detect, but large surveys have detected possible candidates. The red supergiant N6946-BH1 in NGC 6946 underwent a modest outburst in March 2009, before fading from view. Only a faint infrared source remains at the star's location. Light curves The ejecta gases would dim quickly without some energy input to keep them hot. The source of this energy—which can maintain the optical supernova glow for months—was, at first, a puzzle. Some considered rotational energy from the central pulsar as a source. Although the energy that initially powers each type of supernova is delivered promptly, the light curves are dominated by subsequent radioactive heating of the rapidly expanding ejecta. The intensely radioactive nature of the ejecta gases was first calculated on sound nucleosynthesis grounds in the late 1960s, and this has since been demonstrated as correct for most supernovae. It was not until SN 1987A that direct observation of gamma-ray lines unambiguously identified the major radioactive nuclei. It is now known by direct observation that much of the light curve (the graph of luminosity as a function of time) after the occurrence of a type II supernova, such as SN 1987A, is explained by those predicted radioactive decays. Although the luminous emission consists of optical photons, it is the radioactive power absorbed by the ejected gases that keeps the remnant hot enough to radiate light. The radioactive decay of 56Ni through its daughters 56Co to 56Fe produces gamma-ray photons that are absorbed and dominate the heating, and thus the luminosity, of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN 1987A was provided by the decay of 56Ni to 56Co (half-life 6 days) while energy for the later light curve in particular fit very closely with the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN 1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power sources. 
The late-time decay phases of the visual light curves for different supernova types all depend on radioactive heating, but they vary in shape and amplitude because of the underlying mechanisms, the way that visible radiation is produced, the epoch of its observation, and the transparency of the ejected material. The light curves can be significantly different at other wavelengths. For example, at ultraviolet wavelengths there is an early extremely luminous peak lasting only a few hours corresponding to the breakout of the shock launched by the initial event, but that breakout is hardly detectable optically. The light curves for type Ia are mostly very uniform, with a consistent maximum absolute magnitude and a relatively steep decline in luminosity. Their optical energy output is driven by radioactive decay of ejected nickel-56 (half-life 6 days), which then decays to radioactive cobalt-56 (half-life 77 days). These radioisotopes excite the surrounding material to incandescence. Modern studies of cosmology rely on 56Ni radioactivity providing the energy for the optical brightness of supernovae of type Ia, which are the "standard candles" of cosmology but whose diagnostic gamma rays were first detected only in 2014. The initial phases of the light curve decline steeply as the effective size of the photosphere decreases and trapped electromagnetic radiation is depleted. The light curve continues to decline in the B band while it may show a small shoulder in the visual at about 40 days, but this is only a hint of a secondary maximum that occurs in the infra-red as certain ionised heavy elements recombine to produce infra-red radiation and the ejecta become transparent to it. The visual light curve continues to decline at a rate slightly greater than the decay rate of the radioactive cobalt (which has the longer half-life and controls the later curve), because the ejected material becomes more diffuse and less able to convert the high energy radiation into visual radiation. After several months, the light curve changes its decline rate again as positron emission from the remaining cobalt-56 becomes dominant, although this portion of the light curve has been little-studied. Type Ib and Ic light curves are similar to type Ia although with a lower average peak luminosity. The visual light output is again due to radioactive decay being converted into visual radiation, but there is a much lower mass of the created nickel-56. The peak luminosity varies considerably and there are even occasional type Ib/c supernovae orders of magnitude more and less luminous than the norm. The most luminous type Ic supernovae are referred to as hypernovae and tend to have broadened light curves in addition to the increased peak luminosity. The source of the extra energy is thought to be relativistic jets driven by the formation of a rotating black hole, which also produce gamma-ray bursts. The light curves for type II supernovae are characterised by a much slower decline than type I, on the order of 0.05 magnitudes per day, excluding the plateau phase. The visual light output is dominated by kinetic energy rather than radioactive decay for several months, due primarily to the existence of hydrogen in the ejecta from the atmosphere of the supergiant progenitor star. In the initial destruction this hydrogen becomes heated and ionised. The majority of type II supernovae show a prolonged plateau in their light curves as this hydrogen recombines, emitting visible light and becoming more transparent. 
This is then followed by a declining light curve driven by radioactive decay although slower than in type I supernovae, due to the efficiency of conversion into light by all the hydrogen. In type II-L the plateau is absent because the progenitor had relatively little hydrogen left in its atmosphere, sufficient to appear in the spectrum but insufficient to produce a noticeable plateau in the light output. In type IIb supernovae the hydrogen atmosphere of the progenitor is so depleted (thought to be due to tidal stripping by a companion star) that the light curve is closer to a type I supernova and the hydrogen even disappears from the spectrum after several weeks. Type IIn supernovae are characterised by additional narrow spectral lines produced in a dense shell of circumstellar material. Their light curves are generally very broad and extended, occasionally also extremely luminous and referred to as a superluminous supernova. These light curves are produced by the highly efficient conversion of kinetic energy of the ejecta into electromagnetic radiation by interaction with the dense shell of material. This only occurs when the material is sufficiently dense and compact, indicating that it has been produced by the progenitor star itself only shortly before the supernova occurs. Large numbers of supernovae have been catalogued and classified to provide distance candles and test models. Average characteristics vary somewhat with distance and type of host galaxy, but can broadly be specified for each supernova type.
Physical sciences
Astronomy
null
27683
https://en.wikipedia.org/wiki/Satellite
Satellite
A satellite or artificial satellite is an object, typically a spacecraft, placed into orbit around a celestial body. They have a variety of uses, including communication relay, weather forecasting, navigation (GPS), broadcasting, scientific research, and Earth observation. Additional military uses are reconnaissance, early warning, signals intelligence and, potentially, weapon delivery. Other satellites include the final rocket stages that place satellites in orbit and formerly useful satellites that later become defunct. Except for passive satellites, most satellites have an electricity generation system for equipment on board, such as solar panels or radioisotope thermoelectric generators (RTGs). Most satellites also have a method of communication to ground stations, called transponders. Many satellites use a standardized bus to save cost and work, the most popular of which are small CubeSats. Similar satellites can work together as groups, forming constellations. Because of the high launch cost to space, most satellites are designed to be as lightweight and robust as possible. Most communication satellites are radio relay stations in orbit and carry dozens of transponders, each with a bandwidth of tens of megahertz. Satellites are placed into orbit from the surface by launch vehicles, high enough to avoid orbital decay caused by the atmosphere. Satellites can then change or maintain the orbit by propulsion, usually by chemical or ion thrusters. As of 2018, about 90% of the satellites orbiting the Earth are in low Earth orbit or geostationary orbit; geostationary means the satellites appear stationary in the sky relative to a fixed point on the ground. Some imaging satellites use a Sun-synchronous orbit because they can scan the entire globe with similar lighting. As the number of satellites and space debris around Earth increases, the threat of collision has become more severe. A small number of satellites orbit other bodies (such as the Moon, Mars, and the Sun) or many bodies at once (two for a halo orbit, three for a Lissajous orbit). Earth observation satellites gather information for reconnaissance, mapping, monitoring the weather, ocean, forest, etc. Space telescopes take advantage of outer space's near perfect vacuum to observe objects across the entire electromagnetic spectrum. Because satellites can see a large portion of the Earth at once, communications satellites can relay information to remote places. The signal delay from satellites and their orbit's predictability are used in satellite navigation systems, such as GPS. Space probes are satellites designed for robotic space exploration outside of Earth, and space stations are in essence crewed satellites. The first artificial satellite launched into the Earth's orbit was the Soviet Union's Sputnik 1, on October 4, 1957. As of December 31, 2022, there are 6,718 operational satellites in the Earth's orbit, of which 4,529 belong to the United States (3,996 commercial), 590 belong to China, 174 belong to Russia, and 1,425 belong to other nations. History Early proposals The first published mathematical study of the possibility of an artificial satellite was Newton's cannonball, a thought experiment by Isaac Newton to explain the motion of natural satellites, in his Philosophiæ Naturalis Principia Mathematica (1687). The first fictional depiction of a satellite being launched into orbit was a short story by Edward Everett Hale, "The Brick Moon" (1869). The idea surfaced again in Jules Verne's The Begum's Fortune (1879). 
In 1903, Konstantin Tsiolkovsky (1857–1935) published Exploring Space Using Jet Propulsion Devices, which was the first academic treatise on the use of rocketry to launch spacecraft. He calculated the orbital speed required for a minimal orbit, and inferred that a multi-stage rocket fueled by liquid propellants could achieve this. Herman Potočnik explored the idea of using orbiting spacecraft for detailed peaceful and military observation of the ground in his 1928 book, The Problem of Space Travel. He described how the special conditions of space could be useful for scientific experiments. The book described geostationary satellites (first put forward by Konstantin Tsiolkovsky) and discussed communication between them and the ground using radio, but fell short of the idea of using satellites for mass broadcasting and as telecommunications relays. In a 1945 Wireless World article, English science fiction writer Arthur C. Clarke described in detail the possible use of communications satellites for mass communications. He suggested that three geostationary satellites would provide coverage over the entire planet. In May 1946, the United States Air Force's Project RAND released the Preliminary Design of an Experimental World-Circling Spaceship, which stated "A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century." The United States had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy. Project RAND eventually released the report, but considered the satellite to be a tool for science, politics, and propaganda, rather than a potential military weapon. In 1946, American theoretical astrophysicist Lyman Spitzer proposed an orbiting space telescope. In February 1954, Project RAND released "Scientific Uses for a Satellite Vehicle", by R. R. Carhart. This expanded on potential scientific uses for satellite vehicles and was followed in June 1955 with "The Scientific Use of an Artificial Satellite", by H. K. Kallmann and W. W. Kellogg. First satellites The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October 1957 under the Sputnik program, with Sergei Korolev as chief designer. Sputnik 1 helped to identify the density of high atmospheric layers through measurement of its orbital change and provided data on radio-signal distribution in the ionosphere. The unanticipated announcement of Sputnik 1's success precipitated the Sputnik crisis in the United States and ignited the so-called Space Race within the Cold War. In the context of activities planned for the International Geophysical Year (1957–1958), the White House announced on 29 July 1955 that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On 31 July, the Soviet Union announced its intention to launch a satellite by the fall of 1957. Sputnik 2 was launched on 3 November 1957 and carried the first living passenger into orbit, a dog named Laika. The dog was sent without possibility of return. In early 1955, after being pressured by the American Rocket Society, the National Science Foundation, and the International Geophysical Year, the Army and Navy worked on Project Orbiter with two competing programs. The Army used the Jupiter C rocket, while the civilian–Navy program used the Vanguard rocket to launch a satellite. Explorer 1 became the United States' first artificial satellite, on 31 January 1958. 
The information sent back from its radiation detector led to the discovery of the Earth's Van Allen radiation belts. The TIROS-1 spacecraft, launched on April 1, 1960, as part of NASA's Television Infrared Observation Satellite (TIROS) program, sent back the first television footage of weather patterns to be taken from space. In June 1961, three and a half years after the launch of Sputnik 1, the United States Space Surveillance Network cataloged 115 Earth-orbiting satellites. While Canada was the third country to build a satellite which was launched into space, it was launched aboard an American rocket from an American spaceport. The same goes for Australia, whose launch of the first satellite involved a donated U.S. Redstone rocket and American support staff as well as a joint launch facility with the United Kingdom. The first Italian satellite San Marco 1 was launched on 15 December 1964 on a U.S. Scout rocket from Wallops Island (Virginia, United States) with an Italian launch team trained by NASA. Similarly, almost all further first national satellites were launched by foreign rockets. France was the third country to launch a satellite on its own rocket. On 26 November 1965, the Astérix or A-1 (initially conceptualized as FR.2 or FR-2), was put into orbit by a Diamant A rocket launched from the CIEES site at Hammaguir, Algeria. With Astérix, France became the sixth country to have an artificial satellite. Later satellite development Early satellites were built to unique designs. With advancements in technology, multiple satellites began to be built on single model platforms called satellite buses. The first standardized satellite bus design was the HS-333 geosynchronous (GEO) communication satellite launched in 1972. Available since 1997, FreeFlyer is a commercial off-the-shelf software application for satellite mission analysis, design, and operations. After the late 2010s, and especially after the advent and operational fielding of large satellite internet constellations, where on-orbit active satellites more than doubled over a period of five years, the companies building the constellations began to propose regular planned deorbiting of the older satellites that reached the end of life, as a part of the regulatory process of obtaining a launch license. The largest artificial satellite ever is the International Space Station. By the early 2000s, and particularly after the advent of CubeSats and increased launches of microsats, frequently launched to the lower altitudes of low Earth orbit (LEO), satellites began more frequently to be designed to be destroyed, or to break up and burn up entirely, in the atmosphere. For example, SpaceX Starlink satellites, the first large satellite internet constellation to exceed 1000 active satellites on orbit in 2020, are designed to be 100% demisable and burn up completely on their atmospheric reentry at the end of their life, or in the event of an early satellite failure. In different periods, many countries, such as Algeria, Argentina, Australia, Austria, Brazil, Canada, Chile, China, Denmark, Egypt, Finland, France, Germany, India, Iran, Israel, Italy, Japan, Kazakhstan, South Korea, Malaysia, Mexico, the Netherlands, Norway, Pakistan, Poland, Russia, Saudi Arabia, South Africa, Spain, Switzerland, Thailand, Turkey, Ukraine, the United Kingdom and the United States, had some satellites in orbit. Japan's space agency (JAXA) and NASA plan to send a wooden satellite prototype called LignoSat into orbit in the summer of 2024. 
They have been working on this project for a few years and sent the first wood samples to space in 2021 to test the material's resilience to space conditions. Components Orbit and altitude control Most satellites use chemical or ion propulsion to adjust or maintain their orbit, coupled with reaction wheels to control their three axes of rotation, or attitude. Satellites close to Earth are affected the most by variations in the Earth's magnetic and gravitational fields and by the Sun's radiation pressure; satellites that are farther away are affected more by the gravitational fields of other bodies such as the Moon and the Sun. Satellites utilize ultra-white reflective coatings to prevent damage from UV radiation. Without orbit and orientation control, satellites in orbit will not be able to communicate with ground stations on the Earth. Chemical thrusters on satellites usually use monopropellant (one-part) or bipropellant (two-part) fuels that are hypergolic. Hypergolic means able to combust spontaneously on contact with each other or with a catalyst. The most commonly used propellant mixtures on satellites are hydrazine-based monopropellants or monomethylhydrazine–dinitrogen tetroxide bipropellants. Ion thrusters on satellites usually are Hall-effect thrusters, which generate thrust by accelerating positive ions through a negatively-charged grid. Ion propulsion is more efficient propellant-wise than chemical propulsion, but its thrust is very small and thus requires a longer burn time. The thrusters usually use xenon because it is inert, can be easily ionized, has a high atomic mass and is storable as a high-pressure liquid. Power Most satellites use solar panels to generate power, and a few in deep space with limited sunlight use radioisotope thermoelectric generators. Slip rings attach solar panels to the satellite; the slip rings allow the panels to rotate to be perpendicular to the sunlight and generate the most power. All satellites with a solar panel must also have batteries, because sunlight is blocked inside the launch vehicle and at night. The most common types of batteries for satellites are lithium-ion, and in the past nickel–hydrogen. Communications Applications Earth observation Earth observation satellites are designed to monitor and survey the Earth, a practice called remote sensing. Most Earth observation satellites are placed in low Earth orbit for high data resolution, though some are placed in a geostationary orbit for uninterrupted coverage. Some satellites are placed in a Sun-synchronous orbit to have consistent lighting and obtain a total view of the Earth. Depending on the satellites' functions, they might have a normal camera, radar, lidar, photometer, or atmospheric instruments. Earth observation satellite data is used most in archaeology, cartography, environmental monitoring, meteorology, and reconnaissance applications. As of 2021, there are over 950 Earth observation satellites, with the largest number operated by Planet Labs. Weather satellites monitor clouds, city lights, fires, effects of pollution, auroras, sand and dust storms, snow cover, ice mapping, boundaries of ocean currents, energy flows, etc. Environmental monitoring satellites can detect changes in the Earth's vegetation, atmospheric trace gas content, sea state, ocean color, and ice fields. By monitoring vegetation changes over time, droughts can be monitored by comparing the current vegetation state to its long term average. Anthropogenic emissions can be monitored by evaluating data of tropospheric NO2 and SO2. 
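Returning to the propulsion comparison earlier in this section, the statement that ion propulsion is "more efficient propellant-wise" can be quantified with the Tsiolkovsky rocket equation, which relates propellant mass to the total velocity change (delta-v) and the specific impulse of the thruster. The numbers below are illustrative assumptions chosen only for the comparison (a 1000 kg satellite, 1.5 km/s of lifetime delta-v, and typical specific impulses of roughly 300 s for a bipropellant thruster and 1800 s for an ion thruster); none of these figures come from the article.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float) -> float:
    """Propellant needed for a given delta-v, from the Tsiolkovsky rocket equation:
    m_propellant = m_dry * (exp(delta_v / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1)

dry, dv = 1000.0, 1500.0  # assumed dry mass (kg) and lifetime delta-v (m/s)
print("bipropellant (Isp ~ 300 s):", round(propellant_mass(dry, dv, 300)), "kg")
print("ion thruster (Isp ~ 1800 s):", round(propellant_mass(dry, dv, 1800)), "kg")
```

Under these assumptions the chemical system needs several hundred kilograms of propellant while the ion system needs well under a hundred, at the cost of much lower thrust and longer burn times.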
Communication Spy satellites When an Earth observation satellite or a communications satellite is deployed for military or intelligence purposes, it is known as a spy satellite or reconnaissance satellite. Their uses include early missile warning, nuclear explosion detection, electronic reconnaissance, and optical or radar imaging surveillance. Navigation Navigational satellites are satellites that transmit radio time signals to enable mobile receivers on the ground to determine their exact location. The relatively clear line of sight between the satellites and receivers on the ground, combined with ever-improving electronics, allows satellite navigation systems to measure location to accuracies on the order of a few meters in real time. Telescope Astronomical satellites are satellites used for observation of distant planets, galaxies, and other outer space objects. Experimental Tether satellites are satellites that are connected to another satellite by a thin cable called a tether. Recovery satellites are satellites that provide recovery of reconnaissance, biological, space-production and other payloads from orbit to Earth. Biosatellites are satellites designed to carry living organisms, generally for scientific experimentation. Space-based solar power satellites are proposed satellites that would collect energy from sunlight and transmit it for use on Earth or other places. Weapon Since the mid-2000s, satellites have been hacked by militant organizations to broadcast propaganda and to pilfer classified information from military communication networks. For testing purposes, satellites in low Earth orbit have been destroyed by ballistic missiles launched from the Earth. Russia, the United States, China, and India have demonstrated the ability to destroy satellites. In 2007, the Chinese military shot down an aging weather satellite, followed by the US Navy shooting down a defunct spy satellite in February 2008. On 18 November 2015, after two failed attempts, Russia successfully carried out a flight test of an anti-satellite missile known as Nudol. On 27 March 2019, India shot down a live test satellite at 300 km altitude in 3 minutes, becoming the fourth country to have the capability to destroy live satellites. Environmental impact The environmental impact of satellites is not currently well understood, as satellites were previously assumed to be benign due to the rarity of satellite launches. However, the exponential increase and projected growth of satellite launches are bringing the issue into consideration. The main issues are resource use and the release of pollutants into the atmosphere, which can happen at different stages of a satellite's lifetime. Resource use Resource use is difficult to monitor and quantify for satellites and launch vehicles due to their commercially sensitive nature. However, aluminium is a preferred metal in satellite construction due to its light weight and relative cheapness and typically constitutes around 40% of a satellite's mass. Through mining and refining, aluminium has numerous negative environmental impacts and is one of the most carbon-intensive metals. Satellite manufacturing also requires rare elements such as lithium, gold, and gallium, some of which have significant environmental consequences linked to their mining and processing and/or are in limited supply. Launch vehicles require larger amounts of raw materials to manufacture and the booster stages are usually dropped into the ocean after fuel exhaustion. They are not normally recovered. 
To give an idea of the quantity of material often left in the ocean, two empty boosters used for Ariane 5, composed mainly of steel, weighed around 38 tons each. Launches Rocket launches release numerous pollutants into every layer of the atmosphere, especially affecting the atmosphere above the tropopause where the byproducts of combustion can reside for extended periods. These pollutants can include black carbon, CO2, nitrogen oxides (NOx), aluminium and water vapour, but the mix of pollutants is dependent on rocket design and fuel type. The amount of greenhouse gases emitted by rockets is considered trivial, as rockets contribute around 0.01% of yearly emissions, significantly less than the aviation industry, which itself accounts for 2–3% of total global greenhouse gas emissions. Rocket emissions in the stratosphere and their effects are only beginning to be studied and it is likely that the impacts will be more critical than emissions in the troposphere. The stratosphere includes the ozone layer, and pollutants emitted from rockets can contribute to ozone depletion in a number of ways. Radicals such as NOx, HOx, and ClOx deplete stratospheric O3 through intermolecular reactions and can have huge impacts in trace amounts. However, it is currently understood that launch rates would need to increase by ten times to match the impact of regulated ozone-depleting substances. Whilst emissions of water vapour are largely deemed inert, H2O is the source gas for HOx and can also contribute to ozone loss through the formation of ice particles. Black carbon particles emitted by rockets can absorb solar radiation in the stratosphere and cause warming in the surrounding air, which can then impact the circulatory dynamics of the stratosphere. Both warming and changes in circulation can then cause depletion of the ozone layer. Operational Low Earth orbit satellites Several pollutants are released in the upper atmospheric layers during the orbital lifetime of LEO satellites. Orbital decay is caused by atmospheric drag, and to keep the satellite in the correct orbit the platform occasionally needs repositioning. To do this, nozzle-based systems use a chemical propellant to create thrust. In most cases hydrazine is the chemical propellant used, which then releases ammonia, hydrogen and nitrogen gas into the upper atmosphere. Also, the environment of the outer atmosphere causes the degradation of exterior materials. The atomic oxygen in the upper atmosphere oxidises hydrocarbon-based polymers like Kapton, Teflon and Mylar that are used to insulate and protect the satellite, which then emits gases like CO2 and CO into the atmosphere. Night sky Given the current surge in satellites in the sky, soon hundreds of satellites may be clearly visible to the human eye at dark sites. It is estimated that the overall levels of diffuse brightness of the night skies have increased by up to 10% above natural levels. This has the potential to confuse organisms, like insects and night-migrating birds, that use celestial patterns for migration and orientation. The impact this might have is currently unclear. The visibility of man-made objects in the night sky may also impact people's linkages with the world, nature, and culture. Ground-based infrastructure At all points of a satellite's lifetime, its movement and processes are monitored on the ground through a network of facilities. 
The environmental cost of the infrastructure as well as day-to-day operations is likely to be quite high, but quantification requires further investigation. Degeneration Particular threats arise from uncontrolled de-orbit. Some notable satellite failures that polluted and dispersed radioactive materials are Kosmos 954, Kosmos 1402 and the Transit 5-BN-3. When satellites reach the end of their life in a controlled manner, they are intentionally deorbited or moved to a graveyard orbit further away from Earth in order to reduce space debris. Physical collection or removal is not economical or even currently possible. Moving satellites out to a graveyard orbit is also unsustainable because they remain there for hundreds of years. This will lead to the further pollution of space and future issues with space debris. When satellites deorbit, much of the material is destroyed during re-entry into the atmosphere due to the heat. This introduces more material and pollutants into the atmosphere. There have been concerns expressed about the potential damage to the ozone layer and the possibility of increasing the Earth's albedo, reducing warming but also resulting in accidental geoengineering of the Earth's climate. After deorbiting, 70% of satellites end up in the ocean and are rarely recovered. Mitigation Using wood as an alternative material has been posited in order to reduce pollution and debris from satellites that reenter the atmosphere. Interference Collision threat Space debris poses dangers to spacecraft (including satellites) in or crossing geocentric orbits and has the potential to drive a Kessler syndrome, which could curtail humanity's ability to conduct space endeavors in the future. With the increase in the number of satellite constellations like SpaceX Starlink, the astronomical community, including the IAU, reports that orbital pollution is increasing significantly. A report from the SATCON1 workshop in 2020 concluded that the effects of large satellite constellations can severely affect some astronomical research efforts and lists six ways to mitigate harm to astronomy. The IAU is establishing a center (CPS) to coordinate or aggregate measures to mitigate such detrimental effects. Radio interference Due to the low received signal strength of satellite transmissions, they are prone to jamming by land-based transmitters. Such jamming is limited to the geographical area within the transmitter's range. GPS satellites are potential targets for jamming, but satellite phone and television signals have also been subjected to jamming. Also, it is very easy to transmit a carrier radio signal to a geostationary satellite and thus interfere with the legitimate uses of the satellite's transponder. It is common for Earth stations to transmit at the wrong time or on the wrong frequency in commercial satellite space, and dual-illuminate the transponder, rendering the frequency unusable. Satellite operators now have sophisticated monitoring tools and methods that enable them to pinpoint the source of any carrier and manage the transponder space effectively. Regulation Issues like space debris and radio and light pollution are increasing in magnitude while national and international regulation lags behind. Liability Generally, liability has been covered by the Liability Convention. Operation Operational capabilities and uses have diversified greatly and continue to broaden. 
Satellite operation requires not only access to financial, manufacturing and launch capabilities, but also ground segment infrastructure.
Technology
Space
null
27686
https://en.wikipedia.org/wiki/Spreadsheet
Spreadsheet
A spreadsheet is a computer application for computation, organization, analysis and storage of data in tabular form. Spreadsheets were developed as computerized analogs of paper accounting worksheets. The program operates on data entered in cells of a table. Each cell may contain either numeric or text data, or the results of formulas that automatically calculate and display a value based on the contents of other cells. The term spreadsheet may also refer to one such electronic document. Spreadsheet users can adjust any stored value and observe the effects on calculated values. This makes the spreadsheet useful for "what-if" analysis since many cases can be rapidly investigated without manual recalculation. Modern spreadsheet software can have multiple interacting sheets and can display data either as text and numerals or in graphical form. Besides performing basic arithmetic and mathematical functions, modern spreadsheets provide built-in functions for common financial accountancy and statistical operations. Such calculations as net present value or standard deviation can be applied to tabular data with a pre-programmed function in a formula. Spreadsheet programs also provide conditional expressions, functions to convert between text and numbers, and functions that operate on strings of text. Spreadsheets have replaced paper-based systems throughout the business world. Although they were first developed for accounting or bookkeeping tasks, they now are used extensively in any context where tabular lists are built, sorted, and shared. Basics LANPAR, available in 1969, was the first electronic spreadsheet on mainframe and time sharing computers. LANPAR was an acronym: LANguage for Programming Arrays at Random. VisiCalc (1979) was the first electronic spreadsheet on a microcomputer, and it helped turn the Apple II into a popular and widely used personal computer. Lotus 1-2-3 was the leading spreadsheet when DOS was the dominant operating system. Microsoft Excel now has the largest market share on the Windows and Macintosh platforms. A spreadsheet program is a standard feature of an office productivity suite. In 2006 Google launched a beta release spreadsheet web application; it is currently known as Google Sheets and is one of the applications provided in Google Drive. A spreadsheet consists of a table of cells arranged into rows and columns and referred to by the X and Y locations. X locations, the columns, are normally represented by letters, "A," "B," "C," etc., while rows are normally represented by numbers, 1, 2, 3, etc. A single cell can be referred to by addressing its column and row, for example "C10". This electronic concept of cell references was first introduced in LANPAR (Language for Programming Arrays at Random), co-invented by Rene Pardo and Remy Landau, and a variant, known as "A1 notation", was used in VisiCalc. Additionally, spreadsheets have the concept of a range, a group of cells, normally contiguous. For instance, one can refer to the first ten cells in the first column with the range "A1:A10". LANPAR innovated forward referencing/natural order calculation, which did not re-appear until Lotus 1-2-3 and Microsoft's MultiPlan Version 2. In modern spreadsheet applications, several spreadsheets, often known as worksheets or simply sheets, are gathered together to form a workbook. A workbook is physically represented by a file containing all the data for the book, the sheets, and the cells within the sheets. 
Worksheets are normally represented by tabs that flip between pages, each one containing one of the sheets, although Numbers changes this model significantly. Cells in a multi-sheet book add the sheet name to their reference, for instance, "Sheet 1!C10". Some systems extend this syntax to allow cell references to different workbooks. Users interact with sheets primarily through the cells. A given cell can hold data, entered by simply typing it in, or a formula, which is normally created by preceding the text with an equals sign. Data might include the string of text hello world, the number 5 or the date 10-Sep-97. A formula would begin with the equals sign, =5*3, but this would normally be invisible because the display shows the result of the calculation, 15 in this case, not the formula itself. This may lead to confusion in some cases. The key feature of spreadsheets is the ability for a formula to refer to the contents of other cells, which may, in turn, be the result of a formula. To make such a formula, one replaces a number with a cell reference. For instance, the formula =5*C10 would produce the result of multiplying the value in cell C10 by the number 5. If C10 holds the value 3, the result will be 15. But C10 might also hold its own formula referring to other cells, and so on. The ability to chain formulas together is what gives a spreadsheet its power. Many problems can be broken down into a series of individual mathematical steps, and these can be assigned to individual formulas in cells. Some of these formulas can apply to ranges as well, like the SUM function that adds up all the numbers within a range. Spreadsheets share many principles and traits of databases, but spreadsheets and databases are not the same thing. A spreadsheet is essentially just one table, whereas a database is a collection of many tables with machine-readable semantic relationships. While it is true that a workbook that contains three sheets is indeed a file containing multiple tables that can interact with each other, it lacks the relational structure of a database. Spreadsheets and databases are interoperable: sheets can be imported into databases to become tables within them, and database queries can be exported into spreadsheets for further analysis. A spreadsheet program is one of the main components of an office productivity suite, which usually also contains a word processor, a presentation program, and a database management system. Programs within a suite use similar commands for similar functions. Usually, sharing data between the components is easier than with a non-integrated collection of functionally equivalent programs. This was particularly an advantage at a time when many personal computer systems used text-mode displays and commands instead of a graphical user interface. History Paper spreadsheets Humans have organized data into tables, that is, grids of columns and rows, since ancient times. The Babylonians used clay tablets to store data as far back as 1800 BCE. Other examples can be found in book-keeping ledgers and astronomical records. Since at least 1906 the term "spread sheet" has been used in accounting to mean a grid of columns and rows in a ledger. Prior to the rise of computerized spreadsheets, "spread" referred to a newspaper or magazine item (text or graphics) that covers two facing pages, extending across the centerfold and treating the two pages as one large page. 
The compound word 'spread-sheet' came to mean the format used to present book-keeping ledgers (with columns for categories of expenditures across the top, invoices listed down the left margin, and the amount of each payment in the cell where its row and column intersect) which were, traditionally, a "spread" across facing pages of a bound ledger (book for keeping accounting records) or on oversized sheets of paper (termed 'analysis paper') ruled into rows and columns in that format and approximately twice as wide as ordinary paper. Electronic spreadsheets Batch spreadsheet report generator BSRG A batch "spreadsheet" is indistinguishable from a batch compiler with added input data, producing an output report, i.e., a 4GL or conventional, non-interactive, batch computer program. However, this concept of an electronic spreadsheet was outlined in the 1961 paper "Budgeting Models and System Simulation" by Richard Mattessich. The subsequent work by Mattessich (1964a, Chpt. 9, Accounting and Analytical Methods) and its companion volume, Mattessich (1964b, Simulation of the Firm through a Budget Computer Program) applied computerized spreadsheets to accounting and budgeting systems (on mainframe computers programmed in FORTRAN IV). These batch spreadsheets dealt primarily with the addition or subtraction of entire columns or rows (of input variables), rather than individual cells. In 1962, this concept of the spreadsheet, called BCL for Business Computer Language, was implemented on an IBM 1130 and in 1963 was ported to an IBM 7040 by R. Brian Walsh at Marquette University, Wisconsin. This program was written in Fortran. Primitive timesharing was available on those machines. In 1968 BCL was ported by Walsh to the IBM 360/67 timesharing machine at Washington State University. It was used to assist in the teaching of finance to business students. Students were able to take information prepared by the professor and manipulate it to represent it and show ratios, etc. In 1964, a book entitled Business Computer Language was written by Kimball, Stoffells and Walsh. Both the book and program were copyrighted in 1966 and years later that copyright was renewed. Applied Data Resources had a FORTRAN preprocessor called Empires. In the late 1960s, Xerox used BCL to develop a more sophisticated version for their timesharing system. LANPAR spreadsheet compiler A key invention in the development of electronic spreadsheets was made by Rene K. Pardo and Remy Landau, who filed in 1970 on a spreadsheet automatic natural order calculation algorithm. While the patent was initially rejected by the patent office as being a purely mathematical invention, following 12 years of appeals, Pardo and Landau won a landmark court case at the Predecessor Court of the Federal Circuit (CCPA), overturning the Patent Office in 1983, establishing that "something does not cease to become patentable merely because the point of novelty is in an algorithm." However, in 1995 a federal district court ruled the patent unenforceable due to inequitable conduct by the inventors during the application process. The United States Court of Appeals for the Federal Circuit upheld that decision in 1996. The actual software was called LANPAR (LANguage for Programming Arrays at Random). This was conceived and entirely developed in the summer of 1969, following Pardo and Landau's recent graduation from Harvard University. 
Co-inventor Rene Pardo recalls that he felt that one manager at Bell Canada should not have to depend on programmers to program and modify budgeting forms, and he thought of letting users type out forms in any order and having an electronic computer calculate results in the right order ("Forward Referencing/Natural Order Calculation"). Pardo and Landau developed and implemented the software in 1969. LANPAR was used by Bell Canada, AT&T, and the 18 operating telephone companies nationwide for their local and national budgeting operations. LANPAR was also used by General Motors. Its uniqueness was Pardo and Landau's co-invention of forward referencing/natural order calculation (one of the first "non-procedural" computer languages), as opposed to the left-to-right, top-to-bottom sequence for calculating the results in each cell that was used by VisiCalc, SuperCalc, and the first version of MultiPlan. Without forward referencing/natural order calculation, the user had to refresh the spreadsheet until the values in all cells remained unchanged. Once the cell values stayed constant, the user was assured that there were no remaining forward references within the spreadsheet. Autoplan/Autotab spreadsheet programming language In 1968, three former employees from the General Electric computer company headquartered in Phoenix, Arizona, set out to start their own software development house. A. Leroy Ellison, Harry N. Cantrell, and Russell E. Edwards found themselves doing a large number of calculations when making tables for the business plans that they were presenting to venture capitalists. They decided to save themselves a lot of effort and wrote a computer program that produced their tables for them. This program, originally conceived as a simple utility for their personal use, would turn out to be the first software product offered by the company that would become known as Capex Corporation. "AutoPlan" ran on GE's Time-sharing service; afterward, a version that ran on IBM mainframes was introduced under the name AutoTab. (National CSS offered a similar product, CSSTAB, which had a moderate timesharing user base by the early 1970s. A major application was opinion research tabulation.) AutoPlan/AutoTab was not a WYSIWYG interactive spreadsheet program; it was a simple scripting language for spreadsheets. The user defined the names and labels for the rows and columns, then the formulas that defined each row or column. In 1975, Autotab-II was advertised as extending the original to a maximum of "1,500 rows and columns, combined in any proportion the user requires..." GE Information Services, which operated the time-sharing service, also launched its own spreadsheet system, Financial Analysis Language (FAL), circa 1974. It was later supplemented by an additional spreadsheet language, TABOL, which was developed by an independent author, Oliver Vellacott, in the UK. Both FAL and TABOL were integrated with GEIS's database system, DMS. IBM Financial Planning and Control System The IBM Financial Planning and Control System was developed in 1976 by Brian Ingham at IBM Canada. It was implemented by IBM in at least 30 countries. It ran on an IBM mainframe and was the first application for financial planning developed with APL that completely hid the programming language from the end-user. Through IBM's VM operating system, it was among the first programs to auto-update each copy of the application as new versions were released. Users could specify simple mathematical relationships between rows and between columns. 
Compared to any contemporary alternatives, it could support very large spreadsheets. It loaded actual financial planning data drawn from the legacy batch system into each user's spreadsheet monthly. It was designed to optimize the power of APL through object kernels, increasing program efficiency by as much as 50-fold over traditional programming approaches. APLDOT modeling language An example of an early "industrial weight" spreadsheet was APLDOT, developed in 1976 at the United States Railway Association on an IBM 360/91, running at The Johns Hopkins University Applied Physics Laboratory in Laurel, MD. The application was used successfully for many years in developing such applications as financial and costing models for the US Congress and for Conrail. APLDOT was dubbed a "spreadsheet" because financial analysts and strategic planners used it to solve the same problems they addressed with paper spreadsheet pads. VisiCalc for the Apple II The concept of spreadsheets became widely known due to VisiCalc, developed for the Apple II in 1979 by Dan Bricklin and Bob Frankston. Significantly, it also turned the personal computer from a hobby for computer enthusiasts into a business tool. VisiCalc was the first spreadsheet that combined many of the essential features of modern spreadsheet applications, such as a WYSIWYG interactive user interface, automatic recalculation, status and formula lines, range copying with relative and absolute references, and formula building by selecting referenced cells. Unaware of LANPAR at the time, PC World magazine called VisiCalc the first electronic spreadsheet. Bricklin has spoken of watching his university professor create a table of calculation results on a blackboard. When the professor found an error, he had to tediously erase and rewrite several sequential entries in the table, prompting Bricklin to think that he could replicate the process on a computer, using the blackboard as the model to view results of underlying formulas. His idea became VisiCalc. VisiCalc for the Apple II went on to become the first killer application, a program so compelling that people would buy a particular computer just to use it. It was ported to other computers, including CP/M machines, Atari 8-bit computers, and the Commodore PET, but VisiCalc remains best known as an Apple II program. SuperCalc for CP/M SuperCalc was a spreadsheet application published by Sorcim in 1980, and originally bundled (along with WordStar) as part of the CP/M software package included with the Osborne 1 portable computer. It quickly became the de facto standard spreadsheet for CP/M. Lotus 1-2-3 spreadsheet for IBM PC DOS The introduction of Lotus 1-2-3 in November 1982 accelerated the acceptance of the IBM Personal Computer. It was written especially for IBM PC DOS and had improvements in speed and graphics compared to VisiCalc on the Apple II, which helped it grow in popularity. Lotus 1-2-3 was the leading spreadsheet for several years. Microsoft Excel for Apple Macintosh and Windows Microsoft released the first version of Excel for the Apple Macintosh on September 30, 1985, and then ported it to Windows, with the first version being numbered 2.05 (to synchronize with the Macintosh version 2.2) and released in November 1987. Microsoft's Windows 3.x platforms of the early 1990s made it possible for their Excel spreadsheet application to take market share from Lotus. By the time Lotus responded with usable Windows products, Microsoft had begun to assemble their Office suite. 
By 1995, Excel was the market leader, edging out Lotus 1-2-3, and in 2013, IBM discontinued Lotus 1-2-3 altogether. Google Sheets, Online, Web-based spreadsheets In 2006 Google launched the beta release of Google Sheets, a web-based spreadsheet application that can be accessed by multiple users from any device type using a compatible web browser; it can be used online and offline (with or without internet connectivity). Google Sheets originated from XL2Web, a web-based spreadsheet application developed by 2Web Technologies, combined with DocVerse, which enabled multiple-user online collaboration on Office documents. In 2016 Collabora Online Calc was launched, notable in that the web-based spreadsheet could be hosted and integrated into any environment without dependency on a third party for authentication or maintenance. Collabora Online runs LibreOfficeKit at its core, which grew from StarOffice, launched in 1985. Mainframe spreadsheets The Works Records System at ICI, developed in 1974 on an IBM 370/145. ExecuCalc, from Parallax Systems, Inc.: released in late 1982, ExecuCalc was the first mainframe "visi-clone" which duplicated the features of VisiCalc on IBM mainframes with 3270 display terminals. Over 150 copies were licensed (35 to Fortune 500 companies). DP managers were attracted to compatibility and avoiding then-expensive PC purchases (see the 1983 Computerworld magazine front page article and advertisement). Other spreadsheets Notable current spreadsheet software: Apache OpenOffice Calc is free and open-source. Calligra Sheets (formerly KCalc) Collabora Online Calc for mobile and desktop apps are free, open-source, cross-platform enterprise-ready editions of LibreOffice. Corel Quattro Pro (WordPerfect Office) Gnumeric is free and cross-platform; it is part of the GNOME Free Software Desktop Project. Kingsoft Spreadsheets LibreOffice Calc is free, open-source and cross platform. Numbers is Apple Inc.'s spreadsheet software, part of iWork. OnlyOffice Docs Spreadsheet editor is free and open source. PlanMaker (SoftMaker Office) Pyspread Sourcetable Discontinued spreadsheet software: 20/20 3D-Calc for Atari ST computers As Easy As Framework by Forefront Corporation/Ashton-Tate (1983–84) GNU Oleo – A traditional terminal mode spreadsheet for UNIX/UNIX-like systems IBM Lotus Symphony (2007) Javelin Software KCells Lucid 3-D Lotus Improv Lotus Jazz for Macintosh Lotus Symphony (1984) MultiPlan Claris' Resolve (Macintosh) NeoOffice Resolver One Borland's Quattro Pro SC IM (formerly SC - Spreadsheet Calculator) SIAG SuperCalc T/Maker Target Planner Calc for CP/M and TRS-DOS Trapeze for Macintosh Wingz for Macintosh Other products Several companies have attempted to break into the spreadsheet market with programs based on very different paradigms. Lotus introduced what is likely the most successful example, Lotus Improv, which saw some commercial success, notably in the financial world where its powerful data mining capabilities remain well respected to this day. Spreadsheet 2000 attempted to dramatically simplify formula construction, but was generally not successful. Concepts The main concepts are those of a grid of cells, called a sheet, with either raw data, called values, or formulas in the cells. Formulas say how to mechanically compute new values from existing values. Values are generally numbers, but can also be pure text, dates, months, etc. Extensions of these concepts include logical spreadsheets. 
Various tools for programming sheets, visualizing data, remotely connecting sheets, displaying cells' dependencies, etc. are commonly provided. Cells A "cell" can be thought of as a box for holding data. A single cell is usually referenced by its column and row (for example, C2 refers to the cell in column C, row 2). Usually rows, representing the dependent variables, are referenced in decimal notation starting from 1, while columns representing the independent variables use 26-adic bijective numeration using the letters A-Z as numerals. Its physical size can usually be tailored to its content by dragging its height or width at box intersections (or for entire columns or rows by dragging the column- or row-headers). An array of cells is called a sheet or worksheet. It is analogous to an array of variables in a conventional computer program (although certain unchanging values, once entered, could be considered, by the same analogy, constants). In most implementations, many worksheets may be located within a single spreadsheet. A worksheet is simply a subset of the spreadsheet divided for the sake of clarity. Functionally, the spreadsheet operates as a whole and all cells operate as global variables within the spreadsheet (each variable having 'read' access only, except for its own containing cell). A cell may contain a value or a formula, or it may simply be left empty. By convention, formulas usually begin with an = sign. Values A value can be entered from the computer keyboard by directly typing into the cell itself. Alternatively, a value can be based on a formula (see below), which might perform a calculation, display the current date or time, or retrieve external data such as a stock quote or a database value. The Spreadsheet Value Rule Computer scientist Alan Kay used the term value rule to summarize a spreadsheet's operation: a cell's value relies solely on the formula the user has typed into the cell. The formula may rely on the value of other cells, but those cells are likewise restricted to user-entered data or formulas. There are no 'side effects' to calculating a formula: the only output is to display the calculated result inside its occupying cell. There is no natural mechanism for permanently modifying the contents of a cell unless the user manually modifies the cell's contents. In the context of programming languages, this yields a limited form of first-order functional programming. Automatic recalculation A standard of spreadsheets since the 1980s, this optional feature eliminates the need to manually request the spreadsheet program to recalculate values (nowadays typically the default option unless specifically 'switched off' for large spreadsheets, usually to improve performance). Some earlier spreadsheets required a manual request to recalculate since the recalculation of large or complex spreadsheets often reduced data entry speed. Many modern spreadsheets still retain this option. Recalculation generally requires that there are no circular dependencies in a spreadsheet. A dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. Dependency graphs without circular dependencies form directed acyclic graphs, representations of partial orderings (in this case, across a spreadsheet) that can be relied upon to give a definite result. 
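The dependency-graph idea can be sketched directly with Python's standard-library graphlib: map each cell to the cells its formula reads, and a topological order gives a "natural order" in which every cell is recalculated only after its precedents (a circular reference raises an error). The cell names below are invented for the example.

```python
from graphlib import TopologicalSorter

# Each cell maps to the set of cells whose values it needs (its precedents),
# e.g. B2 depends on B1 and C10, and B1 depends on C10.
depends_on = {
    "C10": set(),
    "B1": {"C10"},
    "B2": {"B1", "C10"},
}

# static_order() yields the cells so that every precedent comes before the cells
# that use it; a circular dependency would raise graphlib.CycleError instead.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # ['C10', 'B1', 'B2']
```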
Real-time update This feature refers to updating a cell's contents periodically with a value from an external source—such as a cell in a "remote" spreadsheet. For shared, Web-based spreadsheets, it applies to "immediately" updating cells another user has updated. All dependent cells must be updated also. Locked cell Once entered, selected cells (or the entire spreadsheet) can optionally be "locked" to prevent accidental overwriting. Typically this would apply to cells containing formulas but might apply to cells containing "constants" such as a kilogram/pounds conversion factor (2.20462262 to eight decimal places). Even though individual cells are marked as locked, the spreadsheet data are not protected until the feature is activated in the file preferences. Data format A cell or range can optionally be defined to specify how the value is displayed. The default display format is usually set by its initial content if not specifically previously set, so that for example "31/12/2007" or "31 Dec 2007" would default to the cell format of date. Similarly adding a % sign after a numeric value would tag the cell as a percentage cell format. The cell contents are not changed by this format, only the displayed value. Some cell formats such as "numeric" or "currency" can also specify the number of decimal places. This can allow invalid operations (such as doing multiplication on a cell containing a date), resulting in illogical results without an appropriate warning. Cell formatting Depending on the capability of the spreadsheet application, each cell (like its counterpart the "style" in a word processor) can be separately formatted using the attributes of either the content (point size, color, bold or italic) or the cell (border thickness, background shading, color). To aid the readability of a spreadsheet, cell formatting may be conditionally applied to data; for example, a negative number may be displayed in red. A cell's formatting does not typically affect its content and depending on how cells are referenced or copied to other worksheets or applications, the formatting may not be carried with the content. Named cells In most implementations, a cell, or group of cells in a column or row, can be "named" enabling the user to refer to those cells by a name rather than by a grid reference. Names must be unique within the spreadsheet, but when using multiple sheets in a spreadsheet file, an identically named cell range on each sheet can be used if it is distinguished by adding the sheet name. One reason for this usage is for creating or running macros that repeat a command across many sheets. Another reason is that formulas with named variables are readily checked against the algebra they are intended to implement (they resemble Fortran expressions). The use of named variables and named functions also makes the spreadsheet structure more transparent. Cell reference In place of a named cell, an alternative approach is to use a cell (or grid) reference. Most cell references indicate another cell in the same spreadsheet, but a cell reference can also refer to a cell in a different sheet within the same spreadsheet, or (depending on the implementation) to a cell in another spreadsheet entirely, or a value from a remote application. A typical cell reference in "A1" style consists of one or two case-insensitive letters to identify the column (if there are up to 256 columns: A–Z and AA–IV) followed by a row number (e.g., in the range 1–65536). 
Either part can be relative (it changes when the formula it is in is moved or copied), or absolute (indicated with $ in front of the part of the cell reference concerned). The alternative "R1C1" reference style consists of the letter R, the row number, the letter C, and the column number; relative row or column numbers are indicated by enclosing the number in square brackets. Most current spreadsheets use the A1 style, some providing the R1C1 style as a compatibility option. When the computer calculates a formula in one cell to update the displayed value of that cell, cell reference(s) in that cell, naming some other cell(s), cause the computer to fetch the value of the named cell(s). A cell on the same "sheet" is usually addressed as: =A1 A cell on a different sheet of the same spreadsheet is usually addressed as: =SHEET2!A1 (that is, the first cell in sheet 2 of the same spreadsheet). Some spreadsheet implementations, such as Excel, allow cell references to another spreadsheet (not the currently open and active file) on the same computer or a local network. It may also refer to a cell in another open and active spreadsheet on the same computer or network that is defined as shareable. These references contain the complete filename, such as: ='C:\Documents and Settings\Username\My spreadsheets\[main sheet]Sheet1!A1 In a spreadsheet, references to cells automatically update when new rows or columns are inserted or deleted. Care must be taken, however, when adding a row immediately before a set of column totals to ensure that the totals reflect the values of the additional rows, which they often do not. A circular reference occurs when the formula in one cell refers, directly or indirectly through a chain of cell references, to another cell that refers back to the first cell. Many common errors cause circular references. However, some valid techniques use circular references. These techniques, after many spreadsheet recalculations, usually converge on the correct values for those cells. Cell ranges Likewise, instead of using a named range of cells, a range reference can be used. A reference to a range of cells is typically of the form A1:A6, which specifies all the cells in the range A1 through to A6. A formula such as "=SUM(A1:A6)" would add all the cells specified and put the result in the cell containing the formula itself. Sheets In the earliest spreadsheets, cells were a simple two-dimensional grid. Over time, the model has expanded to include a third dimension, and in some cases a series of named grids, called sheets. The most advanced examples allow inversion and rotation operations which can slice and project the data set in various ways. Formulas A formula identifies the calculation needed to place the result in the cell it is contained within. A cell containing a formula, therefore, has two display components: the formula itself and the resulting value. The formula is normally only shown when the cell is selected by "clicking" the mouse over a particular cell; otherwise, the cell displays the result of the calculation. A formula assigns values to a cell or range of cells, and typically has the format =expression, where the expression consists of: values, such as 2, 9.14 or 6.67E-11; references to other cells, such as A1 for a single cell or B1:B3 for a range; arithmetic operators, such as +, -, *, /, and others; relational operators, such as >=, <, and others; and functions, such as SUM(), TAN(), and many others. When a cell contains a formula, it often contains references to other cells. 
Such a cell reference is a type of variable. Its value is the value of the referenced cell or some derivation of it. If that cell in turn references other cells, the value depends on the values of those.
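To make the dependency idea concrete, the following minimal Python sketch treats each cell as either a constant or a formula that looks up other cells; evaluating a cell then recursively fetches the values it depends on, and a visited set catches the circular references mentioned earlier. This is only an illustration of the concept, with made-up cell contents, not how any real spreadsheet engine is implemented:

# Each cell holds either a constant or a formula written as a function of a lookup callback.
cells = {
    "A1": 2,
    "A2": 3,
    "A3": lambda get: get("A1") + get("A2"),   # behaves like "=A1+A2"
    "B1": lambda get: get("A3") * 10,          # depends indirectly on A1 and A2
}

def value(ref, _visiting=frozenset()):
    # Return a cell's value, recursing through any references it contains.
    if ref in _visiting:
        raise ValueError("circular reference involving " + ref)
    content = cells[ref]
    if callable(content):                      # a formula: evaluate its references first
        return content(lambda r: value(r, _visiting | {ref}))
    return content                             # a constant

print(value("B1"))   # 50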
https://en.wikipedia.org/wiki/Steam%20engine
Steam engine
A steam engine is a heat engine that performs mechanical work using steam as its working fluid. The steam engine uses the force produced by steam pressure to push a piston back and forth inside a cylinder. This pushing force can be transformed by a connecting rod and crank into rotational force for work. The term "steam engine" is most commonly applied to reciprocating engines as just described, although some authorities have also referred to the steam turbine and devices such as Hero's aeolipile as "steam engines". The essential feature of steam engines is that they are external combustion engines, where the working fluid is separated from the combustion products. The ideal thermodynamic cycle used to analyze this process is called the Rankine cycle. In general usage, the term steam engine can refer to either complete steam plants (including boilers etc.), such as railway steam locomotives and portable engines, or may refer to the piston or turbine machinery alone, as in the beam engine and stationary steam engine. As noted, steam-driven devices such as the aeolipile were known in the first century AD, and there were a few other uses recorded in the 16th century. In 1606 Jerónimo de Ayanz y Beaumont patented his invention of the first steam-powered water pump for draining mines. Thomas Savery is considered the inventor of the first commercially used steam-powered device, a steam pump that used steam pressure operating directly on the water. The first commercially successful engine that could transmit continuous power to a machine was developed in 1712 by Thomas Newcomen. James Watt made a critical improvement in 1764, by removing spent steam to a separate vessel for condensation, greatly improving the amount of work obtained per unit of fuel consumed. By the 19th century, stationary steam engines powered the factories of the Industrial Revolution. Steam engines replaced sails on paddle steamers, and steam locomotives operated on the railways. Reciprocating piston-type steam engines were the dominant source of power until the early 20th century. The efficiency of stationary steam engines increased dramatically until about 1922. The highest Rankine cycle efficiency of 91% and combined thermal efficiency of 31% were demonstrated and published in 1921 and 1928. Advances in the design of electric motors and internal combustion engines resulted in the gradual replacement of steam engines in commercial usage. Steam turbines replaced reciprocating engines in power generation, due to lower cost, higher operating speed, and higher efficiency, although small-scale steam turbines are much less efficient than large ones. Large reciprocating piston steam engines are still being manufactured in Germany. History Early experiments As noted, one recorded rudimentary steam-powered engine was the aeolipile described by Hero of Alexandria, a Hellenistic mathematician and engineer in Roman Egypt during the first century AD. In the following centuries, the few steam-powered engines known were, like the aeolipile, essentially experimental devices used by inventors to demonstrate the properties of steam. A rudimentary steam turbine device was described by Taqi al-Din in Ottoman Egypt in 1551 and by Giovanni Branca in Italy in 1629. The Spanish inventor Jerónimo de Ayanz y Beaumont received patents in 1606 for 50 steam-powered inventions, including a water pump for draining inundated mines.
Frenchman Denis Papin did some useful work on the steam digester in 1679, and first used a piston to raise weights in 1690. Pumping engines The first commercial steam-powered device was a water pump, developed in 1698 by Thomas Savery. It used condensing steam to create a vacuum which raised water from below and then used steam pressure to raise it higher. Small engines were effective, though larger models were problematic. They had a very limited lift height and were prone to boiler explosions. Savery's engine was used in mines, in pumping stations, and for supplying water to water wheels powering textile machinery. One advantage of Savery's engine was its low cost. Bento de Moura Portugal introduced an improvement of Savery's construction "to render it capable of working itself", as described by John Smeaton in the Philosophical Transactions published in 1751. It continued to be manufactured until the late 18th century. At least one engine was still known to be operating in 1820. Piston steam engines The first commercially successful engine that could transmit continuous power to a machine was the atmospheric engine, invented by Thomas Newcomen around 1712. It improved on Savery's steam pump, using a piston as proposed by Papin. Newcomen's engine was relatively inefficient, and mostly used for pumping water. It worked by creating a partial vacuum by condensing steam under a piston within a cylinder. It was employed for draining mine workings at depths originally impractical using traditional means, and for providing reusable water for driving waterwheels at factories sited away from a suitable "head". Water that passed over the wheel was pumped up into a storage reservoir above the wheel. In 1780 James Pickard patented the use of a flywheel and crankshaft to provide rotative motion from an improved Newcomen engine. In 1720, Jacob Leupold described a two-cylinder high-pressure steam engine. The invention was published in his major work "Theatri Machinarum Hydraulicarum". The engine used two heavy pistons to provide motion to a water pump. Each piston was raised by the steam pressure and returned to its original position by gravity. The two pistons shared a common four-way rotary valve connected directly to a steam boiler. The next major step occurred when James Watt developed (1763–1775) an improved version of Newcomen's engine, with a separate condenser. Boulton and Watt's early engines used half as much coal as John Smeaton's improved version of Newcomen's. Newcomen's and Watt's early engines were "atmospheric". They were powered by air pressure pushing a piston into the partial vacuum generated by condensing steam, instead of the pressure of expanding steam. The engine cylinders had to be large because the only usable force acting on them was atmospheric pressure. Watt developed his engine further, modifying it to provide a rotary motion suitable for driving machinery. This enabled factories to be sited away from rivers, and accelerated the pace of the Industrial Revolution. High-pressure engines The meaning of high pressure, together with an actual value above ambient, depends on the era in which the term was used. For early use of the term, Van Reimsdijk refers to steam being at a sufficiently high pressure that it could be exhausted to the atmosphere without reliance on a vacuum to enable it to perform useful work. Watt's condensing engines were known, at the time, as low-pressure engines compared with the high-pressure, non-condensing engines of the same period.
Watt's patent prevented others from making high pressure and compound engines. Shortly after Watt's patent expired in 1800, Richard Trevithick and, separately, Oliver Evans in 1801 introduced engines using high-pressure steam; Trevithick obtained his high-pressure engine patent in 1802, and Evans had made several working models before then. These were much more powerful for a given cylinder size than previous engines and could be made small enough for transport applications. Thereafter, technological developments and improvements in manufacturing techniques (partly brought about by the adoption of the steam engine as a power source) resulted in the design of more efficient engines that could be smaller, faster, or more powerful, depending on the intended application. The Cornish engine was developed by Trevithick and others in the 1810s. It was a compound cycle engine that used high-pressure steam expansively, then condensed the low-pressure steam, making it relatively efficient. The Cornish engine had irregular motion and torque through the cycle, limiting it mainly to pumping. Cornish engines were used in mines and for water supply until the late 19th century. Horizontal stationary engine Early builders of stationary steam engines considered that horizontal cylinders would be subject to excessive wear. Their engines were therefore arranged with the piston axis in vertical position. In time the horizontal arrangement became more popular, allowing compact, but powerful engines to be fitted in smaller spaces. The acme of the horizontal engine was the Corliss steam engine, patented in 1849, which was a four-valve counter flow engine with separate steam admission and exhaust valves and automatic variable steam cutoff. When Corliss was given the Rumford Medal, the committee said that "no one invention since Watt's time has so enhanced the efficiency of the steam engine". In addition to using 30% less steam, it provided more uniform speed due to variable steam cut off, making it well suited to manufacturing, especially cotton spinning. Road vehicles The first experimental road-going steam-powered vehicles were built in the late 18th century, but it was not until after Richard Trevithick had developed the use of high-pressure steam, around 1800, that mobile steam engines became a practical proposition. The first half of the 19th century saw great progress in steam vehicle design, and by the 1850s it was becoming viable to produce them on a commercial basis. This progress was dampened by legislation which limited or prohibited the use of steam-powered vehicles on roads. Improvements in vehicle technology continued from the 1860s to the 1920s. Steam road vehicles were used for many applications. In the 20th century, the rapid development of internal combustion engine technology led to the demise of the steam engine as a source of propulsion of vehicles on a commercial basis, with relatively few remaining in use beyond the Second World War. Many of these vehicles were acquired by enthusiasts for preservation, and numerous examples are still in existence. In the 1960s, the air pollution problems in California gave rise to a brief period of interest in developing and studying steam-powered vehicles as a possible means of reducing the pollution. Apart from interest by steam enthusiasts, the occasional replica vehicle, and experimental technology, no steam vehicles are in production at present. Marine engines Near the end of the 19th century, compound engines came into widespread use. 
Compound engines exhausted steam into successively larger cylinders to accommodate the higher volumes at reduced pressures, giving improved efficiency. These stages were called expansions, with double- and triple-expansion engines being common, especially in shipping where efficiency was important to reduce the weight of coal carried. Steam engines remained the dominant source of power until the early 20th century, when advances in the design of the steam turbine, electric motors, and internal combustion engines gradually resulted in the replacement of reciprocating (piston) steam engines, with merchant shipping relying increasingly upon diesel engines, and warships on the steam turbine. Steam locomotives As the development of steam engines progressed through the 18th century, various attempts were made to apply them to road and railway use. In 1784, William Murdoch, a Scottish inventor, built a model steam road locomotive. An early working model of a steam rail locomotive was designed and constructed by steamboat pioneer John Fitch in the United States probably during the 1780s or 1790s. His steam locomotive used interior bladed wheels guided by rails or tracks. The first full-scale working railway steam locomotive was built by Richard Trevithick in the United Kingdom and, on 21 February 1804, the world's first railway journey took place as Trevithick's steam locomotive hauled 10 tons of iron, 70 passengers and five wagons along the tramway from the Pen-y-darren ironworks, near Merthyr Tydfil, to Abercynon in south Wales. The design incorporated a number of important innovations that included using high-pressure steam, which reduced the weight of the engine and increased its efficiency. Trevithick visited the Newcastle area later in 1804, and the colliery railways in north-east England became the leading centre for experimentation and development of steam locomotives. Trevithick continued his own experiments using a trio of locomotives, concluding with the Catch Me Who Can in 1808. Only four years later, the successful twin-cylinder locomotive Salamanca by Matthew Murray was used by the edge-railed rack-and-pinion Middleton Railway. In 1825 George Stephenson built the Locomotion for the Stockton and Darlington Railway. This was the first public steam railway in the world; then in 1829, he built The Rocket, which was entered in and won the Rainhill Trials. The Liverpool and Manchester Railway opened in 1830, making exclusive use of steam power for both passenger and freight trains. Steam locomotives continued to be manufactured until the late twentieth century in places such as China and the former East Germany (where the DR Class 52.80 was produced). Steam turbines The final major evolution of steam engine design was the use of steam turbines starting in the late part of the 19th century. Steam turbines are generally more efficient than reciprocating piston type steam engines (for outputs above several hundred horsepower), have fewer moving parts, and provide rotary power directly instead of through a connecting rod system or similar means. Steam turbines virtually replaced reciprocating engines in electricity generating stations early in the 20th century, where their efficiency, higher speed appropriate to generator service, and smooth rotation were advantages. Today most electric power is provided by steam turbines. In the United States, 90% of the electric power is produced in this way using a variety of heat sources.
Steam turbines were extensively applied for propulsion of large ships throughout most of the 20th century. Present development Although the reciprocating steam engine is no longer in widespread commercial use, various companies are exploring or exploiting the potential of the engine as an alternative to internal combustion engines. Components and accessories of steam engines There are two fundamental components of a steam plant: the boiler or steam generator, and the "motor unit", referred to itself as a "steam engine". Stationary steam engines in fixed buildings may have the boiler and engine in separate buildings some distance apart. For portable or mobile use, such as steam locomotives, the two are mounted together. The widely used reciprocating engine typically consisted of a cast-iron cylinder, piston, connecting rod and beam or a crank and flywheel, and miscellaneous linkages. Steam was alternately supplied and exhausted by one or more valves. Speed control was either automatic, using a governor, or by a manual valve. The cylinder casting contained steam supply and exhaust ports. Engines equipped with a condenser are a separate type than those that exhaust to the atmosphere. Other components are often present; pumps (such as an injector) to supply water to the boiler during operation, condensers to recirculate the water and recover the latent heat of vaporisation, and superheaters to raise the temperature of the steam above its saturated vapour point, and various mechanisms to increase the draft for fireboxes. When coal is used, a chain or screw stoking mechanism and its drive engine or motor may be included to move the fuel from a supply bin (bunker) to the firebox. Heat source The heat required for boiling the water and raising the temperature of the steam can be derived from various sources, most commonly from burning combustible materials with an appropriate supply of air in a closed space (e.g., combustion chamber, firebox, furnace). In the case of model or toy steam engines and a few full scale cases, the heat source can be an electric heating element. Boilers Boilers are pressure vessels that contain water to be boiled, and features that transfer the heat to the water as effectively as possible. The two most common types are: Water-tube boiler Water is passed through tubes surrounded by hot gas. Fire-tube boiler Hot gas is passed through tubes immersed in water, the same water also circulates in a water jacket surrounding the firebox and, in high-output locomotive boilers, also passes through tubes in the firebox itself (thermic syphons and security circulators). Fire-tube boilers were the main type used for early high-pressure steam (typical steam locomotive practice), but they were to a large extent displaced by more economical water tube boilers in the late 19th century for marine propulsion and large stationary applications. Many boilers raise the temperature of the steam after it has left that part of the boiler where it is in contact with the water. Known as superheating it turns 'wet steam' into 'superheated steam'. It avoids the steam condensing in the engine cylinders, and gives a significantly higher efficiency. Motor units In a steam engine, a piston or steam turbine or any other similar device for doing mechanical work takes a supply of steam at high pressure and temperature and gives out a supply of steam at lower pressure and temperature, using as much of the difference in steam energy as possible to do mechanical work. 
These "motor units" are often called 'steam engines' in their own right. Engines using compressed air or other gases differ from steam engines only in details that depend on the nature of the gas although compressed air has been used in steam engines without change. Cold sink As with all heat engines, the majority of primary energy must be emitted as waste heat at relatively low temperature. The simplest cold sink is to vent the steam to the environment. This is often used on steam locomotives to avoid the weight and bulk of condensers. Some of the released steam is vented up the chimney so as to increase the draw on the fire, which greatly increases engine power, but reduces efficiency. Sometimes the waste heat from the engine is useful itself, and in those cases, very high overall efficiency can be obtained. Steam engines in stationary power plants use surface condensers as a cold sink. The condensers are cooled by water flow from oceans, rivers, lakes, and often by cooling towers which evaporate water to provide cooling energy removal. The resulting condensed hot water (condensate), is then pumped back up to pressure and sent back to the boiler. A dry-type cooling tower is similar to an automobile radiator and is used in locations where water is costly. Waste heat can also be ejected by evaporative (wet) cooling towers, which use a secondary external water circuit that evaporates some of flow to the air. River boats initially used a jet condenser in which cold water from the river is injected into the exhaust steam from the engine. Cooling water and condensate mix. While this was also applied for sea-going vessels, generally after only a few days of operation the boiler would become coated with deposited salt, reducing performance and increasing the risk of a boiler explosion. Starting about 1834, the use of surface condensers on ships eliminated fouling of the boilers, and improved engine efficiency. Evaporated water cannot be used for subsequent purposes (other than rain somewhere), whereas river water can be re-used. In all cases, the steam plant boiler feed water, which must be kept pure, is kept separate from the cooling water or air. Water pump Most steam boilers have a means to supply water whilst at pressure, so that they may be run continuously. Utility and industrial boilers commonly use multi-stage centrifugal pumps; however, other types are used. Another means of supplying lower-pressure boiler feed water is an injector, which uses a steam jet usually supplied from the boiler. Injectors became popular in the 1850s but are no longer widely used, except in applications such as steam locomotives. It is the pressurization of the water that circulates through the steam boiler that allows the water to be raised to temperatures well above boiling point of water at one atmospheric pressure, and by that means to increase the efficiency of the steam cycle. Monitoring and control For safety reasons, nearly all steam engines are equipped with mechanisms to monitor the boiler, such as a pressure gauge and a sight glass to monitor the water level. Many engines, stationary and mobile, are also fitted with a governor to regulate the speed of the engine without the need for human interference. The most useful instrument for analyzing the performance of steam engines is the steam engine indicator. 
Early versions were in use by 1851, but the most successful indicator was developed for the high speed engine inventor and manufacturer Charles Porter by Charles Richard and exhibited at London Exhibition in 1862. The steam engine indicator traces on paper the pressure in the cylinder throughout the cycle, which can be used to spot various problems and calculate developed horsepower. It was routinely used by engineers, mechanics and insurance inspectors. The engine indicator can also be used on internal combustion engines. See image of indicator diagram below (in Types of motor units section). Governor The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one on the equipment of a flour mill Boulton & Watt were building. The governor could not actually hold a set speed, because it would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped only with this governor were not suitable for operations requiring constant speed, such as cotton spinning. The governor was improved over time and coupled with variable steam cut off, good speed control in response to changes in load was attainable near the end of the 19th century. Engine configuration Simple engine In a simple engine, or "single expansion engine" the charge of steam passes through the entire expansion process in an individual cylinder, although a simple engine may have one or more individual cylinders. It is then exhausted directly into the atmosphere or into a condenser. As steam expands in passing through a high-pressure engine, its temperature drops because no heat is being added to the system; this is known as adiabatic expansion and results in steam entering the cylinder at high temperature and leaving at lower temperature. This causes a cycle of heating and cooling of the cylinder with every stroke, which is a source of inefficiency. The dominant efficiency loss in reciprocating steam engines is cylinder condensation and re-evaporation. The steam cylinder and adjacent metal parts/ports operate at a temperature about halfway between the steam admission saturation temperature and the saturation temperature corresponding to the exhaust pressure. As high-pressure steam is admitted into the working cylinder, much of the high-temperature steam is condensed as water droplets onto the metal surfaces, significantly reducing the steam available for expansive work. When the expanding steam reaches low pressure (especially during the exhaust stroke), the previously deposited water droplets that had just been formed within the cylinder/ports now boil away (re-evaporation) and this steam does no further work in the cylinder. There are practical limits on the expansion ratio of a steam engine cylinder, as increasing cylinder surface area tends to exacerbate the cylinder condensation and re-evaporation issues. This negates the theoretical advantages associated with a high ratio of expansion in an individual cylinder. Compound engines A method to lessen the magnitude of energy loss to a very long cylinder was invented in 1804 by British engineer Arthur Woolf, who patented his Woolf high-pressure compound engine in 1805. 
In the compound engine, high-pressure steam from the boiler expands in a high-pressure (HP) cylinder and then enters one or more subsequent lower-pressure (LP) cylinders. The complete expansion of the steam now occurs across multiple cylinders, with the overall temperature drop within each cylinder reduced considerably. By expanding the steam in steps with smaller temperature range (within each cylinder) the condensation and re-evaporation efficiency issue (described above) is reduced. This reduces the magnitude of cylinder heating and cooling, increasing the efficiency of the engine. By staging the expansion in multiple cylinders, variations of torque can be reduced. To derive equal work from lower-pressure cylinder requires a larger cylinder volume as this steam occupies a greater volume. Therefore, the bore, and in rare cases the stroke, are increased in low-pressure cylinders, resulting in larger cylinders. Double-expansion (usually known as compound) engines expanded the steam in two stages. The pairs may be duplicated or the work of the large low-pressure cylinder can be split with one high-pressure cylinder exhausting into one or the other, giving a three-cylinder layout where cylinder and piston diameter are about the same, making the reciprocating masses easier to balance. Two-cylinder compounds can be arranged as: Cross compounds: The cylinders are side by side. Tandem compounds: The cylinders are end to end, driving a common connecting rod Angle compounds: The cylinders are arranged in a V (usually at a 90° angle) and drive a common crank. With two-cylinder compounds used in railway work, the pistons are connected to the cranks as with a two-cylinder simple at 90° out of phase with each other (quartered). When the double-expansion group is duplicated, producing a four-cylinder compound, the individual pistons within the group are usually balanced at 180°, the groups being set at 90° to each other. In one case (the first type of Vauclain compound), the pistons worked in the same phase driving a common crosshead and crank, again set at 90° as for a two-cylinder engine. With the three-cylinder compound arrangement, the LP cranks were either set at 90° with the HP one at 135° to the other two, or in some cases, all three cranks were set at 120°. The adoption of compounding was common for industrial units, for road engines and almost universal for marine engines after 1880; it was not universally popular in railway locomotives where it was often perceived as complicated. This is partly due to the harsh railway operating environment and limited space afforded by the loading gauge (particularly in Britain, where compounding was never common and not employed after 1930). However, although never in the majority, it was popular in many other countries. Multiple-expansion engines It is a logical extension of the compound engine (described above) to split the expansion into yet more stages to increase efficiency. The result is the multiple-expansion engine. Such engines use either three or four expansion stages and are known as triple- and quadruple-expansion engines respectively. These engines use a series of cylinders of progressively increasing diameter. These cylinders are designed to divide the work into equal shares for each expansion stage. As with the double-expansion engine, if space is at a premium, then two smaller cylinders may be used for the low-pressure stage. Multiple-expansion engines typically had the cylinders arranged inline, but various other formations were used. 
In the late 19th century, the Yarrow-Schlick-Tweedy balancing "system" was used on some marine triple-expansion engines. Y-S-T engines divided the low-pressure expansion stages between two cylinders, one at each end of the engine. This allowed the crankshaft to be better balanced, resulting in a smoother, faster-responding engine which ran with less vibration. This made the four-cylinder triple-expansion engine popular with large passenger liners (such as the Olympic class), but it was ultimately replaced by the virtually vibration-free turbine engine. It is noted, however, that triple-expansion reciprocating steam engines were used to drive the World War II Liberty ships, by far the largest number of identical ships ever built. Over 2,700 ships were built in the United States, from a British original design. The image in this section shows an animation of a triple-expansion engine. The steam travels through the engine from left to right. The valve chest for each of the cylinders is to the left of the corresponding cylinder. Land-based steam engines could exhaust their steam to atmosphere, as feed water was usually readily available. Prior to and during World War I, the expansion engine dominated marine applications, where high vessel speed was not essential. It was, however, superseded by the British invention of the steam turbine where speed was required, for instance in warships, such as the dreadnought battleships, and ocean liners. HMS Dreadnought of 1905 was the first major warship to replace the proven technology of the reciprocating engine with the then-novel steam turbine. Types of motor units Reciprocating piston In most reciprocating piston engines, the steam reverses its direction of flow at each stroke (counterflow), entering and exhausting from the same end of the cylinder. The complete engine cycle occupies one rotation of the crank and two piston strokes; the cycle also comprises four events – admission, expansion, exhaust, compression. These events are controlled by valves often working inside a steam chest adjacent to the cylinder; the valves distribute the steam by opening and closing steam ports communicating with the cylinder end(s) and are driven by valve gear, of which there are many types. The simplest valve gears give events of fixed length during the engine cycle and often make the engine rotate in only one direction. Many, however, have a reversing mechanism which additionally can provide means for saving steam as speed and momentum are gained, by gradually "shortening the cutoff" or, rather, shortening the admission event; this in turn proportionately lengthens the expansion period. However, as one and the same valve usually controls both steam flows, a short cutoff at admission adversely affects the exhaust and compression periods which should ideally always be kept fairly constant; if the exhaust event is too brief, the totality of the exhaust steam cannot evacuate the cylinder, choking it and giving excessive compression ("kick back"). In the 1840s and 1850s, there were attempts to overcome this problem by means of various patent valve gears with a separate, variable cutoff expansion valve riding on the back of the main slide valve; the latter usually had fixed or limited cutoff. The combined setup gave a fair approximation of the ideal events, at the expense of increased friction and wear, and the mechanism tended to be complicated.
The usual compromise solution has been to provide lap by lengthening rubbing surfaces of the valve in such a way as to overlap the port on the admission side, with the effect that the exhaust side remains open for a longer period after cut-off on the admission side has occurred. This expedient has since been generally considered satisfactory for most purposes and makes possible the use of the simpler Stephenson, Joy, and Walschaerts motions. Corliss, and later, poppet valve gears had separate admission and exhaust valves driven by trip mechanisms or cams profiled so as to give ideal events; most of these gears never succeeded outside of the stationary marketplace due to various other issues including leakage and more delicate mechanisms. Compression Before the exhaust phase is quite complete, the exhaust side of the valve closes, shutting a portion of the exhaust steam inside the cylinder. This determines the compression phase where a cushion of steam is formed against which the piston does work whilst its velocity is rapidly decreasing; it moreover obviates the pressure and temperature shock, which would otherwise be caused by the sudden admission of the high-pressure steam at the beginning of the following cycle. Lead in the valve timing The above effects are further enhanced by providing lead: as was later discovered with the internal combustion engine, it has been found advantageous since the late 1830s to advance the admission phase, giving the valve lead so that admission occurs a little before the end of the exhaust stroke in order to fill the clearance volume comprising the ports and the cylinder ends (not part of the piston-swept volume) before the steam begins to exert effort on the piston. Uniflow (or unaflow) engine Uniflow engines attempt to remedy the difficulties arising from the usual counterflow cycle where, during each stroke, the port and the cylinder walls will be cooled by the passing exhaust steam, whilst the hotter incoming admission steam will waste some of its energy in restoring the working temperature. The aim of the uniflow is to remedy this defect and improve efficiency by providing an additional port uncovered by the piston at the end of each stroke making the steam flow only in one direction. By this means, the simple-expansion uniflow engine gives efficiency equivalent to that of classic compound systems with the added advantage of superior part-load performance, and comparable efficiency to turbines for smaller engines below one thousand horsepower. However, the thermal expansion gradient uniflow engines produce along the cylinder wall gives practical difficulties.. Turbine engines A steam turbine consists of one or more rotors (rotating discs) mounted on a drive shaft, alternating with a series of stators (static discs) fixed to the turbine casing. The rotors have a propeller-like arrangement of blades at the outer edge. Steam acts upon these blades, producing rotary motion. The stator consists of a similar, but fixed, series of blades that serve to redirect the steam flow onto the next rotor stage. A steam turbine often exhausts into a surface condenser that provides a vacuum. The stages of a steam turbine are typically arranged to extract the maximum potential work from a specific velocity and pressure of steam, giving rise to a series of variably sized high- and low-pressure stages. 
Turbines are only efficient if they rotate at relatively high speed, therefore they are usually connected to reduction gearing to drive lower speed applications, such as a ship's propeller. In the vast majority of large electric generating stations, turbines are directly connected to generators with no reduction gearing. Typical speeds are 3600 revolutions per minute (RPM) in the United States with 60 Hertz power, and 3000 RPM in Europe and other countries with 50 Hertz electric power systems. In nuclear power applications, due to enormous size, the turbines typically run at half these speeds, 1800 RPM and 1500 RPM. A turbine rotor is also only capable of providing power when rotating in one direction. Therefore, a reversing stage or gearbox is usually required where power is required in the opposite direction. Steam turbines provide direct rotational force and therefore do not require a linkage mechanism to convert reciprocating to rotary motion. Thus, they produce smoother rotational forces on the output shaft. This contributes to a lower maintenance requirement and less wear on the machinery they power than a comparable reciprocating engine. The main use for steam turbines is in electricity generation (in the 1990s about 90% of the world's electric production was by use of steam turbines) however the recent widespread application of large gas turbine units and typical combined cycle power plants has resulted in reduction of this percentage to the 80% regime for steam turbines. In electricity production, the high speed of turbine rotation matches well with the speed of modern electric generators, which are typically direct connected to their driving turbines. In marine service, (pioneered on the Turbinia), steam turbines with reduction gearing (although the Turbinia has direct turbines to propellers with no reduction gearbox) dominated large ship propulsion throughout the late 20th century, being more efficient (and requiring far less maintenance) than reciprocating steam engines. In recent decades, reciprocating Diesel engines, and gas turbines, have almost entirely supplanted steam propulsion for marine applications. Virtually all nuclear power plants generate electricity by heating water to provide steam that drives a turbine connected to an electrical generator. Nuclear-powered ships and submarines either use a steam turbine directly for main propulsion, with generators providing auxiliary power, or else employ turbo-electric transmission, where the steam drives a turbo generator set with propulsion provided by electric motors. A limited number of steam turbine railroad locomotives were manufactured. Some non-condensing direct-drive locomotives did meet with some success for long haul freight operations in Sweden and for express passenger work in Britain, but were not repeated. Elsewhere, notably in the United States, more advanced designs with electric transmission were built experimentally, but not reproduced. It was found that steam turbines were not ideally suited to the railroad environment and these locomotives failed to oust the classic reciprocating steam unit in the way that modern diesel and electric traction has done. Oscillating cylinder steam engines An oscillating cylinder steam engine is a variant of the simple expansion steam engine which does not require valves to direct steam into and out of the cylinder. 
Instead of valves, the entire cylinder rocks, or oscillates, such that one or more holes in the cylinder line up with holes in a fixed port face or in the pivot mounting (trunnion). These engines are mainly used in toys and models because of their simplicity, but have also been used in full-size working engines, mainly on ships where their compactness is valued. Rotary steam engines It is possible to use a mechanism based on a pistonless rotary engine such as the Wankel engine in place of the cylinders and valve gear of a conventional reciprocating steam engine. Many such engines have been designed, from the time of James Watt to the present day, but relatively few were actually built and even fewer went into quantity production. The major problem is the difficulty of sealing the rotors to make them steam-tight in the face of wear and thermal expansion; the resulting leakage made them very inefficient. Lack of expansive working, or of any means of control of the cutoff, is also a serious problem with many such designs. By the 1840s, it was clear that the concept had inherent problems and rotary engines were treated with some derision in the technical press. However, the arrival of electricity on the scene, and the obvious advantages of driving a dynamo directly from a high-speed engine, led to something of a revival in interest in the 1880s and 1890s, and a few designs had some limited success. Of the few designs that were manufactured in quantity, those of the Hult Brothers Rotary Steam Engine Company of Stockholm, Sweden, and the spherical engine of Beauchamp Tower are notable. Tower's engines were used by the Great Eastern Railway to drive lighting dynamos on their locomotives, and by the Admiralty for driving dynamos on board the ships of the Royal Navy. They were eventually replaced in these niche applications by steam turbines. Rocket type The aeolipile represents the use of steam by the rocket-reaction principle, although not for direct propulsion. In more modern times there has been limited use of steam for rocketry – particularly for rocket cars. Steam rocketry works by filling a pressure vessel with hot water at high pressure and opening a valve leading to a suitable nozzle. The drop in pressure immediately boils some of the water and the steam leaves through the nozzle, creating a propulsive force. Ferdinand Verbiest's carriage was powered by an aeolipile in 1679. Safety Steam engines possess boilers and other components that are pressure vessels containing a great deal of potential energy. Steam escapes and boiler explosions (typically BLEVEs) can cause, and have in the past caused, great loss of life. While standards may vary between countries, stringent legal requirements, testing, training, care in manufacture and operation, and certification are applied to ensure safety. Failure modes may include: over-pressurisation of the boiler; insufficient water in the boiler, causing overheating and vessel failure; buildup of sediment and scale, which cause local hot spots, especially in riverboats using dirty feed water; pressure vessel failure of the boiler due to inadequate construction or maintenance; and escape of steam from pipework or the boiler, causing scalding. Steam engines frequently possess two independent mechanisms for ensuring that the pressure in the boiler does not go too high; one may be adjusted by the user, the second is typically designed as an ultimate fail-safe.
Such safety valves traditionally used a simple lever to restrain a plug valve in the top of a boiler. One end of the lever carried a weight or spring that restrained the valve against steam pressure. Early valves could be adjusted by engine drivers, leading to many accidents when a driver fastened the valve down to allow greater steam pressure and more power from the engine. The more recent type of safety valve uses an adjustable spring-loaded valve, which is locked such that operators may not tamper with its adjustment unless a seal is illegally broken. This arrangement is considerably safer. Lead fusible plugs may be present in the crown of the boiler's firebox. If the water level drops, such that the temperature of the firebox crown increases significantly, the lead melts and the steam escapes, warning the operators, who may then manually suppress the fire. Except in the smallest of boilers, the steam escape has little effect on dampening the fire. The plugs are also too small in area to lower the steam pressure significantly or to depressurize the boiler. If they were any larger, the volume of escaping steam would itself endanger the crew. Steam cycle The Rankine cycle is the fundamental thermodynamic underpinning of the steam engine. The cycle is an arrangement of components as is typically used for simple power production, and uses the phase change of water (boiling water producing steam, and condensing exhaust steam producing liquid water) to provide a practical heat/power conversion system. The heat is supplied externally to a closed loop, with some of the heat added being converted to work and the waste heat being removed in a condenser. The Rankine cycle is used in virtually all steam power production applications. In the 1990s, Rankine steam cycles generated about 90% of all electric power used throughout the world, including virtually all solar, biomass, coal, and nuclear power plants. It is named after William John Macquorn Rankine, a Scottish polymath. The Rankine cycle is sometimes referred to as a practical Carnot cycle because, when an efficient turbine is used, the TS diagram begins to resemble the Carnot cycle. The main difference is that heat addition (in the boiler) and rejection (in the condenser) are isobaric (constant pressure) processes in the Rankine cycle and isothermal (constant temperature) processes in the theoretical Carnot cycle. In this cycle, a pump is used to pressurize the working fluid, which is received from the condenser as a liquid, not as a gas. Pumping the working fluid in liquid form during the cycle requires only a small fraction of the energy that would be needed to compress the working fluid in gaseous form in a compressor (as in the Carnot cycle). The cycle of a reciprocating steam engine differs from that of turbines because of condensation and re-evaporation occurring in the cylinder or in the steam inlet passages. The working fluid in a Rankine cycle can operate as a closed-loop system, where the working fluid is recycled continuously, or may be an "open-loop" system, where the exhaust steam is directly released to the atmosphere and a separate source of water feeding the boiler is supplied. Normally water is the fluid of choice due to its favourable properties, such as its non-toxic and unreactive chemistry, abundance, low cost, and thermodynamic properties. Mercury is the working fluid in the mercury vapor turbine. Low-boiling hydrocarbons can be used in a binary cycle.
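In symbols, and as a standard textbook summary rather than data for any particular plant, the thermal efficiency of the closed loop just described is the net work (turbine or engine work less the small pump work) divided by the heat supplied in the boiler:

\[
\eta_{\text{th}} = \frac{W_{\text{turbine}} - W_{\text{pump}}}{Q_{\text{in}}} \approx \frac{W_{\text{turbine}}}{Q_{\text{in}}},
\]

the approximation holding because, as noted below, pumping the liquid typically consumes only a few percent of the power the turbine or engine delivers.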
The steam engine contributed much to the development of thermodynamic theory; however, the only applications of scientific theory that influenced the steam engine were the original concepts of harnessing the power of steam and atmospheric pressure and knowledge of properties of heat and steam. The experimental measurements made by Watt on a model steam engine led to the development of the separate condenser. Watt independently discovered latent heat, which was confirmed by the original discoverer Joseph Black, who also advised Watt on experimental procedures. Watt was also aware of the change in the boiling point of water with pressure. Otherwise, the improvements to the engine itself were more mechanical in nature. The thermodynamic concepts of the Rankine cycle did give engineers the understanding needed to calculate efficiency which aided the development of modern high-pressure and -temperature boilers and the steam turbine. Efficiency The efficiency of an engine cycle can be calculated by dividing the energy output of mechanical work that the engine produces by the energy put into the engine. The historical measure of a steam engine's energy efficiency was its "duty". The concept of duty was first introduced by Watt in order to illustrate how much more efficient his engines were over the earlier Newcomen designs. Duty is the number of foot-pounds of work delivered by burning one bushel (94 pounds) of coal. The best examples of Newcomen designs had a duty of about 7 million, but most were closer to 5 million. Watt's original low-pressure designs were able to deliver duty as high as 25 million, but averaged about 17. This was a three-fold improvement over the average Newcomen design. Early Watt engines equipped with high-pressure steam improved this to 65 million. No heat engine can be more efficient than the Carnot cycle, in which heat is moved from a high-temperature reservoir to one at a low temperature, and the efficiency depends on the temperature difference. For the greatest efficiency, steam engines should be operated at the highest steam temperature possible (superheated steam), and release the waste heat at the lowest temperature possible. The efficiency of a Rankine cycle is usually limited by the working fluid. Without the pressure reaching supercritical levels for the working fluid, the temperature range over which the cycle can operate is small; in steam turbines, turbine entry temperatures are typically 565 °C (the creep limit of stainless steel) and condenser temperatures are around 30 °C. This gives a theoretical Carnot efficiency of about 64% compared with an actual efficiency of 42% for a modern coal-fired power station. This low turbine entry temperature (compared with a gas turbine) is why the Rankine cycle is often used as a bottoming cycle in combined-cycle gas turbine power stations. One principal advantage the Rankine cycle holds over others is that during the compression stage relatively little work is required to drive the pump, the working fluid being in its liquid phase at this point. By condensing the fluid, the work required by the pump consumes only 1% to 3% of the turbine (or reciprocating engine) power and contributes to a much higher efficiency for a real cycle. The benefit of this is lost somewhat due to the lower heat addition temperature. Gas turbines, for instance, have turbine entry temperatures approaching 1500 °C. Nonetheless, the efficiencies of actual large steam cycles and large modern simple cycle gas turbines are fairly well matched. 
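The "about 64%" figure quoted above is simply the Carnot limit evaluated at the stated temperatures, converted to kelvins:

\[
\eta_{\text{Carnot}} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} = 1 - \frac{30 + 273}{565 + 273} = 1 - \frac{303}{838} \approx 0.64,
\]

with the roughly 42% achieved by a modern coal-fired station reflecting the irreversibilities of a real plant that the ideal comparison ignores.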
In practice, a reciprocating steam engine cycle exhausting the steam to atmosphere will typically have an efficiency (including the boiler) in the range of 1–10%. However, with the addition of a condenser, Corliss valves, multiple expansion, and high steam pressure/temperature, it may be greatly improved, historically into the range of 10–20%, and very rarely slightly higher. A modern, large electrical power station (producing several hundred megawatts of electrical output) with steam reheat, economizer, etc., will achieve efficiency in the mid-40% range, with the most efficient units approaching 50% thermal efficiency. It is also possible to capture the waste heat using cogeneration, in which the waste heat is used either for heating a lower-boiling-point working fluid or as a heat source for district heating via saturated low-pressure steam.
https://en.wikipedia.org/wiki/Structured%20programming
Structured programming
Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making specific disciplined use of the structured control flow constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines. It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming languages, with the latter including support for block structures. Contributing factors to its popularity and widespread acceptance, at first in academia and later among practitioners, include the discovery of what is now known as the structured program theorem in 1966, and the publication of the influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer scientist Edsger W. Dijkstra, who coined the term "structured programming". Structured programming is most frequently used with deviations that allow for clearer programs in some particular cases, such as when exception handling has to be performed. Elements Control structures Following the structured program theorem, all programs are seen as composed of three control structures: "Sequence"; ordered statements or subroutines executed in sequence. "Selection"; one of a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as if..then..else..endif. The conditional statement should have at least one true condition and each condition should have one exit point at max. "Iteration"; a statement or block is executed until the program reaches a certain state, or operations have been applied to every element of a collection. This is usually expressed with keywords such as while, repeat, for or do..until. Often it is recommended that each loop should only have one entry point (and in the original structural programming, also only one exit point, and a few languages enforce this). Subroutines Subroutines; callable units such as procedures, functions, methods, or subprograms are used to allow a sequence to be referred to by a single statement. Blocks Blocks are used to enable groups of statements to be treated as if they were one statement. Block-structured languages have a syntax for enclosing structures in some formal way, such as an if-statement bracketed by if..fi as in ALGOL 68, or a code section bracketed by BEGIN..END, as in PL/I and Pascal, whitespace indentation as in Python, or the curly braces {...} of C and many later languages. Structured programming languages It is possible to do structured programming in any programming language, though it is preferable to use something like a procedural programming language. Some of the languages initially used for structured programming include: ALGOL, Pascal, PL/I, Ada and RPL but most new procedural programming languages since that time have included features to encourage structured programming, and sometimes deliberately left out features – notably GOTO – in an effort to make unstructured programming more difficult. Structured programming (sometimes known as modular programming) enforces a logical structure on the program being written to make it more efficient and easier to understand and modify. History Theoretical foundation The structured program theorem provides the theoretical basis of structured programming. It states that three ways of combining programs—sequencing, selection, and iteration—are sufficient to express any computable function. 
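As a brief illustration, the three forms named by the theorem, packaged as a subroutine containing a block, can be seen in a few lines of Python; the routine below is a made-up example rather than one drawn from the sources discussed here:

def count_signs(values):
    positives = 0          # sequence: statements executed one after another
    negatives = 0
    for v in values:       # iteration: repeat the enclosed block for every element
        if v > 0:          # selection: choose one branch depending on program state
            positives += 1
        elif v < 0:
            negatives += 1
    return positives, negatives

print(count_signs([3, -1, 0, 7]))   # prints (2, 1)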
This observation did not originate with the structured programming movement; these structures are sufficient to describe the instruction cycle of a central processing unit, as well as the operation of a Turing machine. Therefore, a processor is always executing a "structured program" in this sense, even if the instructions it reads from memory are not part of a structured program. However, authors usually credit the result to a 1966 paper by Böhm and Jacopini, possibly because Dijkstra cited this paper himself. The structured program theorem does not address how to write and analyze a usefully structured program. These issues were addressed during the late 1960s and early 1970s, with major contributions by Dijkstra, Robert W. Floyd, Tony Hoare, Ole-Johan Dahl, and David Gries. Debate P. J. Plauger, an early adopter of structured programming, described his reaction to the structured program theorem: Donald Knuth accepted the principle that programs must be written with provability in mind, but he disagreed with abolishing the GOTO statement, and has continued to use it in his programs. In his 1974 paper, "Structured Programming with Goto Statements", he gave examples where he believed that a direct jump leads to clearer and more efficient code without sacrificing provability. Knuth proposed a looser structural constraint: It should be possible to draw a program's flow chart with all forward branches on the left, all backward branches on the right, and no branches crossing each other. Many of those knowledgeable in compilers and graph theory have advocated allowing only reducible flow graphs. Structured programming theorists gained a major ally in the 1970s after IBM researcher Harlan Mills applied his interpretation of structured programming theory to the development of an indexing system for The New York Times research file. The project was a great engineering success, and managers at other companies cited it in support of adopting structured programming, although Dijkstra criticized the ways that Mills's interpretation differed from the published work. As late as 1987 it was still possible to raise the question of structured programming in a computer science journal. Frank Rubin did so in that year with an open letter titled "'GOTO Considered Harmful' Considered Harmful". Numerous objections followed, including a response from Dijkstra that sharply criticized both Rubin and the concessions other writers made when responding to him. Outcome By the end of the 20th century, nearly all computer scientists were convinced that it is useful to learn and apply the concepts of structured programming. High-level programming languages that originally lacked programming structures, such as FORTRAN, COBOL, and BASIC, now have them. Common deviations While goto has now largely been replaced by the structured constructs of selection (if/then/else) and repetition (while and for), few languages are purely structured. The most common deviation, found in many languages, is the use of a return statement for early exit from a subroutine. This results in multiple exit points, instead of the single exit point required by structured programming. There are other constructions to handle cases that are awkward in purely structured programming. Early exit The most common deviation from structured programming is early exit from a function or loop. At the level of functions, this is a return statement. 
At the level of loops, this is a break statement (terminate the loop) or continue statement (terminate the current iteration, proceed with the next iteration). In structured programming, these can be replicated by adding additional branches or tests, but for returns from nested code this can add significant complexity. C is an early and prominent example of these constructs. Some newer languages also have "labeled breaks", which allow breaking out of more than just the innermost loop. Exceptions also allow early exit, but have further consequences, and thus are treated below. Multiple exits can arise for a variety of reasons, most often either that the subroutine has no more work to do (if returning a value, it has completed the calculation), or has encountered "exceptional" circumstances that prevent it from continuing, hence needing exception handling. The most common problem in early exit is that cleanup or final statements are not executed – for example, allocated memory is not deallocated, or open files are not closed, causing memory leaks or resource leaks. This cleanup must be done at each return site, which is brittle and can easily result in bugs. For instance, in later development, a return statement could be overlooked by a developer, and an action that should be performed at the end of a subroutine (e.g., a trace statement) might not be performed in all cases. Languages without a return statement, such as standard Pascal and Seed7, do not have this problem. Most modern languages provide language-level support to prevent such leaks; see detailed discussion at resource management. Most commonly this is done via unwind protection, which ensures that certain code is guaranteed to be run when execution exits a block; this is a structured alternative to having a cleanup block and a goto. This is most often known as try...finally, and considered a part of exception handling. When a subroutine has multiple return statements, introducing try...finally without also using exceptions may look strange. Various techniques exist to encapsulate resource management. An alternative approach, found primarily in C++, is Resource Acquisition Is Initialization, which uses normal stack unwinding (variable deallocation) at function exit to call destructors on local variables to deallocate resources. Kent Beck, Martin Fowler and co-authors have argued in their refactoring books that nested conditionals may be harder to understand than a certain type of flatter structure using multiple exits predicated by guard clauses. Their 2009 book flatly states that "one exit point is really not a useful rule. Clarity is the key principle: If the method is clearer with one exit point, use one exit point; otherwise don’t". They offer a cookbook solution for transforming a function consisting only of nested conditionals into a sequence of guarded return (or throw) statements, followed by a single unguarded block, which is intended to contain the code for the common case, while the guarded statements are supposed to deal with the less common ones (or with errors); a short sketch of this transformation appears below. Herb Sutter and Andrei Alexandrescu also argue in their 2004 C++ tips book that the single-exit point is an obsolete requirement. In his 2004 textbook, David Watt writes that "single-entry multi-exit control flows are often desirable".
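The guard-clause transformation mentioned above can be sketched as follows in Python; the employee fields are hypothetical and the example is illustrative only, not taken from the refactoring books themselves:

# Nested-conditional version: a single exit, but the common case is buried at the deepest level.
def payment_amount_nested(employee):
    if employee.is_separated:
        result = 0
    else:
        if employee.is_retired:
            result = employee.pension
        else:
            result = employee.base_pay + employee.bonus   # the common case
    return result

# Guard-clause version: early returns dispose of the uncommon cases first,
# leaving the common case as a single unguarded block at the end.
def payment_amount_guarded(employee):
    if employee.is_separated:
        return 0
    if employee.is_retired:
        return employee.pension
    return employee.base_pay + employee.bonus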
Using Tennent's framework notion of sequencer, Watt uniformly describes the control flow constructs found in contemporary programming languages and attempts to explain why certain types of sequencers are preferable to others in the context of multi-exit control flows. Watt writes that unrestricted gotos (jump sequencers) are bad because the destination of the jump is not self-explanatory to the reader of a program until the reader finds and examines the actual label or address that is the target of the jump. In contrast, Watt argues that the conceptual intent of a return sequencer is clear from its own context, without having to examine its destination. Watt writes that a class of sequencers known as escape sequencers, defined as a "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. Watt also notes that while jump sequencers (gotos) have been somewhat restricted in languages like C, where the target must be inside the local block or an encompassing outer block, that restriction alone is not sufficient to make the intent of gotos in C self-describing and so they can still produce "spaghetti code". Watt also examines how exception sequencers differ from escape and jump sequencers; this is explained in the next section of this article.
In contrast to the above, Bertrand Meyer wrote in his 2009 textbook that instructions like break and continue "are just the old goto in sheep's clothing" and strongly advised against their use.
Exception handling
Based on the coding error from the Ariane 501 disaster, software developer Jim Bonang argues that any exceptions thrown from a function violate the single-exit paradigm, and proposes that all inter-procedural exceptions should be forbidden. Bonang proposes that all single-exit conforming C++ should be written along the lines of:

bool MyCheck1() throw() {
    bool success = false;
    try {
        // Do something that may throw exceptions.
        if (!MyCheck2()) {
            throw SomeInternalException();
        }
        // Other code similar to the above.
        success = true;
    } catch (...) {
        // All exceptions caught and logged.
    }
    return success;
}

Peter Ritchie also notes that, in principle, even a single throw right before the return in a function constitutes a violation of the single-exit principle, but argues that Dijkstra's rules were written in a time before exception handling became a paradigm in programming languages, so he proposes to allow any number of throw points in addition to a single return point. He notes that solutions that wrap exceptions for the sake of creating a single-exit have higher nesting depth and thus are more difficult to comprehend, and even accuses those who propose to apply such solutions to programming languages that support exceptions of engaging in cargo cult thinking.
David Watt also analyzes exception handling in the framework of sequencers (introduced in this article in the previous section on early exits). Watt notes that an abnormal situation (generally exemplified with arithmetic overflows or input/output failures like file not found) is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit".
For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" He notes that in contrast to status flags testing, exceptions have the opposite default behavior, causing the program to terminate unless the programmer explicitly deals with the exception in some way, possibly by adding code to willfully ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers (discussed in the previous section) are not as suitable as a dedicated exception sequencer with the semantics discussed above.
The textbook by Louden and Lambert emphasizes that exception handling differs from structured programming constructs like while loops because the transfer of control "is set up at a different point in the program than that where the actual transfer takes place. At the point where the transfer actually occurs, there may be no syntactic indication that control will in fact be transferred." Computer science professor Arvind Kumar Bansal also notes that in languages which implement exception handling, even control structures like for, which have the single-exit property in the absence of exceptions, no longer have it in the presence of exceptions, because an exception can prematurely cause an early exit in any part of the control structure; for instance if init() throws an exception in for (init(); check(); increm()), then the usual exit point after check() is not reached. Citing multiple prior studies by others (1999–2004) and their own results, Westley Weimer and George Necula wrote that a significant problem with exceptions is that they "create hidden control-flow paths that are difficult for programmers to reason about".
The necessity to limit code to single-exit points appears in some contemporary programming environments focused on parallel computing, such as OpenMP. The various parallel constructs from OpenMP, like parallel do, do not allow early exits from inside to the outside of the parallel construct; this restriction includes all manner of exits, from break to C++ exceptions, but all of these are permitted inside the parallel construct if the jump target is also inside it.
Multiple entry
More rarely, subprograms allow multiple entry. This is most commonly only re-entry into a coroutine (or generator/semicoroutine), where a subprogram yields control (and possibly a value), but can then be resumed where it left off. There are a number of common uses of such programming, notably for streams (particularly input/output), state machines, and concurrency. From a code execution point of view, yielding from a coroutine is closer to structured programming than returning from a subroutine, as the subprogram has not actually terminated, and will continue when called again – it is not an early exit.
However, coroutines mean that multiple subprograms have execution state – rather than a single call stack of subroutines – and thus introduce a different form of complexity. It is very rare for subprograms to allow entry to an arbitrary position in the subprogram, as in this case the program state (such as variable values) is uninitialized or ambiguous, and this is very similar to a goto. State machines Some programs, particularly parsers and communications protocols, have a number of states that follow each other in a way that is not easily reduced to the basic structures, and some programmers implement the state-changes with a jump to the new state. This type of state-switching is often used in the Linux kernel. However, it is possible to structure these systems by making each state-change a separate subprogram and using a variable to indicate the active state (see trampoline). Alternatively, these can be implemented via coroutines, which dispense with the trampoline.
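As a minimal sketch of the structured, state-variable alternative described above (the states and the input handling are hypothetical, not drawn from any particular parser or protocol implementation):

#include <iostream>

enum class State { Start, InWord, Done };

// Each loop iteration inspects the current state and computes the next one;
// the state variable replaces gotos between labelled states.
int CountWords(const char* text) {
    State state = State::Start;
    int words = 0;
    for (const char* p = text; state != State::Done; ++p) {
        const char c = *p;
        switch (state) {
            case State::Start:   // between words, skipping spaces
                if (c == '\0') state = State::Done;
                else if (c != ' ') { state = State::InWord; ++words; }
                break;
            case State::InWord:  // inside a word
                if (c == '\0') state = State::Done;
                else if (c == ' ') state = State::Start;
                break;
            case State::Done:
                break;
        }
    }
    return words;
}

int main() {
    std::cout << CountWords("to be or not to be") << " words\n";  // prints "6 words"
}

The same dispatch could instead be driven by a table of functions (a trampoline) or by a coroutine, as noted above.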
Technology
Programming
null
27709
https://en.wikipedia.org/wiki/Semiconductor
Semiconductor
A semiconductor is a material whose ability to conduct electrical current lies between that of a conductor and an insulator. In many cases its conducting properties may be altered in useful ways by introducing impurities ("doping") into the crystal structure. When two differently doped regions exist in the same crystal, a semiconductor junction is created. The behavior of charge carriers, which include electrons, ions, and electron holes, at these junctions is the basis of diodes, transistors, and most modern electronics. Some examples of semiconductors are silicon, germanium, gallium arsenide, and elements near the so-called "metalloid staircase" on the periodic table. After silicon, gallium arsenide is the second-most common semiconductor and is used in laser diodes, solar cells, microwave-frequency integrated circuits, and others. Silicon is a critical element for fabricating most electronic circuits. Semiconductor devices can display a range of different useful properties, such as passing current more easily in one direction than the other, showing variable resistance, and having sensitivity to light or heat. Because the electrical properties of a semiconductor material can be modified by doping and by the application of electrical fields or light, devices made from semiconductors can be used for amplification, switching, and energy conversion. The term semiconductor is also used to describe materials used in high-capacity, medium- to high-voltage cables as part of their insulation, and these materials are often plastic XLPE (cross-linked polyethylene) with carbon black.
The conductivity of silicon is increased by adding a small amount (of the order of 1 in 10⁸) of pentavalent (antimony, phosphorus, or arsenic) or trivalent (boron, gallium, indium) atoms. This process is known as doping, and the resulting semiconductors are known as doped or extrinsic semiconductors. Apart from doping, the conductivity of a semiconductor can be improved by increasing its temperature. This is contrary to the behavior of a metal, in which conductivity decreases with an increase in temperature. The modern understanding of the properties of a semiconductor relies on quantum physics to explain the movement of charge carriers in a crystal lattice. Doping greatly increases the number of charge carriers within the crystal. When a semiconductor is doped with Group V elements, these act as donors, creating free electrons; this is known as "n-type" doping. When a semiconductor is doped with Group III elements, these act as acceptors, creating free holes; this is known as "p-type" doping. The semiconductor materials used in electronic devices are doped under precise conditions to control the concentration and regions of p- and n-type dopants. A single semiconductor device crystal can have many p- and n-type regions; the p–n junctions between these regions are responsible for the useful electronic behavior. Using a hot-point probe, one can determine quickly whether a semiconductor sample is p- or n-type.
A few of the properties of semiconductor materials were observed throughout the mid-19th and first decades of the 20th century. The first practical application of semiconductors in electronics was the 1904 development of the cat's-whisker detector, a primitive semiconductor diode used in early radio receivers. Developments in quantum physics led in turn to the invention of the transistor in 1947 and the integrated circuit in 1958.
Properties
Variable electrical conductivity
Semiconductors in their natural state are poor conductors because a current requires the flow of electrons, and semiconductors have their valence bands filled, preventing the flow of new electrons. Several techniques, such as doping and gating, allow semiconducting materials to behave like conducting materials. These modifications have two outcomes: n-type and p-type. These refer to the excess or shortage of electrons, respectively. An unbalanced number of electrons allows a current to flow through the material.
Homojunctions
Homojunctions occur when two differently doped semiconducting materials are joined. For example, a configuration could consist of p-doped and n-doped germanium. This results in an exchange of electrons and holes between the differently doped semiconducting materials. The n-doped germanium would have an excess of electrons, and the p-doped germanium would have an excess of holes. The transfer occurs until an equilibrium is reached by a process called recombination, which causes the migrating electrons from the n-type to come in contact with the migrating holes from the p-type. The result of this process is a narrow strip of immobile ions, which causes an electric field across the junction.
Excited electrons
A difference in electric potential on a semiconducting material would cause it to leave thermal equilibrium and create a non-equilibrium situation. This introduces electrons and holes to the system, which interact via a process called ambipolar diffusion. Whenever thermal equilibrium is disturbed in a semiconducting material, the number of holes and electrons changes. Such disruptions can occur as a result of a temperature difference or photons, which can enter the system and create electrons and holes. The processes that create or annihilate electrons and holes are called generation and recombination, respectively.
Light emission
In certain semiconductors, excited electrons can relax by emitting light instead of producing heat. Controlling the semiconductor composition and electrical current allows for the manipulation of the emitted light's properties. These semiconductors are used in the construction of light-emitting diodes and fluorescent quantum dots.
High thermal conductivity
Semiconductors with high thermal conductivity can be used for heat dissipation and improving thermal management of electronics. They play a crucial role in electric vehicles, high-brightness LEDs and power modules, among other applications.
Thermal energy conversion
Semiconductors have large thermoelectric power factors, making them useful in thermoelectric generators, as well as high thermoelectric figures of merit, making them useful in thermoelectric coolers.
Materials
A large number of elements and compounds have semiconducting properties, including:
Certain pure elements found in group 14 of the periodic table; the most commercially important of these elements are silicon and germanium. Silicon and germanium are used effectively because they have four valence electrons in their outermost shell, which gives them the ability to gain or lose electrons equally.
Binary compounds, particularly between elements in groups 13 and 15, such as gallium arsenide, groups 12 and 16, groups 14 and 16, and between different group-14 elements, e.g. silicon carbide.
Certain ternary compounds, oxides, and alloys.
Organic semiconductors, made of organic compounds.
Semiconducting metal–organic frameworks.
The most common semiconducting materials are crystalline solids, but amorphous and liquid semiconductors are also known. These include hydrogenated amorphous silicon and mixtures of arsenic, selenium, and tellurium in a variety of proportions. These compounds share with better-known semiconductors the properties of intermediate conductivity and a rapid variation of conductivity with temperature, as well as occasional negative resistance. Such disordered materials lack the rigid crystalline structure of conventional semiconductors such as silicon. They are generally used in thin film structures, which do not require material of higher electronic quality, being relatively insensitive to impurities and radiation damage.
Preparation of semiconductor materials
Almost all of today's electronic technology involves the use of semiconductors, with the most important aspect being the integrated circuit (IC), which is found in desktops, laptops, scanners, cell-phones, and other electronic devices. Semiconductors for ICs are mass-produced. To create an ideal semiconducting material, chemical purity is paramount. Any small imperfection can have a drastic effect on how the semiconducting material behaves due to the scale at which the materials are used. A high degree of crystalline perfection is also required, since faults in the crystal structure (such as dislocations, twins, and stacking faults) interfere with the semiconducting properties of the material. Crystalline faults are a major cause of defective semiconductor devices. The larger the crystal, the more difficult it is to achieve the necessary perfection. Current mass production processes use crystal ingots of up to 300 mm in diameter, grown as cylinders and sliced into wafers. The round shape characteristic of these wafers comes from single-crystal ingots usually produced using the Czochralski method. Silicon wafers were first introduced in the 1940s.
There is a combination of processes that are used to prepare semiconducting materials for ICs. One process is called thermal oxidation, which forms silicon dioxide on the surface of the silicon. This is used as a gate insulator and field oxide. Other processes are called photomasking and photolithography. These create the patterns of the circuit on the integrated circuit. Ultraviolet light is used along with a photoresist layer to create a chemical change that generates the patterns for the circuit. Etching is the next process that is required. The part of the silicon that was not covered by the photoresist layer from the previous step can now be etched. The main process typically used today is called plasma etching. Plasma etching usually involves an etch gas pumped in a low-pressure chamber to create plasma. A common etch gas is a chlorofluorocarbon, more commonly known as Freon. A high radio-frequency voltage between the cathode and anode is what creates the plasma in the chamber. The silicon wafer is located on the cathode, which causes it to be hit by the positively charged ions that are released from the plasma. The result is silicon that is etched anisotropically. The last process is called diffusion. This is the process that gives the semiconducting material its desired semiconducting properties. It is also known as doping. The process introduces an impure atom to the system, which creates the p–n junction. To get the impure atoms embedded in the silicon wafer, the wafer is first put in a 1,100 degree Celsius chamber.
The atoms are injected and eventually diffuse into the silicon. After the process is completed and the silicon has reached room temperature, the doping process is done and the semiconducting wafer is almost prepared.
Physics of semiconductors
Energy bands and electrical conduction
Semiconductors are defined by their unique electric conductive behavior, somewhere between that of a conductor and an insulator. The differences between these materials can be understood in terms of the quantum states for electrons, each of which may contain zero or one electron (by the Pauli exclusion principle). These states are associated with the electronic band structure of the material. Electrical conductivity arises due to the presence of electrons in states that are delocalized (extending through the material); however, in order to transport electrons a state must be partially filled, containing an electron only part of the time. If the state is always occupied with an electron, then it is inert, blocking the passage of other electrons via that state. The energies of these quantum states are critical since a state is partially filled only if its energy is near the Fermi level (see Fermi–Dirac statistics). High conductivity in a material comes from its having many partially filled states and much state delocalization. Metals are good electrical conductors and have many partially filled states with energies near their Fermi level. Insulators, by contrast, have few partially filled states; their Fermi levels sit within band gaps, with few energy states to occupy. Importantly, an insulator can be made to conduct by increasing its temperature: heating provides energy to promote some electrons across the band gap, inducing partially filled states in both the band of states beneath the band gap (valence band) and the band of states above the band gap (conduction band). An (intrinsic) semiconductor has a band gap that is smaller than that of an insulator, and at room temperature significant numbers of electrons can be excited to cross the band gap.
A pure semiconductor, however, is not very useful, as it is neither a very good insulator nor a very good conductor. However, one important feature of semiconductors (and some insulators, known as semi-insulators) is that their conductivity can be increased and controlled by doping with impurities and gating with electric fields. Doping and gating move either the conduction or valence band much closer to the Fermi level and greatly increase the number of partially filled states. Some wider-bandgap semiconductor materials are sometimes referred to as semi-insulators. When undoped, these have electrical conductivity nearer to that of electrical insulators; however, they can be doped (making them as useful as semiconductors). Semi-insulators find niche applications in micro-electronics, such as substrates for HEMT. An example of a common semi-insulator is gallium arsenide. Some materials, such as titanium dioxide, can even be used as insulating materials for some applications, while being treated as wide-gap semiconductors for other applications.
Charge carriers (electrons and holes)
The partial filling of the states at the bottom of the conduction band can be understood as adding electrons to that band. The electrons do not stay indefinitely (due to the natural thermal recombination) but they can move around for some time.
The actual concentration of electrons is typically very dilute, and so (unlike in metals) it is possible to think of the electrons in the conduction band of a semiconductor as a sort of classical ideal gas, where the electrons fly around freely without being subject to the Pauli exclusion principle. In most semiconductors, the conduction bands have a parabolic dispersion relation, and so these electrons respond to forces (electric field, magnetic field, etc.) much as they would in a vacuum, though with a different effective mass. Because the electrons behave like an ideal gas, one may also think about conduction in very simplistic terms such as the Drude model, and introduce concepts such as electron mobility. For partial filling at the top of the valence band, it is helpful to introduce the concept of an electron hole. Although the electrons in the valence band are always moving around, a completely full valence band is inert, not conducting any current. If an electron is taken out of the valence band, then the trajectory that the electron would normally have taken is now missing its charge. For the purposes of electric current, this combination of the full valence band, minus the electron, can be converted into a picture of a completely empty band containing a positively charged particle that moves in the same way as the electron. Combined with the negative effective mass of the electrons at the top of the valence band, we arrive at a picture of a positively charged particle that responds to electric and magnetic fields just as a normal positively charged particle would do in a vacuum, again with some positive effective mass. This particle is called a hole, and the collection of holes in the valence band can again be understood in simple classical terms (as with the electrons in the conduction band).
Carrier generation and recombination
When ionizing radiation strikes a semiconductor, it may excite an electron out of its energy level and consequently leave a hole. This process is known as electron–hole pair generation. Electron–hole pairs are constantly generated from thermal energy as well, in the absence of any external energy source. Electron–hole pairs are also apt to recombine. Conservation of energy demands that these recombination events, in which an electron loses an amount of energy larger than the band gap, be accompanied by the emission of thermal energy (in the form of phonons) or radiation (in the form of photons). In some states, the generation and recombination of electron–hole pairs are in equipoise. The number of electron–hole pairs in the steady state at a given temperature is determined by quantum statistical mechanics. The precise quantum mechanical mechanisms of generation and recombination are governed by the conservation of energy and conservation of momentum. As the probability that electrons and holes meet together is proportional to the product of their numbers, the product is in the steady state nearly constant at a given temperature, provided that there is no significant electric field (which might "flush" carriers of both types, or move them from neighbor regions containing more of them to meet together) or externally driven pair generation. The product is a function of the temperature, as the probability of getting enough thermal energy to produce a pair increases with temperature, being approximately exp(−EG/kT), where k is the Boltzmann constant, T is the absolute temperature and EG is the band gap.
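As a rough, illustrative order-of-magnitude estimate (the band-gap and thermal-energy values below are typical textbook figures for silicon at room temperature, not taken from this article), the exponential factor above is

$$e^{-E_G/kT} \approx e^{-1.1\,\mathrm{eV}/0.026\,\mathrm{eV}} \approx e^{-42} \sim 10^{-18},$$

which illustrates why the intrinsic electron–hole population of an undoped semiconductor is so dilute compared with the density of atoms.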
The probability of meeting is increased by carrier traps – impurities or dislocations which can trap an electron or hole and hold it until a pair is completed. Such carrier traps are sometimes purposely added to reduce the time needed to reach the steady state.
Doping
The conductivity of semiconductors may easily be modified by introducing impurities into their crystal lattice. The process of adding controlled impurities to a semiconductor is known as doping. The amount of impurity, or dopant, added to an intrinsic (pure) semiconductor varies its level of conductivity. Doped semiconductors are referred to as extrinsic. By adding impurities to a pure semiconductor, the electrical conductivity may be varied by factors of thousands or millions. A 1 cm³ specimen of a metal or semiconductor has on the order of 10²² atoms. In a metal, every atom donates at least one free electron for conduction, thus 1 cm³ of metal contains on the order of 10²² free electrons, whereas a 1 cm³ sample of pure germanium at 20 °C contains a comparable number of atoms but only on the order of 10¹³ free electrons and 10¹³ holes. The addition of 0.001% of arsenic (an impurity) donates an extra 10¹⁷ free electrons in the same volume and the electrical conductivity is increased by a factor of 10,000. The materials chosen as suitable dopants depend on the atomic properties of both the dopant and the material to be doped. In general, dopants that produce the desired controlled changes are classified as either electron acceptors or donors. Semiconductors doped with donor impurities are called n-type, while those doped with acceptor impurities are known as p-type. The n and p type designations indicate which charge carrier acts as the material's majority carrier. The opposite carrier is called the minority carrier, which exists due to thermal excitation at a much lower concentration compared to the majority carrier. For example, the pure semiconductor silicon has four valence electrons that bond each silicon atom to its neighbors. In silicon, the most common dopants are group III and group V elements. Group III elements all contain three valence electrons, causing them to function as acceptors when used to dope silicon. When an acceptor atom replaces a silicon atom in the crystal, a vacant state (an electron "hole") is created, which can move around the lattice and function as a charge carrier. Group V elements have five valence electrons, which allows them to act as a donor; substitution of these atoms for silicon creates an extra free electron. Therefore, a silicon crystal doped with boron creates a p-type semiconductor whereas one doped with phosphorus results in an n-type material. During manufacture, dopants can be diffused into the semiconductor body by contact with gaseous compounds of the desired element, or ion implantation can be used to accurately position the doped regions.
Amorphous semiconductors
Some materials, when rapidly cooled to a glassy amorphous state, have semiconducting properties. These include B, Si, Ge, Se, and Te, and there are multiple theories to explain them.
Early history of semiconductors
The history of the understanding of semiconductors begins with experiments on the electrical properties of materials. The properties of the time-temperature coefficient of resistance, rectification, and light-sensitivity were observed starting in the early 19th century.
In 1821, Thomas Johann Seebeck was the first to notice a distinctive property of semiconductors: the effect that now bears his name appeared much more strongly when semiconducting materials were used. In 1833, Michael Faraday reported that the resistance of specimens of silver sulfide decreases when they are heated. This is contrary to the behavior of metallic substances such as copper. In 1839, Alexandre Edmond Becquerel reported observing a voltage between a solid and a liquid electrolyte when struck by light, the photovoltaic effect. In 1873, Willoughby Smith observed that selenium resistors exhibit decreasing resistance when light falls on them. In 1874, Karl Ferdinand Braun observed conduction and rectification in metallic sulfides, although this effect had been discovered earlier by Peter Munck af Rosenschöld writing for the Annalen der Physik und Chemie in 1835; Rosenschöld's findings were ignored. Simon Sze stated that Braun's research was the earliest systematic study of semiconductor devices. Also in 1874, Arthur Schuster found that a copper oxide layer on wires had rectification properties that ceased when the wires were cleaned. William Grylls Adams and Richard Evans Day observed the photovoltaic effect in selenium in 1876.
A unified explanation of these phenomena required a theory of solid-state physics, which developed greatly in the first half of the 20th century. In 1878 Edwin Herbert Hall demonstrated the deflection of flowing charge carriers by an applied magnetic field, the Hall effect. The discovery of the electron by J.J. Thomson in 1897 prompted theories of electron-based conduction in solids. Karl Baedeker, by observing a Hall effect with the reverse sign to that in metals, theorized that copper iodide had positive charge carriers. In 1914, solid materials were classified into metals, insulators, and "variable conductors", although the term Halbleiter (a semiconductor in the modern meaning) had already been introduced by Josef Weiss in his Ph.D. thesis in 1910. Felix Bloch published a theory of the movement of electrons through atomic lattices in 1928. In 1930, it was proposed that conductivity in semiconductors was due to minor concentrations of impurities. By 1931, the band theory of conduction had been established by Alan Herries Wilson and the concept of band gaps had been developed. Walter H. Schottky and Nevill Francis Mott developed models of the potential barrier and of the characteristics of a metal–semiconductor junction. By 1938, Boris Davydov had developed a theory of the copper-oxide rectifier, identifying the effect of the p–n junction and the importance of minority carriers and surface states.
Agreement between theoretical predictions (based on developing quantum mechanics) and experimental results was sometimes poor. This was later explained by John Bardeen as due to the extreme "structure sensitive" behavior of semiconductors, whose properties change dramatically based on tiny amounts of impurities. Commercially pure materials of the 1920s containing varying proportions of trace contaminants produced differing experimental results. This spurred the development of improved material refining techniques, culminating in modern semiconductor refineries producing materials with parts-per-trillion purity. Devices using semiconductors were at first constructed based on empirical knowledge before semiconductor theory provided a guide to the construction of more capable and reliable devices.
Alexander Graham Bell used the light-sensitive property of selenium to transmit sound over a beam of light in 1880. A working solar cell, of low efficiency, was constructed by Charles Fritts in 1883, using a metal plate coated with selenium and a thin layer of gold; the device became commercially useful in photographic light meters in the 1930s. Point-contact microwave detector rectifiers made of lead sulfide were used by Jagadish Chandra Bose in 1904; the cat's-whisker detector using natural galena or other materials became a common device in the development of radio. However, it was somewhat unpredictable in operation and required manual adjustment for best performance. In 1906, H.J. Round observed light emission when electric current passed through silicon carbide crystals, the principle behind the light-emitting diode. Oleg Losev observed similar light emission in 1922, but at the time the effect had no practical use. Power rectifiers, using copper oxide and selenium, were developed in the 1920s and became commercially important as an alternative to vacuum tube rectifiers. The first semiconductor devices used galena, including German physicist Ferdinand Braun's crystal detector in 1874 and Indian physicist Jagadish Chandra Bose's radio crystal detector in 1901.
In the years preceding World War II, infrared detection and communications devices prompted research into lead-sulfide and lead-selenide materials. These devices were used for detecting ships and aircraft, for infrared rangefinders, and for voice communication systems. The point-contact crystal detector became vital for microwave radio systems since available vacuum tube devices could not serve as detectors above about 4000 MHz; advanced radar systems relied on the fast response of crystal detectors. Considerable research and development of silicon materials occurred during the war to develop detectors of consistent quality.
Early transistors
Detectors and power rectifiers could not amplify a signal. Many efforts were made to develop a solid-state amplifier, eventually succeeding with a device called the point-contact transistor, which could amplify 20 dB or more. In 1922, Oleg Losev developed two-terminal, negative-resistance amplifiers for radio, but he died in the Siege of Leningrad after successfully completing the work. In 1926, Julius Edgar Lilienfeld patented a device resembling a field-effect transistor, but it was not practical. In 1938, a solid-state amplifier using a structure resembling the control grid of a vacuum tube was demonstrated; although the device displayed power gain, it had a cut-off frequency of one cycle per second, too low for any practical applications, but an effective application of the available theory. At Bell Labs, William Shockley and A. Holden started investigating solid-state amplifiers in 1938. The first p–n junction in silicon was observed by Russell Ohl about 1941 when a specimen was found to be light-sensitive, with a sharp boundary between p-type impurity at one end and n-type at the other. A slice cut from the specimen at the p–n boundary developed a voltage when exposed to light. The first working transistor was a point-contact transistor invented by John Bardeen, Walter Houser Brattain, and William Shockley at Bell Labs in 1947. Shockley had earlier theorized a field-effect amplifier made from germanium and silicon, but he failed to build such a working device, before eventually using germanium to invent the point-contact transistor.
In France, during the war, Herbert Mataré had observed amplification between adjacent point contacts on a germanium base. After the war, Mataré's group announced their "Transistron" amplifier only shortly after Bell Labs announced the "transistor". In 1954, physical chemist Morris Tanenbaum fabricated the first silicon junction transistor at Bell Labs. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
Physical sciences
Electrical circuits
null
27711
https://en.wikipedia.org/wiki/Starch
Starch
Starch or amylum is a polymeric carbohydrate consisting of numerous glucose units joined by glycosidic bonds. This polysaccharide is produced by most green plants for energy storage. Worldwide, it is the most common carbohydrate in human diets, and is contained in large amounts in staple foods such as wheat, potatoes, maize (corn), rice, and cassava (manioc). Pure starch is a white, tasteless and odorless powder that is insoluble in cold water or alcohol. It consists of two types of molecules: the linear and helical amylose and the branched amylopectin. Depending on the plant, starch generally contains 20 to 25% amylose and 75 to 80% amylopectin by weight. Glycogen, the energy reserve of animals, is a more highly branched version of amylopectin. In industry, starch is often converted into sugars, for example by malting. These sugars may be fermented to produce ethanol in the manufacture of beer, whisky and biofuel. In addition, sugars produced from processed starch are used in many processed foods. Mixing most starches in warm water produces a paste, such as wheatpaste, which can be used as a thickening, stiffening or gluing agent. The principal non-food, industrial use of starch is as an adhesive in the papermaking process. A similar paste, clothing or laundry starch, can be applied to certain textile goods before ironing to stiffen them. Etymology The word "starch" is from a Germanic root with the meanings "strong, stiff, strengthen, stiffen". Modern German Stärke (strength, starch) is related and refers to the main historical applications, its uses in textiles: sizing yarn for weaving, and starching linen. The Greek term for starch, "amylon" (ἄμυλον), which means "not milled", is also related. It provides the root amyl, which is used as a prefix for several carbon compounds related to or derived from starch (e.g. amyl alcohol, amylose, amylopectin). History Starch grains from the rhizomes of Typha (cattails, bullrushes) as flour have been identified from grinding stones in Europe dating back to 30,000 years ago. Starch grains from sorghum were found on grind stones in caves in Ngalue, Mozambique dating up to 100,000 years ago. Pure extracted wheat starch paste was used in Ancient Egypt, possibly to glue papyrus. The extraction of starch is first described in the Natural History of Pliny the Elder around 77–79 CE. Romans used it also in cosmetic creams, to powder the hair and to thicken sauces. Persians and Indians used it to make dishes similar to gothumai wheat halva. Rice starch as surface treatment of paper has been used in paper production in China since 700 CE. In the mid eighth century production of paper that was sized with wheat starch started in the Arabic world. Laundry starch was first described in England in the beginning of the 15th century and was essential to make 16th century ruffed collars. Energy store of plants Plants produce glucose from carbon dioxide and water by photosynthesis. The glucose is used to generate the chemical energy required for general metabolism as well as a precursor to myriad organic building blocks such as nucleic acids, lipids, proteins, and structural polysaccharides such as cellulose. Most green plants store any extra glucose in the form of starch, which is packed into semicrystalline granules called starch granules or amyloplasts. Toward the end of the growing season, starch accumulates in twigs of trees near the buds. Fruit, seeds, rhizomes, and tubers store starch to prepare for the next growing season. 
Young plants live on this stored energy in their roots, seeds, and fruits until they can find suitable soil in which to grow. The starch is also consumed at night when photosynthesis is not occurring. Green algae and land-plants store their starch in the plastids, whereas red algae, glaucophytes, cryptomonads, dinoflagellates and the parasitic apicomplexa store a similar type of polysaccharide called floridean starch in their cytosol or periplast. Especially when hydrated, glucose takes up much space and is osmotically active. Starch, on the other hand, being insoluble and therefore osmotically inactive, can be stored much more compactly. The semicrystalline granules generally consist of concentric layers of amylose and amylopectin, which can be made bioavailable upon cellular demand in the plant. Amylose consists of long chains derived from glucose molecules connected by α-1,4-glycosidic linkage. Amylopectin is likewise built from glucose chains but is highly branched, with branch points formed by α-1,6-glycosidic linkages. The same type of branching linkage is found in the animal reserve polysaccharide glycogen. By contrast, many structural polysaccharides such as chitin, cellulose, and peptidoglycan are linked by β-glycosidic bonds, which are more resistant to hydrolysis.
Structure of starch particles
Within plants, starch is stored in semi-crystalline granules. Each plant species has a distinctive starch granular size: rice starch is relatively small (about 2 μm), potato starches have larger granules (up to 100 μm), while wheat and tapioca fall in between. Unlike other botanical sources of starch, wheat starch has a bimodal size distribution, with both smaller and larger granules ranging from 2 to 55 μm. Some cultivated plant varieties have pure amylopectin starch without amylose, known as waxy starches. The most used is waxy maize; others are glutinous rice and waxy potato starch. Waxy starches undergo less retrogradation, resulting in a more stable paste. A maize cultivar with a relatively high proportion of amylose starch, amylomaize, is cultivated for the use of its gel strength and for use as a resistant starch (a starch that resists digestion) in food products.
Biosynthesis
Plants synthesize starch in two types of tissues. The first type is storage tissues, for example, cereal endosperm, and storage roots and stems such as cassava and potato. The second type is green tissue, for example, leaves, where many plant species synthesize transitory starch on a daily basis. In both tissue types, starch is synthesized in plastids (amyloplasts and chloroplasts). The biochemical pathway involves conversion of glucose 1-phosphate to ADP-glucose using the enzyme glucose-1-phosphate adenylyltransferase. This step requires energy in the form of ATP. A number of starch synthases available in plastids then add the ADP-glucose via an α-1,4-glycosidic bond to a growing chain of glucose residues, liberating ADP. The ADP-glucose is almost certainly added to the non-reducing end of the amylose polymer, just as UDP-glucose is added to the non-reducing end of glycogen during glycogen synthesis. The small glucan chains further agglomerate to form the initials of starch granules. The biosynthesis and expansion of granules represent a complex molecular event that can be subdivided into four major steps, namely, granule initiation, coalescence of small granules, phase transition, and expansion. Several proteins have been characterized for their involvement in each of these processes.
For instance, a chloroplast membrane-associated protein, MFP1, determines the sites of granule initiation. Another protein named PTST2 binds to small glucan chains and agglomerates to recruit starch synthase 4 (SS4). Three other proteins, namely, PTST3, SS5, and MRC, are also known to be involved in the process of starch granule initiation. Furthermore, two proteins named ESV and LESV play a role in the aqueous-to-crystalline phase transition of glucan chains. Several catalytically active starch synthases, such as SS1, SS2, SS3, and GBSS, are critical for starch granule biosynthesis and play a catalytic role at each step of granule biogenesis and expansion. In addition to above proteins, starch branching enzymes (BEs) introduces α-1,6-glycosidic bonds between the glucose chains, creating the branched amylopectin. The starch debranching enzyme (DBE) isoamylase removes some of these branches. Several isoforms of these enzymes exist, leading to a highly complex synthesis process. Degradation The starch that is synthesized in plant leaves during the day is transitory: it serves as an energy source at night. Enzymes catalyze release of glucose from the granules. The insoluble, highly branched starch chains require phosphorylation in order to be accessible for degrading enzymes. The enzyme glucan, water dikinase (GWD) installs a phosphate at the C-6 position of glucose, close to the chain's 1,6-alpha branching bonds. A second enzyme, phosphoglucan, water dikinase (PWD) phosphorylates the glucose molecule at the C-3 position. After the second phosphorylation, the first degrading enzyme, beta-amylase (BAM) attacks the glucose chain at its non-reducing end. Maltose is the main product released. If the glucose chain consists of three or fewer molecules, BAM cannot release maltose. A second enzyme, disproportionating enzyme-1 (DPE1), combines two maltotriose molecules. From this chain, a glucose molecule is released. Now, BAM can release another maltose molecule from the remaining chain. This cycle repeats until starch is fully degraded. If BAM comes close to the phosphorylated branching point of the glucose chain, it can no longer release maltose. In order for the phosphorylated chain to be degraded, the enzyme isoamylase (ISA) is required. The products of starch degradation are predominantly maltose and smaller amounts of glucose. These molecules are exported from the plastid to the cytosol, maltose via the maltose transporter and glucose by the plastidic glucose translocator (pGlcT). These two sugars are used for sucrose synthesis. Sucrose can then be used in the oxidative pentose phosphate pathway in the mitochondria, to generate ATP at night. Starch industry In addition to starchy plants consumed directly, 66 million tonnes of starch were processed industrially in 2008. By 2011, production had increased to 73 million tons. In the EU the starch industry produced about 11 million tonnes in 2011, with around 40% being used for industrial applications and 60% for food uses, most of the latter as glucose syrups. In 2017 EU production was 11 million ton of which 9,4 million ton was consumed in the EU and of which 54% were starch sweeteners. The US produced about 27.5 million tons of starch in 2017, of which about 8.2 million tons was high fructose syrup, 6.2 million tons was glucose syrups, and 2.5 million tons were starch products. The rest of the starch was used for producing ethanol (1.6 billion gallons). 
Industrial processing The starch industry extracts and refines starches from crops by wet grinding, washing, sieving and drying. Today, the main commercial refined starches are cornstarch, tapioca, arrowroot, and wheat, rice, and potato starches. To a lesser extent, sources of refined starch are sweet potato, sago and mung bean. To this day, starch is extracted from more than 50 types of plants. Crude starch is processed on an industrial scale to maltodextrin and glucose syrups and fructose syrups. These massive conversions are mediated by a variety of enzymes, which break down the starch to varying extents. Here breakdown involves hydrolysis, i.e. cleavage of bonds between sugar subunits by the addition of water. Some sugars are isomerized. The processes have been described as occurring in two phases: liquefaction and saccharification. The liquefaction converts starch into dextrins. Amylase is a key enzyme for producing dextrin. The saccharification converts dextrin into maltoses and glucose. Diverse enzymes are used in this second phase, including pullanase and other amylases. Dextrinization If starch is subjected to dry heat, it breaks down to form dextrins, also called "pyrodextrins" in this context. This break down process is known as dextrinization. (Pyro)dextrins are mainly yellow to brown in color and dextrinization is partially responsible for the browning of toasted bread. Food Starch is the most common carbohydrate in the human diet and is contained in many staple foods. The major sources of starch intake worldwide are the cereals (rice, wheat, and maize) and the root vegetables (potatoes and cassava). Many other starchy foods are grown, some only in specific climates, including acorns, arrowroot, arracacha, bananas, barley, breadfruit, buckwheat, canna, colocasia, cuckoo-pint, katakuri, kudzu, malanga, millet, oats, oca, polynesian arrowroot, sago, sorghum, sweet potatoes, rye, taro, chestnuts, water chestnuts, and yams, and many kinds of beans, such as favas, lentils, mung beans, peas, and chickpeas. Before processed foods, people consumed large amounts of uncooked and unprocessed starch-containing plants, which contained high amounts of resistant starch. Microbes within the large intestine ferment or consume the starch, producing short-chain fatty acids, which are used as energy, and support the maintenance and growth of the microbes. Upon cooking, starch is transformed from an insoluble, difficult-to-digest granule into readily accessible glucose chains with very different nutritional and functional properties. In current diets, highly processed foods are more easily digested and release more glucose in the small intestine—less starch reaches the large intestine and more energy is absorbed by the body. It is thought that this shift in energy delivery (as a result of eating more processed foods) may be one of the contributing factors to the development of metabolic disorders of modern life, including obesity and diabetes. The amylose/amylopectin ratio, molecular weight and molecular fine structure influences the physicochemical properties as well as energy release of different types of starches. In addition, cooking and food processing significantly impacts starch digestibility and energy release. Starch has been classified as rapidly digestible starch, slowly digestible starch and resistant starch, depending upon its digestion profile. 
Raw starch granules resist digestion by human enzymes and do not break down into glucose in the small intestine - they reach the large intestine instead and function as prebiotic dietary fiber. When starch granules are fully gelatinized and cooked, the starch becomes easily digestible and releases glucose quickly within the small intestine. When starchy foods are cooked and cooled, some of the glucose chains re-crystallize and become resistant to digestion again. Slowly digestible starch can be found in raw cereals, where digestion is slow but relatively complete within the small intestine. Widely used prepared foods containing starch are bread, pancakes, cereals, noodles, pasta, porridge and tortilla. During cooking with high heat, sugars released from starch can react with amino acids via the Maillard reaction, forming advanced glycation end-products (AGEs), contributing aromas, flavors and texture to foods. One example of a dietary AGE is acrylamide. Recent evidence suggests that the intestinal fermentation of dietary AGEs may be associated with insulin resistance, atherosclerosis, diabetes and other inflammatory diseases. This may be due to the impact of AGEs on intestinal permeability. Starch gelatinization during cake baking can be impaired by sugar competing for water, preventing gelatinization and improving texture. Starch sugars Starch can be hydrolyzed into simpler carbohydrates by acids, various enzymes, or a combination of the two. The resulting fragments are known as dextrins. The extent of conversion is typically quantified by dextrose equivalent (DE), which is roughly the fraction of the glycosidic bonds in starch that have been broken. These starch sugars are by far the most common starch based food ingredient and are used as sweeteners in many drinks and foods. They include: Maltodextrin, a lightly hydrolyzed (DE 10–20) starch product used as a bland-tasting filler and thickener. Various glucose syrups (DE 30–70), also called corn syrups in the US, viscous solutions used as sweeteners and thickeners in many kinds of processed foods. Dextrose (DE 100), commercial glucose, prepared by the complete hydrolysis of starch. High fructose syrup, made by treating dextrose solutions with the enzyme glucose isomerase, until a substantial fraction of the glucose has been converted to fructose. In the U.S. high-fructose corn syrup is significantly cheaper than sugar, and is the principal sweetener used in processed foods and beverages. Fructose also has better microbiological stability. One kind of high fructose corn syrup, HFCS-55, is sweeter than sucrose because it is made with more fructose, while the sweetness of HFCS-42 is on par with sucrose. Sugar alcohols, such as maltitol, erythritol, sorbitol, mannitol and hydrogenated starch hydrolysate, are sweeteners made by reducing sugars. 
Modified starches The modified food starches are E coded according to European Food Safety Authority and INS coded Food Additives according to the Codex Alimentarius: 1400 Dextrin 1401 Acid-treated starch 1402 Alkaline-treated starch 1403 Bleached starch 1404 Oxidized starch 1405 Starches, enzyme-treated 1410 Monostarch phosphate 1412 Distarch phosphate 1413 Phosphated distarch phosphate 1414 Acetylated distarch phosphate 1420 Acetylated starch 1422 Acetylated distarch adipate 1440 Hydroxypropyl starch 1442 Hydroxypropyl distarch phosphate 1443 Hydroxypropyl distarch glycerol 1450 Starch sodium octenyl succinate 1451 Acetylated oxidized starch INS 1400, 1401, 1402, 1403 and 1405 are in the EU food ingredients without an E-number. Typical modified starches for technical applications are cationic starches, hydroxyethyl starch, carboxymethylated starches and thiolated starches. Use as food additive As an additive for food processing, food starches are typically used as thickeners and stabilizers in foods such as puddings, custards, soups, sauces, gravies, pie fillings, and salad dressings, and to make noodles and pastas. They function as thickeners, extenders, emulsion stabilizers and are exceptional binders in processed meats. Gummed sweets such as jelly beans and wine gums are not manufactured using a mold in the conventional sense. A tray is filled with native starch and leveled. A positive mold is then pressed into the starch leaving an impression of 1,000 or so jelly beans. The jelly mix is then poured into the impressions and put onto a stove to set. This method greatly reduces the number of molds that must be manufactured. Resistant starch Resistant starch is starch that escapes digestion in the small intestine of healthy individuals. High-amylose starch from wheat or corn has a higher gelatinization temperature than other types of starch, and retains its resistant starch content through baking, mild extrusion and other food processing techniques. It is used as an insoluble dietary fiber in processed foods such as bread, pasta, cookies, crackers, pretzels and other low moisture foods. It is also utilized as a dietary supplement for its health benefits. Published studies have shown that resistant starch helps to improve insulin sensitivity, reduces pro-inflammatory biomarkers interleukin 6 and tumor necrosis factor alpha and improves markers of colonic function. It has been suggested that resistant starch contributes to the health benefits of intact whole grains. Synthetic starch A cell-free chemoenzymatic process has been demonstrated to synthesize starch from CO2 and hydrogen.y. The chemical pathway of 11 core reactions was drafted by computational pathway design and converts CO2 to starch at a rate that is ~8.5-fold higher than starch synthesis in maize. Non-food applications Papermaking Papermaking is the largest non-food application for starches globally, consuming many millions of metric tons annually. In a typical sheet of copy paper for instance, the starch content may be as high as 8%. Both chemically modified and unmodified starches are used in papermaking. In the wet part of the papermaking process, generally called the "wet-end", the starches used are cationic and have a positive charge bound to the starch polymer. These starch derivatives associate with the anionic or negatively charged paper fibers / cellulose and inorganic fillers. 
Cationic starches together with other retention and internal sizing agents help to give the necessary strength properties to the paper web formed in the papermaking process (wet strength), and to provide strength to the final paper sheet (dry strength). In the dry end of the papermaking process, the paper web is rewetted with a starch based solution. The process is called surface sizing. Starches used have been chemically, or enzymatically depolymerized at the paper mill or by the starch industry (oxidized starch). The size/starch solutions are applied to the paper web by means of various mechanical presses (size presses). Together with surface sizing agents the surface starches impart additional strength to the paper web and additionally provide water hold out or "size" for superior printing properties. Starch is also used in paper coatings as one of the binders for the coating formulations which include a mixture of pigments, binders and thickeners. Coated paper has improved smoothness, hardness, whiteness and gloss and thus improves printing characteristics. Adhesives Corrugated board adhesives are the next largest application of non-food starches globally. Starch glues are mostly based on unmodified native starches, plus some additive such as borax and caustic soda. Part of the starch is gelatinized to carry the slurry of uncooked starches and prevent sedimentation. This opaque glue is called a SteinHall adhesive. The glue is applied on tips of the fluting. The fluted paper is pressed to paper called liner. This is then dried under high heat, which causes the rest of the uncooked starch in glue to swell/gelatinize. This gelatinizing makes the glue a fast and strong adhesive for corrugated board production. Starch is used in the manufacture of various adhesives or glues for book-binding, wallpaper adhesives, paper sack production, tube winding, gummed paper, envelope adhesives, school glues and bottle labeling. Starch derivatives, such as yellow dextrins, can be modified by addition of some chemicals to form a hard glue for paper work; some of those forms use borax or soda ash, which are mixed with the starch solution at to create a very good adhesive. Sodium silicate can be added to reinforce these formula. A related large non-food starch application is in the construction industry, where starch is used in the gypsum wall board manufacturing process. Chemically modified or unmodified starches are added to the stucco containing primarily gypsum. Top and bottom heavyweight sheets of paper are applied to the formulation, and the process is allowed to heat and cure to form the eventual rigid wall board. The starches act as a glue for the cured gypsum rock with the paper covering, and also provide rigidity to the board. Other Clothing or laundry starch is used in the laundering of clothes. It was widely used in Europe in the 16th and 17th centuries. Textile chemicals from starch: warp sizing agents are used to reduce breaking of yarns during weaving. Starch is mainly used to size cotton based yarns. Modified starch is also used as textile printing thickener. In oil exploration, starch is used to adjust the viscosity of drilling fluid, which is used to lubricate the drill head and suspend the grinding residue in petroleum extraction. Starch is also used to make some packing peanuts, and some drop ceiling tiles. In the printing industry, food grade starch is used in the manufacture of anti-set-off spray powder used to separate printed sheets of paper to avoid wet ink being set off. 
For body powder, powdered corn starch is used as a substitute for talcum powder, and similarly in other health and beauty products. Starch is used to produce various bioplastics, synthetic polymers that are biodegradable. An example is polylactic acid based on glucose from starch. Glucose from starch can be further fermented to biofuel corn ethanol using the so-called wet milling process. Today most bioethanol production plants use the dry milling process to ferment corn or other feedstock directly to ethanol. In the pharmaceutical industry, starch is also used as an excipient, as a tablet disintegrant, and as a binder. Synthetic amylose made from cellulose has a well-controlled degree of polymerization. Therefore, it can be used as a potential drug delivery carrier. Chemical tests A solution of triiodide (I3−) (formed by mixing iodine and potassium iodide) can be used to test for starch. The colorless solution turns dark blue in the presence of starch. The strength of the resulting blue color depends on the amount of amylose present. Waxy starches, with little or no amylose present, will color red. Benedict's test and Fehling's test are also used to indicate the presence of starch. Safety In the US, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for starch exposure in the workplace at 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday.
Biology and health sciences
Biochemistry and molecular biology
27725
https://en.wikipedia.org/wiki/Surface%20area
Surface area
The surface area (symbol A) of a solid object is a measure of the total area that the surface of the object occupies. The mathematical definition of surface area in the presence of curved surfaces is considerably more involved than the definition of arc length of one-dimensional curves, or of the surface area for polyhedra (i.e., objects with flat polygonal faces), for which the surface area is the sum of the areas of its faces. Smooth surfaces, such as a sphere, are assigned surface area using their representation as parametric surfaces. This definition of surface area is based on methods of infinitesimal calculus and involves partial derivatives and double integration. A general definition of surface area was sought by Henri Lebesgue and Hermann Minkowski at the turn of the twentieth century. Their work led to the development of geometric measure theory, which studies various notions of surface area for irregular objects of any dimension. An important example is the Minkowski content of a surface. Definition While the areas of many simple surfaces have been known since antiquity, a rigorous mathematical definition of area requires a great deal of care. This should provide a function which assigns a positive real number to a certain class of surfaces that satisfies several natural requirements. The most fundamental property of the surface area is its additivity: the area of the whole is the sum of the areas of the parts. More rigorously, if a surface S is a union of finitely many pieces S1, …, Sr which do not overlap except at their boundaries, then A(S) = A(S_1) + \cdots + A(S_r). Surface areas of flat polygonal shapes must agree with their geometrically defined area. Since surface area is a geometric notion, areas of congruent surfaces must be the same and the area must depend only on the shape of the surface, but not on its position and orientation in space. This means that surface area is invariant under the group of Euclidean motions. These properties uniquely characterize surface area for a wide class of geometric surfaces called piecewise smooth. Such surfaces consist of finitely many pieces that can be represented in the parametric form S_D = \{\vec{r}(u,v) : (u,v) \in D\} with a continuously differentiable function \vec{r}. The area of an individual piece is defined by the formula A(S_D) = \iint_D \left|\vec{r}_u \times \vec{r}_v\right| \, du \, dv. Thus the area of S_D is obtained by integrating the length of the normal vector \vec{r}_u \times \vec{r}_v to the surface over the appropriate region D in the parametric uv plane. The area of the whole surface is then obtained by adding together the areas of the pieces, using additivity of surface area. The main formula can be specialized to different classes of surfaces, giving, in particular, formulas for areas of graphs z = f(x,y) and surfaces of revolution. One of the subtleties of surface area, as compared to arc length of curves, is that surface area cannot be defined simply as the limit of areas of polyhedral shapes approximating a given smooth surface. It was demonstrated by Hermann Schwarz that already for the cylinder, different choices of approximating flat surfaces can lead to different limiting values of the area; this example is known as the Schwarz lantern. Various approaches to a general definition of surface area were developed in the late nineteenth and the early twentieth century by Henri Lebesgue and Hermann Minkowski. While for piecewise smooth surfaces there is a unique natural notion of surface area, if a surface is very irregular, or rough, then it may not be possible to assign an area to it at all. 
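As a worked illustration of the parametric formula above (a standard textbook computation, filled in here for clarity rather than drawn from the original article), the area of a sphere of radius R follows from the usual spherical parametrization:
\vec{r}(u,v) = (R\sin u\cos v,\; R\sin u\sin v,\; R\cos u), \qquad (u,v) \in D = [0,\pi] \times [0,2\pi)
\left|\vec{r}_u \times \vec{r}_v\right| = R^2 \sin u
A = \iint_D \left|\vec{r}_u \times \vec{r}_v\right| \, du \, dv = \int_0^{2\pi}\!\!\int_0^{\pi} R^2 \sin u \, du \, dv = 4\pi R^2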
A typical example of such an irregular surface is given by a surface with spikes spread throughout in a dense fashion. Many surfaces of this type occur in the study of fractals. Extensions of the notion of area which partially fulfill its function and may be defined even for very badly irregular surfaces are studied in geometric measure theory. A specific example of such an extension is the Minkowski content of the surface. Common formulas Ratio of surface areas of a sphere and cylinder of the same radius and height These formulas can be used to show that the surface area of a sphere and cylinder of the same radius and height are in the ratio 2 : 3, as follows. Let the radius be r and the height be h (which is 2r for the sphere); the sphere then has area 4\pi r^2, while the cylinder (including its two circular ends) has area 2\pi r h + 2\pi r^2 = 6\pi r^2, giving the ratio 4\pi r^2 : 6\pi r^2 = 2 : 3. The discovery of this ratio is credited to Archimedes. In chemistry Surface area is important in chemical kinetics. Increasing the surface area of a substance generally increases the rate of a chemical reaction. For example, iron in a fine powder will combust, while in solid blocks it is stable enough to use in structures. For different applications a minimal or maximal surface area may be desired. In biology The surface area of an organism is important in several considerations, such as regulation of body temperature and digestion. Animals use their teeth to grind food down into smaller particles, increasing the surface area available for digestion. The epithelial tissue lining the digestive tract contains microvilli, greatly increasing the area available for absorption. Elephants have large ears, allowing them to regulate their own body temperature. In other instances, animals will need to minimize surface area; for example, people will fold their arms over their chest when cold to minimize heat loss. The surface area to volume ratio (SA:V) of a cell imposes upper limits on size, as the volume increases much faster than does the surface area, thus limiting the rate at which substances diffuse from the interior across the cell membrane to interstitial spaces or to other cells. Indeed, representing a cell as an idealized sphere of radius r, the volume and surface area are, respectively, V = (4/3)\pi r^3 and A = 4\pi r^2. The resulting surface area to volume ratio is therefore 3/r. Thus, if a cell has a radius of 1 μm, the SA:V ratio is 3; whereas if the radius of the cell is instead 10 μm, then the SA:V ratio becomes 0.3. With a cell radius of 100 μm, the SA:V ratio is 0.03. Thus, the surface area to volume ratio falls off steeply with increasing cell size.
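The same kind of scaling underlies the chemistry example above. A short illustrative calculation (standard formulas, not from the original article) shows that cutting a solid into finer pieces multiplies its exposed area while leaving its volume unchanged:
A_{\text{cube}} = 6L^2, \qquad V_{\text{cube}} = L^3
\text{Divided into } n^3 \text{ cubes of side } L/n: \quad A_{\text{total}} = n^3 \cdot 6\left(\tfrac{L}{n}\right)^2 = 6L^2\, n
so grinding a solid block into n^3 grains increases the exposed surface area n-fold, which is why a fine powder reacts far faster than the bulk material.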
Mathematics
Measurement
27743
https://en.wikipedia.org/wiki/Solar%20energy
Solar energy
Solar energy is the radiant energy from the Sun's light and heat, which can be harnessed using a range of technologies such as solar electricity, solar thermal energy (including solar water heating) and solar architecture. It is an essential source of renewable energy, and its technologies are broadly characterized as either passive solar or active solar depending on how they capture and distribute solar energy or convert it into solar power. Active solar techniques include the use of photovoltaic systems, concentrated solar power, and solar water heating to harness the energy. Passive solar techniques include designing a building for better daylighting, selecting materials with favorable thermal mass or light-dispersing properties, and organizing spaces that naturally circulate air. In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming .... these advantages are global". Potential The Earth receives 174 petawatts (PW) of incoming solar radiation (insolation) at the upper atmosphere. Approximately 30% is reflected back to space while the rest, 122 PW, is absorbed by clouds, oceans and land masses. The spectrum of solar light at the Earth's surface is mostly spread across the visible and near-infrared ranges with a small part in the near-ultraviolet. Most of the world's population live in areas with insolation levels of 150–300 watts/m2, or 3.5–7.0 kWh/m2 per day. Solar radiation is absorbed by the Earth's land surface, oceans – which cover about 71% of the globe – and atmosphere. Warm air containing evaporated water from the oceans rises, causing atmospheric circulation or convection. When the air reaches a high altitude, where the temperature is low, water vapor condenses into clouds, which rain onto the Earth's surface, completing the water cycle. The latent heat of water condensation amplifies convection, producing atmospheric phenomena such as wind, cyclones and anticyclones. Sunlight absorbed by the oceans and land masses keeps the surface at an average temperature of 14 °C. By photosynthesis, green plants convert solar energy into chemically stored energy, which produces food, wood and the biomass from which fossil fuels are derived. The total solar energy absorbed by Earth's atmosphere, oceans and land masses is approximately 122 PW·year = 3,850,000 exajoules (EJ) per year. In 2002, this was more energy in one hour than the world used in a year; by 2019, one hour and 25 minutes of absorbed sunlight was equivalent to a year of world energy use. Photosynthesis captures approximately 3,000 EJ per year in biomass. The potential solar energy that could be used by humans differs from the amount of solar energy present near the surface of the planet because factors such as geography, time variation, cloud cover, and the land available to humans limit the amount of solar energy that we can acquire. In 2021, Carbon Tracker Initiative estimated the land area needed to generate all our energy from solar alone was 450,000 km2 — or about the same as the area of Sweden, or the area of Morocco, or the area of California (0.3% of the Earth's total land area). 
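As a rough, back-of-the-envelope check on the land-area estimate above, the following sketch uses assumed round numbers (about 200 W/m2 of average usable insolation and 20% conversion efficiency, compared against an assumed world primary energy demand of roughly 600 EJ per year); none of these figures come from the Carbon Tracker report itself.
# Back-of-envelope check of the ~450,000 km^2 estimate for meeting world
# energy demand with solar alone. All inputs are illustrative assumptions.

AREA_M2 = 450_000 * 1e6          # 450,000 km^2 expressed in m^2
MEAN_INSOLATION_W_M2 = 200       # assumed average usable insolation
EFFICIENCY = 0.20                # assumed sunlight-to-electricity efficiency
SECONDS_PER_YEAR = 3.156e7

energy_j_per_year = AREA_M2 * MEAN_INSOLATION_W_M2 * EFFICIENCY * SECONDS_PER_YEAR
print(f"{energy_j_per_year / 1e18:.0f} EJ per year")  # ~570 EJ/yr, roughly current world primary energy use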
Solar technologies are categorized as either passive or active depending on the way they capture, convert and distribute sunlight and enable solar energy to be harnessed at different levels around the world, mostly depending on the distance from the Equator. Although solar energy refers primarily to the use of solar radiation for practical ends, all types of renewable energy, other than geothermal power and tidal power, are derived either directly or indirectly from the Sun. Active solar techniques use photovoltaics, concentrated solar power, solar thermal collectors, pumps, and fans to convert sunlight into useful output. Passive solar techniques include selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and referencing the position of a building to the Sun. Active solar technologies increase the supply of energy and are considered supply side technologies, while passive solar technologies reduce the need for alternative resources and are generally considered demand-side technologies. In 2000, the United Nations Development Programme, UN Department of Economic and Social Affairs, and World Energy Council published an estimate of the potential solar energy that could be used by humans each year. This took into account factors such as insolation, cloud cover, and the land that is usable by humans. It was stated that solar energy has a global potential of per year (see table below). Thermal energy Solar thermal technologies can be used for water heating, space heating, space cooling and process heat generation. Early commercial adaptation In 1878, at the Universal Exposition in Paris, Augustin Mouchot successfully demonstrated a solar steam engine but could not continue development because of cheap coal and other factors. In 1897, Frank Shuman, a US inventor, engineer and solar energy pioneer built a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which has a lower boiling point than water and were fitted internally with black pipes which in turn powered a steam engine. In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys, developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912. Shuman built the world's first solar thermal power station in Maadi, Egypt, between 1912 and 1913. His plant used parabolic troughs to power a engine that pumped more than of water per minute from the Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman's vision, and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy. In 1916 Shuman was quoted in the media advocating solar energy's utilization, saying: Water heating Solar hot water systems use sunlight to heat water. In middle geographical latitudes (between 40 degrees north and 40 degrees south), 60 to 70% of the domestic hot water use, with water temperatures up to , can be provided by solar heating systems. 
The most common types of solar water heaters are evacuated tube collectors (44%) and glazed flat plate collectors (34%) generally used for domestic hot water; and unglazed plastic collectors (21%) used mainly to heat swimming pools. As of 2015, the total installed capacity of solar hot water systems was approximately 436 gigawatts thermal (GWth), and China is the world leader in their deployment with 309 GWth installed, accounting for 71% of the market. Israel and Cyprus are the per capita leaders in the use of solar hot water systems with over 90% of homes using them. In the United States, Canada, and Australia, heating swimming pools is the dominant application of solar hot water with an installed capacity of 18 GWth as of 2005. Heating, cooling and ventilation In the United States, heating, ventilation and air conditioning (HVAC) systems account for 30% (4.65 EJ/yr) of the energy used in commercial buildings and nearly 50% (10.1 EJ/yr) of the energy used in residential buildings. Solar heating, cooling and ventilation technologies can be used to offset a portion of this energy. Use of solar for heating can roughly be divided into passive solar concepts and active solar concepts, depending on whether active elements such as sun tracking and solar concentrator optics are used. Thermal mass is any material that can be used to store heat—heat from the Sun in the case of solar energy. Common thermal mass materials include stone, cement, and water. Historically they have been used in arid climates or warm temperate regions to keep buildings cool by absorbing solar energy during the day and radiating stored heat to the cooler atmosphere at night. However, they can be used in cold temperate areas to maintain warmth as well. The size and placement of thermal mass depend on several factors such as climate, daylighting, and shading conditions. When properly incorporated, thermal mass maintains space temperatures in a comfortable range and reduces the need for auxiliary heating and cooling equipment. A solar chimney (or thermal chimney, in this context) is a passive solar ventilation system composed of a vertical shaft connecting the interior and exterior of a building. As the chimney warms, the air inside is heated, causing an updraft that pulls air through the building. Performance can be improved by using glazing and thermal mass materials in a way that mimics greenhouses. Deciduous trees and plants have been promoted as a means of controlling solar heating and cooling. When planted on the southern side of a building in the northern hemisphere or the northern side in the southern hemisphere, their leaves provide shade during the summer, while the bare limbs allow light to pass during the winter. Since bare, leafless trees shade 1/3 to 1/2 of incident solar radiation, there is a balance between the benefits of summer shading and the corresponding loss of winter heating. In climates with significant heating loads, deciduous trees should not be planted on the Equator-facing side of a building because they will interfere with winter solar availability. They can, however, be used on the east and west sides to provide a degree of summer shading without appreciably affecting winter solar gain. Cooking Solar cookers use sunlight for cooking, drying, and pasteurization. They can be grouped into three broad categories: box cookers, panel cookers, and reflector cookers. The simplest solar cooker is the box cooker first built by Horace de Saussure in 1767. 
A basic box cooker consists of an insulated container with a transparent lid. It can be used effectively with partially overcast skies and will typically reach temperatures of . Panel cookers use a reflective panel to direct sunlight onto an insulated container and reach temperatures comparable to box cookers. Reflector cookers use various concentrating geometries (dish, trough, Fresnel mirrors) to focus light on a cooking container. These cookers reach temperatures of and above but require direct light to function properly and must be repositioned to track the Sun. Process heat Solar concentrating technologies such as parabolic dish, trough and Scheffler reflectors can provide process heat for commercial and industrial applications. The first commercial system was the Solar Total Energy Project (STEP) in Shenandoah, Georgia, US where a field of 114 parabolic dishes provided 50% of the process heating, air conditioning and electrical requirements for a clothing factory. This grid-connected cogeneration system provided 400 kW of electricity plus thermal energy in the form of 401 kW steam and 468 kW chilled water and had a one-hour peak load thermal storage. Evaporation ponds are shallow pools that concentrate dissolved solids through evaporation. The use of evaporation ponds to obtain salt from seawater is one of the oldest applications of solar energy. Modern uses include concentrating brine solutions used in leach mining and removing dissolved solids from waste streams. Clothes lines, clotheshorses, and clothes racks dry clothes through evaporation by wind and sunlight without consuming electricity or gas. In some states of the United States legislation protects the "right to dry" clothes. Unglazed transpired collectors (UTC) are perforated sun-facing walls used for preheating ventilation air. UTCs can raise the incoming air temperature up to and deliver outlet temperatures of . The short payback period of transpired collectors (3 to 12 years) makes them a more cost-effective alternative than glazed collection systems. As of 2003, over 80 systems with a combined collector area of had been installed worldwide, including an collector in Costa Rica used for drying coffee beans and a collector in Coimbatore, India, used for drying marigolds. Water treatment Solar distillation can be used to make saline or brackish water potable. The first recorded instance of this was by 16th-century Arab alchemists. A large-scale solar distillation project was first constructed in 1872 in the Chilean mining town of Las Salinas. The plant, which had solar collection area of , could produce up to per day and operate for 40 years. Individual still designs include single-slope, double-slope (or greenhouse type), vertical, conical, inverted absorber, multi-wick, and multiple effect. These stills can operate in passive, active, or hybrid modes. Double-slope stills are the most economical for decentralized domestic purposes, while active multiple effect units are more suitable for large-scale applications. Solar water disinfection (SODIS) involves exposing water-filled plastic polyethylene terephthalate (PET) bottles to sunlight for several hours. Exposure times vary depending on weather and climate from a minimum of six hours to two days during fully overcast conditions. It is recommended by the World Health Organization as a viable method for household water treatment and safe storage. Over two million people in developing countries use this method for their daily drinking water. 
Solar energy may be used in a water stabilization pond to treat waste water without chemicals or electricity. A further environmental advantage is that algae grow in such ponds and consume carbon dioxide in photosynthesis, although algae may produce toxic chemicals that make the water unusable. Molten salt technology Molten salt can be employed as a thermal energy storage method to retain thermal energy collected by a solar tower or solar trough of a concentrated solar power plant so that it can be used to generate electricity in bad weather or at night. It was demonstrated in the Solar Two project from 1995 to 1999. The system is predicted to have an annual efficiency of 99%, a reference to the energy retained by storing heat before turning it into electricity, versus converting heat directly into electricity. The molten salt mixtures vary. The most widely used mixture contains sodium nitrate, potassium nitrate and calcium nitrate. It is non-flammable and non-toxic, and has already been used in the chemical and metals industries as a heat-transport fluid. Hence, experience with such systems exists in non-solar applications. The salt melts at . It is kept liquid at in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector where the focused irradiance heats it to . It is then sent to a hot storage tank. This is so well insulated that the thermal energy can be usefully stored for up to a week. When electricity is needed, the hot salt is pumped to a conventional steam-generator to produce superheated steam for a turbine/generator as used in any conventional coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank about tall and in diameter to drive it for four hours by this design. Several parabolic trough power plants in Spain and solar power tower developer SolarReserve use this thermal energy storage concept. The Solana Generating Station in the U.S. has six hours of storage by molten salt. In Chile, the Cerro Dominador power plant has a 110 MW solar-thermal tower; the heat is transferred to molten salts. The molten salts then transfer their heat in a heat exchanger to water, generating superheated steam, which feeds a turbine that transforms the kinetic energy of the steam into electric energy using the Rankine cycle. In this way, the Cerro Dominador plant is capable of generating around 110 MW of power. The plant has an advanced storage system enabling it to generate electricity for up to 17.5 hours without direct solar radiation, which allows it to provide a stable electricity supply without interruptions if required. The project secured sales of up to 950 GW·h per year. Another project, the María Elena plant, is a 400 MW thermo-solar complex in the northern Chilean region of Antofagasta employing molten salt technology. Electricity production Concentrated solar power Concentrating Solar Power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. The concentrated heat is then used as a heat source for a conventional power plant. A wide range of concentrating technologies exists; the most developed are the parabolic trough, the solar tower collectors, the concentrating linear Fresnel reflector, and the Stirling dish. Various techniques are used to track the Sun and focus light. In all of these systems, a working fluid is heated by the concentrated sunlight, and is then used for power generation or energy storage. 
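A rough sketch of the tank-sizing arithmetic behind the 100-megawatt storage example mentioned above. The salt properties, temperature swing, and turbine efficiency below are assumed round values for illustration, not figures from the article; the result lands in the few-thousand-cubic-metre range that a tank of the size described would hold.
# Estimate the molten-salt inventory needed to run a 100 MW(e) turbine for 4 hours.
# Every physical parameter below is an illustrative assumption.

P_ELECTRIC_W = 100e6             # turbine electrical output
HOURS = 4
EFF_THERMAL_TO_ELECTRIC = 0.40   # assumed steam-cycle efficiency
CP_J_PER_KG_K = 1500             # assumed specific heat of nitrate salt
DELTA_T_K = 275                  # assumed hot/cold tank temperature difference
DENSITY_KG_M3 = 1800             # assumed salt density

thermal_energy_j = P_ELECTRIC_W * HOURS * 3600 / EFF_THERMAL_TO_ELECTRIC
salt_mass_kg = thermal_energy_j / (CP_J_PER_KG_K * DELTA_T_K)
volume_m3 = salt_mass_kg / DENSITY_KG_M3
print(f"~{salt_mass_kg / 1e6:.1f} kt of salt, ~{volume_m3:.0f} m3 of storage")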
Designs need to account for the risk of a dust storm, hail, or another extreme weather event that can damage the fine glass surfaces of solar power plants. Metal grills would allow a high percentage of sunlight to enter the mirrors and solar panels while also preventing most damage. Architecture and urban planning Sunlight has influenced building design since the beginning of architectural history. Advanced solar architecture and urban planning methods were first employed by the Greeks and Chinese, who oriented their buildings toward the south to provide light and warmth. The common features of passive solar architecture are orientation relative to the Sun, compact proportion (a low surface area to volume ratio), selective shading (overhangs) and thermal mass. When these features are tailored to the local climate and environment, they can produce well-lit spaces that stay in a comfortable temperature range. Socrates' Megaron House is a classic example of passive solar design. The most recent approaches to solar design use computer modeling tying together solar lighting, heating and ventilation systems in an integrated solar design package. Active solar equipment such as pumps, fans, and switchable windows can complement passive design and improve system performance. Urban heat islands (UHI) are metropolitan areas with higher temperatures than that of the surrounding environment. The higher temperatures result from increased absorption of solar energy by urban materials such as asphalt and concrete, which have lower albedos and higher heat capacities than those in the natural environment. A straightforward method of counteracting the UHI effect is to paint buildings and roads white and to plant trees in the area. Using these methods, a hypothetical "cool communities" program in Los Angeles has projected that urban temperatures could be reduced by approximately 3 °C at an estimated cost of US$1  billion, giving estimated total annual benefits of US$530  million from reduced air-conditioning costs and healthcare savings. Agriculture and horticulture Agriculture and horticulture seek to optimize the capture of solar energy to optimize the productivity of plants. Techniques such as timed planting cycles, tailored row orientation, staggered heights between rows and the mixing of plant varieties can improve crop yields. While sunlight is generally considered a plentiful resource, the exceptions highlight the importance of solar energy to agriculture. During the short growing seasons of the Little Ice Age, French and English farmers employed fruit walls to maximize the collection of solar energy. These walls acted as thermal masses and accelerated ripening by keeping plants warm. Early fruit walls were built perpendicular to the ground and facing south, but over time, sloping walls were developed to make better use of sunlight. In 1699, Nicolas Fatio de Duillier even suggested using a tracking mechanism which could pivot to follow the Sun. Applications of solar energy in agriculture aside from growing crops include pumping water, drying crops, brooding chicks and drying chicken manure. More recently the technology has been embraced by vintners, who use the energy generated by solar panels to power grape presses. Greenhouses convert solar light to heat, enabling year-round production and the growth (in enclosed environments) of specialty crops and other plants not naturally suited to the local climate. 
Primitive greenhouses were first used during Roman times to produce cucumbers year-round for the Roman emperor Tiberius. The first modern greenhouses were built in Europe in the 16th century to keep exotic plants brought back from explorations abroad. Greenhouses remain an important part of horticulture today. Plastic transparent materials have also been used to similar effect in polytunnels and row covers. Transport Development of a solar-powered car has been an engineering goal since the 1980s. The World Solar Challenge is a biannual solar-powered car race, where teams from universities and enterprises compete over across central Australia from Darwin to Adelaide. In 1987, when it was founded, the winner's average speed was and by 2007 the winner's average speed had improved to . The North American Solar Challenge and the planned South African Solar Challenge are comparable competitions that reflect an international interest in the engineering and development of solar powered vehicles. Some vehicles use solar panels for auxiliary power, such as for air conditioning, to keep the interior cool, thus reducing fuel consumption. In 1975, the first practical solar boat was constructed in England. By 1995, passenger boats incorporating PV panels began appearing and are now used extensively. In 1996, Kenichi Horie made the first solar-powered crossing of the Pacific Ocean, and the Sun21 catamaran made the first solar-powered crossing of the Atlantic Ocean in the winter of 2006–2007. There were plans to circumnavigate the globe in 2010. In 1974, the unmanned AstroFlight Sunrise airplane made the first solar flight. On 29 April 1979, the Solar Riser made the first flight in a solar-powered, fully controlled, man-carrying flying machine, reaching an altitude of . In 1980, the Gossamer Penguin made the first piloted flights powered solely by photovoltaics. This was quickly followed by the Solar Challenger which crossed the English Channel in July 1981. In 1990 Eric Scott Raymond in 21 hops flew from California to North Carolina using solar power. Developments then turned back to unmanned aerial vehicles (UAV) with the Pathfinder (1997) and subsequent designs, culminating in the Helios which set the altitude record for a non-rocket-propelled aircraft at in 2001. The Zephyr, developed by BAE Systems, is the latest in a line of record-breaking solar aircraft, making a 54-hour flight in 2007, and month-long flights were envisioned by 2010. From March 2015 to July 2016, Solar Impulse, an electric aircraft, successfully circumnavigated the globe. It is a single-seat plane powered by solar cells and capable of taking off under its own power. The design allows the aircraft to remain airborne for several days. A solar balloon is a black balloon that is filled with ordinary air. As sunlight shines on the balloon, the air inside is heated and expands, causing an upward buoyancy force, much like an artificially heated hot air balloon. Some solar balloons are large enough for human flight, but usage is generally limited to the toy market as the surface-area to payload-weight ratio is relatively high. Squad Solar vehicle The Squad Solar is a Neighborhood Electric Vehicle that has a solar roof and can be plugged into a normal 120 volt outlet to be charged. Fuel production Solar chemical processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from a fossil fuel source and can also convert solar energy into storable and transportable fuels. 
Solar-induced chemical reactions can be divided into thermochemical and photochemical processes. A variety of fuels can be produced by artificial photosynthesis. The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular oxygen. Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050: the splitting of seawater would provide hydrogen to be run through adjacent fuel-cell electric power plants, with the pure water by-product going directly into the municipal water system. In addition, chemical energy storage is another solution to solar energy storage. Hydrogen production technologies have been a significant area of solar chemical research since the 1970s. Aside from electrolysis driven by photovoltaic or photochemical cells, several thermochemical processes have also been explored. One such route uses concentrators to split water into oxygen and hydrogen at high temperatures (). Another approach uses the heat from solar concentrators to drive the steam reformation of natural gas, thereby increasing the overall hydrogen yield compared to conventional reforming methods. Thermochemical cycles characterized by the decomposition and regeneration of reactants present another avenue for hydrogen production. The Solzinc process under development at the Weizmann Institute of Science uses a 1 MW solar furnace to decompose zinc oxide (ZnO) at temperatures above . This initial reaction produces pure zinc, which can subsequently be reacted with water to produce hydrogen. Energy storage methods Thermal mass systems can store solar energy in the form of heat at domestically useful temperatures for daily or interseasonal durations. Thermal storage systems generally use readily available materials with high specific heat capacities such as water, earth and stone. Well-designed systems can lower peak demand, shift time-of-use to off-peak hours and reduce overall heating and cooling requirements. Phase change materials such as paraffin wax and Glauber's salt are another thermal storage medium. These materials are inexpensive, readily available, and can deliver domestically useful temperatures (approximately ). The "Dover House" (in Dover, Massachusetts) was the first to use a Glauber's salt heating system, in 1948. Solar energy can also be stored at high temperatures using molten salts. Salts are an effective storage medium because they are low-cost, have a high specific heat capacity, and can deliver heat at temperatures compatible with conventional power systems. The Solar Two project used this method of energy storage, allowing it to store in its 68 m3 storage tank with an annual storage efficiency of about 99%. Off-grid PV systems have traditionally used rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid, while standard grid electricity can be used to meet shortfalls. Net metering programs give household systems credit for any electricity they deliver to the grid. This is handled by 'rolling back' the meter whenever the home produces more electricity than it consumes. If the net electricity use is below zero, the utility then rolls over the kilowatt-hour credit to the next month. 
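A minimal sketch of the credit roll-over bookkeeping described above, using hypothetical monthly figures; real net-metering tariffs and true-up rules vary by utility.
# Toy net-metering ledger: monthly (consumed_kwh, produced_kwh) pairs.
# The numbers are hypothetical and only illustrate the roll-over logic.

months = [("Jan", 600, 250), ("Feb", 500, 300), ("Mar", 450, 500), ("Apr", 400, 650)]

credit_kwh = 0.0
for name, consumed, produced in months:
    net = consumed - produced - credit_kwh   # draw down any banked credit first
    if net < 0:
        credit_kwh = -net                    # surplus rolls over to the next month
        billed = 0.0
    else:
        credit_kwh = 0.0
        billed = net
    print(f"{name}: billed {billed:.0f} kWh, credit carried {credit_kwh:.0f} kWh")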
Other approaches involve the use of two meters to measure electricity consumed versus electricity produced. This is less common due to the increased installation cost of the second meter. Most standard meters accurately measure in both directions, making a second meter unnecessary. Pumped-storage hydroelectricity stores energy by pumping water from a lower elevation reservoir to a higher elevation one when surplus energy is available. The energy is recovered when demand is high by releasing the water, with the pump becoming a hydroelectric power generator. Development, deployment and economics Beginning with the surge in coal use, which accompanied the Industrial Revolution, energy consumption steadily transitioned from wood and biomass to fossil fuels. The early development of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum. The 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world. These events brought renewed attention to developing solar technologies. Deployment strategies focused on incentive programs such as the Federal Photovoltaic Utilization Program in the US and the Sunshine Program in Japan. Other efforts included the formation of research facilities in the US (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer Institute for Solar Energy Systems ISE). Commercial solar water heaters began appearing in the United States in the 1890s. These systems saw increasing use until the 1920s but were gradually replaced by cheaper and more reliable heating fuels. As with photovoltaics, solar water heating attracted renewed attention as a result of the oil crises in the 1970s, but interest subsided in the 1980s due to falling petroleum prices. Development in the solar water heating sector progressed steadily throughout the 1990s, and annual growth rates have averaged 20% since 1999. Although generally underestimated, solar water heating and cooling is by far the most widely deployed solar technology with an estimated capacity of 154 GW as of 2007. The International Energy Agency has said that solar energy can make considerable contributions to solving some of the most urgent problems the world now faces: The development of affordable, inexhaustible, and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared. In 2011, a report by the International Energy Agency found that solar energy technologies such as photovoltaics, solar hot water, and concentrated solar power could provide a third of the world's energy by 2060 if politicians commit to limiting climate change and transitioning to renewable energy. The energy from the Sun could play a key role in de-carbonizing the global economy alongside improvements in energy efficiency and imposing costs on greenhouse gas emitters. 
"The strength of solar is the incredible variety and flexibility of applications, from small scale to big scale". In 2021 Lazard estimated the levelized cost of new build unsubsidized utility scale solar electricity at less than 37 dollars per MWh and existing coal-fired power above that amount. The 2021 report also said that new solar was also cheaper than new gas-fired power, but not generally existing gas power. Emerging technologies Experimental solar power Concentrated photovoltaics (CPV) systems employ sunlight concentrated onto photovoltaic surfaces for the purpose of electricity generation. Thermoelectric, or "thermovoltaic" devices convert a temperature difference between dissimilar materials into an electric current. Floating solar arrays Solar-assisted heat pump A heat pump is a device that provides heat energy from a source of heat to a destination called a "heat sink". Heat pumps are designed to move thermal energy opposite to the direction of spontaneous heat flow by absorbing heat from a cold space and releasing it to a warmer one. A solar-assisted heat pump represents the integration of a heat pump and thermal solar panels in a single integrated system. Typically these two technologies are used separately (or only placing them in parallel) to produce hot water. In this system the solar thermal panel performs the function of the low temperature heat source and the heat produced is used to feed the heat pump's evaporator. The goal of this system is to get high COP and then produce energy in a more efficient and less expensive way. It is possible to use any type of solar thermal panel (sheet and tubes, roll-bond, heat pipe, thermal plates) or hybrid (mono/polycrystalline, thin film) in combination with the heat pump. The use of a hybrid panel is preferable because it allows covering a part of the electricity demand of the heat pump and reduces the power consumption and consequently the variable costs of the system. Solar aircraft An electric aircraft is an aircraft that runs on electric motors rather than internal combustion engines, with electricity coming from fuel cells, solar cells, ultracapacitors, power beaming, or batteries. Currently, flying manned electric aircraft are mostly experimental demonstrators, though many small unmanned aerial vehicles are powered by batteries. Electrically powered model aircraft have been flown since the 1970s, with one report in 1957. The first man-carrying electrically powered flights were made in 1973. Between 2015 and 2016, a manned, solar-powered plane, Solar Impulse 2, completed a circumnavigation of the Earth.
Technology
Energy
27751
https://en.wikipedia.org/wiki/SVG
SVG
Scalable Vector Graphics (SVG) is an XML-based vector image format for defining two-dimensional graphics, having support for interactivity and animation. The SVG specification is an open standard developed by the World Wide Web Consortium since 1999. SVG images are defined in a vector graphics format and stored in XML text files. SVG images can thus be scaled in size without loss of quality, and SVG files can be searched, indexed, scripted, and compressed. The XML text files can be created and edited with text editors or vector graphics editors, and are rendered by most web browsers. If used for images, SVG can host scripts or CSS, potentially leading to cross-site scripting attacks or other security vulnerabilities. History SVG has been in development within the World Wide Web Consortium (W3C) since 1999 after six competing proposals for vector graphics languages had been submitted to the consortium during 1998 (see below). The early SVG Working Group decided not to develop any of the commercial submissions, but to create a new markup language that was informed by, but not strictly based on, any of them. SVG was developed by the W3C SVG Working Group starting in 1998, after six competing vector graphics submissions were received that year:
Web Schematics, from CCLRC
PGML, from Adobe Systems, IBM, Netscape and Sun Microsystems
VML, by Autodesk, Hewlett-Packard, Macromedia, Microsoft, and Visio
Hyper Graphics Markup Language (HGML), by Orange UK and PRP
WebCGM, from Boeing, PTC, InterCAP Graphics Systems, Inso Corporation, CCLRC, and Xerox
DrawML, from Excosoft AB
The working group was chaired at the time by Chris Lilley of the W3C. Early adoption was limited due to lack of support in older versions of Internet Explorer. However, by 2011 all major desktop browsers had begun to support SVG. Native browser support offers various advantages, such as not requiring plugins, allowing SVG to be mixed with other content, and improving rendering and scripting reliability. Mobile support for SVG exists in various forms, with different devices and browsers supporting SVG Tiny 1.1 or 1.2. SVG can be produced using vector graphics editors and rendered into raster formats. In web-based applications, Inline SVG allows embedding SVG content within HTML documents. The SVG specification was updated to version 1.1 in 2011. Scalable Vector Graphics 2 became a W3C Candidate Recommendation on 15 September 2016. SVG 2 incorporates several new features in addition to those of SVG 1.1 and SVG Tiny 1.2. Version 1.x SVG 1.0 became a W3C Recommendation on 4 September 2001. SVG 1.1 became a W3C Recommendation on 14 January 2003. The SVG 1.1 specification is modularized in order to allow subsets to be defined as profiles. Apart from this, there is very little difference between SVG 1.1 and SVG 1.0. SVG Tiny and SVG Basic (the Mobile SVG Profiles) became W3C Recommendations on 14 January 2003. These are described as profiles of SVG 1.1. SVG Tiny 1.2 became a W3C Recommendation on 22 December 2008. It was initially drafted as a profile of the planned SVG Full 1.2 (which has since been dropped in favor of SVG 2), but was later refactored as a standalone specification. It is generally poorly supported. SVG 1.1 Second Edition, which includes all the errata and clarifications but no new features relative to the original SVG 1.1, was released on 16 August 2011. SVG Tiny 1.2 Portable/Secure, a more secure subset of the SVG Tiny 1.2 profile, was introduced as an IETF draft standard on 29 July 2020. It is also known as SVG Tiny P/S. 
SVG Tiny 1.2 Portable/Secure is a requirement of the BIMI draft standard. Version 2 SVG 2 removes or deprecates some features of SVG 1.1 and incorporates new features from HTML5 and Web Open Font Format (WOFF): For example, SVG 2 removes several font elements such as glyph and altGlyph (replaced by WOFF fonts). The xml:space attribute is deprecated in favor of CSS. HTML5 features such as translate and data-* attributes have been added. Text handling features from SVG Tiny 1.2 are annotated as to be included, but not yet formalized in text. Some other 1.2 features are cherry-picked in, but SVG 2 is not a superset of SVG Tiny 1.2 in general. SVG 2 reached the Candidate Recommendation stage on 15 September 2016, and revised versions were published on 7 August 2018 and 4 October 2018. The latest draft was released on 8 March 2023. Features SVG supports interactivity, animation, and rich graphical capabilities, making it suitable for both web and print applications. SVG images can be compressed with the gzip algorithm, resulting in SVGZ files that are typically 20–50% of the original size. SVG also supports metadata, enabling better indexing, searching, and retrieval of SVG content. SVG allows three types of graphic objects: vector graphic shapes (such as paths consisting of straight lines and curves), bitmap images, and text. Graphical objects can be grouped, styled, transformed and composited into previously rendered objects. The feature set includes nested transformations, clipping paths, alpha masks, filter effects and template objects. SVG drawings can be interactive and can include animation, defined in the SVG XML elements or via scripting that accesses the SVG Document Object Model (DOM). SVG uses CSS for styling and JavaScript for scripting. Text, including internationalized and localized text, appears as plain text within the SVG DOM, which enhances the accessibility of SVG graphics. Printing Though the SVG specification primarily focuses on a vector graphics markup language, its design includes the basic capabilities of a page description language like Adobe's PDF. It contains provisions for rich graphics, and is compatible with CSS for styling purposes. SVG has the information needed to place each glyph and image in a chosen location on a printed page. Scripting and animation SVG drawings can be dynamic and interactive. Time-based modifications to the elements can be described in SMIL, or can be programmed in a scripting language (e.g. JavaScript). The W3C explicitly recommends SMIL as the standard for animation in SVG. A rich set of event handlers such as "onmouseover" and "onclick" can be assigned to any SVG graphical object to apply actions and events. Mobile profiles Because of industry demand, two mobile profiles were introduced with SVG 1.1: SVG Tiny (SVGT) and SVG Basic (SVGB). These are subsets of the full SVG standard, mainly intended for user agents with limited capabilities. In particular, SVG Tiny was defined for highly restricted mobile devices such as cellphones; it does not support styling or scripting. SVG Basic was defined for higher-level mobile devices, such as smartphones. In 2003, the 3GPP, an international telecommunications standards group, adopted SVG Tiny as the mandatory vector graphics media format for next-generation phones. SVGT is the required vector graphics format and support of SVGB is optional for Multimedia Messaging Service (MMS) and Packet-switched Streaming Service. 
It was later added as required format for vector graphics in 3GPP IP Multimedia Subsystem (IMS). Neither mobile profile includes support for the full Document Object Model (DOM), while only SVG Basic has optional support for scripting, but because they are fully compatible subsets of the full standard, most SVG graphics can still be rendered by devices which only support the mobile profiles. SVGT 1.2 adds a microDOM (μDOM), styling and scripting. SVGT 1.2 also includes some features not found in SVG 1.1, including non-scaling strokes, which are supported by some SVG 1.1 implementations, such as Opera, Firefox, and WebKit. As shared code bases between desktop and mobile browsers increased, the use of SVG 1.1 over SVGT 1.2 also increased. Compression SVG images, being XML, contain many repeated fragments of text, so they are well suited for lossless data compression algorithms. When an SVG image has been compressed with the gzip algorithm, it is referred to as an "SVGZ" image and uses the corresponding .svgz filename extension. Conforming SVG 1.1 viewers will display compressed images. An SVGZ file is typically 20 to 50 percent of the original size. W3C provides SVGZ files to test for conformance. Design The SVG 1.1 specification defines 14 functional areas or feature sets: Paths Simple or compound shape outlines are drawn with curved or straight lines that can be filled in, outlined, or used as a clipping path. Paths have a compact coding. For example, M (for "move to") precedes initial numeric x and y coordinates, and L (for "line to") precedes a point to which a line should be drawn. Further command letters (C, S, Q, T, and A) precede data that is used to draw various Bézier and elliptical curves. Z is used to close a path. In all cases, absolute coordinates follow capital letter commands and relative coordinates are used after the equivalent lower-case letters. Basic shapes Straight-line paths and paths made up of a series of connected straight-line segments (polylines), as well as closed polygons, circles, and ellipses can be drawn. Rectangles and round-cornered rectangles are also standard elements. Text Unicode character text included in an SVG file is expressed as XML character data. Many visual effects are possible, and the SVG specification automatically handles bidirectional text (for composing a combination of English and Arabic text, for example), vertical text (as Chinese or Japanese may be written) and characters along a curved path (such as the text around the edge of the Great Seal of the United States). Painting SVG shapes can be filled and outlined (painted with a color, a gradient, or a pattern). Fills may be opaque, or have any degree of transparency. "Markers" are line-end features, such as arrowheads, or symbols that can appear at the vertices of a polygon. Color Colors can be applied to all visible SVG elements, either directly or via fill, stroke, and other properties. Colors are specified in the same way as in CSS2, i.e. using names like black or blue, in hexadecimal such as #2f0 or #22ff00, in decimal like rgb(255,255,127), or as percentages of the form rgb(100%,100%,50%). Gradients and patterns SVG shapes can be filled or outlined with solid colors as above, or with color gradients or with repeating patterns. Color gradients can be linear or radial (circular), and can involve any number of colors as well as repeats. Opacity gradients can also be specified. 
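Before moving on to patterns, here is a small sketch that ties together the path mini-language described above and the gzip/SVGZ compression mentioned earlier; the file names and coordinates are arbitrary examples, not taken from the specification.
# Write a minimal SVG containing one path (M/L/C/Z commands), then gzip it to .svgz.
import gzip

svg = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
  <!-- Move to (10,110), straight line, cubic Bezier curve, close the subpath -->
  <path d="M 10 110 L 60 10 C 80 40 100 60 110 110 Z"
        fill="none" stroke="black" stroke-width="2"/>
</svg>
"""

with open("example.svg", "w", encoding="utf-8") as f:
    f.write(svg)

with gzip.open("example.svgz", "wb") as f:   # conforming viewers accept gzip-compressed .svgz files
    f.write(svg.encode("utf-8"))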
Patterns are based on predefined raster or vector graphic objects, which can be repeated in x and y directions. Gradients and patterns can be animated and scripted. Since 2008, there has been discussion among professional users of SVG that either gradient meshes or preferably diffusion curves could usefully be added to the SVG specification. It is said that a "simple representation [using diffusion curves] is capable of representing even very subtle shading effects" and that "Diffusion curve images are comparable both in quality and coding efficiency with gradient meshes, but are simpler to create (according to several artists who have used both tools), and can be captured from bitmaps fully automatically." The current draft of SVG 2 includes gradient meshes. Clipping, masking and compositing Graphic elements, including text, paths, basic shapes and combinations of these, can be used as outlines to define both inside and outside regions that can be painted (with colors, gradients and patterns) independently. Fully opaque clipping paths and semi-transparent masks are composited together to calculate the color and opacity of every pixel of the final image, using alpha blending. Filter effects A filter effect consists of a series of graphics operations that are applied to a given source vector graphic to produce a modified bitmapped result. Interactivity SVG images can interact with users in many ways. In addition to hyperlinks as mentioned below, any part of an SVG image can be made receptive to user interface events such as changes in focus, mouse clicks, scrolling or zooming the image and other pointer, keyboard and document events. Event handlers may start, stop or alter animations as well as trigger scripts in response to such events. Linking SVG images can contain hyperlinks to other documents, using XLink. Through the use of the <view> element or a fragment identifier, URLs can link to SVG files that change the visible area of the document. This allows for creating specific view states that are used to zoom in/out of a specific area or to limit the view to a specific element. This is helpful when creating sprites. XLink support in combination with the <use> element also allow linking to and re-using internal and external elements. This allows coders to do more with less markup and makes for cleaner code. Scripting All aspects of an SVG document can be accessed and manipulated using scripts in a similar way to HTML. The default scripting language is JavaScript and there are defined Document Object Model (DOM) objects for every SVG element and attribute. Scripts are enclosed in <script> elements. They can run in response to pointer events, keyboard events and document events as required. Animation SVG content can be animated using the built-in animation elements such as <animate>, <animateMotion> and <animateColor>. Content can be animated by manipulating the DOM using ECMAScript and the scripting language's built-in timers. SVG animation has been designed to be compatible with current and future versions of Synchronized Multimedia Integration Language (SMIL). Animations can be continuous, they can loop and repeat, and they can respond to user events, as mentioned above. Fonts As with HTML and CSS, text in SVG may reference external font files, such as system fonts. If the required font files do not exist on the machine where the SVG file is rendered, the text may not appear as intended. 
To overcome this limitation, text can be displayed in an SVG font, where the required glyphs are defined in SVG as a font that is then referenced from the <text> element. Metadata In accord with the W3C's Semantic Web initiative, SVG allows authors to provide metadata about SVG content. The main facility is the <metadata> element, where the document can be described using Dublin Core metadata properties (e.g. title, creator/author, subject, description, etc.). Other metadata schemas may also be used. In addition, SVG defines <title> and <desc> elements where authors may also provide plain-text descriptive material within an SVG image to help indexing, searching and retrieval by a number of means. An SVG document can define components including shapes, gradients etc., and use them repeatedly. SVG images can also contain raster graphics, such as PNG and JPEG images, and further SVG images. This code will produce the colored shapes shown in the image, excluding the grid and labels:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="391" height="391" viewBox="-70.5 -70.5 391 391"
     xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <rect fill="#fff" stroke="#000" x="-70" y="-70" width="390" height="390"/>
  <g opacity="0.8">
    <rect x="25" y="25" width="200" height="200" fill="lime" stroke-width="4" stroke="pink" />
    <circle cx="125" cy="125" r="75" fill="orange" />
    <polyline points="50,150 50,200 200,200 200,100" stroke="red" stroke-width="4" fill="none" />
    <line x1="50" y1="50" x2="200" y2="200" stroke="blue" stroke-width="4" />
  </g>
</svg>
Implementation The use of SVG on the web was limited by the lack of support in older versions of Internet Explorer (IE). Many websites that serve SVG images also provide the images in a raster format, either automatically by HTTP content negotiation or by allowing the user directly to choose the file. Web browsers Konqueror was the first browser to support SVG, in release version 3.2 in February 2004. As of 2011, all major desktop browsers, and many minor ones, have some level of SVG support. Other browsers' implementations are not yet complete; see comparison of layout engines for further details. Some earlier versions of Firefox (e.g. versions between 1.5 and 3.6), as well as a few other, now outdated, web browsers capable of displaying SVG graphics, needed them embedded in <object> or <iframe> elements to display them integrated as parts of an HTML webpage instead of using the standard way of integrating images with <img>. However, SVG images may be included in XHTML pages using XML namespaces. Tim Berners-Lee, the inventor of the World Wide Web, was critical of early versions of Internet Explorer for their failure to support SVG. Opera (since 8.0) has support for the SVG 1.1 Tiny specification, while Opera 9 includes SVG 1.1 Basic support and some of SVG 1.1 Full. Opera 9.5 has partial SVG Tiny 1.2 support. It also supports SVGZ (compressed SVG). Browsers based on the Gecko layout engine (such as Firefox, Flock, Camino, and SeaMonkey) all have had incomplete support for the SVG 1.1 Full specification since 2005. The Mozilla site has an overview of the modules which are supported in Firefox and of the modules which are in development. Gecko 1.9, included in Firefox 3.0, adds support for more of the SVG specification (including filters). 
Pale Moon, which uses the Goanna layout engine (a fork of the Gecko engine), supports SVG. Browsers based on WebKit (such as Apple's Safari, Google Chrome, and The Omni Group's OmniWeb) have had incomplete support for the SVG 1.1 Full specification since 2006. Amaya has partial SVG support. Internet Explorer 8 and older versions do not support SVG. IE9 (released 14 March 2011) supports the basic SVG feature set. IE10 extended SVG support by adding SVG 1.1 filters. Microsoft Edge Legacy supports SVG 1.1. The Maxthon Cloud Browser also supports SVG. There are several advantages to native and full support: plugins are not needed, SVG can be freely mixed with other content in a single document, and rendering and scripting become considerably more reliable. Mobile devices Support for SVG may be limited to SVGT on older or more limited smart phones or may be primarily limited by their respective operating system. Adobe Flash Lite has optionally supported SVG Tiny since version 1.1. At the SVG Open 2005 conference, Sun demonstrated a mobile implementation of SVG Tiny 1.1 for the Connected Limited Device Configuration (CLDC) platform. Mobiles that use Opera Mobile, as well as the iPhone's built in browser, also include SVG support. However, even though it used the WebKit engine, the Android built-in browser did not support SVG prior to v3.0 (Honeycomb). Prior to v3.0, Firefox Mobile 4.0b2 (beta) for Android was the first browser running under Android to support SVG by default. The level of SVG Tiny support available varies from mobile to mobile, depending on the SVG engine installed. Many newer mobile products support additional features beyond SVG Tiny 1.1, like gradient and opacity; this is sometimes referred to as "SVGT 1.1+", though there is no such standard. RIM's BlackBerry has built-in support for SVG Tiny 1.1 since version 5.0. Support continues for WebKit-based BlackBerry Torch browser in OS 6 and 7. Nokia's S60 platform has built-in support for SVG. For example, icons are generally rendered using the platform's SVG engine. Nokia has also led the JSR 226: Scalable 2D Vector Graphics API expert group that defines Java ME API for SVG presentation and manipulation. This API has been implemented in S60 Platform 3rd Edition Feature Pack 1 and onward. Some Series 40 phones also support SVG (such as Nokia 6280). Most Sony Ericsson phones beginning with K700 (by release date) support SVG Tiny 1.1. Phones beginning with K750 also support such features as opacity and gradients. Phones with Sony Ericsson Java Platform-8 have support for JSR 226. Windows Phone has supported SVG since version 7.5. SVG is also supported on various mobile devices from Motorola, Samsung, LG, and Siemens mobile/BenQ-Siemens. eSVG, an SVG rendering library mainly written for embedded devices, is available on some mobile platforms. Authoring SVG images can be hand coded or produced by the use of a vector graphics editor, such as Inkscape, Adobe Illustrator, Adobe Flash Professional, or CorelDRAW, and rendered to common raster image formats such as PNG using the same software. Additionally, editors like Inkscape and Boxy SVG provide tools to trace raster images to Bézier curves typically using image tracing back-ends like potrace, autotrace, and imagetracerjs. Software can be programmed to render SVG images by using a library such as librsvg used by GNOME since 2000, Batik and ThorVG (Thor Vector Graphics) since 2020 for lightweight systems. 
SVG images can also be rendered to any desired popular image format by using ImageMagick, a free command-line utility (which also uses librsvg under the hood). For web-based applications, the mode of usage termed Inline SVG allows SVG content to be embedded within an HTML document using an <svg> tag. Its graphical capabilities can then be employed to create sophisticated user interfaces as the SVG and HTML share context, event handling, and CSS. Other uses for SVG include embedding for use in word processing (e.g. with LibreOffice) and desktop publishing (e.g. Scribus), plotting graphs (e.g. gnuplot), and importing paths (e.g. for use in GIMP or Blender). The application services Microsoft 365 and Microsoft Office 2019 offer support for exporting, importing and editing SVG images. The Uniform Type Identifier for SVG used by Apple is public.svg-image and conforms to public.image and public.xml. Security As a document format, similar to HTML documents, SVG can host scripts or CSS. This is an issue when an attacker can upload an SVG file to a website, such as a profile picture, and the file is treated as a normal picture but contains malicious content. For instance, if an SVG file is deployed as a CSS background image, or a logo on some website, or in some image gallery, then when the image is loaded in a browser it activates a script or other content. This could lock up the browser (the Billion laughs attack), but could also lead to HTML injection and cross-site scripting attacks. The W3C therefore stipulates certain requirements when SVG is simply used for images: SVG Security. The W3C says that Inline SVG (an SVG file loaded natively on a website) is considered less of a security risk because the content is part of a greater document, and so scripting and CSS would not be unexpected. Related work The MPEG-4 Part 20 standard - Lightweight Application Scene Representation (LASeR) and Simple Aggregation Format (SAF) is based on SVG Tiny. It was developed by MPEG (ISO/IEC JTC 1/SC29/WG11) and published as ISO/IEC 14496-20:2006. SVG capabilities are enhanced in MPEG-4 Part 20 with key features for mobile services, such as dynamic updates, binary encoding, and state-of-the-art font representation. SVG was also accommodated in MPEG-4 Part 11, in the Extensible MPEG-4 Textual (XMT) format - a textual representation of the MPEG-4 multimedia content using XML.
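As a minimal sketch of the paint server, clipping and declarative animation features described above, the following fragment paints a rectangle with a linear gradient, clips it to a circular region, and animates its opacity with the built-in <animate> element; the element and attribute names are standard SVG 1.1, while the specific ids, colors and timing values are only illustrative:
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120" viewBox="0 0 200 120">
  <defs>
    <!-- a two-stop linear gradient used as a paint server -->
    <linearGradient id="fade" x1="0" y1="0" x2="1" y2="0">
      <stop offset="0" stop-color="orange"/>
      <stop offset="1" stop-color="purple"/>
    </linearGradient>
    <!-- only the area inside this circle will be painted -->
    <clipPath id="window">
      <circle cx="100" cy="60" r="50"/>
    </clipPath>
  </defs>
  <!-- the rectangle is filled with the gradient, clipped to the circle,
       and its opacity is animated declaratively -->
  <rect x="0" y="0" width="200" height="120" fill="url(#fade)" clip-path="url(#window)">
    <animate attributeName="opacity" values="1;0.2;1" dur="3s" repeatCount="indefinite"/>
  </rect>
</svg>
Because the animation is declarative rather than scripted, a conforming viewer can run it even when scripting is disabled.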
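A filter effect can be sketched in the same spirit: a <filter> containing a single feGaussianBlur primitive is applied to a circle through the filter property, producing a blurred, rasterised result at rendering time; the id and the stdDeviation value are arbitrary examples:
<svg xmlns="http://www.w3.org/2000/svg" width="140" height="140">
  <defs>
    <!-- a filter that blurs whatever it is applied to -->
    <filter id="soften">
      <feGaussianBlur in="SourceGraphic" stdDeviation="3"/>
    </filter>
  </defs>
  <circle cx="70" cy="70" r="40" fill="green" filter="url(#soften)"/>
</svg>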
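The reuse, metadata and linking facilities mentioned earlier can also be combined in a short, hypothetical example: a shape defined once under <defs> is instantiated twice with <use>, the document is described with <title> and <desc>, and the second instance is wrapped in an XLink hyperlink (the ids, coordinates and target URL are illustrative):
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     width="160" height="80" viewBox="0 0 160 80">
  <title>Reusable marker</title>
  <desc>Two copies of one shape, the second wrapped in a hyperlink.</desc>
  <defs>
    <!-- defined once, never rendered directly -->
    <circle id="dot" r="10" fill="teal"/>
  </defs>
  <use xlink:href="#dot" x="40" y="40"/>
  <a xlink:href="https://www.w3.org/Graphics/SVG/">
    <use xlink:href="#dot" x="120" y="40"/>
  </a>
</svg>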
Technology
File formats
null
27752
https://en.wikipedia.org/wiki/Spectroscopy
Spectroscopy
Spectroscopy is the field of study that measures and interprets electromagnetic spectra. In narrower contexts, spectroscopy is the precise study of color as generalized from visible light to all bands of the electromagnetic spectrum. Spectroscopy, primarily in the electromagnetic spectrum, is a fundamental exploratory tool in the fields of astronomy, chemistry, materials science, and physics, allowing the composition, physical structure and electronic structure of matter to be investigated at the atomic, molecular and macro scale, and over astronomical distances. Historically, spectroscopy originated as the study of the wavelength dependence of the absorption by gas phase matter of visible light dispersed by a prism. Current applications of spectroscopy include biomedical spectroscopy in the areas of tissue analysis and medical imaging. Matter waves and acoustic waves can also be considered forms of radiative energy, and recently gravitational waves have been associated with a spectral signature in the context of the Laser Interferometer Gravitational-Wave Observatory (LIGO). Introduction Spectroscopy is a branch of science concerned with the spectra of electromagnetic radiation as a function of its wavelength or frequency, measured by spectrographic equipment and other techniques, in order to obtain information concerning the structure and properties of matter. Spectral measurement devices are referred to as spectrometers, spectrophotometers, spectrographs or spectral analyzers. Most spectroscopic analysis in the laboratory starts with a sample to be analyzed; a light source is then chosen from any desired range of the light spectrum, the light passes through the sample to a dispersion array (a diffraction grating instrument) and is captured by a photodiode. For astronomical purposes, the telescope must be equipped with a light dispersion device. There are various versions of this basic setup that may be employed. Spectroscopy began with Isaac Newton splitting light with a prism, a key moment in the development of modern optics. It was therefore originally the study of the visible light that we call color; through the later work of James Clerk Maxwell it came to include the entire electromagnetic spectrum. Although color is involved in spectroscopy, it is not equated with the color of elements or objects, which involves the absorption and reflection of certain electromagnetic waves to give objects their apparent color to our eyes. Rather, spectroscopy involves the splitting of light by a prism, diffraction grating, or similar instrument, to produce a particular discrete line pattern called a "spectrum" unique to each different type of element. Most elements are first put into a gaseous phase to allow the spectra to be examined, although today other methods can be used on different phases. Each element that is diffracted by a prism-like instrument displays either an absorption spectrum or an emission spectrum, depending upon whether the element is being cooled or heated. Until recently, all spectroscopy involved the study of line spectra, and most spectroscopy still does. Vibrational spectroscopy is the branch of spectroscopy that studies the spectra of molecular vibrations. However, the latest developments in spectroscopy can sometimes dispense with the dispersion technique. In biochemical spectroscopy, information can be gathered about biological tissue by absorption and light scattering techniques.
Light scattering spectroscopy is a type of reflectance spectroscopy that determines tissue structures by examining elastic scattering. In such a case, it is the tissue that acts as a diffraction or dispersion mechanism. Spectroscopic studies were central to the development of quantum mechanics, because the first useful atomic models described the spectra of hydrogen; these models include the Bohr model, the Schrödinger equation, and matrix mechanics, all of which can produce the spectral lines of hydrogen and therefore provide the basis for discrete quantum jumps that match the discrete hydrogen spectrum. Also, Max Planck's explanation of blackbody radiation involved spectroscopy, because he compared the wavelength of light, measured using a photometer, to the temperature of a black body. Spectroscopy is used in physical and analytical chemistry because atoms and molecules have unique spectra. As a result, these spectra can be used to detect, identify and quantify information about the atoms and molecules. Spectroscopy is also used in astronomy and remote sensing on Earth. Most research telescopes have spectrographs. The measured spectra are used to determine the chemical composition and physical properties of astronomical objects (such as their temperature, the density of elements in a star, velocity, black holes and more). An important use for spectroscopy is in biochemistry. Molecular samples may be analyzed for species identification and energy content. Theory The underlying premise of spectroscopy is that light is made of different wavelengths and that each wavelength corresponds to a different frequency. The importance of spectroscopy is centered around the fact that every element in the periodic table has a unique light spectrum, described by the frequencies of light it emits or absorbs, which consistently appear in the same part of the electromagnetic spectrum when that light is diffracted. This opened up an entire field of study involving anything that contains atoms. Spectroscopy is the key to understanding the atomic properties of all matter. As such, spectroscopy opened up many new sub-fields of science that were previously unexplored. The idea that each atomic element has its unique spectral signature enabled spectroscopy to be used in a broad number of fields, each with a specific goal achieved by different spectroscopic procedures. The National Institute of Standards and Technology maintains a public Atomic Spectra Database that is continually updated with precise measurements. The broadening of the field of spectroscopy is due to the fact that any part of the electromagnetic spectrum may be used to analyze a sample, from the infrared to the ultraviolet, each region telling scientists different properties about the very same sample. For instance, in chemical analysis, the most common types of spectroscopy include atomic spectroscopy, infrared spectroscopy, ultraviolet and visible spectroscopy, Raman spectroscopy and nuclear magnetic resonance. In nuclear magnetic resonance (NMR), the underlying idea is resonance: nuclei respond at a characteristic resonant frequency. Resonance and characteristic frequencies were first studied in mechanical systems such as pendulums, whose frequency of motion was famously noted by Galileo. Classification of methods Spectroscopy is a sufficiently broad field that many sub-disciplines exist, each with numerous implementations of specific spectroscopic techniques. The various implementations and techniques can be classified in several ways.
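A short worked example of the wavelength–frequency–energy relationship behind the premise described above (the numbers are purely illustrative): for green light with a wavelength λ = 500 nm, the frequency is ν = c/λ = (3.00 × 10^8 m/s)/(5.00 × 10^-7 m) = 6.0 × 10^14 Hz, and the corresponding photon energy is E = hν ≈ (6.63 × 10^-34 J·s)(6.0 × 10^14 Hz) ≈ 4.0 × 10^-19 J, or about 2.5 eV; a spectral line at this wavelength therefore corresponds to a transition of about 2.5 eV in the emitting or absorbing species.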
Type of radiative energy The types of spectroscopy are distinguished by the type of radiative energy involved in the interaction. In many applications, the spectrum is determined by measuring changes in the intensity or frequency of this energy. The types of radiative energy studied include: Electromagnetic radiation was the first source of energy used for spectroscopic studies. Techniques that employ electromagnetic radiation are typically classified by the wavelength region of the spectrum and include microwave, terahertz, infrared, near-infrared, ultraviolet-visible, x-ray, and gamma spectroscopy. Particles, because of their de Broglie waves, can also be a source of radiative energy. Both electron and neutron spectroscopy are commonly used. For a particle, its kinetic energy determines its wavelength. Acoustic spectroscopy involves radiated pressure waves. Dynamic mechanical analysis can be employed to impart radiating energy, similar to acoustic waves, to solid materials. Nature of the interaction The types of spectroscopy also can be distinguished by the nature of the interaction between the energy and the material. These interactions include: Absorption spectroscopy: Absorption occurs when energy from the radiative source is absorbed by the material. Absorption is often determined by measuring the fraction of energy transmitted through the material, with absorption decreasing the transmitted portion. Emission spectroscopy: Emission indicates that radiative energy is released by the material. A material's blackbody spectrum is a spontaneous emission spectrum determined by its temperature. This feature can be measured in the infrared by instruments such as the atmospheric emitted radiance interferometer. Emission can also be induced by other sources of energy such as flames, sparks, electric arcs or electromagnetic radiation in the case of fluorescence. Elastic scattering and reflection spectroscopy determine how incident radiation is reflected or scattered by a material. Crystallography employs the scattering of high energy radiation, such as x-rays and electrons, to examine the arrangement of atoms in proteins and solid crystals. Impedance spectroscopy: Impedance is the ability of a medium to impede or slow the transmittance of energy. For optical applications, this is characterized by the index of refraction. Inelastic scattering phenomena involve an exchange of energy between the radiation and the matter that shifts the wavelength of the scattered radiation. These include Raman and Compton scattering. Coherent or resonance spectroscopy are techniques where the radiative energy couples two quantum states of the material in a coherent interaction that is sustained by the radiating field. The coherence can be disrupted by other interactions, such as particle collisions and energy transfer, and so often require high intensity radiation to be sustained. Nuclear magnetic resonance (NMR) spectroscopy is a widely used resonance method, and ultrafast laser spectroscopy is also possible in the infrared and visible spectral regions. Nuclear spectroscopy are methods that use the properties of specific nuclei to probe the local structure in matter, mainly condensed matter, molecules in liquids or frozen liquids and bio-molecules. Quantum logic spectroscopy is a general technique used in ion traps that enables precision spectroscopy of ions with internal structures that preclude laser cooling, state manipulation, and detection. 
Quantum logic operations enable a controllable ion to exchange information with a co-trapped ion that has a complex or unknown electronic structure. Type of material Spectroscopic studies are designed so that the radiant energy interacts with specific types of matter. Atoms Atomic spectroscopy was the first application of spectroscopy. Atomic absorption spectroscopy and atomic emission spectroscopy involve visible and ultraviolet light. These absorptions and emissions, often referred to as atomic spectral lines, are due to electronic transitions of outer shell electrons as they rise and fall from one electron orbit to another. Atoms also have distinct x-ray spectra that are attributable to the excitation of inner shell electrons to excited states. Atoms of different elements have distinct spectra and therefore atomic spectroscopy allows for the identification and quantitation of a sample's elemental composition. After inventing the spectroscope, Robert Bunsen and Gustav Kirchhoff discovered new elements by observing their emission spectra. Atomic absorption lines are observed in the solar spectrum and referred to as Fraunhofer lines after their discoverer. A comprehensive explanation of the hydrogen spectrum was an early success of quantum mechanics and explained the Lamb shift observed in the hydrogen spectrum, which further led to the development of quantum electrodynamics. Modern implementations of atomic spectroscopy for studying visible and ultraviolet transitions include flame emission spectroscopy, inductively coupled plasma atomic emission spectroscopy, glow discharge spectroscopy, microwave induced plasma spectroscopy, and spark or arc emission spectroscopy. Techniques for studying x-ray spectra include X-ray spectroscopy and X-ray fluorescence. Molecules The combination of atoms into molecules leads to the creation of unique types of energetic states and therefore unique spectra of the transitions between these states. Molecular spectra can be obtained due to electron spin states (electron paramagnetic resonance), molecular rotations, molecular vibration, and electronic states. Rotations are collective motions of the atomic nuclei and typically lead to spectra in the microwave and millimetre-wave spectral regions. Rotational spectroscopy and microwave spectroscopy are synonymous. Vibrations are relative motions of the atomic nuclei and are studied by both infrared and Raman spectroscopy. Electronic excitations are studied using visible and ultraviolet spectroscopy as well as fluorescence spectroscopy. Studies in molecular spectroscopy led to the development of the first maser and contributed to the subsequent development of the laser. Crystals and extended materials The combination of atoms or molecules into crystals or other extended forms leads to the creation of additional energetic states. These states are numerous and therefore have a high density of states. This high density often makes the spectra weaker and less distinct, i.e., broader. For instance, blackbody radiation is due to the thermal motions of atoms and molecules within a material. Acoustic and mechanical responses are due to collective motions as well. Pure crystals, though, can have distinct spectral transitions, and the crystal arrangement also has an effect on the observed molecular spectra. The regular lattice structure of crystals also scatters x-rays, electrons or neutrons allowing for crystallographic studies. 
Nuclei Nuclei also have distinct energy states that are widely separated and lead to gamma ray spectra. Distinct nuclear spin states can have their energy separated by a magnetic field, and this allows for nuclear magnetic resonance spectroscopy. Other types Other types of spectroscopy are distinguished by specific applications or implementations: Acoustic resonance spectroscopy is based on sound waves primarily in the audible and ultrasonic regions. Auger electron spectroscopy is a method used to study surfaces of materials on a micro-scale. It is often used in connection with electron microscopy. Cavity ring-down spectroscopy Circular dichroism spectroscopy Coherent anti-Stokes Raman spectroscopy is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging. Cold vapour atomic fluorescence spectroscopy Correlation spectroscopy encompasses several types of two-dimensional NMR spectroscopy. Deep-level transient spectroscopy measures concentration and analyzes parameters of electrically active defects in semiconducting materials. Dielectric spectroscopy Dual-polarization interferometry measures the real and imaginary components of the complex refractive index. Electron energy loss spectroscopy in transmission electron microscopy. Electron phenomenological spectroscopy measures the physicochemical properties and characteristics of the electronic structure of multicomponent and complex molecular systems. Electron paramagnetic resonance spectroscopy Force spectroscopy Fourier-transform spectroscopy is an efficient method for processing spectra data obtained using interferometers. Fourier-transform infrared spectroscopy is a common implementation of infrared spectroscopy. NMR also employs Fourier transforms. Gamma spectroscopy Hadron spectroscopy studies the energy/mass spectrum of hadrons according to spin, parity, and other particle properties. Baryon spectroscopy and meson spectroscopy are types of hadron spectroscopy. Multispectral imaging and hyperspectral imaging is a method to create a complete picture of the environment or various objects, each pixel containing a full visible, visible near infrared, near infrared, or infrared spectrum. Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic electron-vibration interaction at specific energies that can also measure optically forbidden transitions. Inelastic neutron scattering is similar to Raman spectroscopy, but uses neutrons instead of photons. Laser-induced breakdown spectroscopy, also called laser-induced plasma spectrometry Laser spectroscopy uses tunable lasers and other types of coherent emission sources, such as optical parametric oscillators, for selective excitation of atomic or molecular species. Light scattering spectroscopy (LSS) is a spectroscopic technique typically used to evaluate morphological changes in epithelial cells in order to study mucosal tissue and detect early cancer and precancer. Mass spectroscopy is a historical term used to refer to mass spectrometry. The current recommendation is to use the latter term. The term "mass spectroscopy" originated in the use of phosphor screens to detect ions. Mössbauer spectroscopy probes the properties of specific isotopic nuclei in different atomic environments by analyzing the resonant absorption of gamma rays.
Physical sciences
Analytical chemistry
null
27764
https://en.wikipedia.org/wiki/Systems%20engineering
Systems engineering
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function. Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, aka "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole. The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems. History The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline. When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly. The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including USL, UML, QFD, and IDEF. In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education. As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995. 
Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers. Concept Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, to identify new methods and research opportunities, similar to what occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor. Origins and traditional scope The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts. Evolution to a broader scope The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy, and the term continues to apply to both the narrower and the broader scope. Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans were seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement." Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK) has defined three types of systems engineering: Product Systems Engineering (PSE) is the traditional systems engineering focused on the design of physical systems consisting of hardware and software. Enterprise Systems Engineering (ESE) pertains to the view of enterprises, that is, organizations or combinations of organizations, as systems. Service Systems Engineering (SSE) has to do with the engineering of service systems. Checkland defines a service system as a system which is conceived as serving another system. Most civil infrastructure systems are service systems. Holistic view Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, including the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into a Systems Engineering Technical Process and a Systems Engineering Management Process. Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan. Although several models are used in the industry, depending on the application, all of them aim to identify the relation between the various stages mentioned above and to incorporate feedback.
Examples of such models include the Waterfall model and the VEE model (also called the V model). Interdisciplinary field System development often requires contribution from diverse technical disciplines. By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal. In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item. This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment. Managing complexity The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system. The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems. Some examples of these tools can be seen here: System architecture System model, modeling, and simulation Mathematical optimization System dynamics Systems analysis Statistical analysis Reliability engineering Decision making Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, marketing organizations, and technical specifications is successfully bridged. Scope The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels. Besides defense and aerospace, many information and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team. An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort. At the same time, studies have shown that systems engineering essentially leads to a reduction in costs among other benefits. However, no quantitative survey at a larger scale encompassing a wide variety of industries has been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering. 
Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them. Methods that allow early detection of possible failures, as used in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees today's decisions will still be valid when a system goes into service years or decades after it was first conceived. However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's System dynamics method, and the Unified Modeling Language (UML)—all currently being explored, evaluated, and developed to support the engineering decision process. Education Education in systems engineering is often seen as an extension to the regular engineering courses, reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering)—plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; the degrees including such material are most often presented as a BS in Industrial Engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either an MS/MEng or a Ph.D./EngD degree. INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology, maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions. As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively. Education in systems engineering can be taken as systems-centric or domain-centric: Systems-centric programs treat systems engineering as a separate discipline, and most of the courses are taught focusing on systems engineering principles and practice. Domain-centric programs offer systems engineering as an option that can be exercised with another major field in engineering. Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer. Systems engineering topics Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies from database management, graphical browsing, simulation, and reasoning, to document production, neutral import/export, and more. System There are many definitions of what a system is in the field of systems engineering.
Below are a few authoritative definitions: ANSI/EIA-632-1999: "An aggregation of end products and enabling products to achieve a given purpose." DAU Systems Engineering Fundamentals: "an integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective." IEEE Std 1220-1998: "A set or arrangement of elements and processes that are related and whose behavior satisfies customer/operational needs and provides for life cycle sustainment of the products." INCOSE Systems Engineering Handbook: "homogeneous entity that exhibits predefined behavior in the real world and is composed of heterogeneous parts that do not individually exhibit that behavior and an integrated configuration of components and/or subsystems." INCOSE: "A system is a construct or collection of different elements that together produce results not obtainable by the elements alone. The elements, or parts, can include people, hardware, software, facilities, policies, and documents; that is, all things required to produce systems-level results. The results include system-level qualities, properties, characteristics, functions, behavior, and performance. The value added by the system as a whole, beyond that contributed independently by the parts, is primarily created by the relationship among the parts; that is, how they are interconnected." ISO/IEC 15288:2008: "A combination of interacting elements organized to achieve one or more stated purposes." NASA Systems Engineering Handbook: "(1) The combination of elements that function together to produce the capability to meet a need. The elements include all hardware, software, equipment, facilities, personnel, processes, and procedures needed for this purpose. (2) The end product (which performs operational functions) and enabling products (which provide life-cycle support services to the operational end products) that make up a system." Systems engineering processes Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions: Task definition (informative definition) Conceptual stage (cardinal definition) Design stage (formative definition) Implementation stage (manufacturing definition) Depending on their application, tools are used for various stages of the systems engineering process: Using models Models play important and diverse roles in systems engineering. A model can be defined in several ways, including: An abstraction of reality designed to answer specific questions about the real world An imitation, analog, or representation of a real-world process or structure; or A conceptual, mathematical, or physical tool to assist a decision-maker. Together, these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process. This section focuses on the last. The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. 
Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation. Furthermore, key to successful systems engineering activities are also the methods with which these models are efficiently and effectively managed and used to simulate the systems. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'. Modeling formalisms and graphical representations Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate a system's functional and data requirements. Common graphical representations include: Functional flow block diagram (FFBD) Model-based design Data flow diagram (DFD) N2 chart IDEF0 diagram Use case diagram Sequence diagram Block diagram Signal-flow graph USL function maps and type maps Enterprise architecture frameworks A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any or each of the above methods is used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system. Once the requirements are understood, it is now the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for a job. At this point starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all criteria that are important. The trade study in turn informs the design, which again affects graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods. Other tools Systems Modeling Language Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification and validation of a broad range of complex systems. Lifecycle Modeling Language Lifecycle Modeling Language (LML), is an open-standard modeling language designed for systems engineering that supports the full lifecycle: conceptual, utilization, support, and retirement stages. Related fields and sub-fields Many related fields may be considered tightly coupled to systems engineering. 
The following areas have contributed to the development of systems engineering as a distinct entity: Cognitive systems engineering Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems. The three main themes of CSE are how humans cope with complexity, how work is accomplished by the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. CSE has since its beginning become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a Joint Cognitive System (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively. Configuration management Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering; where systems engineering deals with requirements development, allocation to development items and verification, configuration management deals with requirements capture, traceability to the development item, and audit of development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or Test and Verification Engineering have obtained and proven through objective testing. Control engineering Control engineering and its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process. Industrial engineering Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and synthesis, as well as mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design to specify, predict, and evaluate results obtained from such systems. Production Systems Engineering Production Systems Engineering (PSE) is an emerging branch of Engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design. Interface design Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, including mechanical, electrical, and logical interfaces, including reserved wires, plug-space, command codes, and bits in communication protocols. This is known as extensibility. Human-Computer Interaction (HCI) or Human-Machine Interface (HMI) is another aspect of interface design and is a critical aspect of modern systems engineering. 
Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks. Mechatronic engineering Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard, it is almost indistinguishable from Systems Engineering, but what sets it apart is the focus on smaller details rather than larger generalizations and relationships. As such, both fields are distinguished by the scope of their projects rather than the methodology of their practice. Operations research Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints. Performance engineering Performance engineering is the discipline of ensuring a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued to execute are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay or the number of packets switched in an hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementation involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes. Program management and project management Program management (or project management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as engineering support tool in assessing interdisciplinary concerns under management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task or the dependency links among tasks and impacts across the system lifecycle are systems engineering concerns. Proposal engineering Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal. Reliability engineering Reliability engineering is the discipline of ensuring a system meets customer expectations for reliability throughout its life (i.e. it does not fail more frequently than expected). Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability or RAMS preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering. Risk management Risk management, the practice of assessing and dealing with risk is one of the interdisciplinary parts of Systems Engineering. 
In development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features, involves the iterative complex configuration management of traceability and evaluation to the scheduling and requirements management across domains and for the system lifecycle that requires the interdisciplinary technical approach of systems engineering. Systems Engineering has Risk Management define, tailor, implement, and monitor a structured process for risk management which is integrated into the overall effort. Safety engineering The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "System Safety Engineering" function helps to identify "safety hazards" in emerging designs and may assist with techniques to "mitigate" the effects of (potentially) hazardous conditions that cannot be designed out of systems. Security engineering Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as authentication of system users, system targets, and others: people, objects, and processes. Software engineering From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in the handling of the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of Systems Engineering.
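As a small worked illustration of the weighted decision matrix mentioned in the trade study discussion above (the criteria, weights and ratings are purely hypothetical): each option j receives a score S_j = Σ w_i × r_ij over the criteria i. With weights of 0.5 for cost, 0.3 for performance and 0.2 for risk, and ratings on a 1–5 scale, an option rated 4, 3 and 5 scores 0.5×4 + 0.3×3 + 0.2×5 = 3.9, while one rated 3, 5 and 4 scores 0.5×3 + 0.3×5 + 0.2×4 = 3.8; the first option would be carried forward, subject to a sensitivity check on the weights.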
Technology
Disciplines
null
27772
https://en.wikipedia.org/wiki/Sandstone
Sandstone
Sandstone is a clastic sedimentary rock composed mainly of sand-sized (0.0625 to 2 mm) silicate grains, cemented together by another mineral. Sandstones comprise about 20–25% of all sedimentary rocks. Most sandstone is composed of quartz or feldspar, because they are the most resistant minerals to the weathering processes at the Earth's surface. Like uncemented sand, sandstone may be imparted any color by impurities within the minerals, but the most common colors are tan, brown, yellow, red, grey, pink, white, and black. Because sandstone beds can form highly visible cliffs and other topographic features, certain colors of sandstone have become strongly identified with certain regions, such as the red rock deserts of Arches National Park and other areas of the American Southwest. Rock formations composed of sandstone usually allow the percolation of water and other fluids and are porous enough to store large quantities, making them valuable aquifers and petroleum reservoirs. Quartz-bearing sandstone can be changed into quartzite through metamorphism, usually related to tectonic compression within orogenic belts. Origins Sandstones are clastic in origin (as opposed to either organic, like chalk and coal, or chemical, like gypsum and jasper). The silicate sand grains from which they form are the product of physical and chemical weathering of bedrock. Weathering and erosion are most rapid in areas of high relief, such as volcanic arcs, areas of continental rifting, and orogenic belts. Eroded sand is transported by rivers or by the wind from its source areas to depositional environments where tectonics has created accommodation space for sediments to accumulate. Forearc basins tend to accumulate sand rich in lithic grains and plagioclase. Intracontinental basins and grabens along continental margins are also common environments for deposition of sand. As sediments continue to accumulate in the depositional environment, older sand is buried by younger sediments, and it undergoes diagenesis. This mostly consists of compaction and lithification of the sand. Early stages of diagenesis, described as eogenesis, take place at shallow depths (a few tens of meters) and are characterized by bioturbation and mineralogical changes in the sands, with only slight compaction. The red hematite that gives red bed sandstones their color is likely formed during eogenesis. Deeper burial is accompanied by mesogenesis, during which most of the compaction and lithification takes place. Compaction takes place as the sand comes under increasing pressure from overlying sediments. Sediment grains move into more compact arrangements, ductile grains (such as mica grains) are deformed, and pore space is reduced. In addition to this physical compaction, chemical compaction may take place via pressure solution. Points of contact between grains are under the greatest strain, and the strained mineral is more soluble than the rest of the grain. As a result, the contact points are dissolved away, allowing the grains to come into closer contact. Lithification follows closely on compaction, as increased temperatures at depth hasten deposition of cement that binds the grains together. Pressure solution contributes to cementing, as the mineral dissolved from strained contact points is redeposited in the unstrained pore spaces. Mechanical compaction takes place primarily at depths less than . Chemical compaction continues to depths of , and most cementation takes place at depths of . 
Unroofing of buried sandstone is accompanied by telogenesis, the third and final stage of diagenesis. As erosion reduces the depth of burial, renewed exposure to meteoric water produces additional changes to the sandstone, such as dissolution of some of the cement to produce secondary porosity. Components Framework grains Framework grains are sand-sized ( diameter) detrital fragments that make up the bulk of a sandstone. Most framework grains are composed of quartz or feldspar, which are the common minerals most resistant to weathering processes at the Earth's surface, as seen in the Goldich dissolution series. Framework grains can be classified into several different categories based on their mineral composition: Quartz framework grains are the dominant minerals in most clastic sedimentary rocks; this is because they have exceptional physical properties, such as hardness and chemical stability. These physical properties allow the quartz grains to survive multiple recycling events, while also allowing the grains to display some degree of rounding. Quartz grains evolve from plutonic rock, which are felsic in origin and also from older sandstones that have been recycled. Feldspathic framework grains are commonly the second most abundant mineral in sandstones. Feldspar can be divided into alkali feldspars and plagioclase feldspars, which can be distinguished under a petrographic microscope. Alkali feldspar range in chemical composition from KAlSi3O8 to NaAlSi3O8. Plagioclase feldspar range in composition from NaAlSi3O8 to CaAl2Si2O8. Lithic framework grains (also called lithic fragments or lithic clasts) are pieces of ancient source rock that have yet to weather away to individual mineral grains. Lithic fragments can be any fine-grained or coarse-grained igneous, metamorphic, or sedimentary rock, although the most common lithic fragments found in sedimentary rocks are clasts of volcanic rocks. Accessory minerals are all other mineral grains in a sandstone. These minerals usually make up just a small percentage of the grains in a sandstone. Common accessory minerals include micas (muscovite and biotite), olivine, pyroxene, and corundum. Many of these accessory grains are more dense than the silicates that make up the bulk of the rock. These heavy minerals are commonly resistant to weathering and can be used as an indicator of sandstone maturity through the ZTR index. Common heavy minerals include zircon, tourmaline, rutile (hence ZTR), garnet, magnetite, or other dense, resistant minerals derived from the source rock. Matrix Matrix is very fine material, which is present within interstitial pore space between the framework grains. The nature of the matrix within the interstitial pore space results in a twofold classification: Arenites are texturally clean sandstones that are free of or have very little matrix. Wackes are texturally dirty sandstones that have a significant amount of matrix. Cement Cement is what binds the siliciclastic framework grains together. Cement is a secondary mineral that forms after deposition and during burial of the sandstone. These cementing materials may be either silicate minerals or non-silicate minerals, such as calcite. Silica cement can consist of either quartz or opal minerals. Quartz is the most common silicate mineral that acts as cement. In sandstone where there is silica cement present, the quartz grains are attached to cement, which creates a rim around the quartz grain called overgrowth. 
The overgrowth retains crystallographic continuity with the quartz framework grain being cemented. Opal cement is found in sandstones that are rich in volcanogenic materials, and only very rarely in other sandstones. Calcite cement is the most common carbonate cement. It consists of an assortment of small calcite crystals that adhere to the framework grains, cementing them together. Other minerals that act as cements include hematite, limonite, feldspars, anhydrite, gypsum, barite, clay minerals, and zeolite minerals. Sandstone that becomes depleted of its cement binder through weathering gradually becomes friable and unstable. This process can be somewhat reversed by the application of tetraethyl orthosilicate (Si(OC2H5)4), which deposits amorphous silicon dioxide between the sand grains. The reaction is as follows: Si(OC2H5)4 (l) + 2 H2O (l) → SiO2 (s) + 4 C2H5OH (g) Pore space Pore space includes the open spaces within a rock or a soil. The pore space in a rock has a direct relationship to the porosity and permeability of the rock. Porosity and permeability are directly influenced by the way the sand grains are packed together. Porosity is the percentage of bulk volume that is occupied by interstices within a given rock. For even-sized spherical grains, porosity depends directly on packing, decreasing as the grains are rearranged from the loosest to the tightest packing. Permeability is the rate at which water or other fluids flow through the rock. For groundwater work, permeability may be measured in gallons per day through a one-square-foot cross section under a unit hydraulic gradient. Types of sandstone Sandstones are typically classified by point-counting a thin section using a method such as the Gazzi-Dickinson method. This yields the relative percentages of quartz, feldspar, and lithic grains and the amount of clay matrix. The composition of a sandstone can provide important information on the genesis of the sediments when used with a triangular quartz, feldspar, lithic (QFL) diagram. However, geologists have not been able to agree on a set of boundaries separating regions of the QFL triangle. Visual aids are diagrams that allow geologists to interpret different characteristics of a sandstone. For example, a QFL chart can be marked with a provenance model that shows the likely tectonic origin of sandstones with various compositions of framework grains. Likewise, the stage of textural maturity chart illustrates the different stages that a sandstone goes through as the degree of kinetic processing of the sediments increases. A QFL chart is a representation of the framework grains and matrix that are present in a sandstone. This chart is similar to those used in igneous petrology. When plotted correctly, this model of analysis creates a meaningful quantitative classification of sandstones. A sandstone provenance chart is typically based on a QFL chart but allows geologists to visually interpret the different types of places from which sandstones can originate. A stage of textural maturity chart shows the differences between immature, submature, mature, and supermature sandstones. As a sandstone becomes more mature, its grains become more rounded and there is less clay in the matrix of the rock.
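The normalisation step behind a QFL plot is simple arithmetic. The following short Python sketch is illustrative only — the point counts and the function name are hypothetical, not taken from any published analysis — and shows how raw Gazzi-Dickinson counts might be recalculated into the framework percentages and matrix share described above:

def qfl_percentages(quartz, feldspar, lithics, matrix):
    # Recalculate framework counts (Q + F + L) to 100% for plotting on a
    # QFL diagram, and report matrix as a share of the total point count.
    framework = quartz + feldspar + lithics
    total = framework + matrix
    q = 100.0 * quartz / framework
    f = 100.0 * feldspar / framework
    l = 100.0 * lithics / framework
    matrix_pct = 100.0 * matrix / total
    return q, f, l, matrix_pct

# Hypothetical counts from a 400-point traverse of one thin section
q, f, l, m = qfl_percentages(quartz=310, feldspar=40, lithics=30, matrix=20)
print(f"Q={q:.1f}%  F={f:.1f}%  L={l:.1f}%  matrix={m:.1f}%")

Percentages computed in this way are the values plotted on QFL and provenance diagrams.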
Dott's classification scheme Dott's (1964) sandstone classification scheme is one of many such schemes used by geologists for classifying sandstones. Dott's scheme is a modification of Gilbert's classification of silicate sandstones, and it incorporates R.L. Folk's dual textural and compositional maturity concepts into one classification system. The philosophy behind combining Gilbert's and R. L. Folk's schemes is that the combined scheme is better able to "portray the continuous nature of textural variation from mudstone to arenite and from stable to unstable grain composition". Dott's classification scheme is based on the mineralogy of the framework grains and on the type of matrix present between the framework grains. In this classification scheme, Dott set the boundary between arenites and wackes at 15% matrix. In addition, Dott divides the framework grains that can be present in a sandstone into three major categories: quartz, feldspar, and lithic grains. Arenites are types of sandstone that have less than 15% clay matrix between the framework grains. Quartz arenites are sandstones that contain more than 90% siliceous grains, which can include quartz or chert rock fragments. Quartz arenites are texturally mature to supermature sandstones. These pure quartz sands result from extensive weathering before and during transport, which removed all but the quartz grains, the most stable mineral. They are commonly associated with rocks deposited in stable cratonic environments, such as aeolian beaches or shelf environments. Quartz arenites result from multiple recycling of quartz grains, generally from sedimentary source rocks and less commonly as first-cycle deposits derived from primary igneous or metamorphic rocks. Feldspathic arenites are sandstones that contain less than 90% quartz, more feldspar than unstable lithic fragments, and minor accessory minerals. Feldspathic sandstones are commonly immature or submature. These sandstones occur in association with cratonic or stable shelf settings. Feldspathic sandstones are derived from granitic-type, primary crystalline rocks. If the sandstone is dominantly plagioclase, then it is igneous in origin. Lithic arenites are characterised by a generally high content of unstable lithic fragments, such as volcanic and metamorphic clasts, though stable clasts such as chert are also common. This type of rock contains less than 90% quartz grains and more unstable rock fragments than feldspars. Lithic arenites are commonly immature to submature texturally. They are associated with fluvial conglomerates and other fluvial deposits, or with deeper-water marine conglomerates. They form under conditions that produce large volumes of unstable material derived from fine-grained rocks, mostly shales, volcanic rocks, and metamorphic rocks. Wackes are sandstones that contain more than 15% clay matrix between the framework grains. Quartz wackes are uncommon because quartz arenites are texturally mature to supermature. Feldspathic wackes are feldspathic sandstones that contain more than 15% matrix. Lithic wackes are sandstones in which the matrix content is greater than 15%. Arkose sandstones contain more than 25 percent feldspar. The grains tend to be poorly rounded and less well sorted than those of pure quartz sandstones. These feldspar-rich sandstones come from rapidly eroding granitic and metamorphic terrains where chemical weathering is subordinate to physical weathering. Greywacke sandstones are a heterogeneous mixture of lithic fragments and angular grains of quartz and feldspar, with the grains surrounded by a fine-grained clay matrix. Much of this matrix is formed by relatively soft fragments, such as shale and some volcanic rocks, that are chemically altered and physically compacted after deep burial of the sandstone formation.
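The boundaries just described lend themselves to a simple decision procedure. The sketch below (Python, illustrative only; the function name and example values are hypothetical, and a real classification would be read from the QFL triangle rather than from if/else rules) applies the thresholds stated above — the 15% matrix boundary between arenites and wackes, the 90% quartz cut-off, and the relative abundance of feldspar versus lithic fragments:

def dott_class(quartz_pct, feldspar_pct, lithic_pct, matrix_pct):
    # Matrix content separates arenites (less than 15%) from wackes.
    suffix = "arenite" if matrix_pct < 15 else "wacke"
    # More than 90% quartz places the rock in the quartz-rich field;
    # otherwise the feldspar/lithic balance decides the prefix.
    if quartz_pct > 90:
        prefix = "quartz"
    elif feldspar_pct > lithic_pct:
        prefix = "feldspathic"
    else:
        prefix = "lithic"
    return f"{prefix} {suffix}"

print(dott_class(95, 3, 2, 5))     # -> quartz arenite
print(dott_class(60, 30, 10, 20))  # -> feldspathic wacke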
Quartzite When sandstone is subjected to the great heat and pressure associated with regional metamorphism, the individual quartz grains recrystallize, along with the former cementing material, to form the metamorphic rock called quartzite. Most or all of the original texture and sedimentary structures of the sandstone are erased by the metamorphism. The grains are so tightly interlocked that when the rock is broken, it fractures through the grains to form an irregular or conchoidal fracture. Geologists had recognized by 1941 that some rocks show the macroscopic characteristics of quartzite even though they have not undergone metamorphism at high pressure and temperature. These rocks have been subject only to the much lower temperatures and pressures associated with diagenesis of sedimentary rock, but diagenesis has cemented the rock so thoroughly that microscopic examination is necessary to distinguish it from metamorphic quartzite. The term orthoquartzite is used to distinguish such sedimentary rock from metaquartzite produced by metamorphism. By extension, the term orthoquartzite has occasionally been applied more generally to any quartz-cemented quartz arenite. Orthoquartzite (in the narrow sense) is often 99% SiO2 with only very minor amounts of iron oxide and trace resistant minerals such as zircon, rutile and magnetite. Although few fossils are normally present, the original texture and sedimentary structures are preserved. The typical distinction between a true orthoquartzite and an ordinary quartz sandstone is that an orthoquartzite is so highly cemented that it fractures across grains, not around them; this distinction can be recognized in the field. In turn, the distinction between an orthoquartzite and a metaquartzite is the onset of recrystallization of existing grains. The dividing line may be placed at the point where strained quartz grains begin to be replaced by new, unstrained, small quartz grains, producing a mortar texture that can be identified in thin sections under a polarizing microscope. With increasing grade of metamorphism, further recrystallization produces foam texture, characterized by polygonal grains meeting at triple junctions, and then porphyroblastic texture, characterized by coarse, irregular grains, including some larger grains (porphyroblasts). Uses Sandstone has been used since prehistoric times for construction, decorative artworks and tools. It has been widely employed around the world in constructing temples, churches, homes and other buildings, and in civil engineering. Although its resistance to weathering varies, sandstone is easy to work, which makes it a common building and paving material, including in asphalt concrete. However, some types that have been used in the past, such as the Collyhurst sandstone of North West England, have had poor long-term weather resistance, necessitating repair and replacement in older buildings. Because of the hardness of individual grains, uniformity of grain size and friability of their structure, some types of sandstone are excellent materials from which to make grindstones for sharpening blades and other implements. Non-friable sandstone can be used to make grindstones for grinding grain, e.g., gritstone.
A type of pure quartz sandstone, orthoquartzite, containing more than 90–95 percent quartz, has been proposed for nomination to the Global Heritage Stone Resource. In some regions of Argentina, the orthoquartzite facade is one of the main features of the Mar del Plata style bungalows.